
Efficiency of Quick Sort

December 6, 2019


As it turns out, the efficiency of Quick Sort is actually pretty complicated. First, let's focus on the worst case. What would that look like? The magic of this algorithm is that it cuts down the number of comparisons you need to do by splitting the array roughly in half pretty much every time. So the worst case for us would be if we couldn't split the array in half and had to do all of the comparisons every time.

You'll end up doing all of the comparisons if the pivots are already in the right place. Say the pivot we pick, 13, is already the biggest element in the array: we'll compare it to everything else on the first pass, only to realize that it doesn't need to move. That's a lot of comparisons to do at once, but the real problem happens when the next pivot is also the largest remaining value. Again, we end up comparing it to everything smaller than it, and we're not saving any steps.

Hopefully the number of comparisons here reminds you of bubble sort. Remember, in bubble sort we had to compare each element to the one next to it, again and again and again. Eventually we could leave off the ones at the end, because we knew they were already in the right place. The worst case of Quick Sort is exactly the same, which means the worst case of Quick Sort is actually O(n²). For something called a "quick" sort, that's a really terrible efficiency.
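To make that concrete, here is a minimal sketch of Quick Sort in Python (my own illustration, not code from the article), using the last element of each section as the pivot, which is the scheme this article assumes. On an array that is already sorted, every pivot stays where it is, each recursive call only shrinks the problem by one element, and we end up doing about n + (n−1) + (n−2) + … comparisons.

```python
def quick_sort(items, low=0, high=None):
    """Sort items in place, using the last element of each section as the pivot."""
    if high is None:
        high = len(items) - 1
    if low >= high:                      # zero or one element: already sorted
        return
    pivot = items[high]
    boundary = low                       # items[low:boundary] holds values <= pivot
    for i in range(low, high):           # compare the pivot against every other element
        if items[i] <= pivot:
            items[i], items[boundary] = items[boundary], items[i]
            boundary += 1
    items[boundary], items[high] = items[high], items[boundary]   # pivot lands in its sorted spot
    quick_sort(items, low, boundary - 1)    # sort everything smaller than the pivot
    quick_sort(items, boundary + 1, high)   # sort everything larger than the pivot

values = [8, 3, 13, 2, 10, 1, 6]
quick_sort(values)
print(values)   # [1, 2, 3, 6, 8, 10, 13]
```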
However, Quick Sort is still useful, for two main reasons.

First, the average and best case complexity are actually O(n log n). In a good case, the pivot will move down to the middle, and we get to divide the array in half every time. With our pivot in the middle, we can look at each half of the array and move its pivot to the middle too. Once all of the pivots are in their sorted positions, everything else is sorted too, so we're done really quickly. Because we're cutting the array in half every time, it ends up looking a lot like merge sort, and that's why the efficiency is O(n log n). The average case looks a lot like this as well: we pick a random pivot, it lands somewhere close to the middle, and so on. However, if we know we're going to be getting arrays that are nearly sorted, we don't want to use Quick Sort, since that will hit the worst case every time.
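You can see the gap with a quick experiment (again a sketch of my own, reusing the last-element-pivot sort from above with a comparison counter added): on a shuffled array of 500 values it does roughly 6,000 comparisons, in line with n log n, while on an already-sorted array of the same size it does all 500 × 499 / 2 = 124,750 of them.

```python
import random

def count_comparisons(items, low=0, high=None, counter=None):
    """Same last-element-pivot Quick Sort as above, but returns how many comparisons it made."""
    if counter is None:
        counter = [0]
    if high is None:
        high = len(items) - 1
    if low < high:
        pivot = items[high]
        boundary = low
        for i in range(low, high):
            counter[0] += 1                          # one comparison per element vs. the pivot
            if items[i] <= pivot:
                items[i], items[boundary] = items[boundary], items[i]
                boundary += 1
        items[boundary], items[high] = items[high], items[boundary]
        count_comparisons(items, low, boundary - 1, counter)
        count_comparisons(items, boundary + 1, high, counter)
    return counter[0]

n = 500   # kept small so the deep recursion in the sorted case stays within Python's default limit
print(count_comparisons(random.sample(range(n), n)))   # shuffled: roughly 6,000 comparisons
print(count_comparisons(list(range(n))))               # already sorted: 124,750 comparisons
```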
The second point is that there are optimizations you can make to Quick Sort that will make it run faster. For example, when you split your array, you can configure your program so that it sorts both halves at the same time. It ends up using the same total amount of computing power, but it takes less time.
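Here is one way that first optimization could look. This is a simplified sketch of my own, not the article's code: it builds new lists instead of sorting in place, and it only parallelizes the top-level split, handing the two halves to two worker processes so they are sorted at the same time.

```python
from multiprocessing import Pool

def quick_sort(items):
    """Plain Quick Sort on lists, using the last element as the pivot."""
    if len(items) <= 1:
        return items
    pivot = items[-1]
    smaller = [x for x in items[:-1] if x <= pivot]
    larger = [x for x in items[:-1] if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

def parallel_quick_sort(items):
    """Partition once, then sort both halves at the same time in separate processes."""
    if len(items) <= 1:
        return items
    pivot = items[-1]
    smaller = [x for x in items[:-1] if x <= pivot]
    larger = [x for x in items[:-1] if x > pivot]
    with Pool(processes=2) as pool:
        # Same total work as the plain version, but the two halves run concurrently.
        sorted_smaller, sorted_larger = pool.map(quick_sort, [smaller, larger])
    return sorted_smaller + [pivot] + sorted_larger

if __name__ == "__main__":
    print(parallel_quick_sort([7, 3, 13, 1, 9, 5, 11]))   # [1, 3, 5, 7, 9, 11, 13]
```

Starting worker processes has its own overhead, so in practice this kind of parallel split only pays off for large arrays.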
Also, rather than always selecting the last element as the pivot, you can look at the last few elements and select the median of them as the pivot. The median gives you a better sense of what's in the middle of the array overall, so you have a better chance of the pivot landing near the middle and getting that best case behavior.
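A sketch of that pivot choice might look like this (my own illustration: it samples the last three elements, as described above, and swaps their median into the last slot so the usual last-element partition can use it).

```python
def choose_median_pivot(items, low, high):
    """Look at the last three elements of items[low:high + 1], find their median,
    and swap it into position `high`, where the partition step expects the pivot."""
    if high - low < 2:
        return                            # fewer than three elements: just keep the last one
    last_three = [high - 2, high - 1, high]
    # Order the three sampled positions by their values and take the middle one.
    median_index = sorted(last_three, key=lambda i: items[i])[1]
    items[median_index], items[high] = items[high], items[median_index]
```

Calling choose_median_pivot(items, low, high) right before the partition step in the in-place sort above is all it takes to plug it in. A common variant samples the first, middle, and last elements instead, which also guards against input that is already sorted.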
Finally, this version of Quick Sort works in place, swapping elements within the original array rather than copying them into a new one, so apart from the recursion stack we aren't using any extra space: the auxiliary space complexity is constant.
