consider l and r to be walls (i.e., exclusive, not inclusive) around the target value
that way you can just set l or r to the guess/median value
also, to get the median index, do floor + (distance between, divided by two): m = l + (r - l) // 2
def searchInsert(self, nums, target):
    """
    :type nums: List[int]
    :type target: int
    :rtype: int
    """
    l, r = -1, len(nums)  # exclusive walls just outside the array
    while l + 1 < r:      # stop when the walls are adjacent
        m = l + ((r - l) // 2)  # // so m stays an int in Python 3
        curr = nums[m]
        if target == curr:
            return m
        elif target > curr:
            l = m
        else:
            r = m
    return l + 1  # first index >= target, i.e., the insert position
could also do (inclusive bounds this time, so l and r must start on the array):
    l, r = 0, len(nums) - 1
    while l <= r:
        m = l + (r - l) // 2
        curr = nums[m]
        if target == curr:
            return m
        elif target > curr:
            l = m + 1
        else:
            r = m - 1
    return l  # insert position; for a plain search (not searchInsert), return -1 here instead
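As a sanity check of the exclusive-walls version above, here is a standalone sketch (the function name binary_insert is mine, chosen for illustration):

```python
def binary_insert(nums, target):
    # l and r are exclusive "walls" just outside the array
    l, r = -1, len(nums)
    while l + 1 < r:          # stop when the walls are adjacent
        m = l + (r - l) // 2  # midpoint without overflow
        if nums[m] == target:
            return m
        elif target > nums[m]:
            l = m             # target is to the right of m
        else:
            r = m             # target is to the left of m
    return l + 1              # first index >= target

print(binary_insert([1, 3, 5, 6], 5))  # 2 (found)
print(binary_insert([1, 3, 5, 6], 2))  # 1 (insert position)
print(binary_insert([1, 3, 5, 6], 7))  # 4 (append at end)
```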
Bubble sort and selection sort suck. Both are O(n^2).
Merge sort and quick sort are both good and, in the average case, O(n log n).
Merge sort:
Worst case is O(n log n).
From Algorithms class:
like quicksort in that it recursively splits array and builds it back in right order (both also divide-and-conquer)
divide array into 2 halves, recursively sort halves, merge results
recursive calls then sorting action (contra quicksort); "on the way up"; only way sorting occurs is through these "on the way up" merges
pro: guaranteed N log N performance
con: extra space proportional to N (aux array for merging)
time efficiency: average, best, and worst are all O(N log N); does require some aux space
All the work happens in the merging where consider both arrays and pick the smallest element to copy to main array each time. Base case is merging two 1-element arrays (or null arrays) because these are sorted by definition.
Simplified pseudocode:
if len(list) < 2: return list
sorted_first = mergesort(first_half)
sorted_second = mergesort(second_half)
return merge(sorted_first, sorted_second)
Most important part is merging the sorted lists. For this, just add smallest element each time until one of the lists is empty, and then copy any remaining elements over.
Usually stable.
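The pseudocode above can be fleshed out into a runnable sketch (the names mergesort and merge are mine):

```python
def mergesort(lst):
    # base case: 0- or 1-element lists are sorted by definition
    if len(lst) < 2:
        return lst
    mid = len(lst) // 2
    return merge(mergesort(lst[:mid]), mergesort(lst[mid:]))

def merge(a, b):
    # take the smaller head element each time until one list empties,
    # then copy over whatever remains -- this is the linear-time step
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:      # <= keeps equal elements in order (stable)
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])         # at most one of these has leftovers
    out.extend(b[j:])
    return out

print(mergesort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

Note the auxiliary `out` list: that is the extra O(N) space the cons above refer to.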
Quicksort:
Worst case is O(n^2) when the array is already sorted, but it has a small constant factor and can be in-place. Can also mitigate / check for the worst degenerate case.
From Algorithms class:
array rearranged such that, when two subarrays sorted, the whole array is ordered (contra merge where array is broken into 2 subarrays to be sorted and then combined to make whole ordered array)
recursive calls happen after working on whole array
partition/pivot not necessarily in the middle, or necessarily the median value, which is what leads to the worst-case performance
improvements: insertion sort for tiny arrays, median of 3, randomize beforehand
pros: average case N log N, in-place so small usage of memory (small aux stack), shorter inner loop so fast in practice as well
cons: worst case is quadratic, happens with already sorted array (where pivot = first item)
time efficiency: average & best are O(N log N), worst is O(N^2), small constant factor
To partition, have two pointers at each end working toward each other. Stop pointers when each reaches a value out of place. Swap them.
This results in all the smaller (than some x) values on the left side and all the larger values on the right side—though each side is not itself sorted yet.
Simplified pseudocode (the pivot lands in its final position, so it is excluded from the recursion):
pivot = partition(list)
quicksort(list[:pivot])
quicksort(list[pivot + 1:])
Usually not stable.
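A runnable sketch of the two-pointer partition described above, in the Sedgewick style with the naive first-element pivot (the names quicksort and partition are mine):

```python
def quicksort(a, lo=0, hi=None):
    # in-place quicksort over a[lo..hi] inclusive
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    p = partition(a, lo, hi)
    quicksort(a, lo, p - 1)   # a[p] is already in its final position
    quicksort(a, p + 1, hi)
    return a

def partition(a, lo, hi):
    pivot = a[lo]             # naive pivot choice; cause of the sorted-input worst case
    i, j = lo, hi + 1
    while True:
        i += 1
        while a[i] < pivot and i < hi:   # scan right for a value >= pivot
            i += 1
        j -= 1
        while pivot < a[j] and j > lo:   # scan left for a value <= pivot
            j -= 1
        if i >= j:                       # pointers crossed: done
            break
        a[i], a[j] = a[j], a[i]          # both out of place: swap them
    a[lo], a[j] = a[j], a[lo]            # drop the pivot into its final slot
    return j

print(quicksort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

After one call to partition, everything left of the returned index is <= the pivot and everything right of it is >= the pivot, though neither side is itself sorted yet.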
A good resource is here: #link http://www.algolist.net/Algorithms/Sorting/Quicksort
And also, of course, visualgo: http://visualgo.net/
Difference is that in mergesort, the sorting action happens on the way up (when ordered subarrays are merged), whereas in quicksort, the sorting happens on the way down when the (sub)array is split and the low and high elements are separated and the pivot element is placed in its correct final position.
"The difference between the two is basically which linear operation they leverage. QuickSort is efficient because you can partition a list into items that come before and items that come after a given item in linear time. MergeSort is efficient because given two already sorted lists you can combine the two, and maintain sorted order, in linear time."