General:

  • To do:

    • Repeat the problem out loud, slowly but confidently.
    • Ask questions, check assumptions, know what the constraints are.
    • Talk out the problem!
    • Work out examples on the whiteboard, devising some kind of brute force algorithm along the way.
      Don't stay silent here.
    • Start with naive simplest brute force solution.
    • Check edge cases and do null checks.
    • Work out examples using the code/algorithm; keep track of state by hand.
    • When debugging, don't rush. Don't guess and check—make sure I know exactly what went wrong and how to fix it. Preventatively: don't try to be too clever.
    • Know time/space complexity: http://bigocheatsheet.com/

  • Heuristics and guidelines:

    • Always consider hash tables (dictionaries) with their O(1)-ness.
      • "Tip: using a dictionary is the most common way to get from a brute force approach to something more clever. It should always be your first thought."
      • Sets are also extremely useful if all I care about is unique elements and membership testing.
      • Don't be afraid of using one even for graph traversals. Worry about optimizing space later.
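      • A minimal sketch of the dictionary/set idea, using a generic two-sum style question (is there a pair summing to a target?) as a stand-in example:
```python
def has_pair_with_sum(numbers, target):
    """Return True if any two distinct elements sum to target.

    Brute force would check all pairs in O(N^2); the set lets us do a
    single pass in O(N) time at the cost of O(N) extra space.
    """
    seen = set()
    for n in numbers:
        if target - n in seen:
            return True
        seen.add(n)
    return False
```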
    • If at all array-related, try sorting first.
    • If search-related, consider binary search.
      • "Binary search teaches us that when a list is sorted or mostly sorted:
      • The value at a given index tells us a lot about what's to the left and what's to the right.
      • We don't have to look at every item in the list. By inspecting the middle item, we can "rule out" half of the list.
      • We can use this approach over and over, cutting the problem in half until we have the answer. This is sometimes called "divide and conquer.""
      • For binary search, consider floor and ceiling as exclusive walls.
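      • A sketch of binary search with the floor and ceiling kept as exclusive walls, so the candidate indices are always strictly between them:
```python
def binary_search(target, nums):
    """Return True if target is in the sorted list nums."""
    floor_index = -1          # exclusive wall below the candidates
    ceiling_index = len(nums)  # exclusive wall above the candidates

    while floor_index + 1 < ceiling_index:
        # Guess the midpoint between the two walls.
        guess_index = floor_index + (ceiling_index - floor_index) // 2
        guess_value = nums[guess_index]

        if guess_value == target:
            return True
        if guess_value > target:
            # Target must be to the left: move the ceiling wall down.
            ceiling_index = guess_index
        else:
            # Target must be to the right: move the floor wall up.
            floor_index = guess_index

    return False
```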
    • Start with a brute force solution, look for repeat work in that solution, and modify it to only do that work once.
      • "Instead, we started off by coming up with a slow (but correct) brute force solution and trying to improve from there. We looked at what our solution actually calculated, step by step, and found some repeat work. Our final answer came from brainstorming ways to avoid doing that repeat work."
    • Space-time trade-off! That is, for better time complexity, try using auxiliary data structures. E.g., do something in a single pass over an array—O(N) time—by using a hash table—O(N) space—vs. doing something in nested passes—O(N^2)—without using any extra space—O(1).
      • What information can I store to save time? Don't be scared of using tuples.
      • Counting sort (where the items are stored as keys/indices and the frequency is the value) is a big example here.
      • Another example: O(1) get_max method for a Stack class stores extra information (the max at and below each element) to save time (instead of iterating through the stack O(N)).
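      • A sketch of that get_max idea, assuming a simple list-backed stack (the class and method names are just illustrative):
```python
class MaxStack:
    """Stack with O(1) get_max, paid for with a second stack of maxes."""

    def __init__(self):
        self.items = []
        self.maxes = []  # maxes[i] is the max of items[: i + 1]

    def push(self, item):
        self.items.append(item)
        # Record the max at-and-below this element.
        current_max = item if not self.maxes else max(item, self.maxes[-1])
        self.maxes.append(current_max)

    def pop(self):
        self.maxes.pop()
        return self.items.pop()

    def get_max(self):
        return self.maxes[-1]
```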
    • Remember that I can use two pointers:
      • E.g.: to get the midpoint of a linked list by having one pointer go twice as fast, to find a pair with a given sum in a sorted array by having the pointers work inward from either end, or to test whether a string is a palindrome.
      • Also, the "stick" approach: to find the first node in a cycle, start one pointer n nodes ahead of the other (where n is the length of the cycle), then advance both at the same speed until they meet at the cycle's start.
      • Don't forget that I can also start at the center and expand outward, like in the palindromic substring problem: https://leetcode.com/problems/longest-palindromic-substring/description/.
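      • Two quick sketches of the pointer patterns above; nodes with a .next attribute (a hypothetical LinkedListNode class) are assumed for the midpoint example:
```python
def is_palindrome(s):
    """Walk two pointers inward from either end, comparing characters."""
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True


def find_midpoint(head):
    """Fast/slow pointers: when fast hits the end, slow is at the middle."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    return slow
```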
    • Try a greedy solution:
      • Iterate through the problem space taking the best solution "so far" (running sum etc.) until the end.
      • Optimal if the problem has "optimal substructure," which means stitching together optimal solutions to subproblems yields an optimal solution.
      • Also usually necessary for O(N) solutions.
      • "Suppose we could come up with the answer in one pass through the input, by simply updating the 'best answer so far' as we went. What additional values would we need to keep updated as we looked at each item in our input, in order to be able to update the 'best answer so far' in constant time?"
    • Does solving the problem for size (N – 1) make solving it for size N any easier? If so, try to solve recursively and/or with dynamic programming.
      • Using the max/min function can help a lot in recursive or dynamic programming problems.
      • To use, always think about starting from the end, i.e., if I had an answer for everything but the last something (element, cell, subproblem, etc.), what more do I need? Basically the n+1 step of an induction proof.
        • Think: at step n, what happens if I *do* include this thing? What happens if I don't? What do I need to compare and take the max of? And how do I use my answer to step n-1?
      • Start with a simple cache! Then move on to a tabular building-from-bottom-up approach.
      • Always think recursion for binary trees. But remember that iterative solutions are usually better because they avoid possible stack overflows and can be more space-efficient.
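      • A sketch of the "simple cache first, then bottom-up table" progression above, using Fibonacci as a stand-in subproblem:
```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_memoized(n):
    """Top-down: recurse, but cache each subproblem the first time it's solved."""
    if n < 2:
        return n
    return fib_memoized(n - 1) + fib_memoized(n - 2)


def fib_bottom_up(n):
    """Bottom-up: build from the smallest subproblems, keeping only the last two answers."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```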
    • A lot of problems can be treated as graph problems and/or use breadth-first or depth-first traversal. And if the problem involves parsing or reversal in some way, consider using a stack.
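      • A sketch of a breadth-first traversal, assuming the graph is given as a dictionary mapping each node to a list of its neighbors:
```python
from collections import deque


def bfs(graph, start):
    """Visit every node reachable from start, level by level."""
    visited = {start}
    queue = deque([start])
    order = []

    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

    return order
```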
    • Any time you repeatedly have to take the min or max of a dynamic collection, think heaps. (If you don’t need to insert random elements, prefer a sorted array.)
      • Python has a useful heapq module (a min-heap over a plain list, with 0-based indexing).
      • Particularly useful if there are multiple such possible min/max values and some of them are in the "past" (precluding only having variables for the lowest 1-2 elements).
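      • A sketch of the heap pattern: keep the k largest items seen so far in a min-heap of size k, so the weakest current "winner" is always at the root (k_largest is just an illustrative name):
```python
import heapq


def k_largest(numbers, k):
    """Return the k largest numbers in O(N log k) time and O(k) space."""
    heap = []
    for n in numbers:
        if len(heap) < k:
            heapq.heappush(heap, n)
        elif n > heap[0]:
            # Replace the smallest of the current winners.
            heapq.heapreplace(heap, n)
    return sorted(heap, reverse=True)
```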
    • If you have a lot of strings, try putting them in a prefix tree / trie.
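      • A minimal trie sketch backed by nested dictionaries, with '*' as an end-of-word marker (an arbitrary convention, not a requirement):
```python
class Trie:
    """Prefix tree backed by nested dicts; '*' marks the end of a word."""

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for char in word:
            node = node.setdefault(char, {})
        node['*'] = True

    def has_word(self, word):
        node = self.root
        for char in word:
            if char not in node:
                return False
            node = node[char]
        return '*' in node

    def has_prefix(self, prefix):
        node = self.root
        for char in prefix:
            if char not in node:
                return False
            node = node[char]
        return True
```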
    • To improve time complexity, also consider how various complexities match to solution structures and try working backwards from a target runtime:
      • "If we're going to do better than O(N^2), we're probably going to do it in either O(N lg N) or O(N). O(N lg N) comes up in sorting and searching algorithms where we're recursively cutting the list in half. It's not obvious that we can save time by cutting the list in half here. Let's first see how well we can do by looping through the list only once."
      • Remember that time complexity is asymptotic, so don't worry about iterating through a list multiple times—it's still O(N).
      • "Starting with a target runtime and working backward from there can be a powerful strategy for all kinds of coding interview questions.":
        • We started with an O(N^2) brute force solution and wondered if we could do better.
        • We knew to beat O(N^2), we'd probably do O(N) or O(N lg N), so we started thinking of ways we might get an O(N lg N) runtime.
        • lg(N) usually comes from iteratively cutting stuff in half, so we arrived at the final algorithm by exploring that idea.
    • Try simplifying or otherwise restating what the question is asking for. Sometimes an explicit restatement (instead of relying on implicit assumptions about a plan of attack) makes things much easier.
      • "The requirement of "the difference between the depths of any two leaf nodes is no greater than 1" implies that we'll have to compare the depths of all possible pairs of leaves. That'd be expensive—if there are n leaves, there are O(N^2) possible pairs of leaves.
      • But we can simplify this requirement to require less work. For example, we could equivalently say:
        • "The difference between the min leaf depth and the max leaf depth is 1 or less"
        • "There are at most two distinct leaf depths, and they are at most 1 apart""
    • Break up a problem into independent cases and draw out sample inputs for the cases.
      • "Breaking things down into cases is another strategy that really helped us here.
      • Notice how simple finding the second largest node got when we divided it into two cases:
        • The largest node has a left subtree.
        • The largest node does not have a left subtree.
      • Whenever a problem is starting to feel complicated, try breaking it down into cases.
      • It can be really helpful to actually draw out sample inputs for those cases. This'll keep your thinking organized and also help get your interviewer on the same page."
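      • A sketch of that two-case breakdown for the second largest node in a BST, assuming nodes expose .left, .right, and .value and the tree has at least two nodes:
```python
def find_largest(node):
    """The largest value is as far right as we can go."""
    while node.right is not None:
        node = node.right
    return node.value


def find_second_largest(root):
    node = root
    while True:
        # Case 1: we're at the largest node and it has a left subtree;
        # the second largest is the largest value in that subtree.
        if node.right is None and node.left is not None:
            return find_largest(node.left)
        # Case 2: the largest node (our right child, a leaf) has no left
        # subtree; the second largest is its parent, i.e. the current node.
        if (node.right is not None
                and node.right.left is None
                and node.right.right is None):
            return node.value
        node = node.right
```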
    • Not quite the same as the N – 1 idea, but sometimes a divide-and-conquer approach is what's needed: if I know the answer for exclusive parts of the problem, can I somehow combine them to get the final answer?
      • Avoid passing copies of subarrays, instead pass low and high indices.
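      • A sketch of divide and conquer on index ranges rather than subarray copies: merge sort over half-open [lo, hi) bounds (the merge still uses a temporary buffer):
```python
def merge_sort(nums, lo=0, hi=None):
    """Sort nums[lo:hi] in place by recursing on index ranges, not copies."""
    if hi is None:
        hi = len(nums)
    if hi - lo <= 1:
        return

    mid = lo + (hi - lo) // 2
    merge_sort(nums, lo, mid)
    merge_sort(nums, mid, hi)

    # Merge the two sorted halves into a temporary buffer, then copy back.
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if nums[i] <= nums[j]:
            merged.append(nums[i])
            i += 1
        else:
            merged.append(nums[j])
            j += 1
    merged.extend(nums[i:mid])
    merged.extend(nums[j:hi])
    nums[lo:hi] = merged
```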
    • For puzzle problems or anything where we can enumerate all possible solutions and there's a notion of a partial candidate solution, consider backtracking.
      • "Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, Sudoku, and many other puzzles. It is often the most convenient (if not the most efficient) technique for parsing, for the knapsack problem and other combinatorial optimization problems."
      • Backtracking is a powerful technique. It allows us to try all possible configurations/solutions, but in an optimal and incremental order, so that we can eliminate wholesale bad approaches. Basically we generate partial solutions at each iteration: if it passes the constraint check, we recur deeper (DFS style); if not, we continue / "prune" that entire branch and backtrack up the search tree to an unexplored branch. We generate new solutions through a loop breadth-wise and explore them further through DFS recurrence. The n queens problem is a typical example.
      • Just remember, we need a quick test to see if our candidate solution so far satisfies our constraints, and then we use DFS to explore further.
      • Choose, explore, unchoose. The unchoose part is critical.
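      • A compact n-queens sketch showing the choose / explore / unchoose shape: place a queen in the current row, check the constraint, recurse, then remove it (names are illustrative):
```python
def count_n_queens_solutions(n):
    """Count placements of n queens so that none attack each other.

    queens_in_row[r] holds the column of the queen placed in row r.
    """
    queens_in_row = []

    def is_safe(col):
        row = len(queens_in_row)
        for placed_row, placed_col in enumerate(queens_in_row):
            # Same column or same diagonal means the partial solution fails.
            if placed_col == col or abs(placed_col - col) == row - placed_row:
                return False
        return True

    def place_row():
        if len(queens_in_row) == n:
            return 1  # every row has a queen: one complete solution
        solutions = 0
        for col in range(n):
            if not is_safe(col):
                continue                 # prune this whole branch
            queens_in_row.append(col)    # choose
            solutions += place_row()     # explore (DFS deeper)
            queens_in_row.pop()          # unchoose
        return solutions

    return place_row()
```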
    • If it has anything to do with even/odd-ness, consider "canceling" even elements out; there's no need to track exact numbers, only parity.
    • Especially if a boolean function, think about how we can short-circuit and return early.
      • Also important for breaking out of loops early.
    • For backtracking in the looser sense of reconstructing a path going backward (not "real" backtracking search), think about what additional information we'd need to store along the way.
      • "The tricky part was backtracking to assemble the path we used to reach our end_node. In general, it's helpful to think of backtracking as two steps:
        • Figuring out what additional information we need to store in order to rebuild our path at the end (how_we_reached_nodes, in this case).
        • Figuring out how to reconstruct the path from that information."
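      • A sketch of those two steps on a BFS shortest path: record how we reached each node, then walk that record backward from the end node (graph again assumed to be an adjacency-list dict; function and variable names are illustrative):
```python
from collections import deque


def shortest_path(graph, start_node, end_node):
    """Return one shortest path from start_node to end_node, or None."""
    # Step 1: during BFS, record how_we_reached_nodes[node] = previous node.
    how_we_reached_nodes = {start_node: None}
    queue = deque([start_node])

    while queue:
        node = queue.popleft()
        if node == end_node:
            break
        for neighbor in graph[node]:
            if neighbor not in how_we_reached_nodes:
                how_we_reached_nodes[neighbor] = node
                queue.append(neighbor)

    if end_node not in how_we_reached_nodes:
        return None  # unreachable

    # Step 2: reconstruct by backtracking from the end node to the start.
    path = []
    current = end_node
    while current is not None:
        path.append(current)
        current = how_we_reached_nodes[current]
    path.reverse()
    return path
```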
