O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!). lg n usually means log base 2, but the base doesn't matter asymptotically: log_a n = (log_b n) / (log_b a), so logs of different bases differ only by a constant factor, which is dropped in the simplified form.
Time complexity of recursive algorithms that make multiple recursive calls per call is often exponential. Think of the naive fib algorithm: f(n) = f(n-1) + f(n-2). This isn't O(2n), since 2 calls are made for each call, not 2 calls for the whole thing: the calls form a binary tree of depth up to n, so it's O(2^n) instead. (The tight bound is actually O(φ^n) with φ ≈ 1.618, but O(2^n) is the usual loose upper bound.)
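A minimal sketch of the naive version in Python (name is illustrative):

```python
def fib(n):
    # Each call below the base case spawns two more calls,
    # so the call tree has on the order of 2^n nodes: exponential time.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)  # 55, but already makes 177 calls
```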
On a similar note, even if a function uses no extra space for local variables, each recursive call it makes pushes a new frame onto the call stack. (The caller "pauses" and the callee's frame is pushed onto the call stack; when the callee finishes executing, its frame is popped and control returns to the caller.) So recursion of depth n costs O(n) stack space.
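A sketch to make the space cost concrete (illustrative function, Python):

```python
def depth_sum(n):
    # O(1) local state per frame, but computing depth_sum(n) keeps
    # n frames alive on the call stack at once: O(n) space.
    if n == 0:
        return 0
    return n + depth_sum(n - 1)  # caller "pauses" here until the callee returns

# depth_sum(10**5) blows past CPython's default recursion limit (1000),
# showing that each pending call really does occupy a stack frame.
```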
If the recursion is tail recursive (the recursive call is the very last operation in the function, with nothing done to its result afterward), however, then it can be tail call optimized so that the call acts as a GOTO back to the top of the function, reusing the current frame, i.e., no new frame needs to be pushed to the call stack, so there's no space overhead. Note that depth_sum above is not tail recursive: the addition n + ... happens after the recursive call returns.
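The same function rewritten with an accumulator so the recursive call is in tail position (a sketch; note that CPython deliberately does not perform tail-call optimization, so the O(1)-space benefit only materializes in languages/compilers that do, e.g., Scheme, or gcc/clang which may apply it at -O2):

```python
def depth_sum_tail(n, acc=0):
    # Nothing happens after the recursive call returns, so a
    # TCO-capable compiler can reuse the current frame (a jump
    # back to the top) instead of pushing a new one.
    if n == 0:
        return acc
    return depth_sum_tail(n - 1, acc + n)
```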