Write the Fibonacci sequence… in 4 different computational complexities!
Write a function that gives you the first n Fibonacci numbers. Yes, you think, I remember that one! Didn't it involve… fib(n - 1) + fib(n - 2)? You write it out, forget the base case, your stack overflows all over the place, but no fear! You tack on that base case, and you've got your recursive solution.
Here it is in JavaScript:
function fib(n) {
  if (n === 1) return 0;
  if (n === 2) return 1;
  return fib(n - 1) + fib(n - 2);
}
So simple, so sweet.
The Fibonacci pattern begins with 0 and then 1, and each successive number is the sum of the previous two numbers.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34…
Because the sequence always begins with 0 and then 1, you should start by hardcoding those two values into your solution; that is the expected and correct approach to this problem. (These values will be hardcoded in every solution we discuss.)
If you ever forget exactly how fib(n - 1) + fib(n - 2) goes, just remember: you are trying to get the *sum*, hence the plus, of the two *previous* numbers in the sequence.
This is the recursive solution, and it has one of the worst time complexities because there is so much duplication of effort: the same subproblems get recomputed over and over. This recursive call has an exponential time complexity. With time complexity, the question to ask yourself is: how many function calls occur based on the size of the input? In this case, as the input n increases, the number of calls grows exponentially, O(2^n). If you try to run this recursive solution with an n above 40 or 50, it will take so long that it appears to hang. Even fib(4) makes a surprising number of calls!
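One way to see the duplication concretely is to count the calls. This sketch instruments the same recursive function with a call counter (the `calls` variable is ours, added purely for illustration):

```javascript
// Same recursive solution, instrumented to count how many times it runs.
let calls = 0;

function fib(n) {
  calls += 1; // every invocation is counted, including repeated subproblems
  if (n === 1) return 0;
  if (n === 2) return 1;
  return fib(n - 1) + fib(n - 2);
}

fib(10); // returns 34 after 109 calls
calls = 0;
fib(20); // returns 4181 after 13,529 calls
```

Bumping n from 10 to 20 multiplied the work by over a hundred; that runaway growth is what O(2^n) looks like in practice.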
fib(4) call tree:

            fib(4)
           /      \
      fib(3)      fib(2)
      /    \
 fib(2)    fib(1)

Call stack, in the order frames are pushed: fib(4), fib(3), fib(2), fib(1), fib(2).
Every time the function calls itself, it pushes a new stack frame onto the call stack. What an expensive function! Now the follow-up: can you solve the problem in anything other than the second-to-worst time complexity?
Constant < Logarithmic < Linear < Polynomial < Exponential < Factorial
When developing an algorithm, we try to find ways to move towards the left of this spectrum. How can we do better? We’ve discussed exponential complexity with recursion. Let’s discuss how to achieve linear time and space complexity with memoization.
Memoization is an optimization technique used to speed up programs by storing the results of expensive function calls in a cache, so the same result never has to be computed twice. Let’s see an example, and then discuss.
function fibs(n, cache = {1: 0, 2: 1}) {
  if (n in cache) return cache[n];
  for (let i = 3; i <= n; i++) {
    cache[i] = cache[i - 1] + cache[i - 2];
  }
  return cache[n];
}
In this solution, we seed the cache with the first two numbers, 0 and 1, just as the base cases did in our recursive solution. Notice the cache is just a JavaScript object. If n is not already in the cache, we begin to build the sequence one step at a time, storing each number as we go. The loop counter starts at 3 because the third number is the first one we ever need to compute; anything smaller is already in the cache.
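The version above fills the cache with a loop, but memoization works just as well in the recursive shape of the first solution. Here is a sketch of that top-down variant (the name `fibMemo` is ours, not part of the original):

```javascript
// Top-down memoization: the same recursive structure as the first solution,
// but each fib(i) is computed once, cached, and thereafter read back in O(1).
function fibMemo(n, cache = {1: 0, 2: 1}) {
  if (n in cache) return cache[n];
  cache[n] = fibMemo(n - 1, cache) + fibMemo(n - 2, cache); // store before returning
  return cache[n];
}

fibMemo(50); // finishes instantly, where the plain recursive version would hang
```

Because the cache object is passed down through every call, each subproblem is solved exactly once, turning the O(2^n) call tree into O(n) calls.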
This solution is linear, or O(n). An O(n) algorithm’s performance will grow linearly and in direct proportion to the size of the input data set.
There are two types of complexity, time complexity and space complexity. Time complexity is how much time a function takes compared to the size of its input, while space complexity is how much space in memory is taken up compared to the size of its input.
The cache solution has wonderful linear time complexity, but since we are keeping the entire cache stored in memory, the space complexity is also linear, O(n).
Let’s see if we can write a solution that’s even better.
This solution, with a bit of destructuring assignment for fun, has linear O(n) time complexity and constant O(1) space complexity. The number of loop iterations it takes to calculate the nth Fibonacci number still grows at exactly the same rate (a linear rate!) as n increases, but we overwrite the previous numbers in the sequence as we build it out, making the space complexity constant for any input n.
function fibs(n) {
  let [a, b] = [0, 1];
  while (n > 0) {
    [a, b] = [b, a + b];
    n -= 1;
  }
  return a;
}
The condition is n > 0 because when n = 0 we can just return a (hardcoded to 0), and each pass through the loop shifts the pair one step forward, so when n = 1 the single pass leaves a equal to 1.
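A quick sanity check: this version is zero-indexed (fibs(0) is 0), so mapping it over the first ten indices reproduces the sequence from the top of the article. The function is repeated here so the sketch is self-contained:

```javascript
function fibs(n) {
  let [a, b] = [0, 1];
  while (n > 0) {
    [a, b] = [b, a + b]; // slide the two-number window forward one step
    n -= 1;
  }
  return a;
}

// Rebuild the first ten numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
const firstTen = Array.from({ length: 10 }, (_, i) => fibs(i));
```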
What a beautiful solution with linear time and constant space complexity! Space is O(1) because we never build a cache; at any moment we store only two numbers.
There is another solution with linear time and space, and perhaps a bit surprisingly, it will take us back to recursion.
In traditional recursion, you perform the recursive call, take the return value of that call, and calculate the result. You get the result of your calculation only after you’ve returned from every recursive call, requiring a lot of space on the call stack. But recursive solutions aren’t necessarily exponential.
In tail recursion, you calculate the results first, and pass the results of your current call onto the next recursive call.
function fib(n, a = 0, b = 1) {
  if (n > 0) {
    return fib(n - 1, b, a + b);
  }
  return a;
}
Now, when you are ready to make the next call, you don’t need the current stack frame anymore: you are calculating the running results and passing them along as parameters. An engine that optimizes tail calls can therefore discard the current frame and reuse its stack space.
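A caveat worth knowing: although ES2015 specifies proper tail calls, most JavaScript engines (including V8 and SpiderMonkey) never shipped them, so in practice a deep tail-recursive call can still overflow the stack. A common workaround is a trampoline; the names `fibStep` and `trampoline` below are ours, a sketch rather than part of the original solution:

```javascript
// Tail-recursive step that returns a thunk (a zero-argument function)
// instead of calling itself directly.
function fibStep(n, a = 0, b = 1) {
  if (n > 0) return () => fibStep(n - 1, b, a + b);
  return a;
}

// The trampoline unwinds the thunks in a plain loop, so the call stack
// never grows no matter how large n is.
function trampoline(result) {
  while (typeof result === "function") result = result();
  return result;
}

trampoline(fibStep(10)); // 55 -- same answer as the iterative version
```

Each "recursive" step now hands back a thunk instead of a new stack frame, and the loop in `trampoline` does the unwinding, so stack depth stays constant on any engine.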
This tail-recursive solution is O(n) in time, and on an engine that optimizes tail calls it is O(1) in space; without that optimization, the call stack still grows linearly.
If you get the nth Fibonacci number question in your interview, a conversation about improving the solution’s time and space complexity will likely be the next topic. Understanding these solutions demonstrates your grasp of Big O and your ability to solve the same problem in different ways.
Practice working through these different solutions and enjoy the beauty of the Fibonacci sequence!