Since the dawn of time (or at least since computer science became a field), hiring teams have been asking interviewees how to write code that generates the Fibonacci sequence. Hiring teams want to know whether an engineer understands the concepts behind dynamic programming and can write algorithms that scale well. They want to separate the wheat from the chaff.
The Fibonacci sequence (for all you pieces of chaff out there) is a series of numbers in which each number (a Fibonacci number) is the sum of the two preceding numbers. A tiny sample of the series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and so on. This sequence shows up all over the place in geometry, and even in nature!
For the purposes of this blog post, let’s say you are applying to a company that makes floor plans for houses shaped like pineapples. To build these houses, the company has to generate a number that determines how many boxes of nails it will take to build the substructure of a building based on its height.
So how do you know if your code is performant? You benchmark it. For our purposes, we will run each solution out to the 10th, 20th, 30th, and 35th positions in the Fibonacci sequence.
We will capture the amount of time it takes to run each method using Ruby’s Benchmark library.
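Benchmark ships with Ruby’s standard library. A minimal harness might look like this (the block below is placeholder work standing in for the Fibonacci methods we’ll write shortly):

```ruby
require 'benchmark'

# Benchmark.measure returns a Tms object with user, system, and
# wall-clock (real) times; the wall-clock time is what we'll watch.
time = Benchmark.measure do
  (1..1_000_000).reduce(:+) # placeholder work
end

puts format('%.4f seconds', time.real)
```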
The Complex Way
The first way we will try to generate this sequence is dirty, complex, and painful. To do it, we will use a method that relies on naive recursion.
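The post’s original listing isn’t reproduced here, so what follows is a sketch of what the naive recursive version might look like (the method name mirrors the post’s `calculateFibAt`), timed with Ruby’s Benchmark library:

```ruby
require 'benchmark'

# Naive recursion: every call spawns two more calls until it bottoms
# out at the base cases, so the work grows exponentially with n.
def calculateFibAt(n)
  return n if n < 2

  calculateFibAt(n - 1) + calculateFibAt(n - 2)
end

# Time the method at the positions we care about.
Benchmark.bm(6) do |bm|
  [10, 20, 30, 35].each do |n|
    bm.report("n=#{n}") { calculateFibAt(n) }
  end
end
```

Watch how the `n=35` row dwarfs the others, even though it is only five positions past `n=30`.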
This works okay for the first 10 or so numbers in the sequence, but if we go higher, the Benchmark numbers start to look gnarly. Seems like our code might be getting too complex if we go much higher than 30…
Why is it so complex?
Man, it takes a while to calculate out to the 35th position.
It has a Big O problem we would like to avoid: O(2^n).
You might be saying to yourself, “Eeep, that boggles my tiny little chaff brain,” and “How will I ever get to be as smart as these computer science folks? Can we figure out how many times this method is being called, so we can show how big of a problem it is?” Don’t worry, we can figure it out; it only took me about 30 years to find the time to sit down and do it. If I can, you can!
The higher the number in the Fibonacci sequence, the more times the method needs to run. Each step higher in the sequence requires the method to run many more times than the last. Because it’s recursive, running Fibonacci to the 5th number in the sequence means you have to run fib(4), which in turn needs to run fib(3) and fib(2)… and so on. To conceptualize O(2^n): each time you increase n by 1, the runtime roughly doubles.
So how much of a problem is O(2^n) going to cause our application? How do I get the precise number of times calculateFibAt needs to run to reach a given position in the Fibonacci sequence? I can add a counter to tabulate the number of times the method gets called!
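One way to count the calls is a crude instrumentation sketch like the one below: a global counter incremented on every invocation. (The exact totals depend on how the method treats its base cases, so they may differ slightly from one implementation to another.)

```ruby
# Global call counter, reset before each measurement.
$fib_calls = 0

# Same naive recursion as before, but counting every invocation.
def countedFibAt(n)
  $fib_calls += 1
  return n if n < 2

  countedFibAt(n - 1) + countedFibAt(n - 2)
end

[3, 10, 20].each do |n|
  $fib_calls = 0
  countedFibAt(n)
  puts "position #{n}: #{$fib_calls} calls"
end
```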
Getting to the 3rd place in the Fibonacci sequence takes 5 iterations of the method; getting to the 10th takes 453 iterations; getting to the 32nd would take … 18,454,894…. ooh snap, this isn’t scaling well…
If this were code in production you would have some ‘splanin’ to do…
Well, let’s say someone did push the code above to production a year ago. You used to only build pineapples up to 50 feet, which required 32 steps in the sequence. Now your boss has just come to you. He is upset because everything was working fine until some smarty-pants, probably named Chad, put 55 into the form that uses this code, and it took forever for the page to load the final value in the sequence.
How might we optimize this code so that it’s more performant?
We need to sort out this Big O problem and make it O(n). The key insight: accessing a value from an array by its index is always O(1). You remember that blog post you read on dynamic programming. This is possible if we can break the problem down into subproblems and then store their answers, exploiting the problem’s optimal substructure.
To do this, we will want to store the values we calculate… We need to remember the solution to each subproblem (Fibonacci(1), Fibonacci(2), Fibonacci(3), Fibonacci(4), Fibonacci(5), etc.) so we don’t have to calculate it again and again recursively.
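The post’s listing for this class isn’t shown, so here is a sketch of a memoized version along the lines it describes, using the post’s `FibFaster#fastercalculateFibAt` naming:

```ruby
# Top-down dynamic programming: @memo remembers every subproblem's
# answer, so each position is computed exactly once — O(n) overall.
class FibFaster
  def initialize
    @memo = [0, 1] # seed the first two Fibonacci numbers
  end

  def fastercalculateFibAt(n)
    @memo[n] ||= fastercalculateFibAt(n - 1) + fastercalculateFibAt(n - 2)
  end
end

puts FibFaster.new.fastercalculateFibAt(35) # => 9227465
```

The `||=` does the heavy lifting: if `@memo[n]` is already filled in, it is returned immediately; otherwise it is computed once and stored.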
Woah, sweet memory store in FibFaster#fastercalculateFibAt. Now we don’t have to recursively recalculate each number in the sequence; we can store it once and move on!
Look at how much quicker those Benchmark numbers are, especially as we get further up the sequence.
You might be thinking, “Hey, wait a minute, you have a loop variable in there too!” And yes, that’s true, but with Big O you’re more concerned with the overall nature of the algorithm. In this case it’s simply O(n).
Your boss is happy that you sorted this out, but he’s always looking for ways to save money by minimizing the company’s AWS usage.
You might also be thinking: why do we store all the numbers in the sequence when we only want to show the user the last one?! Maybe we can optimize this some more…
The Greedy Way
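Again, the post’s listing isn’t reproduced here, so this is a sketch of the idea (the method name `greedyFibAt` is my own): keep only the two most recent values instead of the whole memo array, so memory drops to O(1) while time stays O(n).

```ruby
# Bottom-up with constant space: only the two most recent values are
# kept, discarding everything we no longer need as we walk forward.
def greedyFibAt(n)
  return n if n < 2

  prev2, prev1 = 0, 1
  (2..n).each { prev2, prev1 = prev1, prev2 + prev1 }
  prev1
end

puts greedyFibAt(35) # => 9227465
```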
Woah, it’s even faster now, though the difference doesn’t change very much at the upper end of the sequence. Looks like you could save real computation resources by greedily grabbing only the number you are looking for.
Dynamic programming amounts to breaking down an optimization problem into simpler sub-problems, and storing the solution to each sub-problem so that each sub-problem is only solved once. Now we can turn our attention to other problems, like shaving yaks, using what we just learned about Dynamic Programming.
Good job, we have separated the wheat from the chaff. How does it feel to be a golden kernel of wheat? Now you can grow into the solid software engineer you’ve always wanted to be.
The book ‘The Imposter’s Handbook’ by Rob Conery has also been a great help in understanding computer science fundamentals including unpacking concepts like “Big O” and “Dynamic Programming”.
Here is a great article that focuses on learning Dynamic Programming not by looking at the outcomes, or explaining the algorithm, but rather by using practical steps to find the algorithm. You can read it here: Demystifying Dynamic Programming: How to construct & code dynamic programming algorithms.