As software engineers, our aim is to develop efficient programs to solve given problems. Along the way, we come across many programming paradigms and idioms. A few of these concepts look intimidating at first sight, but once we dig into them we realize they are not difficult to understand. One such concept is Dynamic Programming. Simply put, Dynamic Programming (DP) is an optimization approach to improve an algorithm or program.
What is Dynamic Programming?
Dynamic Programming (commonly referred to as DP) is an algorithmic technique for solving a problem by recursively breaking it down into simpler subproblems and using the fact that the optimal solution to the overall problem depends upon the optimal solutions to its individual subproblems.
Take Fibonacci number calculation using the plain recursive method: it has exponential time complexity. From image 1 we can clearly see that many calls to fib() are redundant and executed multiple times; for example, fib(5) computes fib(3) twice and fib(2) three times.
// naive recursion
// time complexity : O(2^n)
int fib(int n)
{
    if(n < 2)
        return n;
    else
        return fib(n - 2) + fib(n - 1);
}
In short, Dynamic Programming is mainly an optimization over plain recursion. The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization reduces the time complexity from exponential to polynomial.
Dynamic Programming has two approaches to solving a problem.
Top-down with Memoization
In this approach, we try to solve the bigger problem by recursively finding the solutions to smaller sub-problems. Whenever we solve a sub-problem, we cache its result so that we don’t end up solving it again if it is needed multiple times; instead, we simply return the saved result. This technique of storing the results of already solved subproblems is called Memoization.
Let's try to understand the top-down approach with the help of a simple example.
// top-down approach with memoization
#include<iostream>
#include<map> // for cache
using std::cout;
using std::endl;
using std::map;

// time complexity : O(n)
long long fibonacci_dynamic(int n)
{
    // static storage so the cache survives across calls
    // instead of being reinitialized every time
    static map<int, long long> cache;
    // long long is used because fib(47) already overflows a 32-bit int
    auto it = cache.find(n);
    if(it != cache.end())
        return it->second;
    if(n < 2)
        return n;
    cache[n] = fibonacci_dynamic(n - 2) + fibonacci_dynamic(n - 1);
    return cache[n];
}
int main()
{
    int n = 50;
    for(int i = 0; i < n; i++)
        cout << "Dynamic Fibonacci of " << i << " is "
             << fibonacci_dynamic(i) << endl;
    return 0;
}
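Because the cache is declared static, it is built up once and shared across all the calls issued by the loop in main: once fibonacci_dynamic(i) has been computed, any later call that needs that value returns it straight from the map, so printing the first n Fibonacci numbers performs only on the order of n distinct computations instead of redoing the exponential recursion for every i.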
Bottom-up with Tabulation
Tabulation is the opposite of the top-down approach and avoids recursion. In this approach, we solve the problem “bottom-up” (i.e. by solving all the related sub-problems first). This is typically done by filling up an n-dimensional table. Based on the results in the table, the solution to the top/original problem is then computed.
Let's try to understand the bottom-up approach with the help of a simple example.
// bottom-up approach using tabulation
#include<iostream>
#include<vector>
using std::cout;
using std::endl;
using std::vector;

// time complexity : O(n)
long long fibonacci_dynamic(int n)
{
    // Table to store Fibonacci numbers; one extra slot so that
    // fib[1] can be written even when n = 0
    vector<long long> fib(n + 2);
    // 0th and 1st numbers of the series are 0 and 1
    fib[0] = 0;
    fib[1] = 1;
    for(int i = 2; i <= n; i++)
    {
        // Add the previous two numbers in the series and store the result
        fib[i] = fib[i - 1] + fib[i - 2];
    }
    return fib[n];
}
int main()
{
    int n = 50;
    for(int i = 0; i < n; i++)
        cout << "Dynamic Fibonacci of " << i << " is "
             << fibonacci_dynamic(i) << endl;
    return 0;
}