We've frequently been able to speed up a divide-and-conquer algorithm by memoizing it (adding a table of known results, and not bothering to compute a particular sub-problem if its answer is already in the table).
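For concreteness, here is what memoization can look like in Python: a minimal sketch using functools.lru_cache as the table of known results (an explicit dictionary works just as well).

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # The naive recursion, but the cache ensures each sub-problem
        # is computed at most once.
        return n if n < 2 else fib(n - 1) + fib(n - 2)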
Draw the recursion tree for the naive, recursive Fibonacci algorithm on input 6.
Draw the recursion tree for the Merge-Sort algorithm on an input list of length 16.
Why does memoization help the Fibonacci algorithm, but not the Merge-Sort algorithm?
Consider the problem of making change using an arbitrary set of coin values (not necessarily {1, 5, 10, 25}, or anything reasonable like that). You may, however, assume that the smallest coin value is 1, so a solution exists for any non-negative integer amount.
Give an algorithm (in pseudocode) which, for any specified number of cents, makes that amount of change with as few coins as possible. (1 point)
Implement, test, and debug this algorithm in a real programming language. (Be sure to test it on some examples for which the greedy algorithm doesn't work!)
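For reference, here is a sketch of one standard bottom-up dynamic-programming solution in Python (the name min_coins and the table layout are my own choices, not the only way to do it):

    def min_coins(coins, amount):
        """Return a smallest list of coins summing to `amount`.

        Assumes 1 is among the coin values, so a solution always exists.
        """
        INF = float("inf")
        best = [0] + [INF] * amount   # best[a] = fewest coins making amount a
        choice = [0] * (amount + 1)   # choice[a] = one coin in an optimal solution for a
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and best[a - c] + 1 < best[a]:
                    best[a] = best[a - c] + 1
                    choice[a] = c
        # Walk the recorded choices back down to recover the actual coins.
        result = []
        while amount > 0:
            result.append(choice[amount])
            amount -= choice[amount]
        return result

    # A case where greedy fails: with coins {1, 3, 4}, greedy makes 6 as
    # 4 + 1 + 1 (three coins), but the optimum is 3 + 3 (two coins).
    print(min_coins([1, 3, 4], 6))   # -> [3, 3]

This runs in O(amount x |coins|) time, which is the payoff of tabulating sub-problems instead of recursing naively.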
Extra credit: give an algorithm which solves the same problem, but with a limited number of each coin available in the cash register. (For example, perhaps there are only 5 pennies, 6 coins worth 7 cents each, 2 coins worth 11 cents each, and 5 coins worth 18 cents each.) Obviously, if the amount of change to be made is more than the total value of the coins, the answer is "impossible". There may also be other problem instances that are impossible. Your algorithm should determine whether the problem is solvable with the given coins, and if so, tell what coins to use to make change with the smallest possible number of coins.
More extra credit: Implement, test, and debug this algorithm in a real programming language. As usual, the part of the program that does the real work should be easily callable from other parts of the program.
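One way to handle the limited supply (a sketch; the name min_coins_limited and the encoding of the register as a value-to-count dictionary are my own conventions) is to process one denomination at a time, so each coin's limit can be enforced while filling the table:

    def min_coins_limited(register, amount):
        """Smallest multiset of coins making `amount`, respecting supply.

        `register` maps coin value -> number of that coin available.
        Returns a dict of coin counts, or None if it cannot be done.
        """
        # best[a] = (coin count, {value: count}) for amount a, or None,
        # using only the denominations processed so far.
        best = [None] * (amount + 1)
        best[0] = (0, {})
        for coin, limit in register.items():
            new_best = list(best)   # the k = 0 case: skip this coin entirely
            for a in range(1, amount + 1):
                for k in range(1, limit + 1):   # use k copies of this coin
                    if k * coin > a:
                        break
                    prev = best[a - k * coin]   # old table: no double counting
                    if prev is not None and (new_best[a] is None
                                             or prev[0] + k < new_best[a][0]):
                        counts = dict(prev[1])
                        counts[coin] = k
                        new_best[a] = (prev[0] + k, counts)
            best = new_best
        return None if best[amount] is None else best[amount][1]

    register = {1: 5, 7: 6, 11: 2, 18: 5}    # the register from the problem
    print(min_coins_limited(register, 29))   # -> {11: 1, 18: 1}
    print(min_coins_limited(register, 200))  # -> None (register holds only 159 cents)

Because prev always comes from the table built before the current denomination was considered, no coin is used more times than the register allows.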