Growth Rates
A concept many students find difficult to grasp is that of
function growth rates, and in particular why we allow ourselves
to "ignore constant factors" and "ignore low-order
terms".  Here's a copy of the table from p. 15 of Steven Skiena's
*The Algorithm Design Manual*,
which lists the running times of algorithms with various growth rates,
assuming each basic operation takes one nanosecond (a billionth of a second).
| n | f(n) = lg n | f(n) = n | f(n) = n lg n | f(n) = n^2 | f(n) = 2^n | f(n) = n! |
|---|---|---|---|---|---|---|
| 10 | 0.003 μs | 0.01 μs | 0.033 μs | 0.1 μs | 1 μs | 3.63 ms |
| 20 | 0.004 μs | 0.02 μs | 0.086 μs | 0.4 μs | 1 ms | 77.1 years |
| 30 | 0.005 μs | 0.03 μs | 0.147 μs | 0.9 μs | 1 sec | 8.4 × 10^15 years |
| 40 | 0.005 μs | 0.04 μs | 0.213 μs | 1.6 μs | 18.3 min | never mind |
| 50 | 0.006 μs | 0.05 μs | 0.282 μs | 2.5 μs | 13 days | |
| 100 | 0.007 μs | 0.1 μs | 0.644 μs | 10 μs | 4 × 10^13 years | |
| 1000 | 0.010 μs | 1.0 μs | 9.966 μs | 1 ms | never mind | |
| 10000 | 0.013 μs | 10 μs | 130 μs | 100 ms | | |
| 100000 | 0.017 μs | 0.10 ms | 1.67 ms | 10 sec | | |
| 1000000 | 0.020 μs | 1 ms | 19.93 ms | 16.7 min | | |
| 10000000 | 0.023 μs | 0.01 sec | 0.23 sec | 1.16 days | | |
| 100000000 | 0.027 μs | 0.10 sec | 2.66 sec | 115.7 days | | |
| 1000000000 | 0.030 μs | 1 sec | 29.90 sec | 31.7 years | | |
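To see where these numbers come from, here's a small Python sketch of the same calculation: evaluate each growth function at one operation per nanosecond and print the result in rough human-readable units. This is my own illustration, not code from the book, and the `GROWTH` dictionary and `readable` helper names are mine.

```python
import math

# Growth functions: each returns the operation count for input size n,
# which equals the running time in nanoseconds at one operation per nanosecond.
GROWTH = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: float(n),
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: float(n) ** 2,
    "2^n":    lambda n: 2.0 ** n,           # overflows for large n (the table's "never mind")
    "n!":     lambda n: math.gamma(n + 1),  # n! via the gamma function; overflows quickly
}

def readable(ns):
    """Format a time given in nanoseconds using a rough human-readable unit."""
    units = [("years", 3.156e16), ("days", 8.64e13), ("min", 6e10),
             ("sec", 1e9), ("ms", 1e6), ("us", 1e3), ("ns", 1)]
    for name, scale in units:
        if ns >= scale:
            return f"{ns / scale:.3g} {name}"
    return f"{ns:.3g} ns"

for n in [10, 20, 30, 40, 50, 100, 1000, 10**6, 10**9]:
    cells = []
    for name, f in GROWTH.items():
        try:
            cells.append(f"{name}: {readable(f(n))}")
        except OverflowError:                # astronomically large running time
            cells.append(f"{name}: never mind")
    print(f"n = {n:>10}:  " + "   ".join(cells))
```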
But let's look at it another way.  Suppose we've decided how much time
we have to solve the problem, and we want to know how large a problem we
can solve in that time.  In the following table, the left-hand column 
represents the available time, and the entries in the body of the table
represent the largest n for which we can solve the
problem in that time.
| time | f(n) = lg n | f(n) = n | f(n) = n lg n | f(n) = n^2 | f(n) = 2^n | f(n) = n! |
|---|---|---|---|---|---|---|
| 1 μs | 1.07 × 10^301 | 1000 | 140 | 31 | 10 | 6 |
| 1 ms | c. 10^300000 | 1000000 | 62746 | 1000 | 20 | 9 |
| 1 sec | never mind | 10^9 | c. 4 × 10^7 | 31623 | 30 | 13 |
| 1 min | | 6 × 10^10 | c. 2 × 10^9 | 244949 | 36 | 14 |
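Entries like these can be computed by searching for the largest n whose operation count still fits in the time budget. Here's a sketch of that search, again my own illustration: the `GROWTH`, `cost`, `largest_n`, and `CAP` names are mine, and `CAP` is an arbitrary cutoff so the lg n column doesn't take forever to search (it prints "never mind" where the table estimates c. 10^300000). Because the table rounds (treating 2^10 as roughly 10^3, for example), a few of the exact cutoffs found here come out one smaller than the table's entries.

```python
import math

# Same growth functions as in the earlier sketch: operation count for input size n.
GROWTH = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: float(n),
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: float(n) ** 2,
    "2^n":    lambda n: 2.0 ** n,
    "n!":     lambda n: math.gamma(n + 1),
}

CAP = 10 ** 302   # arbitrary "never mind" threshold, so the lg n search stays cheap

def cost(f, n):
    """Operation count, treating floating-point overflow as effectively infinite."""
    try:
        return f(n)
    except OverflowError:
        return math.inf

def largest_n(f, budget_ns):
    """Largest n with f(n) <= budget_ns, found by doubling and then binary search.
    Assumes f is nondecreasing; returns CAP if the answer is at least that large."""
    hi = 1
    while hi < CAP and cost(f, hi) <= budget_ns:
        hi *= 2
    if hi >= CAP:
        return CAP
    lo = hi // 2                      # invariant: f(lo) <= budget_ns < f(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cost(f, mid) <= budget_ns:
            lo = mid
        else:
            hi = mid
    return lo

def show(n):
    if n >= CAP:
        return "never mind"
    return f"{n:.4g}" if n >= 10 ** 7 else str(n)

for label, budget in [("1 us", 1e3), ("1 ms", 1e6), ("1 sec", 1e9), ("1 min", 6e10)]:
    row = "  ".join(f"{name}: {show(largest_n(f, budget))}" for name, f in GROWTH.items())
    print(f"{label:>6}  {row}")
```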
Now let's try that same table again, but we'll divide each function by
100, equivalent to using a 100-times-faster processor.
| time | f(n) = (lg n)/100 | f(n) = n/100 | f(n) = (n lg n)/100 | f(n) = n^2/100 | f(n) = 2^n/100 | f(n) = n!/100 |
|---|---|---|---|---|---|---|
| 1 μs | c. 10^30000 | 100000 | 7741 | 316 | 16 | 8 |
| 1 ms | c. 10^30000000 | 10^8 | 4523071 | 10000 | 26 | 11 |
| 1 sec | never mind | 10^11 | c. 3 × 10^9 | 316227 | 36 | 14 |
| 1 min | | 6 × 10^12 | c. 1.6 × 10^11 | 2449489 | 42 | 15 |
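Up to the cap on the lg n column, these are the same numbers the search sketch above produces if you hand it a time budget 100 times larger, since f(n)/100 ≤ budget is the same condition as f(n) ≤ 100 × budget.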
You should take two lessons from this comparison.
- Improving the algorithm makes a lot more difference than improving the processor.
- If the algorithm is asymptotically inefficient, processor speed won't help you much;
if the algorithm is asymptotically efficient, improving processor speed does help
(the arithmetic sketch below makes this concrete).
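A little arithmetic (my own illustration, under the same one-operation-per-nanosecond assumption) shows why: a 100-times-faster processor multiplies the time budget by 100, and what that buys depends entirely on the growth rate.

```python
import math

speedup = 100   # a processor 100 times as fast multiplies the time budget by 100

# f(n) = n: the largest solvable n is proportional to the budget,
# so it grows by the full factor of 100.
print("f(n) = n:    solvable n grows by a factor of", speedup)

# f(n) = n^2: the largest solvable n is sqrt(budget),
# so it grows only by sqrt(100) = 10.
print("f(n) = n^2:  solvable n grows by a factor of", math.isqrt(speedup))

# f(n) = 2^n: the largest solvable n is lg(budget),
# so it grows by just lg(100) ~ 6.6 additional items, however large it already was.
print("f(n) = 2^n:  solvable n grows by only about", round(math.log2(speedup), 1), "items")
```

That matches the tables above: the 2^n column gains only about six items from the faster machine, the n^2 column gains a factor of 10, and the n column gains the full factor of 100.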
Last modified: Wed Jan 3 15:12:32 EST 2001
Stephen Bloch / sbloch@adelphi.edu