n | f(n)=lg n | f(n)=n | f(n)=n lg n | f(n)=n^{2} | f(n)=2^{n} | f(n)=n! |
---|---|---|---|---|---|---|
10 | 0.003 µs | 0.01 µs | 0.033 µs | 0.1 µs | 1 µs | 3.63 ms |
20 | 0.004 µs | 0.02 µs | 0.086 µs | 0.4 µs | 1 ms | 77.1 years |
30 | 0.005 µs | 0.03 µs | 0.147 µs | 0.9 µs | 1 sec | 8.4 * 10^{15} years |
40 | 0.005 µs | 0.04 µs | 0.213 µs | 1.6 µs | 18.3 min | never mind |
50 | 0.006 µs | 0.05 µs | 0.282 µs | 2.5 µs | 13 days | |
100 | 0.007 µs | 0.1 µs | 0.644 µs | 10 µs | 4 * 10^{13} years | |
1000 | 0.010 µs | 1.0 µs | 9.966 µs | 1 ms | never mind | |
10000 | 0.013 µs | 10 µs | 130 µs | 100 ms | | |
100000 | 0.017 µs | 0.10 ms | 1.67 ms | 10 sec | | |
1000000 | 0.020 µs | 1 ms | 19.93 ms | 16.7 min | | |
10000000 | 0.023 µs | 0.01 sec | 0.23 sec | 1.16 days | | |
100000000 | 0.027 µs | 0.10 sec | 2.66 sec | 115.7 days | | |
1000000000 | 0.030 µs | 1 sec | 29.90 sec | 31.7 years | | |
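The entries above can be reproduced with a short script. Here's a sketch in Python (the helper name is my own), assuming one primitive operation per nanosecond, i.e. 10^9 operations per second:

```python
import math

NS_PER_OP = 1  # assumed cost: one primitive operation per nanosecond

def time_ns(f, n):
    """Nanoseconds to perform f(n) primitive operations."""
    return f(n) * NS_PER_OP

# Spot-check a few entries from the table:
print(time_ns(lambda n: n * math.log2(n), 1000))  # ~9965.8 ns ≈ 9.966 µs
print(time_ns(lambda n: n ** 2, 100_000))         # 10^10 ns = 10 sec
print(time_ns(lambda n: 2 ** n, 30))              # ~1.07 * 10^9 ns ≈ 1 sec
print(time_ns(math.factorial, 10))                # 3,628,800 ns ≈ 3.63 ms
```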

But let's look at it another way. Suppose we've decided how much time
we have to solve the problem, and we want to know how large a problem we
can solve in that time. In the following table, the left-hand column
represents the available time, and the entries in the body of the table
represent the largest **n** for which we can solve the
problem in that time.

time | f(n)=lg n | f(n)=n | f(n)=n lg n | f(n)=n^{2} | f(n)=2^{n} | f(n)=n! |
---|---|---|---|---|---|---|
1 µs | 1.07 * 10^{301} | 1000 | 140 | 31 | 10 | 6 |
1 ms | c. 10^{300000} | 1000000 | 62746 | 1000 | 20 | 9 |
1 sec | never mind | 10^{9} | c. 4 * 10^{7} | 31623 | 30 | 13 |
1 min | never mind | 6 * 10^{10} | c. 2 * 10^{9} | 244949 | 36 | 14 |
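This inversion can be computed directly: given a time budget, double n until f(n) overshoots the budget, then binary-search the last doubling for the exact cutoff. A sketch in Python (names are my own), assuming the same 10^9 operations per second:

```python
import math

OPS_PER_SEC = 10 ** 9  # assumed processor speed

def largest_n(f, seconds):
    """Largest n such that f(n) operations fit in the time budget."""
    budget = seconds * OPS_PER_SEC
    n = 1
    while f(2 * n) <= budget:   # double until we overshoot...
        n *= 2
    lo, hi = n, 2 * n           # ...then binary-search the last doubling
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(largest_n(lambda n: n ** 2, 1e-6))           # 31
print(largest_n(lambda n: n * math.log2(n), 1))    # roughly 4 * 10^7, as in the table
print(largest_n(lambda n: 2 ** n, 1))              # 29 (the table rounds to 30: 2^{30} ≈ 1.07 * 10^9)
```

Entries like the table's 30 for one second of 2^{n} are rounded; the exact cutoff at 10^9 operations is 29, since 2^{30} is just over a billion.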

Now let's try that same table again, but with each function divided by 100, which is equivalent to using a 100-times-faster processor (and memory, and bus, and so on).

time | f(n)=(lg n)/100 | f(n)=n/100 | f(n)=(n lg n)/100 | f(n)=n^{2}/100 | f(n)=2^{n}/100 | f(n)=n!/100 |
---|---|---|---|---|---|---|
1 µs | c. 10^{30000} | 100000 | 7741 | 316 | 16 | 8 |
1 ms | c. 10^{30000000} | 10^{8} | 4523071 | 10000 | 26 | 11 |
1 sec | never mind | 10^{11} | c. 3 * 10^{9} | 316227 | 36 | 14 |
1 min | never mind | 6 * 10^{12} | c. 1.6 * 10^{11} | 2449489 | 42 | 15 |

- Improving the algorithm (changing the growth rate of the function) makes a lot more difference than improving the processor (multiplying the function by a constant).
- If the algorithm is asymptotically inefficient, processor speed won't help you much; if the algorithm is asymptotically efficient, improving processor speed *does* help.
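To make that comparison concrete, here's a small sketch in Python (names are my own, and it assumes 10^9 operations per second): for a one-second budget, a 100-times-faster processor barely moves the cutoff for an exponential algorithm, while switching from 2^{n} to n^{2} on the *same* hardware moves it enormously.

```python
def largest_n(f, budget):
    """Largest n with f(n) <= budget, by simple search (fine at this scale)."""
    n = 1
    while f(n + 1) <= budget:
        n += 1
    return n

OPS = 10 ** 9                # assumed: 10^9 operations per second
slow, fast = OPS, 100 * OPS  # one-second budgets on the two machines

# Faster hardware, same exponential algorithm: a tiny gain.
print(largest_n(lambda n: 2 ** n, slow))  # 29
print(largest_n(lambda n: 2 ** n, fast))  # 36

# Same hardware, better algorithm: a huge gain.
print(largest_n(lambda n: n ** 2, slow))  # 31622 (the table rounds sqrt(10^9) to 31623)
```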

Last modified: Wed Dec 27 22:37:27 EST 2006 Stephen Bloch / sbloch@adelphi.edu