*Notes by Gene Cooperman, © 2009
(may be freely copied as long as this copyright notice remains)*

Dynamic Programming is important for problems that normally would
have an easy solution by recursion, but where the *natural* recursive
algorithm has exponential complexity, O(e^{n}).

Several problems from the text are summarized here. See the text, if details are unclear.

If you would like to look at additional solved dynamic programming problems, see one or more of:

- this site from MIT, along with a video lecture and lecture notes from MIT OpenCourseWare.
- this site from McGill University
- this site from the University of Texas at Dallas
- this extended example from the topcoder.com web site
- this extended example of DNA alignment in a course on Molecular Bioinformatics from Uppsala University in Sweden, along with this corresponding advanced example
- this example from a course on Scientific Computation at ETHZ (Federal Institute of Technology at Zürich) in Switzerland
- this extended introduction as pdf from Middle Tennessee State University
- this more advanced site from Carnegie Mellon University
- this animated example from Carleton University
- this pdf from Cambridge University in England

Dynamic Programming solutions require two key features that may not be obvious. Once those two key features have been specified, the remaining issues of determining complexity, pseudo-code, and implementation are usually easy. The two features should remind you of recursion:

- Specify the subproblems (the recursive cases).
  *(See p. 165 of the text for common subproblems.)*
- Specify how to combine the answers from the subproblems (from the recursive cases).

Dynamic programming can be thought of as recursion, with the addition of a lookup table. The lookup table is an n-dimensional array.

As an example, suppose we wish to find the longest subsequence
satisfying some property (palindrome, longest increasing subsequence,
longest increasing subarray (longest increasing contiguous subsequence),
longest subsequence of alternating letters "ababab...", etc.). Define
`len(s[i..j])` as *the length of the longest subsequence satisfying
the property within the subarray s[i..j]*. So, a typical pseudo-code
might look like
the following. (Technically, this is the memoization-based variation of
dynamic programming (see p. 169 of the text), as opposed to the
iteration-based version emphasized by the textbook.):

```
SolveLen(s[i..j])
  Look up len(s[i..j]) in Table[i,j].
  If Table[i,j] has the answer, then
    Return Table[i,j].
  Else if i = j, then
    Set answer directly via the base case (a single-element subarray).
  Else
    SolveLen(s[i..j-1]) and compute best answer for len(s[i..j]) using it.
    SolveLen(s[i+1..j]) and compute best answer for len(s[i..j]) using it.
    SolveLen(s[i+1..j-1]) and compute best answer for len(s[i..j]) using it.
    Set answer to be the best of the above three answers.
  Set Table[i,j] = answer.
  Return answer.
```

Since the `Table[]` has at most n^{2} entries (`1 ≤ i ≤ n` and
`1 ≤ j ≤ n`), we will finish in polynomial time. If we did not
use the lookup table, we might need to solve the same subproblem
many times, and the running time might become exponential.
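As a concrete instance of the pseudo-code, here is a memoized Python sketch for one such property, the longest palindromic subsequence. (The function names are mine; `functools.lru_cache` plays the role of `Table[i,j]`.)

```python
from functools import lru_cache

def longest_palindromic_subseq(s):
    """Length of the longest palindromic subsequence of s,
    via memoized recursion on subarrays s[i..j]."""
    @lru_cache(maxsize=None)  # the lookup table Table[i,j]
    def solve(i, j):
        if i > j:
            return 0
        if i == j:                # base case: a single character
            return 1
        if s[i] == s[j]:          # matching ends extend a palindrome
            return 2 + solve(i + 1, j - 1)
        # otherwise, the best of dropping either end
        return max(solve(i + 1, j), solve(i, j - 1))
    return solve(0, len(s) - 1)
```

For example, `longest_palindromic_subseq("bbbab")` returns 4 (the subsequence "bbbb").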

*I should specify how to implement a solution using pointers in the dynamic
programming array (and maybe also describe memoization) in the next version.
For now, the text and other web pages describe this.*

**Description:** F_{0} = 0, F_{1} = 1,
F_{i} = F_{i-1} + F_{i-2} for i > 1.

*Problem:* Find F_{n}

*Subproblem:* Find F_{n-1} and Find F_{n-2}

*Combining Subproblems:* Given F_{n-1} and F_{n-2},
return F_{n-1} + F_{n-2}
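The subproblems above can also be combined bottom-up with a one-dimensional lookup table, which is the iteration-based style the textbook emphasizes. A minimal Python sketch (names are mine):

```python
def fib(n):
    """F_0 = 0, F_1 = 1, F_i = F_{i-1} + F_{i-2}; bottom-up table."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[i] will hold F_i
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Without the table (plain recursion), the same subproblem F_{i} would be recomputed exponentially many times.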

**Description:** Given an array A[1..n], find the longest increasing
subsequence, A[x_{1}], A[x_{2}], …, A[x_{k}]
such that if i < j, then A[x_{i}] < A[x_{j}]

*Problem:* For each i, find the longest increasing subsequence of A[i..n]
that includes A[i]. Then return the longest of those subsequences:
max_{1 ≤ i ≤ n} lis(A[i..n])

*Subproblem:* Find the longest increasing subsequence A[j..n]
that includes A[j]. Do this for each j > i.

*Combining Subproblems:* For each j such that A[i] < A[j], note that
we can create an increasing subsequence that is length one
longer by prefixing A[j..n] with A[i]. Let `lis` be the length
of the longest increasing subsequence. So,

`lis(A[i..n]) = max_{j : i < j, A[i] < A[j]} lis(A[j..n]) + 1`

If there is no j such that i < j and A[i] < A[j],
then set `lis(A[i..n]) = 1`.
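The recursion for `lis` can be filled in iteratively from right to left, giving an O(n^{2}) table-based solution. A Python sketch (names are mine):

```python
def lis_length(A):
    """Length of the longest increasing subsequence of A, O(n^2) DP.
    lis[i] = length of the longest increasing subsequence
    of A[i..n-1] that includes A[i]."""
    n = len(A)
    if n == 0:
        return 0
    lis = [1] * n                       # base case: A[i] alone
    for i in range(n - 2, -1, -1):      # fill the table right to left
        for j in range(i + 1, n):
            if A[i] < A[j]:             # A[i] can prefix the subsequence at j
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis)                     # best over all starting positions i
```

For example, `lis_length([10, 9, 2, 5, 3, 7, 101, 18])` returns 4 (e.g., the subsequence 2, 3, 7, 18).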

Edit distances are used heavily in the problem of
sequence alignment. They are used in genomics (DNA sequences as
words in the 4 nucleotides (halves of base pairs), given by
letters, A, C, G, T)
and proteomics (protein sequences as
words in the 20 amino acids given by letters:

A, R, N, D, C, E, Q, G, H, I, L, K, M, F, P, S, T, W, Y, V).

*Problem:* Find the length of the longest (not necessarily contiguous)
subsequences of A[i..n] and of B[j..n]
that *include* A[i] and B[j], such that the two subsequences are equal.
(Define the length of these two subsequences as `E(i,j)`.)

If A[i] and B[j] match, then we should use it. There's no advantage to doing a deletion. Then continue to look for a match between A[i+1..n] and B[j+1..n]. If A[i] and B[j] do not match, then we are forced to either delete A[i] and look for a match of A[i+1..n] to B[j..n] or to delete B[j] and look for a match of A[i..n] to B[j+1..n].

*Subproblem:*

- A[i] ≠ B[j], delete A[i] from proposed match: E(i+1,j)
- A[i] ≠ B[j], delete B[j] from proposed match: E(i,j+1)
- A[i] = B[j] : Remove A[i] and B[j] and look for further match: E(i+1,j+1)

*Combining Subproblems:*

- If A[i] = B[j], then return 1 + E(i+1,j+1).
- If A[i] ≠ B[j], then return max( E(i+1,j), E(i,j+1) ).

*Final Answer:*

Return max_{1 ≤ i ≤ n, 1 ≤ j ≤ n} E(i,j).

(We defined E(i,j) such that the longest common subsequence must start at
A[i] and B[j]. Since we don't know which (i,j) pair it starts at,
we just take the maximum of all of them.)
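The combining rules above can be implemented directly with memoization. In the Python sketch below (names are mine), `E(i,j)` is taken with the boundary case E = 0 when either string is exhausted; with that convention, E(1,1) already equals the maximum over all starting pairs, so no final max is needed:

```python
from functools import lru_cache

def lcs_length(A, B):
    """Length of the longest common (not necessarily contiguous)
    subsequence of A and B, via memoized recursion E(i,j)."""
    @lru_cache(maxsize=None)    # the lookup table for E(i,j)
    def E(i, j):
        if i == len(A) or j == len(B):
            return 0                      # nothing left to match
        if A[i] == B[j]:                  # match: use it
            return 1 + E(i + 1, j + 1)
        # mismatch: delete A[i] or delete B[j], keep the better result
        return max(E(i + 1, j), E(i, j + 1))
    return E(0, 0)
```

For example, `lcs_length("ABCBDAB", "BDCABA")` returns 4 (e.g., the common subsequence "BCBA").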

See the Wikipedia article on dynamic programming, and other web pages on dynamic programming, for more examples and a rich variety of applications.