Again we are given an array of integers. We want to compute the (or rather: a) longest increasing subsequence. A subsequence of a sequence $\vec{a} = [a_1, a_2, \dots, a_n]$ is a sequence of the form $\vec{b} = [a_{j_1}, a_{j_2}, \dots, a_{j_k}]$ with $1 \leq j_1 \lt j_2 \lt \dots \lt j_k \leq n$. More intuitively: a sequence that can be formed by deleting entries in $\vec{a}$. The subsequence $\vec{b}$ is increasing if
\begin{align*} a_{j_1} \lt a_{j_2} \lt a_{j_3} \lt \dots \lt a_{j_k} \ . \end{align*}We write ISS for increasing subsequence and LISS for longest increasing subsequence. Here is an example:
So $[8,9,11,12]$ is an ISS (increasing subsequence), albeit not a longest one. That would be this one:
Longest increasing subsequence differs from the previous two problems in that even the most trivial (and inefficient) algorithm is not completely trivial.
Exercise Write test cases as described in Chapter 2.1.
An exponential algorithm
Here is a simple recursive idea: let $A = [a_0, \dots, a_{n-1}]$ be the input array. We set $B = [a_0,\dots,a_{n-2}]$ and $x = a_{n-1}$. There are two cases: (1) there is a LISS of $A$ that does not contain the last element $x$; in this case $\liss(A) = \liss(B)$. (2) every LISS of $A$ contains the last element $x$; then it must be of the form $S + [x]$, where $S$ is an ISS of $B$ all of whose elements (in particular the last one) are strictly less than $x$, and $S$ is a longest such ISS. This suggests the following algorithm:
- Compute $\liss(B)$.
- Compute the longest increasing subsequence $S$ of $B$ whose maximum entry is less than $x$; then append $x$.
- Take the better of the two.
Note that in Point 2, we have to answer a more general question: what is the longest increasing subsequence with a specified upper bound.
Problem Implement the above idea. Specifically, implement a recursive function longest_increasing_subsequence_less_than(array, upper_bound). You can then find the overall longest increasing subsequence by calling it with upper_bound being $+\infty$.
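One possible sketch of this recursion (not the only way to organize it), with math.inf playing the role of $+\infty$; the running time is exponential in the worst case, since both branches may be explored for every element:

```python
import math

def longest_increasing_subsequence_less_than(array, upper_bound):
    """Longest increasing subsequence of `array` using only values < upper_bound."""
    if not array:
        return []
    body, last = array[:-1], array[-1]
    # Case 1: the last element is not used.
    best = longest_increasing_subsequence_less_than(body, upper_bound)
    # Case 2: the last element is used; everything before it must be < last,
    # and last itself must respect the upper bound.
    if last < upper_bound:
        candidate = longest_increasing_subsequence_less_than(body, last) + [last]
        if len(candidate) > len(best):
            best = candidate
    return best

print(longest_increasing_subsequence_less_than([3, 1, 4, 1, 5, 9, 2, 6], math.inf))
# prints [3, 4, 5, 9], one longest increasing subsequence of this example array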
A quadratic algorithm
The previous idea was inefficient because a call to longest_increasing_subsequence_less_than(array, inf) might entail an exponential number of recursive calls. But how many different input arguments (array, upper_bound) will the function encounter? There are at most $n+1$ possibilities for array, since each will be a prefix of the original input array, i.e., will be of the form input_array[0:j]. The variable upper_bound is $+\infty$ at the beginning, but afterwards only values occurring in the input array will be passed as upper_bound; thus, at most $n+1$ values will ever occur, and the total number of combinations (array, upper_bound) is at most $(n+1)^2$. We could simply store the results of function calls in a dictionary and would end up with a (roughly) quadratic algorithm. However, the right way to do things in such a case is to use dynamic programming to search the possibilities systematically.
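As an aside, the dictionary idea could look roughly like this (a sketch that only computes the length; we use the prefix length $j$ as the key rather than the prefix itself):

```python
import math

def liss_length_memoized(a):
    """Length of a longest increasing subsequence of a, via memoized recursion."""
    memo = {}  # keys: (prefix length j, upper bound); at most (n+1)^2 of them

    def rec(j, upper_bound):
        # length of a longest ISS of a[0:j] all of whose elements are < upper_bound
        if j == 0:
            return 0
        if (j, upper_bound) not in memo:
            x = a[j - 1]
            best = rec(j - 1, upper_bound)       # case 1: don't use a[j-1]
            if x < upper_bound:                  # case 2: use a[j-1] if allowed
                best = max(best, rec(j - 1, x) + 1)
            memo[(j, upper_bound)] = best
        return memo[(j, upper_bound)]

    return rec(len(a), math.inf)

print(liss_length_memoized([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 4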
To actually implement the dynamic program, we turn things on their head, in a way. Rather than computing, for each index $j$ and upper bound $\theta$, the longest increasing subsequence of $A[0:j]$ all of whose elements are $\lt \theta$, we do the following: for each index $j$ and desired length $k$, we compute, among all increasing subsequences $\vec{b}$ of $A[0:j]$ that have length $k$, the one whose maximum entry $\max(\vec{b})$ is smallest. That is,
\begin{align*} T^{(j)}_k := \min \{ \max(\vec{b}) \ | \ \vec{b} \textnormal{ is ISS of } A[0:j], \vec{b} \textnormal{ has length } k \} \ . \end{align*}Note that if $A[0:j]$ does not have any ISS of length $k$, then we take a minimum over the empty set and thus $T^{(j)}_k = +\infty$. On the other hand, for $j=0$ the array $A[0:0]$ is the empty array, which has only one increasing subsequence: the empty sequence itself; the maximum of the empty sequence is the maximum over an empty set, thus it is $-\infty$, and therefore $T^{(0)}_0 = -\infty$. In fact, $T^{(j)}_0 = -\infty$ for every $j$, since the empty sequence is an ISS of every prefix. Another insight: it holds that $T^{(j)}_0 \leq T^{(j)}_1 \leq T^{(j)}_2 \leq \dots$. This is because an ISS of length $k+1$ contains an ISS of length $k$ whose maximum is at most as large, so the minimum defining $T^{(j)}_{k}$ is at most the minimum defining $T^{(j)}_{k+1}$.
How to compute the values of $T^{(j+1)}_{*}$ when given $T^{(j)}_{*}$? Let $x := A[j]$. Let $k$ be such that
\begin{align} T^{(j)}_{k-1} \lt x \leq T^{(j)}_{k} \ . \label{correct-position-of-x} \end{align}Since $T^{(j)}_0 = -\infty$ and $T^{(j)}_n = +\infty$ for all $j \leq n-1$, and the $T^{(j)}_l$ are increasing in $l$, there is exactly one such $k$. To build an ISS of length $k$ in $A[0:j+1]$, we can take an ISS of $A[0:j]$ of length $k-1$ ending in $T^{(j)}_{k-1}$, append $x$, and obtain an ISS of $A[0:j+1]$ of length $k$ ending in $x$. Or we could just take an ISS of length $k$ in $A[0:j]$. Either way, it holds that
\begin{align*} T^{(j+1)}_{k} = \min(x, T^{(j)}_{k}) \ . \end{align*}What about $l \gt k$? Since $T^{(j)}_{l-1} \geq T^{(j)}_{k} \geq x$, $x$ cannot be appended to any ISS of length $l-1$ in $A[0:j]$ to form an ISS of length $l$; the result would not be an increasing sequence anymore. Therefore:
\begin{align*} T^{(j+1)}_{l} = T^{(j)}_{l} \textnormal{ for all $l \geq k+1$} \ . \end{align*}What about $l \leq k-1$? Take an ISS of length $l-1$ in $A[0:j]$. Maybe we can append $x$ to form an ISS of length $l$ ending in $x$; but we already have one ending in $T^{(j)}_{l} \leq T^{(j)}_{k-1} \lt x$, so appending $x$ would just give a worse (i.e., higher-ending) ISS than we already have. The row $j+1$ thus differs from row $j$ in at most one position. To summarize:
\begin{align*} T^{(j+1)}_{k} & = \min(x, T^{(j)}_{k}) \textnormal{ for $k$ such that $T^{(j)}_{k-1} \lt x \leq T^{(j)}_{k}$,}\\ T^{(j+1)}_l & = T^{(j)}_l \textnormal{ for all $l \ne k$.} \end{align*}In the example run below, we don't keep a separate row for every $j$. We just show the values $T^{(j)}_k$ for the current $j$ and cross out numbers that have been replaced by better ones.
Once we have processed all elements, we have arrived at $T^{(n)}_k$, which is the smallest possible value an ISS of length $k$ can end with. The largest $k$ for which this is not $+\infty$ is the length of a LISS.
Problem Implement the idea that we just outlined. Start with a program that computes the length of the longest increasing subsequence, not the sequence itself.
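Here is one way such a program might look (a sketch; the Python list T stores $T^{(j)}_1, \dots, T^{(j)}_n$ for the current $j$, the entry $T^{(j)}_0 = -\infty$ is left implicit, and the position $k$ is found by linear search for now):

```python
import math

def liss_length_quadratic(a):
    """Length of a longest increasing subsequence of a, in O(n^2) time."""
    n = len(a)
    # T[k] = smallest possible last value of an ISS of length k+1 seen so far;
    # +infinity means "no ISS of that length exists yet".
    T = [math.inf] * n
    for x in a:
        # linear search for the unique slot with T[k-1] < x <= T[k]
        k = 0
        while T[k] < x:
            k += 1
        T[k] = x  # equals min(x, T[k]) because x <= T[k] here
    # the length of a LISS is the number of finite entries of T
    return sum(1 for t in T if t < math.inf)

print(liss_length_quadratic([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 4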
Computing the sequence itself
We want our program to output a longest sequence, not just its length. To facilitate judging the correctness, the answer should be unique, so we have to come up with tie-breaking rules: when two increasing subsequences $b_1 \lt b_2 \lt \dots \lt b_k$ and $c_1 \lt c_2 \lt \dots \lt c_k$ have the same length, we prefer the one with smaller values. But what does that even mean? Couldn't it be that one LISS is $[4, 5, 21]$ and the other is $[7, 8, 19]$? In that case, which one has "smaller values"? It turns out that there is always a unique LISS that is minimal in every position:
Theorem Suppose $[b_1, \dots, b_k]$ and $[c_1, \dots, c_k]$ are both LISS of the array $A$. Then
\begin{align*} \min(b_1, c_1), \min(b_2, c_2), \dots, \min(b_k, c_k) \end{align*}is also a LISS of $A$.
Remark: It is pretty easy to show that the new sequence is increasing. In fact, this holds even if $\vec{b}$ and $\vec{c}$ are not necessarily longest ISS of $A$. However, it might be that the new sequence is not a subsequence of $A$! For example, when $A = [2, 3, 1, 4]$ and $\vec{b} = [2,3]$ and $\vec{c} = [1,4]$, then the pointwise minimum would be $[1,3]$, which is not a subsequence of $A$ anymore. To prove that this cannot occur, you must use that $\vec{b}$ and $\vec{c}$ are, in fact, longest increasing subsequences of $A$.
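To spell out the easy part: for every position $i$ we have \begin{align*} \min(b_i, c_i) \leq b_i \lt b_{i+1} \quad \textnormal{and} \quad \min(b_i, c_i) \leq c_i \lt c_{i+1} \ , \end{align*}and therefore $\min(b_i, c_i) \lt \min(b_{i+1}, c_{i+1})$.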
Problem Add bookkeeping to your implementation such that it does not only output the length of the LISS but the LISS with minimal values itself.
```
python quadratic.py < testcases/example-from-lecture-notes.in
6
4 6 7 8 11 12
```
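One possible bookkeeping scheme (a sketch; the variable names are ours, not prescribed): for every entry of $T$, remember at which index of the input array it was achieved, and for every index of the input array, remember the index of its predecessor in the best ISS ending there. The sequence is then recovered by walking the predecessor links backwards from the end.

```python
import math

def longest_increasing_subsequence(a):
    """Return a longest increasing subsequence of a."""
    n = len(a)
    T = [math.inf] * n        # T[k] = smallest last value of an ISS of length k+1
    index_of = [None] * n     # index in a at which T[k] is currently realized
    predecessor = [None] * n  # predecessor[i] = previous index in the best ISS ending at a[i]
    for i, x in enumerate(a):
        k = 0
        while T[k] < x:
            k += 1
        T[k] = x
        index_of[k] = i
        predecessor[i] = index_of[k - 1] if k > 0 else None
    # length of the LISS = number of finite entries of T
    length = sum(1 for t in T if t < math.inf)
    if length == 0:
        return []
    # walk the predecessor links backwards from the end of the best ISS
    result = []
    i = index_of[length - 1]
    while i is not None:
        result.append(a[i])
        i = predecessor[i]
    return result[::-1]

print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 4, 5, 6]
```

On this example the result, [1, 4, 5, 6], is also the pointwise-minimal LISS in the sense of the theorem above; whether your own bookkeeping achieves this in general depends on how you break ties when updating $T$.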
Problem Improve your implementation of the quadratic algorithm: in order to find the $k$ for which (\ref{correct-position-of-x}) holds, don't use linear search; use binary search.
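Python's bisect module performs exactly this search on the (sorted) row $T$: bisect_left returns the first position whose entry is at least $x$, which is precisely the $k$ with $T^{(j)}_{k-1} \lt x \leq T^{(j)}_{k}$. A sketch of the length-only version with binary search:

```python
import bisect
import math

def liss_length_fast(a):
    """Length of a longest increasing subsequence of a, in O(n log n) time."""
    n = len(a)
    T = [math.inf] * n
    for x in a:
        # binary search for the unique slot with T[k-1] < x <= T[k]
        k = bisect.bisect_left(T, x)
        T[k] = x
    return sum(1 for t in T if t < math.inf)

print(liss_length_fast([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 4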