{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem Sheet 9 - Dimension Reduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this exercise, we focus on the two dimension reduction techniques considered in the lecture:\n", "- Principal component regression (PCR) and principal component analysis (PCA) \n", "- Partial least squares (PLS)\n", "\n", "Both techniques are used to construct a low-dimensional set of features from a large set of variables, i.e., instead of solving a learning problem in terms of the original variables $X_1, \\ldots, X_p$, we replace these by a smaller number of new variables $Z_1,\\ldots, Z_M$ with $M < p$.\n", "The $Z_m$ are chosen as linear combinations of the original predictor variables, i.e.,\n", "\n", "$$ Z_m = \\sum_{j=1}^p \\phi_{j,m} X_j $$\n", "\n", "with coefficients $\\phi_{1,m}, \\ldots, \\phi_{p,m}$ for $m = 1,\\ldots,M$.\n", "\n", "After this step, we can use one of the already known learning methods.\n", "Denote by $\\boldsymbol y \\in \\mathbb R^n$ and $\\boldsymbol Z \\in \\mathbb R^{n \\times (M+1)}$ the observation vector and the data matrix (now with the $M$ data columns obtained as linear combinations of the columns of the original data matrix $\\boldsymbol X$ with coefficients $\\phi_{j,m}$). \n", "\n", "In the case of (standard) linear regression, our new problem reads\n", "\n", "$$ \\min_{{\\boldsymbol \\theta} \\in \\mathbb{R}^{M+1}} \\|{\\boldsymbol Z} {\\boldsymbol \\theta} - {\\boldsymbol y}\\|_2^2.$$\n", "\n", "One important application of this approach is given in situations where $p$ is large relative to $n$.\n", "In this case choosing $M << p$ can significantly reduce the variance in the model, while a regression applied to the original data might lead to a a highly overfitted model with a training error of zero.\n", "Another advantage lies in the reduced computational cost." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 9.1 - Principle Component Analysis and LDA" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start by loading the iris data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "\n", "iris = load_iris()\n", "X = iris.data\n", "y = iris.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Scale the data matrix `X` to have mean 0 and variance 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Import the function `PCA` provided by `sklearn.decomposition`.\n", "Take a short look into the documentation and perform a principal component analysis on your scaled data using 2 components.\n", "Store the 2 principal components as a variable `pc`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Find out, what fraction of the variance in the data is explained by these 2 principal components." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you should be able to plot the principal components." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.rcParams['figure.figsize'] = (15,9)\n", "\n", "for i in range(3):\n", " idx = (y == i)\n", " plt.scatter(pc[idx,0], pc[idx,1], label=iris.target_names[i])\n", " \n", "plt.title('2 component PCA')\n", "plt.xlabel('1st principal component')\n", "plt.ylabel('2nd principal component')\n", "plt.legend();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We observe, that the data is quite well seperated.\n", "But if we plot all variables against each other, we see that there are also pairs of variables that are similar, or even better separated." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "plt.rcParams['figure.figsize'] = (15,9)\n", "fig, ax = plt.subplots(3,3)\n", "\n", "for i in range(4):\n", " for j in range(4):\n", " if i < j:\n", " for k in range(3):\n", " idx = (y == k)\n", " ax[i][j-1].scatter(X[idx,i], X[idx,j], label=iris.target_names[k])\n", " ax[i][j-1].set_xlabel(iris.feature_names[i])\n", " ax[i][j-1].set_ylabel(iris.feature_names[j])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Fit a linear discriminant analysis using the 2 principal components from above.\n", "What proportion of the *training data* is classified correctly?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Compare the optained score to the classification error of models incorporating exactly two of the original variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n", "\n", "for i in range(4):\n", " for j in range(4):\n", " if i < j:\n", " print('LDA on variables %d and %d' % (i, j))\n", " clf = LDA()\n", " clf.fit(X[:,[i,j]], y)\n", " print('\\t\\tscore = %6.4f' % clf.score(X[:,[i,j]],y))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observation**:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## PCA for dimension reduction\n", "\n", "As we have already mentioned, principal component analysis can be used to decrease the computational cost of different learning procedures by reducing the number of variables in our model.\n", "Therefore, we now use a slightly larger data set.\n", "The **MNIST** data set is widely used for testing.\n", "It contains 70,000 grey-valued images of size 28x28, each assigned with a digit from 0 to 9.\n", "\n", "Download the `mnist_784.csv` from the class web page.\n", "Adapt the following code and read the `csv` file into a `pandas DataFrame`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "df = pd.read_csv('./datasets/mnist_784.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell splits the data into training and test data.\n", "It should be executable once you read the `csv` file correctly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ntrain = 60000\n", "X = df.iloc[:ntrain,:-1].values.astype(float)\n", "y = df.iloc[:ntrain,-1].values\n", "Xtest = df.iloc[ntrain:,:-1].values.astype(float)\n", "ytest = df.iloc[ntrain:,-1].values\n", "\n", "# Number of pixels along on axis\n", "npxl = np.sqrt(X.shape[1]).astype(int)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell plots the first numbers in the data set.\n", "Images can be plottet using the function `plt.imshow()`.\n", "\n", "**Task**: Execute the code from below and try to explain the term `ax[i//npix][i%npix]`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "npix = 4\n", "fig, ax = plt.subplots(npix,npix)\n", "\n", "for i in range(npix**2):\n", " x = X[i,:]\n", " ax[i//npix][i%npix].imshow(x.reshape((npxl,npxl)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: *Train/Fit* a `StandardScaler` using your training data, and *transform* both, your training and test set.\n", "Store the scaled versions under their names, i.e., `X` and `Xtest`.\n", "You can import it by\n", "\n", " from sklearn.preprocessing import StandardScaler" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Now plot the scaled numbers again. Copy and paste the code from above. Try to explain, why some numbers appear lighter and some darker." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Perform a linear discriminant analysis on your full data set `X`.\n", "Measure the time your computer needs to perform this task.\n", "You can do this easily by using the *magic command* `% time` in front of your `*.fit(X,y)`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: What is the proportion of correct classications on your test set?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Perform a truncated principal component analysis of your scaled data `X`.\n", "Depending wheather the optional parameter `n_components` is an integer larger or equal to 1, or a float betwean 0 and 1, the behaviour is different.\n", "Setting the option `n_components = 0.9` lets the algorithm choose the number of components, such that these principal components declare 90\\% of the variability in the data.\n", "Store the principal components as a `numpy.ndarray` named `pc`.\n", "\n", "**Caution**: You can find out the number of principal components in the model using the attribute `*.n_components_`.\n", "There is also the attribute `*.n_components` which is the value of the option that you specified." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Perform now an LDA using these principal components. Measure the time that is necessary for this operation, and compare the score on your test set with the score obtained by using the full model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should observe that the computing time has been reduced by 3/4.\n", "The score of 87.2\\% is comparable to the previous reached 87.3\\% in the full model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A closer look at PCA, and its connection to SVD\n", "As you all know from the lecture, principal component anaylsis (PCA) is strongly connected to a (truncated) singular value decomposition (SVD).\n", "\n", "Assume, that the matrix $X \\in \\mathbb{R}^{n \\times p}$ is scaled, i.e. has column-mean zero and variance one.\n", "Then the covariance matrix $C$ is given by $C = X^T X$.\n", "This is a symmetric matrix, and thus can be diagonalized into\n", "\n", "$$ C = V \\Lambda V^T $$\n", "\n", "with $V \\in \\mathbb{R}^{p \\times p}$ the matrix of eigenvectors of $C$ and $\\Lambda = \\text{diag}(\\lambda_1, \\ldots, \\lambda_p)$ the matrix of eigenvalues on the diagonal.\n", "The columns of the matrix $V$ are called principal directions of the data, and projections of the data on the principal directions are called *principal components*.\n", "Thus, the $j$-th column of the matrix $XV$ is called the $j$-th principal component.\n", "\n", "#### Full SVD\n", "\n", "If we perform a full SVD of the matrix $X \\in \\mathbb{R}^{n \\times p}$, we obtain the decomposition\n", "\n", "$$ X = U \\Sigma V^T $$\n", "\n", "with a unitary matrix $U \\in \\mathbb{R}^{n \\times n}$ (left-singular vectors),\n", "a \"diagonal\" matrix $\\Sigma \\in \\mathbb{R}^{n \\times p}$ (singular values $\\Sigma_{i,i}$, rest zero),\n", "and a unitary matrix $V \\in \\mathbb{R}^{p \\times p}$ (right-singular vectors).\n", "\n", "Thus, we see that\n", "\n", "$$ C = X^T X = (U \\Sigma V^T)^T (U \\Sigma V^T) = V \\Sigma^2 V^T. $$\n", "\n", "In other words, this says that the right-singular vectors are principal directions, and the principal components are given by\n", "\n", "$$ XV = (U \\Sigma V^T) V = U \\Sigma. $$\n", "\n", "#### Truncated SVD\n", "\n", "In comparison to a full SVD, a truncated SVD **approximates** the matrix by restricting only on the $k$ largest singular values, i.e.,\n", "\n", "$$ X \\approx U_k \\Sigma_k V_k^T $$\n", "\n", "with matrices \n", "$U_k \\in \\mathbb{R}^{n \\times k}$,\n", "$\\Sigma_k \\in \\mathbb{R}^{k \\times k}$,\n", "$V_k \\in \\mathbb{R}^{p \\times k}$.\n", "\n", "Thus, we get the first $k$ principal components by\n", "\n", "$$ X V_k = U_k \\Sigma_k \\in \\mathbb{R}^{n \\times k}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Read the documentation of the sklearn function `PCA`.\n", "We see, that `pca.components_` is equivalent to the matrix $V_k^T$.\n", "Assign the matrix `V` by `pca.components_.T` and check the size of the matrix.\n", "\n", "The rest of this notebook is prepared for you.\n", "Please make sure, that you understand each step.\n", "Ask questions or have a look into the documentation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Can we resemble the principal components? Yes, we can! Remember, it's the matrix product of $X$ and $V_k$." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.abs(X.dot(V)-pc).max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can assign the matrix with the singular values.\n", "The method `pca.singular_values_` returns a vector, but we can easily store it as a diagonal matrix $\\Sigma$.\n", "This is also true for $\\Sigma^{-1}$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Sigma = np.diag(pca.singular_values_)\n", "SigmaInv = np.diag(1./pca.singular_values_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Is the size correct?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Sigma.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And therefore, we can also compute the matrix $U_k$ by simple matrix multiplication.\n", "This can, as you know, done either by the `numpy.ndarray` method `.dot()`, or by the operator `@`.\n", "\n", "**Remember**: The operator `*` performs an element-wise multiplication." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "U = X @ V @ SigmaInv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the size." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "U.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is an interesting exercise to plot the first principal directions, i.e., the first columns of the matrix $V_k$.\n", "What do you observe?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "npix = 4\n", "fig, ax = plt.subplots(npix,npix)\n", "\n", "for i in range(npix**2):\n", " x = V[:,i]\n", " ax[i//npix][i%npix].imshow(x.reshape((npxl,npxl)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we want to compare some numbers and their corresponding approximations using PCA.\n", "\n", "We can compute the approximations simply by setting \n", "\n", "$$ X_{\\text{approx}} = U_k \\Sigma_k V_k^T $$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Xapprox = U @ Sigma @ V.T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, plotting is easy. You can play with $m$ to display different samples in the data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = 0\n", "fig, ax = plt.subplots(1,2)\n", "ax[0].imshow(X[m,:].reshape((npxl,npxl)))\n", "ax[0].set_title('Original (%d pixel)' % (npxl**2))\n", "ax[1].imshow(Xapprox[m,:].reshape((npxl,npxl)))\n", "ax[1].set_title('Approximation')\n", "ax[1].set_xlabel('%d principal components')\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }