{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n", "\n", "Make sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your name below.\n", "\n", "Rename this problem sheet as follows:\n", "\n", " ps{number of lab}_{your user name}_problem{number of problem sheet in this lab}\n", " \n", "for example\n", " \n", " ps2_blja_problem1\n", "\n", "Submit your homework within one week until next Monday, 9 a.m." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "NAME = \"\"\n", "EMAIL = \"\"\n", "USERNAME = \"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Data Science\n", "## Lab 13: Dimension Reduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this exercise, we focus on the two dimension reduction techniques considered in the lecture:\n", "- Principal component regression (PCR) and principal component analysis (PCA) \n", "- Partial least squares (PLS)\n", "\n", "Both techniques are used to construct a low-dimensional set of features from a large set of variables, i.e., instead of solving a learning problem in terms of the original variables $X_1, \\ldots, X_p$, we replace these by a smaller number of new variables $Z_1,\\ldots, Z_M$ with $M < p$.\n", "The $Z_m$ are chosen as linear combinations of the original predictor variables, i.e.,\n", "\n", "$$ Z_m = \\sum_{j=1}^p \\phi_{j,m} X_j $$\n", "\n", "with coefficients $\\phi_{1,m}, \\ldots, \\phi_{p,m}$ for $m = 1,\\ldots,M$.\n", "\n", "After this step, we can use one of the already known learning methods.\n", "Denote by $\\boldsymbol y \\in \\mathbb R^n$ and $\\boldsymbol Z \\in \\mathbb R^{n \\times (M+1)}$ the observation vector and the data matrix (now with the $M$ data columns obtained as linear combinations of the columns of the original data matrix $\\boldsymbol X$ with coefficients $\\phi_{j,m}$). \n", "\n", "In the case of (standard) linear regression, our new problem reads\n", "\n", "$$ \\min_{{\\boldsymbol \\theta} \\in \\mathbb{R}^{M+1}} \\|{\\boldsymbol Z} {\\boldsymbol \\theta} - {\\boldsymbol y}\\|_2^2.$$\n", "\n", "One important application of this approach is given in situations where $p$ is large relative to $n$.\n", "In this case choosing $M << p$ can significantly reduce the variance in the model, while a regression applied to the original data might lead to a a highly overfitted model with a training error of zero.\n", "Another advantage lies in the reduced computational cost." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part A - Principle Component Analysis and LDA" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start by loading the iris data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.datasets import load_iris\n", "\n", "iris = load_iris()\n", "X = iris.data\n", "y = iris.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Scale the data matrix `X` to have mean 0 and variance 1.\n", "Store the scaled data as `Xscaled`.\n", "You can use the `sklearn`-function `scale`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "879b4a6fc5a68b9525f31f134f4775ae", "grade": false, "grade_id": "cell-2a6734fb79e931f3", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "c1469bdd8227cd092d640d77f512d415", "grade": true, "grade_id": "cell-6692b7a7a0b66605", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(Xscaled.mean()) < 1e-10\n", "assert abs(Xscaled.var() - 1) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (2 points)**: Import the function `PCA` provided by `sklearn.decomposition`.\n", "Take a short look into the documentation and perform a principal component analysis on your scaled data using 2 components.\n", "Store the model in a variable `pca`, and the learned principal compontent vectors in a variable `pc`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "2324fca44768dfd7adffc32b2b985a06", "grade": false, "grade_id": "cell-447f5372c6009c6f", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "31f34c661cd548585c57111a265461d6", "grade": true, "grade_id": "cell-e7f26324d3abd473", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(pca.components_.mean() - 0.34864187186234485) < 1e-10" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "ded2629ee94ba35e9848f48a00c0d37c", "grade": true, "grade_id": "cell-1d491139d06afc74", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert all(abs(pc.mean(axis=0)) < 1e-10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Find out, what fraction of the variance in the data is explained by these 2 principal components. Store your answer in `expl_var`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "f38042a965ba8a17646d6b01db50d7c9", "grade": false, "grade_id": "cell-ee0b710344316afe", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "5dbbe64af107d4a2a26046cdf47b147a", "grade": true, "grade_id": "cell-4c2ff59ffde948fa", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(expl_var - 0.9581320720000165) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following code should plot the principal components.\n", "Altough the original data is 4 dimensional, plotting the two principal components allows you to seperate the types pretty well from the other two types." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.rcParams['figure.figsize'] = (15,9)\n", "\n", "for i in range(3):\n", " idx = (y == i)\n", " plt.scatter(pc[idx,0], pc[idx,1], label=iris.target_names[i])\n", " \n", "plt.title('2 component PCA')\n", "plt.xlabel('1st principal component')\n", "plt.ylabel('2nd principal component')\n", "plt.legend();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We observe, that the data is quite well seperated.\n", "But if we plot all variables against each other, we see that there are also pairs of variables that are similar, or even better separated." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "plt.rcParams['figure.figsize'] = (15,9)\n", "fig, ax = plt.subplots(3,3)\n", "\n", "for i in range(4):\n", " for j in range(4):\n", " if i < j:\n", " for k in range(3):\n", " idx = (y == k)\n", " ax[i][j-1].scatter(X[idx,i], X[idx,j], label=iris.target_names[k])\n", " ax[i][j-1].set_xlabel(iris.feature_names[i])\n", " ax[i][j-1].set_ylabel(iris.feature_names[j])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Fit a linear discriminant analysis using the 2 principal components from above.\n", "What proportion of the *training data* is classified correctly? Store your answer in the variable `correct_pred`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "1f404fe9f31d5ae97c0f6444b2d17f1e", "grade": false, "grade_id": "cell-b82ce77dd3afc875", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "7ea38a2755e5ecd2eb234daf2bb06682", "grade": true, "grade_id": "cell-1c1c68f07f427af8", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(correct_pred - .9333333333333333) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We compare the optained score to the classification error of models incorporating exactly two of the original variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n", "\n", "for i in range(4):\n", " for j in range(4):\n", " if i < j:\n", " print('LDA on variables %d and %d' % (i, j))\n", " clf = LDA()\n", " clf.fit(X[:,[i,j]], y)\n", " print('\\t\\tscore = %6.4f' % clf.score(X[:,[i,j]],y))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should observe that altough principal component analysis has explained more than 95\\% of the variance in the data, this doesn't, by any means, guarantee that a regression applied to the principle components `is better` than any other fit." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }