{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n", "\n", "Make sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your name below.\n", "\n", "Rename this problem sheet as follows:\n", "\n", " ps{number of lab}_{your user name}_problem{number of problem sheet in this lab}\n", " \n", "for example\n", " \n", " ps2_blja_problem1\n", "\n", "Submit your homework within one week until next Monday, 9 a.m." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "NAME = \"\"\n", "EMAIL = \"\"\n", "USERNAME = \"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Data Science\n", "## Lab 11: Subset selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this and the following labs, we want to explore different methods of linear model selection.\n", "\n", "In the lecture you've learned about problems that might occur in datasets with many predictors (high $p$)and a low number of samples (low $n$).\n", "Ways out could be:\n", "* Subset selection - try to find a suitable subset of predictors\n", "* Skrinkage/Regularization - increase weights of *important* predictors, decrease weights of *unimportant* ones\n", "* Dimension reduction - Build linear combinations $v_i, i=1,\\ldots,M$ of predictors and fit a model using these vectors instead of predictors with $M < p$\n", "\n", "We always have to keep in mind that it's in general not wise to select the model with the minimal training error, due to the danger of overfitting.\n", "Our goal is to find a model that performs well on a test set.\n", "This refers to a subset of samples that are completely held out from training (and also cross-validation)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part A - Best subset selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to implement the **best subset selection** algorithm from the lecture:\n", "1. Let $\\mathcal{M}_0$ denote the *null model*, which contains no predictors.\n", "2. For $k = 1, 2, \\ldots, p$:\n", " 1. Fit all $p \\choose k$ models that contain exactly $k$ predictors.\n", " 2. Pick the best among these and call it $\\mathcal{M}_k$, while the best is the one with highest $R^2$ score.\n", "3. Select a single best model from among $\\mathcal{M}_0, \\ldots \\mathcal{M}_p$ using one of the following methods:\n", " * cross-validated prediction error\n", " * $C_p$ (or equivalently AIC - Akaike information criterion)\n", " * BIC - Bayesian information criterion\n", " * adjusted $R^2$\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Understanding steps 1 and 2\n", " \n", "The algorithm `bestSubsetComputation` belows contains the implementation of step 1 and step 2.\n", "\n", "**Task (no points)**: Understand the following code and add comments as you wish." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "from itertools import combinations\n", "\n", "from sklearn.linear_model import LinearRegression\n", "from sklearn.metrics import r2_score\n", "\n", "def bestSubsetComputation(X, y, scoring_func = r2_score):\n", " \"\"\" Input: X - predictor array of size (n,p)\n", " y - array of size (n,)\n", " scoring_func - function that takes two arguments y_true\n", " and y_pred, and returns a score\n", " \n", " \"\"\"\n", " \n", " # Get the number of samples and the number of predictors \n", " n, p = X.shape\n", " \n", " # Prepare lists that keep the best scores and models:\n", " # best_score[i] keeps the best score in a model i predictors\n", " # best_model[i] keeps the best model using i predictors\n", " \n", " best_score = []\n", " best_model = []\n", "\n", " ### First step in best subset selection algorithm\n", " \n", " # The model containing no predictors simply predicts the sample mean\n", " ybar = y.mean()\n", " yhat = ybar * np.ones_like(y)\n", "\n", " best_score.append(scoring_func(y, yhat))\n", " best_model.append( () )\n", " \n", " ### Second step in best subset selection algorithm\n", "\n", " # Loop over k - number of predictors in our model\n", " for k in range(1,p+1):\n", "\n", " best_model.append( () )\n", " best_score.append(0.)\n", "\n", " for l in combinations(range(p),k):\n", "\n", " lr = LinearRegression()\n", " lrfit = lr.fit(X[:,l],y)\n", " yhat = lrfit.predict(X[:,l])\n", "\n", " this_score = scoring_func(y,yhat)\n", " \n", " if this_score > best_score[k]:\n", " best_score[k] = this_score\n", " best_model[k] = l\n", " \n", " return (best_model, best_score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Load the diabetes data set form `sklearn`.\n", "If you forgot how to do this have a look at the previous labs.\n", "\n", "Store the predictors in an array `X`, and the targets in an array `y`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "90c2f78f0e32a901c8c70181c80bbd5d", "grade": false, "grade_id": "cell-a3f898ee3397b16e", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "21b9dc9237ca826fb28fced47e724170", "grade": true, "grade_id": "cell-eaf1d609190d2f5b", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert type(X) == np.ndarray\n", "assert abs(X.mean()) < 1e-12\n", "assert abs(y.mean() - 152.13348416289594) <1e-10\n", "assert X.shape == (442,10)\n", "assert y.shape == (442,)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Use the function `train_test_split` from `sklearn.model_selection` to split the data into a training and test set `X_train`, `y_train` and `X_test`, `y_test`.\n", "Use as `test_size=0.2` and `random_state=1`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "c7ec973d903d667d45bc06d50b3b4fc5", "grade": false, "grade_id": "cell-338dfd6bd1c5c750", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "1711e61eb775e1de09bd5e702d94a755", "grade": true, "grade_id": "cell-974db3a0860ab7b2", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(X_train[3,4] + 0.0469754041408486) < 1e-6\n", "assert abs(y_test[5] - 178) < 1e-6\n", "assert X_test.shape == (89,10)\n", "assert y_train.shape == (353,)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**:\n", "Apply the `bestSubsetComputation` function from above to the training set.\n", "Store the list of best models in a variable `best_models` and the corresponding maximum scores in `best_scores`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "cf96a288ca0823cbfa2d95a8b4e8fd0d", "grade": false, "grade_id": "cell-b507dea2f32437b1", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "ad0301b8ae5a15e964a57044968a0022", "grade": true, "grade_id": "cell-98afd4bd2b1a65f1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(np.mean([sum(i) for i in best_models]) - 21.90909090909091) < 1e-8\n", "assert (np.mean([i[0] for i in best_models[1:]]) - 1.2) < 1e-8\n", "assert abs(np.var(best_scores) - 0.023162490321673723) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (no points)**: Now, you can execute the following cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "plt.rcParams['figure.figsize'] = (15,8)\n", "plt.plot(range(len(best_score)), best_score, 'r+-')\n", "plt.xlabel('Number of predictors')\n", "plt.ylabel('R^2 score')\n", "\n", "for i,s in enumerate(best_score):\n", " print('\\nScore of model with %i predictors has score %6.4f' % (i,s))\n", " print('\\tSelected predictors', best_model[i])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementing step 3 of the *best subset selection* algorithm\n", "\n", "In the following tasks, you should implement step 3 of the *best subset selection* algorithm using the **training data** from the diabetes data set." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the lecture you've learned about the performance criteria:\n", "* AIC\n", " $$ AIC = \\frac{1}{n \\hat \\sigma^2} (RSS + 2 d {\\hat \\sigma}^2)$$\n", "* BIC\n", " $$ BIC = \\frac{1}{n} (RSS + \\log(n) d {\\hat \\sigma}^2)$$\n", "* Adjusted R^2\n", " $$ R^2_{Adj} = 1 - \\frac{RSS / (n - d - 1)}{TSS / (n - 1)} $$\n", " \n", "with $d$ being the number of parameters in the model and $\\hat{\\sigma}^2$ referring to an estimate of the variance associated with each response in the linear model (estimated on a model containing all predictors):\n", " $$ {\\hat \\sigma}^2 = \\frac{1}{n - p - 1} \\sum_{i=1}^n (y_i - \\hat y_i)^2. $$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Implement the function `RSS` to compute the residual sum of squares for input `y` and corresponding `yhat`.\n", "\n", "**Remember**: $TSS$ is the total sum of squares, which is defined by \n", "$\\sum_{i=1}^n (y_i - \\hat y_i)^2$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "445e1e31a48d15bb9fdb2d6377de9879", "grade": false, "grade_id": "cell-29945c7e0ddf38b3", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "def RSS(y, yhat):\n", " \"\"\" This function return the residual sum of squares\n", " for inputs of size (n,). \"\"\"\n", " # YOUR CODE HERE\n", " raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "2e9640e770b96c5221095937b5ac6e19", "grade": true, "grade_id": "cell-e5a02756e280b1ea", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(RSS(y,np.ones_like(y)) - 12716877) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: Implement the function `TSS` to compute the total sum of squares for an input `y`.\n", "\n", "**Remember**: $TSS$ is the total sum of squares, which is defined by $\\sum_{i=1}^n (y_i - \\bar y)^2$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d22dba98ab45721d539d03c8315a18c4", "grade": false, "grade_id": "cell-d3a0db14c7826d69", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "def TSS(y):\n", " \"\"\" This function return the total sum of squares\n", " for an input of size (n,). 
\"\"\"\n", " # YOUR CODE HERE\n", " raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "8a80d1bdcf70cbaae52716df87fcb439", "grade": true, "grade_id": "cell-9519d36083546339", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(TSS(y) - 2621009.124434389) < 1e-10\n", "np.random.seed(0)\n", "assert abs(TSS(np.random.randn(1000)) - 974.2344563121542) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (2 points)**: Implement a function `sigmaHat` that takes the input `X` and `y` and returns the estimate of sigma, i.e., $\\hat \\sigma$.\n", "Then, compute $\\hat \\sigma$ for your training data and store its value in the variable `shat_dia`.\n", "\n", "*Hint*: Within the function, you might have to perform a linear regression fit. Use also the function `RSS` if necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "f2b601c2ccc40eb2e791eacdf7d8fc66", "grade": false, "grade_id": "cell-556140a35ed34ad2", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# Implement the function sigmaHat\n", "# YOUR CODE HERE\n", "raise NotImplementedError()\n", "\n", "# Evaluate the function for your training data\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "1614222115f4491d1bd77ff4113d2e5e", "grade": true, "grade_id": "cell-8be79771fa8f7056", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(shat_dia - 54.09455498707545) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, you find an implementation of the Akaike Information Criterion (AIC).\n", "\n", "**Task (2 points)**: Complete the implementations of the functions `BIC` and `adjustedRSquare` that take as arguments `y`, `yhat`, `d`, and `shat`, if necessary." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "24a17109bfd15d5ffb351866e3a053ca", "grade": false, "grade_id": "cell-7eb9fa8da095b74d", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "def AIC(y, yhat, d, shat):\n", " n = len(y)\n", " return (RSS(y,yhat) + 2 * d * shat**2) / (n * shat**2)\n", "\n", "def BIC(y, yhat, d, shat):\n", " # YOUR CODE HERE\n", " raise NotImplementedError()\n", "\n", "def adjustedRSquare(y, yhat, d):\n", " # YOUR CODE HERE\n", " raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "50e1ef18af0f06730ef525694093faf0", "grade": true, "grade_id": "cell-e6b9707bdf77ca9e", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(AIC(y_train, y_train, 3, 5) - 0.0169971671388102) < 1e-10\n", "assert abs(BIC(y_train, y_train, 3, 5) - 1.2464167259773293) < 1e-10\n", "assert abs(adjustedRSquare(y_train, y_train, 3) - 1) < 1e-10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (2 points)**: Use the return values of the function `bestSubsetComputation`, in particular the variable `best_models`, to perform step 3 of the *best subset selection* algorithm using the training data from the diabetes data set.\n", "\n", "Store the scores for the three criteria (adjusted $R^2$, AIC and BIC) in the corresponding lists `R2score`, `AICscore` and `BICscore`, resp." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "f38343bd0b0dc9f40af06f4f8f35c725", "grade": false, "grade_id": "cell-28ad09ace4486788", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "n,p = X_train.shape\n", "\n", "R2score = []\n", "AICscore = []\n", "BICscore = []\n", "\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "b8b94f2c826b6eafee3e65e831a5adf3", "grade": true, "grade_id": "cell-0d580dbc57278116", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(np.mean(R2score) - 0.44973260288515904) < 1e-8\n", "assert abs(np.mean(AICscore) - 1.1561158131306397) < 1e-8\n", "assert abs(np.mean(BICscore) - 3543.307165450913) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (1 point)**: For each of the considered criteria, select and store the index that optimizes the criterion as `R2idx`, `AICidx` and `BICidx`, resp." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "b05228cc169bf3f87b8ea486d0bd8ecb", "grade": false, "grade_id": "cell-ab683a4c4cd6db63", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "db1a4e45bfbc58d4de2aac20eb2b528a", "grade": true, "grade_id": "cell-d2914d3235d3f421", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert R2idx + AICidx + BICidx == 18" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (3 points)**: Plot the scores against the number of predictors in seperate plots.\n", "Highlight the point that optimizes the respective criterion." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d90aab7b435a07369f16f8c13717dcef", "grade": true, "grade_id": "cell-16f8def5d3362cec", "locked": false, "points": 3, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "plt.rcParams['figure.figsize'] = (15,8)\n", "fig, ax = plt.subplots(1,3)\n", "\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, you can find an implementation of the best subset selection algorithm as one function `bestSubsetSelection`.\n", "Since it employs your implemented functions, e.g. `BIC`, `adjustedRSquare`, etc., it's a further check for the correctness of your implementation.\n", "\n", "**Task (1 point)**: Read the code thoroughly and add comments as you wish. Make sure that the following assert statements are passed." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def bestSubsetSelection(X, y, scoring = 'aic'):\n", " \"\"\" Input: X - predictor array of size (n,p)\n", " y - array of size (n,)\n", " scoring - string, either 'aic', 'bic' or 'r2'\n", " Output: (best_score, best_model) - best parameter selection\n", " \"\"\"\n", " score = []\n", " def scoring_func(y, yhat, d):\n", " if scoring == 'r2':\n", " return adjustedRSquare(y, yhat, d)\n", " else:\n", " shat = sigmaHat(X,y) \n", " if scoring == 'aic':\n", " return AIC(y, yhat, d, shat)\n", " elif scoring == 'bic':\n", " return BIC(y, yhat, d, shat)\n", " else:\n", " raise NameError('scoring not known')\n", "\n", " best_model, best_score = bestSubsetComputation(X, y)\n", " \n", " for d, l in enumerate(best_model):\n", " if d == 0:\n", " yhat = y.mean() * np.ones_like(y)\n", " else:\n", " lr = LinearRegression()\n", " lrfit = lr.fit(X[:,l],y)\n", " yhat = lrfit.predict(X[:,l])\n", "\n", " score.append( scoring_func(y, yhat, d) )\n", " \n", " if scoring == 'r2':\n", " best_idx = np.argmax(score)\n", " else:\n", " best_idx = np.argmin(score)\n", " \n", " return score[best_idx], best_model[best_idx]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "ce30dcdf15adc6f56870c4fbfedacd67", "grade": true, "grade_id": "cell-bacc72437f4919df", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "# Test the implementation\n", "R2_score, R2_model = bestSubsetSelection(X_train,y_train,'r2')\n", "assert abs(R2_score - 0.5225204821123409) < 1e-10\n", "assert R2_model == (1, 2, 3, 4, 5, 7, 8)\n", "\n", "AIC_score, AIC_model = bestSubsetSelection(X_train,y_train,'aic')\n", "assert abs(AIC_score - 1.0090748842464763) < 1e-10\n", "assert AIC_model == (1, 2, 3, 4, 6, 8)\n", "\n", "BIC_score, BIC_model = bestSubsetSelection(X_train,y_train,'bic')\n", "assert abs(BIC_score - 3114.815511092917) < 1e-10\n", "assert BIC_model == (1, 2, 3, 6, 8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Part B: Forward and backward stepwise selection algorithm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task (5 points)**: Once you've done Part A of this lab, it should be rather easy to implement either the **forward stepwise selection algorithm** or the **backward stepwise selection algorithm** as a function `forwardStepwiseSelection` or `backwardStepwiseSelection`, resp.\n", "\n", "Start by implementing a function `forwardStepwiseComputation` or `backwardStepwiseComputation` to perform steps 1 and 2 of the respective algorithm.\n", "\n", "Use 10-fold cross-validation as a measure for model selection (step 3 of the algorithms).\n", "Here, you can use the function `cross_val_score` from `sklearn.model_selection`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "1cb18559f8dc6ddc917dd26b2260c42d", "grade": false, "grade_id": "cell-d457193eecf936f7", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "7bbec86076b2ff3a2c33a26c40479b0b", "grade": true, "grade_id": "cell-6dea1887b538e822", "locked": true, "points": 5, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "from sklearn.datasets import load_diabetes\n", "data = load_diabetes()\n", "\n", "#print(data.DESCR)\n", "X = data.data\n", "y = data.target\n", "\n", "if 'forwardStepwiseSelection' in locals():\n", " fSS_score, fSS_model = forwardStepwiseSelection(X,y)\n", "elif 'backwardStepwiseSelection' in locals():\n", " fBB_score, fBB_model = backwardStepwiseSelection(X,y)\n", "else: assert False, 'Implement the backwardStepwiseSelection or the forwardStepwiseSelection'\n", "\n", "assert 'fSS_score' in locals() and abs(fSS_score - 0.4723839929236253) < 1e-8 or 'fBB_score' in locals() and abs(fBB_score - 0.4723839929236256) < 1e-8\n", "\n", "if 'forwardStepwiseSelection' in locals():\n", " np.random.seed(0)\n", " fSS_score, fSS_model = forwardStepwiseSelection(np.random.randn(100,10),np.random.randn(100,))\n", " assert abs(fSS_score + 0.11591245146628207) < 1e-6\n", " assert fSS_model[1] == 4\n", "elif 'backwardStepwiseSelection' in locals():\n", " np.random.seed(1)\n", " bSS_score, bSS_model = backwardStepwiseSelection(np.random.randn(100,10),np.random.randn(100,))\n", " assert abs(bSS_score + 0.17226717542216358) < 1e-6\n", " assert bSS_model[1] == 3\n", " \n", "else:\n", " assert False, 'Implement the backwardStepwiseSelection or the forwardStepwiseSelection'" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }