{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem Sheet 6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 1: Cross-validation methods provided by Scikit-Learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to experiment with the methods `sklearn` provides to us.\n", "\n", "**Task**: For this, we generate a *toy* dataset containing only the numbers from 0 to 9, i.e.,\n", "\n", " X = range(10)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Leave One Out Cross-Validation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function `LeaveOneOut` implements a simple cross-validation strategy.\n", "Each training set is created by taking all the samples except one, with the test set consisting of the single remaining sample.\n", "Thus, for `n` samples, we have `n` different training sets and `n` different test sets.\n", "Leave-one-out cross-validation (LOOCV) can be computationally expensive for large datasets.\n", "\n", "You can import the function `LeaveOneOut` by\n", "\n", " from sklearn.model_selection import LeaveOneOut\n", " \n", "The documentation can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html#sklearn.model_selection.LeaveOneOut).\n", "\n", "With\n", "\n", " loo = LeaveOneOut()\n", " \n", "you generate a so-called *iterator* in Python.\n", "An iterator is an object that can be iterated upon, meaning that you can traverse through all its values.\n", "\n", "The command\n", "\n", " S = loo.split(X)\n", "\n", "generates a leave-one-out cross-validation iterator `S` across the set/list/array `X`.\n", "\n", "**Task**: Execute the above commands."
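To make the flow concrete, here is a small sketch of the commands above; using a NumPy array for `X` (instead of the plain `range`) is an assumption, made only so the printed index sets look like typical `sklearn` output:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

X = np.arange(10)      # toy data set: the numbers 0 to 9
loo = LeaveOneOut()

S = loo.split(X)       # iterator over (train, test) index pairs
train, test = next(S)  # advance the iterator by one step
print("Training set: %s\t Test set: %s" % (train, test))
```

Each further call of `next(S)` yields the next split; after `n` calls the iterator is exhausted and raises `StopIteration`.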
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, you can always access the next item in the iterator `S` by typing\n", "\n", " next(S)\n", " \n", "**Task**: Try this out multiple times and see what changes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, iterators are used in loops:\n", "\n", " for train, test in loo.split(X):\n", "     print(\"Training set: %s\\t Test set: %s\" % (train, test))\n", "\n", "**Task**: Try it!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### K-Fold Cross-Validation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function `KFold` divides all the samples into `k` groups of equal size (if possible), called folds; if $k=n$, this is equivalent to the leave-one-out strategy.\n", "The prediction function is learned using `k-1` folds, and the omitted fold is used for testing."
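As a sketch of the fold structure just described, assuming the toy data `X = range(10)` from Problem 1 (here as a NumPy array) and `n_splits=5`:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10)
kf = KFold(n_splits=5)  # 5 folds of 2 samples each
for train, test in kf.split(X):
    print("Training set: %s\t Test set: %s" % (train, test))
```

Note that by default `KFold` does not shuffle, so the folds are contiguous blocks of samples.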
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can import the function `KFold` by\n", "\n", " from sklearn.model_selection import KFold\n", "\n", "Check out the documentation of the function [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold).\n", "As for LOOCV, create a test example that shows the behaviour of the function.\n", "For `n_splits=2`, you should obtain\n", "\n", " Training set: [5 6 7 8 9]\t Test set: [0 1 2 3 4]\n", " Training set: [0 1 2 3 4]\t Test set: [5 6 7 8 9]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 2 - Cross-validation for a diabetes data set" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The diabetes data set contains ten measurements (age, sex, body mass index, average blood pressure, and six blood serum measurements) for each of the `n = 442` patients.\n", "\n", "The response variable is a quantitative measure of disease progression one year after baseline.\n", "\n", "**Task**: The data set is part of scikit-learn; you can import it using\n", "\n", " from sklearn import datasets\n", " diabetes = datasets.load_diabetes()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we create a pandas data frame to hold this information.\n", "\n", "**Task**:\n", "Create a pandas data frame `X` holding the ten predictor variables. You should name the columns in the data frame using the optional argument `columns=cols`, where `cols` is given by\n", " \n", " cols = [\"age\", \"sex\", \"bmi\", \"map\", \"tc\",\n", " \"ldl\", \"hdl\", \"tch\", \"ltg\", \"glu\"]\n", " \n", "Store the response variable as a NumPy array `y`.\n", "\n", "**Hint**:\n", "As in the iris data set, the diabetes data set is stored as a Python dictionary. 
The predictor variables can be accessed by `diabetes.data`, the responses via `diabetes.target`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We want to try two different estimation approaches here.\n", "1. At first, we use a plain training set/validation set approach, where we exclude $1/5$ of the data from training.\n", "2. Our second approach is to estimate $5$ different models using 5-fold cross-validation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1st approach: Simple splitting into training and validation set" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this part, we want to train a linear model using a subset of our samples.\n", "We have done this by hand so far, but there are also methods provided by `sklearn` which will do this work for us.\n", "Use the function `train_test_split` from the module `sklearn.model_selection` to divide your data into a training and a validation set. Since this selection is made randomly, you should set the optional input `random_state` to fix the seed of the random number generator to ensure comparability, e.g., by setting `random_state = 1`.\n", "\n", "**Task**: Split your data into a training and a validation set using the function `train_test_split`.\n", "Your validation set should contain 20\\% of the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Check the size of your sets. The training set should contain 353 samples, while the test set contains 89." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**:\n", "Fit a linear regression model to your **training** data. Use the appropriate method in `sklearn`."
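One possible way to carry out the loading, splitting, and fitting steps described above; the variable names `X_train`, `X_test`, `y_train`, `y_test`, and `lm` are suggestions, not prescribed by the problem sheet:

```python
import numpy as np
import pandas as pd
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split

diabetes = datasets.load_diabetes()
cols = ["age", "sex", "bmi", "map", "tc",
        "ldl", "hdl", "tch", "ltg", "glu"]
X = pd.DataFrame(diabetes.data, columns=cols)  # predictors as a data frame
y = np.array(diabetes.target)                  # response as a NumPy array

# Hold out 20% of the data; random_state fixes the seed for comparability.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)
print(X_train.shape, X_test.shape)  # (353, 10) and (89, 10)

# Fit a linear regression model on the training data only.
lm = linear_model.LinearRegression().fit(X_train, y_train)
```

The test set has 89 samples because `train_test_split` rounds the 20% fraction of 442 up.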
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Use your model to predict the response on the validation set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Until now, we have always plotted a predictor against the response or against the regression line.\n", "Another way to display the quality of a regression fit is to plot the true values against the predicted values.\n", "The closer the values are to the identity $f(x) = x$, the better the fit.\n", "\n", "**Task**:\n", "Produce a scatterplot of the true values in the validation response against the predicted values. Label the axes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Compute the mean squared error $\\text{MSE}_\\text{val}$ on the validation set.\n", "You can either use the method `mean_squared_error` from the module `sklearn.metrics`, or you can implement it yourself." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: What is the proportion of variability that is explained by this linear fit? *Remember*: A `LinearRegression` has a method that computes exactly this."
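A sketch of the prediction, scatterplot, and error tasks above; it repeats the split-and-fit step so that it is self-contained, and the names `lm`, `y_pred`, and `mse_val` are illustrative choices:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

diabetes = datasets.load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, test_size=0.2, random_state=1)
lm = linear_model.LinearRegression().fit(X_train, y_train)

y_pred = lm.predict(X_test)  # predictions on the validation set

# True values against predicted values; a perfect fit lies on f(x) = x.
plt.scatter(y_test, y_pred)
plt.xlabel("True response (validation set)")
plt.ylabel("Predicted response")

mse_val = mean_squared_error(y_test, y_pred)  # same as np.mean((y_test - y_pred)**2)
print("MSE_val:", mse_val)

# score() of a LinearRegression returns the R^2 coefficient of determination,
# i.e. the proportion of variability explained by the fit.
print("R^2 score:", lm.score(X_test, y_test))
```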
] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "R^2 score: 0.43843604017332694\n" ] } ], "source": [ "print(\"R^2 score:\", lm.score(X_test, y_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2nd approach: Use K-Fold Cross-Validation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we want to use cross-validation to select our model.\n", "Scikit-learn is a powerful library and possesses numerous modules and functions.\n", "Here, we explore the function `cross_val_score`, which can be imported by\n", "\n", " from sklearn.model_selection import cross_val_score\n", " \n", "This function performs K-fold cross-validation and returns a score for each fold (this is the $R^2$-score by default).\n", " \n", "**Task**: Please read the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score) and import the function `cross_val_score`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The function expects as a first argument an `estimator`.\n", "We are informed by the documentation that this should be an \"estimator object implementing ‘fit’\". This is fulfilled by all estimation methods used so far (e.g. linear models, logistic regression, LDA).\n", "In the case of a linear regression fit, this could be\n", " \n", " model = linear_model.LinearRegression()\n", "\n", "**Task**: Perform a 5-fold cross-validation for a linear model on the diabetes data set and print the scores." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Use the function `cross_val_predict` in the module `sklearn.model_selection` to make predictions on the diabetes data set."
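The two functions can be used as sketched below; this assumes a plain `LinearRegression` estimator and the raw `diabetes.data`/`diabetes.target` arrays:

```python
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_score, cross_val_predict

diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target

model = linear_model.LinearRegression()

# 5-fold cross-validation: one R^2 score per fold.
scores = cross_val_score(model, X, y, cv=5)
print("Scores per fold:", scores)

# cross_val_predict returns, for each sample, the prediction made by
# the model of the fold in which that sample was in the test set.
y_pred = cross_val_predict(model, X, y, cv=5)
```

Note that `cross_val_predict` predicts every sample exactly once, using a model that did not see it during training.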
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Make a scatterplot of the true values in the test response against the predicted values. Label the axes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Compute the $R^2$-score of this model. You can use the function `r2_score` from the module `sklearn.metrics`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Caution**: Although this $R^2$-score is higher than the score for the training/validation set split, they are not really comparable since we computed them on different subsets of the data.\n", "To get a more reliable comparison, we must keep part of the data as a so-called *hold-out* data set to be used for estimating the true learning error." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }