{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to Data Science\n", "## A Complete scikit-learn Project - Part 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this exercise, we will cover a complete data science project.\n", "The underlying data set consists of different predictors (variables) explaining the median home values in different regions of California, USA.\n", "\n", "This week, we focus on all the steps that come *before* our typical learning procedure starts, i.e., data preparation, data wrangling, feature generation, etc.\n", "\n", "## Loading the data\n", "We begin with the usual data and library imports and set the figure size for subsequent plots.\n", "\n", "**Task**: Execute the following code cell and customize the path as necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "# Set standard figure size\n", "plt.rcParams['figure.figsize'] = (16,9)\n", "\n", "# Read csv file\n", "df = pd.read_csv('./housing.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting an overview\n", "\n", "**Task**: Use the methods `head()`, `info()` and `describe()` applied to a `pandas DataFrame` to get an overview of the data. Try to answer the following questions:\n", "- Are there any missing values and, if so, in which columns/variables do they occur?\n", "- Are there non-numerical variables?\n", "- Are there some variables that have to be combined/modified in order to become meaningful?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d9cb4ea1983cc23490965a68ba0fc144", "grade": true, "grade_id": "cell-66e611409475842b", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observations**:" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "cell_type": "markdown", "checksum": "165d0dc7d30194d002d6491036283252", "grade": true, "grade_id": "cell-f8f48edc04dc12e2", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should observe one categorical variable.\n", "\n", "**Task**: How many different classes occur in this attribute?\n", "You can use the method `value_counts()` of a `Series` object to count the occurences in each of the categories." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "7183579da69a6f0b9b19243c618cd41a", "grade": true, "grade_id": "cell-f7b35c70a6767864", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observations**: Thus, there are 5 categories that might be important." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we take a look at histograms of the data.\n", "We have already seen in previous labs that this can reveal important features of a data set." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.hist(bins=50);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observations**:\n", "- The values of `housing_median_age` and `median_house_value` have been capped. This is not a priori a problem, but can lead to wrong price predictions beyond this level. We have to take care of this.\n", "- The variables have different scales. A proper scaling can be necessary.\n", "- Many of the variables are not normally distributed, but heavy-tailed, i.e., they tend to extend farther to the extremal values (in our case to the right). Since some of our algorithms assume that the data is normally distributed, we have be cautious.\n", "- The medium income seems to be scaled in thousands of dollars, and the value seems to be capped at a medium income of 15, since there are exceptionally many samples with the value 15.0001.\n", "\n", "**Task**: Have a look at `df.median_income.value_counts()` to verify the last observation." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "8d0657b234bc387c19f6ffbc636fdc49", "grade": false, "grade_id": "cell-a13c3f9ca4812884", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stratified Training-Test Splits\n", "\n", "Until now, we have tried to get a better understanding of the data.\n", "Before we look closer into price prediction\n", "we want to make sure not to include what is known as **data snooping bias**.\n", "\n", "In this project, we want to employ a split into a test and training set:\n", "The function `train_test_split` splits the data randomly into two distinct sets.\n", "This works well for fairly uniformly or normally distributed data, but can cause some problems for heavy-tailed attributes or categorical data with varying numbers of samples in each category.\n", "\n", "For this reason we want to stratify our random split according to the attribute `median_income`.\n", "Stratifying means that we choose our split in accordance with different classes.\n", "This means nothing else than that the algorithm makes sure to include the same proportion of each class in both the test and the training set.\n", "\n", "Pandas provides a method to set up categories from of a numerical variable that splits the data into equal-sized bins based on sample quantiles. This function is called `pd.qcut`.\n", "\n", "**Task**: Use the function `pd.qcut` to split `median_income` into 5 categories. Store the output in a variable `income_cat`.\n", "One could also label the categories using the optional `labels` parameter.\n", "By default, labels is equal to the interval boundaries of the corresponding class.\n", "You should keep the labels parameter unchanged because we only want to use this in our training-test split. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "9619f0e1b1e3f6b146325a0375d1d718", "grade": false, "grade_id": "cell-9e2c99c3dfdf5504", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Split the data using the `train_test_split` function from `sklearn.model_selection` such that the test set contains approximately 20\\% of the samples.\n", "Set `random_state = 1` and `stratify = income_cat`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "c607d3967c6f9159bc6c13458ff47515", "grade": false, "grade_id": "cell-ac8a50a8afb25a30", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "2efbf77e9bd31ae1fb8aec51943aa917", "grade": true, "grade_id": "cell-a5d3f829e27a52aa", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert train.shape == (16512,10)\n", "assert test.shape == (4128,10)\n", "assert abs(train['longitude'].mean() + 119.56374576065892) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualizing the data\n", "Here we try to visualize as many features of the data as possible.\n", "In previous labs, we already used the scatterplot, but mainly with standard options.\n", "The nice thing about scatterplots is that we can represent more than two features of the data set.\n", "The following code cell plots every sample in our training set as one circle.\n", "The variable `longitude` sets up the x-axis, `latitude` the y-axis. \n", "\n", "The optional parameters are as follows:\n", "- `alpha = 0.4`: opacity of cicles\n", "- `s = train.population / 100`: size of circles, i.e., the larger the population in the region, the larger the circle\n", "- `c = 'median_house_value'`: color of the circle, i.e., the higher the median house value in the region, the brighter the color (this depends on the colormap `cmap`)\n", "\n", "**Task**: Execute the following code and identify the two regions with higher house values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train.plot(kind='scatter', x = \"longitude\", y =\"latitude\", alpha=0.4,\n", " s = train.population/100, c = 'median_house_value',\n", " label='population', cmap = plt.get_cmap('viridis'),\n", " colorbar='true');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observation**: The two clusters of high median house values are the surrounding areas of San Francisco (North) and Los Angeles (South)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Copy and paste the code fragment from above. Adapt the code so that the color is determined by the `median_income` variable." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "7031ffb68258bd9e0b18a6f526c4305e", "grade": true, "grade_id": "cell-fb0a3a4255b15e7d", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Take a look at the correlation matrix\n", "\n", "**Task**: Compute the correlation matrix of the training set.\n", "Determine the variables with the highest correlation with `median_house_value`.\n", "The output of the method `corr()` is a `pandas DataFrame` itself, and possesses the same methods, e.g., `sort_values()`.\n", "\n", "You should obtain the following:\n", "\n", " median_house_value 1.000000\n", " median_income 0.689024\n", " total_rooms 0.138906\n", " housing_median_age 0.100355\n", " households 0.067951\n", " total_bedrooms 0.050363\n", " population -0.022518\n", " longitude -0.043252\n", " latitude -0.147158" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "300ff11bd0ccddf3f4da03941120fdb7", "grade": true, "grade_id": "cell-cc52a71566544288", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observation**:" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "cell_type": "markdown", "checksum": "ba263a23ea337c77d37d9f7e169f63b3", "grade": true, "grade_id": "cell-54f1076ea34c848e", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next plot generates a heatmap of the correlation (assuming you named the correlation matrix `cm`).\n", "Blue squares indicate a negative correlation, red ones a positive correlation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.matshow(cm, cmap=plt.get_cmap('jet'))\n", "plt.xticks(range(len(cm.columns)), cm.columns, rotation='vertical');\n", "plt.yticks(range(len(cm.columns)), cm.columns);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we look at the scatter matrix of the 4 most strongly positively correlated variables.\n", "These are\n", "\n", " attr = [\"median_house_value\", \"median_income\", \"total_rooms\", \"housing_median_age\"]\n", "\n", "**Task**: Execute the following code. What do you observe?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "attr = [\"median_house_value\", \"median_income\", \"total_rooms\", \"housing_median_age\"]\n", "pd.plotting.scatter_matrix(train[attr], alpha=0.1);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observations**:" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "cell_type": "markdown", "checksum": "1be6411cfe2a1fb725f6204d8a706c53", "grade": true, "grade_id": "cell-deac595ab62220ad", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generate Combinations of Attributes\n", "\n", "Now we want to try if we can do better with derived attributes.\n", "This is an important step, and we did this already by the incorporation of powers of variables.\n", "In this lab, we want to use a more reflected approach.\n", "Since the number of total rooms in an area depends heavily on the size of the area, we might take the rooms per household into consideration.\n", "Another interesting quantity could be the number of bedrooms per room, which we can get by dividing `total_bedrooms` through `total_rooms`.\n", "\n", "**Task**: Expand your training set by two new variables `rooms_per_household` and `bedrooms_per_room`.\n", "\n", "*Hint*: You can ignore the `SettingWithCopyWarning`!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "a79e9c63e8531bd5f6e1932b298acf66", "grade": false, "grade_id": "cell-2a47c6ea8caf56ad", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# Generate new variables\n", "\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Check the correlation of the new variables with `median_house_value`.\n", "You should observe a correlation of -0.256096 for `bedrooms_per_room` vs. `median_house_value` and a corralation of 0.151931 for `rooms_per_household` vs. `median_house_value`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "6d4c1297c56b98045d0123d7203e2231", "grade": false, "grade_id": "cell-54ff35057ae948e7", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# Take a look at the correlation\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Observation**: Both new variables have a very high correlation (in absolute values) with median house value." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Wrangling\n", "Now we want to start to clean up the data.\n", "We've already noted that the variable `total_bedrooms` contains some missing values.\n", "\n", "In previous labs, we used to delete the samples from the data set, and this might be reasonable for this data set as well.\n", "In this lab, we employ another common way to get around missing values, which is replacing the missing value with some *standard* value.\n", "This *standard* value could be anything, even a constant or a optimistic/pessimistic guess.\n", "\n", "Scikit-learn provides a class to do this job, called `SimpleImputer` from the module `sklearn.impute`.\n", "\n", "**Task**:\n", "Define a `SimpleImputer` that assigns the median value of an attribute over all (training) samples to all missing values (**Hint**: Use the option `strategy`).\n", "Your `SimpleImputer` should be named `imp`.\n", "\n", "**Caution**: As with other scaling methods, we should use the *training median* to fill out the missing values in both the *training and the test set*." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "69c35ea3c652495d7b325f11782cb84f", "grade": false, "grade_id": "cell-afedacc046a27117", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unfortunately, the `SimpleImputer` is not able to handle non-numerical data.\n", "Thus we have to split our data into the numerical attributes (i.e. everything but `ocean_proximity`) and the categorical one (`ocean_proximity`).\n", "\n", "**Task**: If you defined your `SimpleImputer` as a variable `imp`, the following code cell should execute. It splits the data into numerical and non-numerical ones, and fits the imputer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_num = train.drop('ocean_proximity',axis=1)\n", "train_cat = train['ocean_proximity']\n", "imp.fit(train_num)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**:\n", "You can look at the trained median values in each of the attributes with numerical values by calling `imp.statistics_`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "4eda60350cff967f408801388fac407a", "grade": false, "grade_id": "cell-d8e6680551993fef", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Caution**:\n", "The output of `imp.transform(train_num)` is a `numpy array`.\n", "It can be converted easily into a `pandas DataFrame`.\n", "This can be done by executing the following code snippet." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = imp.transform(train_num)\n", "df_num = pd.DataFrame(X, columns=train_num.columns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we want to look at the categorical data.\n", "\n", "Simply assigning numerical values to the categories might be possible, but not the best choice, because this would introduce:\n", "- an ordering among the categories\n", "- the same distance between each of the categories in terms of *predictive power*\n", "\n", "A better choice is the so-called *one-hot encoding* or *one-of-K scheme*.\n", "We introduced this transformation in Homework 9 as well as in the lecture on Slide 112.\n", "There, we used the pandas method `get_dummies` to do the job.\n", "This time, we want to use the class `LabelBinarizer` from `sklearn.preprocessing`.\n", "It essentially generates a `numpy array` $X$ with \n", "\n", "$$ X_{i,j} = \\begin{cases} 1 & \\text{sample $i$ belongs to category $j$}, \\\\ 0 & \\text{otherwise}. \\end{cases} $$\n", "\n", "**Task**: Use a `LabelBinarizer` to transform the categorical data `train_cat` into an $n_\\text{samples} \\times 5$ `numpy array` named `X`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "120754ed02fc271b4e805795f2af8d9d", "grade": false, "grade_id": "cell-4d82442cc17531ac", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "from sklearn.preprocessing import LabelBinarizer\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "716182fe07b70e5e77570ae315d871bf", "grade": true, "grade_id": "cell-db8d6f3b05b65d30", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert X.shape == (16512, 5)\n", "assert abs(X.mean() - 0.2) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you named your `LabelBinarizer` `cat_enc`, you can see the classes via `cat_enc.classes_`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "2b42d9b3bdb268b790ac32f018c93670", "grade": true, "grade_id": "cell-8000b724bf072ddd", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom Transformers\n", "As you can see above, there are a lot of steps to be carried out before we can actually apply a statistical learning method such as linear regression.\n", "Altough `sklearn` provides a lot of useful transformers, we have to create our own for each learning task.\n", "In the following, we want to implement such a custom transformer.\n", "A `Transformer` is a class in sklearn, derived from an `Estimator`, which implements at least the methods `fit()` and `transform()`, as well as `fit_transform()`, which combines fitting and transformation.\n", "\n", "We have already used many transformers, without exactly knowing what they are, e.g. 
`StandardScaler` or the `SimpleImputer`.\n", "In contrast to a transformer, an `Estimator` is required to implement only the method `fit()`.\n", "A `Predictor` is an `Estimator` which additionally implements a `predict()` method as well as a `score()` method.\n", "\n", "There is a nice article on [arxiv.org](https://arxiv.org/pdf/1309.0238v1.pdf) that describes the design pattern in `scikit-learn`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we can start, we have to decide whether to stay with `pandas DataFrames` or switch to `numpy arrays`.\n", "It might be tempting to stay with `DataFrames`, and there are situations where this might be beneficial, but in general, we should use `numpy arrays` (or sparse `scipy` matrices). As we already noticed, both the `SimpleImputer` and the `StandardScaler` return `numpy arrays`, even if we input a `pandas DataFrame`. Thus, it's advantageous from a computational point of view to use `numpy arrays`.\n", "\n", "In order to add our new attributes `rooms_per_household` and `bedrooms_per_room`, we have to locate the columns by index and not by name.\n", "\n", "**Task**: We start by setting the column indices of `total_rooms`, `total_bedrooms` and `households`. Execute the following cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ix_rooms = 3\n", "ix_beds = 4\n", "ix_households = 6" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "The next cell defines a custom `Estimator`.\n", "It sets up a new class derived from `BaseEstimator` and `TransformerMixin`.\n", "The latter sets up the `fit_transform()` method for us simply by calling `transform()` after `fit()`.\n", "\n", "**Task**:\n", "Try to understand the following code cell.\n", "It contains a number of items that might be new to you.\n", "Ask questions!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "\n", "# We derive our new class from BaseEstimator and TransformerMixin\n", "class AddRoomsPerHousehold(BaseEstimator, TransformerMixin):\n", "\n", "    # The constructor in Python is defined by the method __init__;\n", "    # we have to pass self as the first argument of the methods to\n", "    # be able to access the attributes of the class object.\n", "    # The argument\n", "    #     add_rooms_per_household = True\n", "    # is a keyword parameter with default value True.\n", "    def __init__(self, add_rooms_per_household = True):\n", "        self.add_rooms_per_household = add_rooms_per_household\n", "\n", "    # Now, we define the fit-method, but there is nothing to do\n", "    # here, so we only return the object itself, as you might\n", "    # have noticed before.\n", "    def fit(self, X, y = None):\n", "        return self\n", "\n", "    # Here, we define the transform method. We want to append a new\n", "    # column that gives the number of rooms per household.\n", "    def transform(self, X, y = None):\n", "\n", "        # We add the 'rooms_per_household' attribute only\n", "        # if add_rooms_per_household is True\n", "        if self.add_rooms_per_household:\n", "            new_var = X[:,ix_rooms] / X[:,ix_households]\n", "            return np.c_[X,new_var]\n", "        else:\n", "            return X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Now, copy and paste the code from above and define a second class that adds the attribute `bedrooms_per_room`. Call your class `AddBedroomsPerRoom`."
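] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch, following exactly the same pattern (only the parameter name, the column indices and the derived quantity change):\n", "\n", "    class AddBedroomsPerRoom(BaseEstimator, TransformerMixin):\n", "\n", "        def __init__(self, add_bedrooms_per_room = True):\n", "            self.add_bedrooms_per_room = add_bedrooms_per_room\n", "\n", "        def fit(self, X, y = None):\n", "            return self\n", "\n", "        def transform(self, X, y = None):\n", "            # bedrooms per room = total_bedrooms / total_rooms\n", "            if self.add_bedrooms_per_room:\n", "                new_var = X[:,ix_beds] / X[:,ix_rooms]\n", "                return np.c_[X,new_var]\n", "            else:\n", "                return X"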
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "64a2d8057d203d79b93fd59b13daf5df", "grade": false, "grade_id": "cell-2d5b98bef0b927ff", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setting up a Transformation Pipeline\n", "The next step in our scikit-learn project walkthrough is to combine every preprocessing task in a `sklearn Pipeline`.\n", "A `Pipeline` is a list of $n$ pairs, the first entry holds a name, the second a `Transformer`-object. \n", "\n", "**Task**:\n", "Below you will find such a `Pipeline` for our non-categorical attributes.\n", "If you did everthing right so far, you should be able to execute the following code cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "num_pipeline = Pipeline([('fill_nas', SimpleImputer(strategy='median')),\n", " ('add_rooms_per_household', AddRoomsPerHousehold()),\n", " ('add_bedrooms_per_room', AddBedroomsPerRoom()),\n", " ('scaling', StandardScaler())])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**:\n", "A `Pipeline` consists of a number of transformers.\n", "The last object might be an `Estimator`, a `Transformer` or a `Predictor`.\n", "This determines its final behaviour.\n", "In our case, `StandardScaler` comes last, which *defines* our `num_pipeline` as a `Transformer` in its own.\n", "Therefore, we can call the method `fit_transform()` on it.\n", "Try this out on our data set `train_num`!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "3c32b2ab2097ad05be4a5323ac3f831f", "grade": false, "grade_id": "cell-8638f1cd8b20d4d4", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "aff7679d29c8ca67cd41ca9a0a6831eb", "grade": true, "grade_id": "cell-82ada359010a0fc5", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert X.shape == (16512, 14)\n", "assert abs(X.mean()) < 1e-8\n", "assert abs(X.std() - 1) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One thing that we have so far done by hand was the splitting into numerical and categorical attributes.\n", "\n", "Fortunately, the programming principle behind `scikit-learn` is avoid typing, see [Wikipedia](https://en.wikipedia.org/wiki/Duck_typing) for an explanation.\n", "In a nutshell, this means that every class that has a method `fit()`, can be used as an `Estimator`, and every class that implements the methods `fit()`, `transform()` and `fit_transform()`, can be used as a `Transformer`.\n", "\n", "Therefore, we define another `Transformer`-class that selects the columns by label and returns a numpy array.\n", "\n", "**Task**:\n", "Understand the following definition of the class `AttributeSelector`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "\n", "class AttributeSelector(BaseEstimator, TransformerMixin): \n", " \n", " def __init__(self, attributes):\n", " self.attributes = attributes\n", " \n", " def fit(self, X, y = None):\n", " return self # This again does nothing\n", " \n", " def transform(self, X, y = None):\n", " return X.loc[:,self.attributes].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below you will find the labels of the non-categorical and categorical columns in our data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_cols = ['longitude', 'latitude', 'housing_median_age', 'total_rooms',\n", " 'total_bedrooms', 'population', 'households', 'median_income']\n", "cat_cols = ['ocean_proximity']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Copy the definition from `num_pipeline` and include the `AttributeSelector` as a new initial step.\n", "Test the method `fit_transform()` on your training data `train` (and not `train_num`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "6cc8570589419edc9b398b74166bdfb5", "grade": false, "grade_id": "cell-0f09728bbd3d7f6a", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "1ed094646a35ce9237b87d6a931b3c51", "grade": true, "grade_id": "cell-292d32e77bc34603", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert X.shape == (16512, 10)\n", "assert abs(X.mean()) < 1e-8\n", "assert abs(X.std() - 1) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we want to define another pipeline for the categorical variable in your data set.\n", "Unfortunatelly, the `LabelBinarizer` cannot be used in a `Pipeline` in the current version of `sklearn`.\n", "With the function `super(SubClass, self)`, we are able to call methods from the `BaseClass`, which in our case is the original `LabelBinarizer` from `sklearn`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class PipelineBinarizer(LabelBinarizer):\n", " def fit(self, X, y=None):\n", " super(PipelineBinarizer, self).fit(X)\n", " \n", " def transform(self, X, y=None):\n", " return super(PipelineBinarizer, self).transform(X)\n", "\n", " def fit_transform(self, X, y=None):\n", " return super(PipelineBinarizer, self).fit(X).transform(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**:\n", "Define another pipeline for the categorical variable in your data set.\n", "Start with the selection of the correct variable(s), and use the `PipelineBinarizer` from above.\n", "Test the method `fit_transform()` on your training data `train` (and not `train_cat`)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat_pipeline = Pipeline([('selection', AttributeSelector(cat_cols)),\n", " ('bin', PipelineBinarizer())])\n", "\n", "X = cat_pipeline.fit_transform(train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert X.shape == (16512, 5)\n", "assert abs(X.mean() - 0.2) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we have to unite our preprocessed numerical and categorical variables.\n", "This can be done with `FeatureUnion` from `sklearn.pipeline`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import FeatureUnion\n", "\n", "unite_features = FeatureUnion([('num_pipe', num_pipeline),\n", " ('cat_pipe', cat_pipeline)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Task**: Call the `unite_features.fit_transform()` method on your training data `train`.\n", "Check, if everything has been set up correctly and store the output as a `numpy array` $X$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "7f72d839c2406c038a6f0d5826369836", "grade": false, "grade_id": "cell-f49f88491defee2f", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "004bbac9e2e5b04d0d5b687f9b5f0ceb", "grade": true, "grade_id": "cell-708a7b589fafc15b", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert(X.shape == (16512, 15))\n", "assert abs(X.mean() - 0.06666666666666704) < 1e-8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Application of a Learning Algorithm\n", "\n", "Until now, we have preprocessed the data to prepare it for applying a learning algorithm.\n", "Altough we had to introduce a lot of new functions and classes, most of this material was already considered in previous lectures.\n", "\n", "After this lab, you should be able to define your own `Estimator`, `Transformer` or `Predictor`.\n", "Arranging these objects in a `Pipeline` pays off in larger data science projects.\n", "There are a number of advantages that are beyond of the scope of this notebook, but will hopefully be considered in an upcoming lab.\n", "You can, for example, preprocess your test data without thinking about further details by\n", "\n", " Xtest = unite_features.transform(test)\n", "\n", "**Task**: As a final task in this lab, we want to apply a simple linear regression to our training set and compute the RMSE (root mean squared error). It should be 68219.0015." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "df2c27e0090895fad3dcaaf4bce6b20a", "grade": false, "grade_id": "cell-3197eaf0d7e761bc", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "42ecdf23f107ef41a2c9d04d0396a750", "grade": true, "grade_id": "cell-2309a20e0ae53820", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(mse - 68219.00150860597) < 1e-8" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }