{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "95f31869ad26d906294a0537dd57c942", "grade": false, "grade_id": "cell-b2e4aa17c59cd417", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "# Introduction to Data Science\n", "## Lab 9: Introduction to Natural Language Processing\n", "### About the 'Sarcasm' data set\n", "This dataset contains about 1 million sarcastic comments from the Internet commentary website [Reddit](https://www.reddit.com/).\n", "The dataset was generated by scraping comments by the scientists [Mikhail Khodak, Nikunj Saunshi and Kiran Vodrahalli](https://arxiv.org/abs/1704.05579) containing the \\s (sarcasm) tag.\n", "This tag is often used by users of Reddit to indicate that their comment is in jest and not meant to be taken seriously, and is generally a reliable indicator of sarcastic comment content.\n", "\n", "The dataset is balanced, i.e., it contains equal parts of sarcastic and non-sarcastic comments, while the true ratio is about 1:100.\n", "\n", "The data can be found [here](https://nlp.cs.princeton.edu/SARC/0.0/), the notebook is based on [this source](https://www.kaggle.com/kashnitsky/a4-demo-sarcasm-detection-with-logit-solution)." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "939784d8a7ab2e256f580321746b85e9", "grade": false, "grade_id": "cell-528f458609a86830", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "### Part A: Downloading and importing the data set\n", "\n", "Before we start, we import the necessary modules." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.pipeline import Pipeline\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.metrics import accuracy_score, confusion_matrix\n", "from matplotlib import pyplot as plt" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "9c20be30dfe4addefae0f5ca98051ba3", "grade": false, "grade_id": "cell-82e50621e32b99b8", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Download the files `train-balanced.csv.bz2` and `train-balanced_pol.csv.bz2` from the webpage.\n", "You don't have to unzip the files manually, it can be done using, e.g., `pandas` `read_csv` function.\n", "The file `train-balanced.csv.bz2` is fairly large, as it contains about 1 million samples.\n", "The file `train-balanced_pol.csv.bz2` contains only a subset of the data.\n", "You should use this file (`train_balanced_pol.csv.bz2`) to set up the options for the `pd.read_csv` function correctly.\n", "\n", "Once you've sure that everything works as expected, you can switch to the other file (`train_balanced.csv.bz2`).\n", "Then, import the file `train-balanced.csv.bz2` as `df`.\n", "\n", "**Note**: The names for the colomns are as follows:\n", " \n", " ['label', 'comment', 'author', 'subreddit', 'score', 'ups', 'downs', 'date', 'created_utc', 'parent_comment']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", 
"checksum": "6bfd0f40922af6b51fca408718c50774", "grade": false, "grade_id": "cell-9f377b7d0c341cb0", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "30e7a4aac9b348fa0adfa0c4bda78565", "grade": false, "grade_id": "cell-a3cc531b9965dac0", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Use the methods `head()` and `info()` to get an overview of the data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "fd65c29e2136f3e4560c0562e7589555", "grade": true, "grade_id": "cell-7234ba624ebd7eb3", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "4ab4d2dddb54007f479c4e28524d0327", "grade": false, "grade_id": "cell-9df0dcb57f2bca52", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "You should find out that some comments are missing.\n", "\n", "**Task**: Delete them using the `dropna` method with appropriate options." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "dab74107954c9df76d11f2dfe40b61d5", "grade": false, "grade_id": "cell-98fa8d572f0a3e44", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "81d08a4be9263aa234508e01b0a0ab1e", "grade": true, "grade_id": "cell-e8e74af9232afac5", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert df.shape == (1010772, 10)\n", "assert abs(df.score.mean() - 6.88600396528594) < 1e-8" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "868ec71672fcc9841535239f969ce113", "grade": false, "grade_id": "cell-845d6a6d429fc37b", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Question**: How many sarcastic comments are now in the data set? Store your answer in the variable `ans_1`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "1952bc2b9ce39724e5d2ae98c2cbea30", "grade": false, "grade_id": "cell-7938685a47cdbb17", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "9499f83d3adb966caaaad31d86918045", "grade": true, "grade_id": "cell-ae732a65797abcb2", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert 'ans_1' in locals()\n", "assert ans_1 == 505368" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "117588afa98ad044a1b90b42bb8dbc89", "grade": false, "grade_id": "cell-9e01077a321f057d", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Altough we could use all the data provided to us in the `train_df`, we only want to use the column containing the `'comment'`s.\n", "\n", "**Task**: Extract the `'comment'` column as variable `X` and the `'label'` column as variable `y`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "f327dfc6ef8c7220abdff713f46bd29c", "grade": false, "grade_id": "cell-9db8e35795d62c79", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "0b22b314385ef0006ec9ea32f4605b51", "grade": true, "grade_id": "cell-7c38eda3aaca29bf", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(y.mean() - 0.4999821918296114) < 1e-8\n", "assert X.dtype == 'O'\n", "assert X.shape == (1010772,)\n", "assert type(X) == pd.core.series.Series" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "aaf69b97395e89b545a307f47581d43a", "grade": false, "grade_id": "cell-568f41b01a1b4b6a", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Next, we want to split the data set into a training and validation set.\n", "Use the function `train_test_split` to split the data into\n", "- the training data set `Xtrain` with labels `ytrain`\n", "- the validation data set `Xtest`with labels `ytest`\n", "\n", "Use the option `random_state = 1`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "c59b8374cbc49da370937d44f2839cec", "grade": false, "grade_id": "cell-4f2d02901fefd9d0", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "5220fe1a7ac123c7d1c93aedc5e068b2", "grade": false, "grade_id": "cell-5f4467671c0f0198", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "From now on, we only work with the training data `Xtrain` and `ytrain`, and keep the validation data set for testing after training." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "8fd4c6dd06dae7e998f8d0ccc0d14ca8", "grade": false, "grade_id": "cell-69cfc03ca4061504", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Let's explore whether the length of the comment might already indicate if it's sarcastic or not.\n", "\n", "With\n", "\n", " Xtrain.str.len()\n", "\n", "we get a `Series` object which contains the lengths of the comments.\n", "\n", "Unfortunately, plotting a histogram of the lengths is not insightful at all, even if you increase the number of bins." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Xtrain.str.len().hist()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "792599cc44c6f6da016c04199ef93739", "grade": false, "grade_id": "cell-95860d81fa662768", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "The problem is that most of the comments are rather short, only some contain more than 1000 characters.\n", "Fortunately, applying the logarithm to the lengths helps to represent the data more clearly.\n", "\n", "You can to this by the method `apply(np.log)`.\n", "In general, the method `apply(some_fun)` applies the function `some_fun` to all elements in the `Series`.\n", "\n", "**Task**: Generate one figure containing two histograms (one for the sarcastic and one for the non-sarcastic comments) of the log-lengths of the comments.\n", "Use the options `label` to name your histograms as well as `alpha = 0.5` to draw the histogramms semi-transparent.\n", "Finally call `plt.legend()` to show the legend." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "0353daf8ced5988f69053a0d14cd8935", "grade": false, "grade_id": "cell-e0661b4deca5fb64", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "921de0efd0548ab559352af15cbb91ed", "grade": false, "grade_id": "cell-f3855185e8452e5f", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "### Wordclouds\n", "\n", "Next, we want to find out which words occur most often in the sarcastic and non-sarcastic comments.\n", "We can do this using a word cloud.\n", "\n", "The following code cell does this for the sarcastic comments:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import necessary stuff\n", "from wordcloud import WordCloud, STOPWORDS\n", "\n", "# Set up the word cloud generator\n", "wordcloud = WordCloud(background_color='black',\n", " stopwords = STOPWORDS,\n", " max_words = 200,\n", " max_font_size = 100,\n", " random_state = 1,\n", " width=600,\n", " height=400)\n", "\n", "# Generate wordcloud\n", "plt.figure(figsize=(16, 12))\n", "wordcloud.generate(str(Xtrain[y==1]))\n", "plt.imshow(wordcloud);" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "5cab9ba5e6bfd771061620aa1aa2ec47", "grade": false, "grade_id": "cell-f3b5d46d37118e56", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Generate a second word cloud for the non-sarcastic comments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "f0e45faf2f5d5da80a217d1c351aae37", "grade": true, "grade_id": "cell-1dad7f7a94ea8d3d", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "6a88afd29cda84ce49d5f66476707a20", "grade": false, "grade_id": "cell-74a192c07fa470ef", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "### Aggretation functions\n", "\n", "Now, we want to investigate whether sarcastic comments are more prone to occur in particular `subreddit`'s.\n", "\n", "Here, we have to use our whole data frame `df` again.\n", "The command \n", "\n", " sub_df = df.groupby('subreddit')['label'].agg([np.size, np.sum, np.mean])\n", " \n", "returns a data frame which contains the size, the sum and the mean of the `label`'s grouped by the `subreddit`'s.\n", "\n", "Since the `'label'` columns marks a sarcastic comment with a `1`, a non-sarcastic with a `0`, mean value gives the proportion of sarcastic comments.\n", "\n", "**Task**: Use the `sort_values()` method together with `head()` to display the ten `subreddits` with the highest number of sarcastic comments. Store this data frame as `agg_df`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "da15a7895d8a9185bd4eee7d230558e6", "grade": false, "grade_id": "cell-88f5b5362f23bd13", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "sub_df = df.groupby('subreddit')['label'].agg([np.size, np.sum, np.mean])\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "7657785bc40fec98f72233b7375bad14", "grade": true, "grade_id": "cell-79624a281c1851d1", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert agg_df.shape == (10,3)\n", "assert agg_df['size'].sum() == 250442\n", "assert abs(agg_df['mean'].mean() - 0.5397175526479832) < 1e-8" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "a37bcd0e4b8bf418eaca5eef6e62a39e", "grade": false, "grade_id": "cell-74a21d33a49a753f", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Generate a data frame which contains all `subreddit`'s with more than `1000` comments (both sarcastic and non-sarcastic), and sort it by its mean values in descending order.\n", "Store this data frame as `large_df`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "857e203352208c7eeb16223283bda0ca", "grade": false, "grade_id": "cell-28772e07c9234fc6", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "fe7ae126d4c00e1b7b120606a2b9b7da", "grade": true, "grade_id": "cell-1dc7f8689e5d0375", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert large_df['sum'].mean() == 3516.7\n", "assert abs(large_df['mean'].std() - 0.04852678950867806) < 1e-8" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "a13c85e156e7f2e5161e9519405f6bd0", "grade": false, "grade_id": "cell-fb49c445e3c7d6a8", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "You should observe that there are indeed a lot of `subreddit`'s with significantly more than 50 % of sarcastic comments." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "7bf18fe1aea4e86a8993288bc02793b0", "grade": false, "grade_id": "cell-bec1164166290cbb", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Now, instead of grouping by the `subreddit`, we want to group by the `author` to find out whether there are some extraordinarily sarcastic `author`'s.\n", "\n", "**Task**: Similar to the generation of the data frame `sub_df` you should set up a data frame `author_df` which contains:\n", "- the number of comments,\n", "- the number of sarcastic comments as well as\n", "- the proportion of sarcastic comments\n", "grouped by the `author`'s." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "1062e6163f107e3a21d0e1d576b5989e", "grade": false, "grade_id": "cell-046940a19071d634", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "512def6116b762c0c927add3e2ce9d86", "grade": true, "grade_id": "cell-d9a6d0cfb302bf28", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(author_df['size'].mean() - 3.939710009354537) < 1e-8" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "4482dbb45428314794a9491c3f51dc7d", "grade": false, "grade_id": "cell-5450d05fb10bb2f0", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: \n", "Let's analyse only the authors with more than 200 comments and print both the 10 authors with highest proportion of sarcastic comments as well as the 10 authors with the lowest proportion." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "9e90f40edcf0ed0e45038c0fc7888c49", "grade": false, "grade_id": "cell-5d30c7796faef31d", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "6f5c6d8caac85c4d2d878e67694fb1ca", "grade": false, "grade_id": "cell-769ca13388c6bd16", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Here you should find out the data seems to be pre-selected to contain equal fractions of sarcastic and non-sarcastic comments." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "ea37356698b1e9ad551dba0e9177f0c9", "grade": false, "grade_id": "cell-c1b560969b44db43", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "### Training a logistic regression model." 
] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "a091a84f1f330e24fb7ee7da906a0eae", "grade": false, "grade_id": "cell-fd1dd7b7e1260e13", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "In order to train a logistic regression model, we have to convert our `string`-valued data into some numerical values.\n", "One way to accomplish this task is by using a [**term frequency–inverse document frequency** (short **tfidf**) measure](https://en.wikipedia.org/wiki/Tf%E2%80%93idf).\n", "\n", "Fortunately, this method is already part of `scikit-learn`, we can use the `TfidfVectorizer` to convert an array of strings to a sparse matrix, i.e., a matrix with a particular storage pattern which is used often for matrices containing mostly zero's.\n", "\n", "Let's test the function on behalf of the list `x`\n", "\n", " x = np.array(['This is the first document.',\n", " 'This document is the second document.',\n", " 'And this is the third one.',\n", " 'Is this the first document?'])\n", "\n", "You can set up a standard TfidfVectorizer by setting \n", "\n", " vectorizer = TfidfVectorizer()\n", "\n", "and then calling the `fit_transform()` method, i.e.\n", "\n", " s = vectorizer.fit_transform(x)\n", " \n", "**Task**: Execute the commands from above.\n", "Print the variable `s`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d4ae3a9eef27ec934f1e82143ffc9f6f", "grade": false, "grade_id": "cell-025d6cc97ff52974", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "11d2107e608cec8b30634b53bfff2b3a", "grade": false, "grade_id": "cell-8e1ef2b320508d1b", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Since `s` is a sparse matrix, you will only see the position of a value as a tuple `(i,j)` together with its value `s_{i,j}`.\n", "You can print the full array using the `toarray()` method.\n", "\n", "**Task**: Print `s` as an array!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "ae52731b9623ee10307cd2c2802457a6", "grade": false, "grade_id": "cell-11a8a7b730e1ae9e", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "ef3741cb3ec2cb2872b7b9078dbec2de", "grade": false, "grade_id": "cell-cb472229a84f62b8", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: With `print(vectorizer.get_feature_names())`, you can see `names` belonging to the columns in the array `s`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "926ce2f64751adb61f57907688b1a6c7", "grade": false, "grade_id": "cell-cdcbc10679799322", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "c39ddb770a71b05fb66b16603064e334", "grade": false, "grade_id": "cell-e72d4fc3e3451e82", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Describe by your own words: What kind of information is contained in the i-th row, j-th column of the array `s`, i.e., $s_{i,j}$." ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "cell_type": "markdown", "checksum": "85283b4cc1b6e5a5e71682facdef79ed", "grade": true, "grade_id": "cell-28b388aba07dbc25", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "aa8b00756218b337238212766814eaf2", "grade": false, "grade_id": "cell-35eb5d4f5cecac26", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Now, we want to apply the vectorizer to our training data set `(Xtrain, ytrain)` and train a logistic regression model and the transformed data.\n", "\n", "With\n", "\n", " tf_idf = TfidfVectorizer(ngram_range=(1, 2), max_features=50000, min_df=2)\n", " \n", "we set up a `TfidfVectorizer`.\n", "With\n", " \n", " logit = LogisticRegression(C=1, n_jobs=4, solver='lbfgs', random_state=1, verbose=1)\n", " \n", "we set up a Logistic regression model with $\\ell^2$-regularization (`C = 1`).\n", "\n", "Since the pre-processing is necessary for both the training and test data, we create a full model using a so-called `Pipeline`:\n", "\n", " full_model = Pipeline([('tf_idf', tf_idf), ('logit', logit)])\n", "\n", "which consists of a list of $n$ `scikit-learn` objects, whose:\n", "- first (n-1) elements have a built-in `fit_transform` method\n", "- last element has a built-in `fit` method." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "717c64c556480e917c414cdd4772fa7c", "grade": false, "grade_id": "cell-11ad7e7ed9f20fe9", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "88b187a76c956fd717f8c1a9f667c45e", "grade": false, "grade_id": "cell-9d72c47db0ecb8ea", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: With\n", " \n", " %%time\n", " full_model.fit(Xtrain,ytrain)\n", " \n", "we can train the model (this can last about a minute)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "09dbb3c1df4aa8107319aa94fb060b35", "grade": false, "grade_id": "cell-91b9ad619461b216", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "c45ac07244bbad7e0f7ff2d884a94212", "grade": false, "grade_id": "cell-a57435491b51a49e", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "Now we can use our trained model to predict the labels on the validation data `Xtest`:\n", "\n", " %%time\n", " ypred = full_model.predict(Xtest)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d6ee633191093199bc6a48695ef9cf09", "grade": false, "grade_id": "cell-6789ca3cc7fee28f", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "b19b17248b480858820b52dfdd8e7c5d", "grade": false, "grade_id": "cell-265a4cbd1c0041f8", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Determine the accuracy of the model, i.e., the percentage of correct predictions, for the validation data set.\n", "Implement a function by yourself, or use the function `accuracy_score()`.\n", "Store the value as `acc_score`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "bcae75a3240b29f73af7a3969e771f07", "grade": false, "grade_id": "cell-0159c3759eabbd8e", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "5d3872f4d60cf495e3d85e35491791de", "grade": true, "grade_id": "cell-96f33cbb66fa8d13", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert abs(acc_score - 0.7208707799582893) < 1e-8" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "f8a61f5f257c0417d45e2ec4a995bf75", "grade": false, "grade_id": "cell-b42f32e7ac0a74bb", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Task**: Print the confusion matrix for our model for the validation data set which contains the numbers of\n", "\n", " [[True positives, False positives],\n", " [False negatives, True negatives]].\n", " \n", "**Hint**: Use an appropriate function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "510843f8a5fef6e9f61b1dab79886ae6", "grade": false, "grade_id": "cell-dd000c963ce946c2", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }