HW04: Practice with feature engineering, splitting data, and fitting and regularizing linear models

[Please put your name and NetID here.]

Hello Students:

  • Start by downloading HW04.ipynb from this folder. Then develop it into your solution.

  • Write code where you see "... your code here ..." below. (You are welcome to use more than one cell.)

  • If you have questions, please ask them in class or office hours. Our TA and I are very happy to help with the programming (provided you start early enough, and provided we are not helping so much that we undermine your learning).

  • When you are done, run these Notebook commands:

    • Shift-L (once, so that line numbers are visible)
    • Kernel > Restart and Run All (run all cells from scratch)
    • Esc S (save)
    • File > Download as > HTML
  • Turn in HW04.ipynb and HW04.html to Canvas's HW04 assignment

    As a check, download your files from Canvas to a new 'junk' folder. Try 'Kernel > Restart and Run All' on the '.ipynb' file to make sure it works. Glance through the '.html' file.

  • Turn in partial solutions to Canvas before the deadline. For example, turn in part 1, then parts 1 and 2, then your whole solution. That way we can award partial credit even if you miss the deadline. We will grade your last submission made before the deadline.

In [1]:
# ... your code here ... (import statements)
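
A hedged sketch of imports that the later parts are likely to need (not prescriptive; import only what you actually use):

import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.metrics import mean_squared_error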

1. Feature engineering (one-hot encoding and data imputation)

(Note: This paragraph is not instructions; it provides context for this exercise. We use the same Titanic data we used in HW02:

  • There we used df.dropna() to drop any observations with missing values; here we use data imputation instead.
  • There we manually did one-hot encoding of the categorical Sex column by making a Female column; here we do the same one-hot encoding with the help of pandas's df.join(pd.get_dummies()).
  • There we used a decision tree; here we use $k$-NN.

We evaluate how these strategies can improve model performance by allowing us to use columns with categorical or missing data.)

1a. Read the data from http://www.stat.wisc.edu/~jgillett/451/data/kaggle_titanic_train.csv.

  • Retain only these columns: Survived, Pclass, Sex, Age, SibSp, Parch.
  • Display the first 7 rows.

These data come from https://www.kaggle.com/competitions/titanic/data, where they are described (click the small down-arrow to see the "Data Dictionary").

  • Read that "Data Dictionary" paragraph (with your eyes, not Python) so you understand what each column represents.
In [2]:
# ... your code here ...
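
One possible sketch for this part (assuming pandas is imported as pd; the variable name df is a placeholder, not a requirement):

df = pd.read_csv('http://www.stat.wisc.edu/~jgillett/451/data/kaggle_titanic_train.csv')
df = df[['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch']]  # retain only these columns
df.head(7)  # display the first 7 rows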

1b. Try to train a $k$-NN model to predict $y=$ 'Survived' from $X=$ these features: 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch'.

  • Use $k = 3$ and the (default) Euclidean metric.
  • Notice at the bottom of the error message that it fails with the error "ValueError: could not convert string to float: 'male'".
  • Comment out your .fit() line so the cell can run without error.
In [3]:
# ... your code here ...
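
A minimal sketch of the failing attempt (assuming df from part 1a; X, y, and knn are placeholder names). The .fit() call is left commented out so the cell runs:

X = df[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch']]
y = df['Survived']
knn = KNeighborsClassifier(n_neighbors=3)  # default metric is Euclidean (Minkowski with p=2)
# knn.fit(X, y)  # ValueError: could not convert string to float: 'male'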

1c. Try to train again, this time without the 'Sex' feature.

  • Notice that it fails because "Input contains NaN".
  • Comment out your .fit() line so the cell can run without error.
  • Run X.isna().any() (where X is the name of your DataFrame of features) to see that the 'Age' feature has missing values. (You can see the first missing value in the sixth row that you displayed above.)
In [4]:
# ... your code here ...
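
A similar sketch for this part (again assuming df, y, and knn from above; the exact error wording can vary slightly across scikit-learn versions):

X = df[['Pclass', 'Age', 'SibSp', 'Parch']]  # drop 'Sex'
# knn.fit(X, y)  # ValueError: Input contains NaN
X.isna().any()  # only 'Age' should show True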

1d. Train without the 'Sex' and 'Age' features.

  • Report accuracy on the training data with a line of the form Accuracy on training data is 0.500 (0.500 may not be correct).
In [5]:
# ... your code here ...
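
A possible sketch (assuming df, y, and knn from above; the printed number is whatever .score() returns, not the 0.500 placeholder):

X = df[['Pclass', 'SibSp', 'Parch']]  # drop 'Sex' and 'Age'
knn.fit(X, y)
print(f'Accuracy on training data is {knn.score(X, y):.3f}')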

1e. Use one-hot encoding

to include a binary 'male' feature made from the 'Sex' feature. (Or include a binary 'female' feature, according to your preference. Using both is unnecessary since either is the logical negation of the other.) That is, train on these features: 'Pclass', 'SibSp', 'Parch', 'male'.

  • Use pandas's df.join(pd.get_dummies()).
  • Report training accuracy as before.
In [6]:
# ... your code here ...
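
One way to do the one-hot encoding with df.join(pd.get_dummies()), shown as a hedged sketch (assuming df, y, and knn from above; recent pandas versions return boolean dummy columns, which scikit-learn accepts):

df = df.join(pd.get_dummies(df['Sex'])['male'])  # add a binary 'male' column
X = df[['Pclass', 'SibSp', 'Parch', 'male']]
knn.fit(X, y)
print(f'Accuracy on training data is {knn.score(X, y):.3f}')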

1f. Use data imputation

to include an 'age' feature made from 'Age' but replacing each missing value with the median of the non-missing ages. That is, train on these features: 'Pclass', 'SibSp', 'Parch', 'male', 'age'.

  • Report training accuracy as before.
In [7]:
# ... your code here ...
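
A hedged sketch of median imputation with fillna() (assuming df, y, and knn from above):

df['age'] = df['Age'].fillna(df['Age'].median())  # replace each missing age with the median age
X = df[['Pclass', 'SibSp', 'Parch', 'male', 'age']]
knn.fit(X, y)
print(f'Accuracy on training data is {knn.score(X, y):.3f}')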

2. Explore model fit, overfit, and regularization in the context of multiple linear regression

2a. Prepare the data:

  • Read http://www.stat.wisc.edu/~jgillett/451/data/mtcars.csv into a DataFrame.
  • Set a variable X to the subset consisting of all columns except mpg.
  • Set a variable y to the mpg column.
  • Use train_test_split() to split X and y into X_train, X_test, y_train, and y_test.
    • Reserve half the data for training and half for testing.
    • Use random_state=0 to get reproducible results.
In [8]:
# ... your code here ...
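
A possible sketch (assuming the imports above; if the file turns out to include a car-name column, you may also need to drop it or read with index_col=0):

cars = pd.read_csv('http://www.stat.wisc.edu/~jgillett/451/data/mtcars.csv')
X = cars.drop(columns='mpg')  # all columns except mpg
y = cars['mpg']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)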

2b. Train three models on the training data and evaluate each on the test data:

  • LinearRegression()
  • Lasso()
  • Ridge()

The evaluation consists of displaying MSE$_\text{train}$, MSE$_\text{test}$, and the coefficients $\mathbf{w}$ for each model.

In [9]:
# ... your code here ...
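
A hedged sketch of the evaluation loop (assuming the split from part 2a and the default hyperparameters for Lasso() and Ridge(), i.e. alpha=1.0):

for model in [LinearRegression(), Lasso(), Ridge()]:
    model.fit(X_train, y_train)
    mse_train = mean_squared_error(y_train, model.predict(X_train))
    mse_test = mean_squared_error(y_test, model.predict(X_test))
    print(type(model).__name__)
    print(f'  MSE_train = {mse_train:.3f}, MSE_test = {mse_test:.3f}')
    print(f'  w = {model.coef_}')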

2c. Answer a few questions about the models:

  • Which one best fits the training data?
  • Which one best fits the test data?
  • Which one does feature selection by setting most coefficients to zero?

... your answers here in a markdown cell ...