A Bridges Machine Learning Tutorial:

Predicting Divorce Using a Logistic Regression Model

 

Goals and prerequisites

This hour-long tutorial provides an introduction to using Bridges OnDemand notebooks for machine learning. Since machine learning often requires large datasets, part of this tutorial involves uploading data to your Bridges pylon5 file space (also called $SCRATCH). Bridges home directories have very limited space, so your $SCRATCH directory should be used for storing data.

To complete this tutorial you must have an active Bridges account. You also need to know your PSC username and your account string. If you don't have a Bridges account, see how to apply here.

If you don't know your account string on Bridges, log into Bridges. See the Connecting to Bridges section of the Bridges User Guide for help on logging in.

Once you are connected, type

id -Gn

Your account will be displayed. It's possible to have multiple accounts.
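
For example, the output might look like the following (these group names are hypothetical; yours will differ):

id -Gn
ab1234p cd5678p

Here 'ab1234p' and 'cd5678p' would both be account strings you could use.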

 

Use sftp to add data to your Bridges pylon5 directory 

  1. Download the divorce.rar file to your local computer from https://archive.ics.uci.edu/ml/machine-learning-databases/00497/ and decompress it.
  2. In the divorce directory find 'divorce.csv'. You'll need this file for the tutorial.
  3. If you don't know your account see 'Goals and prerequisites' above.
  4. To add 'divorce.csv' to your pylon5 file space via sftp, open your terminal and type the following commands, replacing account and username with your information:
    sftp username@data.bridges.psc.edu (Enter your password when prompted.)
    cd /pylon5/account/username
    mkdir divorce
    cd divorce
    put /local/path/to/file/divorce.csv
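
Before you disconnect, you can confirm the upload with standard sftp commands, still at the sftp prompt:

    ls -l
    exit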
    
 

Start a Jupyter notebook through the OnDemand interface on Bridges

  1. Go to the following link: https://ondemand.bridges.psc.edu and log in with your Bridges username and password.
  2. Select "Jupyter Notebook Production" from the "Interactive Apps" drop-down.
  3. Fill in the form as follows:
    • Set 'Number of Nodes' to '1'.
    • Set 'Number of Hours' to '1'.
    • Type in your account. If you don't know your account see 'Goals and prerequisites' above.
    • Choose RM-shared or RM-small for Partition. You can learn about other Bridges partitions here.
    • Extra Args can be left blank.
  4. Click "Launch".
  5. When the "Connect to Jupyter" link shows up, click on it.
  6. Start a new Jupyter Notebook by clicking on 'New' and choosing 'Python 3' from the dropdown.
  7. Change the title of the new notebook by clicking on "Untitled" on the new notebook page and typing "Divorce".

Importing modules and loading the data

Now you will type Python 3 code into code cells in the Jupyter notebook. First, import the required modules.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import os
 

Click "Run" to run your cell. In a new cell, type the following code to save the path to the divorce data file in your $SCRATCH (pylon5) file space.

env_var = os.environ
#The environment variable is named SCRATCH; the $ prefix is only used in the shell.
scratch_dir = env_var.get('SCRATCH')
data = scratch_dir + '/divorce/divorce.csv'
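
If pd.read_csv later complains that the file doesn't exist, an optional sanity check like the following (not part of the original tutorial) confirms the path was built correctly:

print(data)                  #full path to 'divorce.csv'
print(os.path.exists(data))  #should print True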

Click "Run" to run your cell. In a new cell, type the following code to load 'divorce.csv'and save it as a Pandas DataFrame named 'divorced'.

divorced = pd.read_csv(data,index_col=None,sep=';')
#The 'divorce.csv' is using colons to separate cells instead of commas so sep=';' 
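
As a quick check that the load worked, the DataFrame should have 170 rows (one per marriage) and 55 columns (54 attributes plus 'Class'):

print(divorced.shape)   #should print (170, 55)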
 

Exploring the data

 

Run the following code to display the first five rows of your dataframe with the column names along the top.

divorced.head()
Out[4]:
   Atr1  Atr2  Atr3  Atr4  Atr5  Atr6  Atr7  Atr8  Atr9  Atr10  ...  Atr46  Atr47  Atr48  Atr49  Atr50  Atr51  Atr52  Atr53  Atr54  Class
0     2     2     4     1     0     0     0     0     0      0  ...      2      1      3      3      3      2      3      2      1      1
1     4     4     4     4     4     0     0     4     4      4  ...      2      2      3      4      4      4      4      2      2      1
2     2     2     2     2     1     3     2     1     1      2  ...      3      2      3      1      1      1      2      2      2      1
3     3     2     3     2     3     3     3     3     3      3  ...      2      2      3      3      3      3      2      2      2      1
4     2     2     1     1     1     1     0     0     0      0  ...      2      1      2      3      2      2      2      1      0      1

5 rows × 55 columns

 

As you can see, a Pandas DataFrame is like an Excel spreadsheet. Each row is a different sample, in this case a marriage. Each column is a different attribute of the marriages (as defined by the answers to survey questions). You can find the Attribute Information (survey questions) here. In machine learning the attributes of the marriages are called independent variables or features. The variable you are trying to predict is called the dependent variable or target variable. In this case the rightmost column, 'Class', contains the target variable. For this dataset '1' means divorced and '0' means not divorced.

 

Running the following code will show you the number of records for each class in the "Class" column.

divorced['Class'].value_counts()
0    86
1    84
Name: Class, dtype: int64
 

The number of records for 'divorced' and 'not divorced' is about the same, so you can use accuracy to evaluate your model. If you'd like to learn about evaluating models where the number of records in each class is not balanced, this is a good overview.
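
If you prefer proportions to raw counts, value_counts can normalize them for you; here the split is roughly 51% to 49%:

divorced['Class'].value_counts(normalize=True)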

Strong correlations between variables make the coefficients in a logistic regression model unstable, which can cause overfitting. Overfitting occurs when a model fits the training data so closely that it fails to generalize to new data.

The following code will look at correlations between variables by calculating Pearson Correlation Coefficients.

#Create a correlation matrix
corrmatrix = divorced.corr()
corrmatrix.head()
Out[6]:
          Atr1      Atr2      Atr3      Atr4      Atr5      Atr6      Atr7      Atr8      Atr9     Atr10  ...     Atr46     Atr47     Atr48     Atr49     Atr50     Atr51     Atr52     Atr53     Atr54     Class
Atr1 1.000000 0.819066 0.832508 0.825066 0.881272 0.287140 0.427989 0.802357 0.845916 0.790183 ... 0.400296 0.582693 0.633564 0.674843 0.725443 0.684143 0.575463 0.611422 0.768522 0.861324
Atr2 0.819066 1.000000 0.805876 0.791313 0.819360 0.102843 0.417616 0.864284 0.827711 0.782286 ... 0.389519 0.616884 0.643762 0.659841 0.680538 0.636558 0.536294 0.610726 0.728897 0.820774
Atr3 0.832508 0.805876 1.000000 0.806709 0.800774 0.263032 0.464071 0.757264 0.816653 0.753017 ... 0.308149 0.544863 0.638256 0.647961 0.663995 0.600603 0.491803 0.598749 0.673012 0.806709
Atr4 0.825066 0.791313 0.806709 1.000000 0.818472 0.185963 0.474806 0.798347 0.829053 0.873636 ... 0.340240 0.552301 0.630205 0.699069 0.685263 0.624015 0.534264 0.588390 0.698264 0.819583
Atr5 0.881272 0.819360 0.800774 0.818472 1.000000 0.297834 0.381378 0.877584 0.916327 0.823659 ... 0.470758 0.719899 0.659220 0.762257 0.795960 0.742664 0.663855 0.719493 0.836799 0.893180

5 rows × 55 columns

 

Plotting correlation coefficients as a heatmap will make it easier to find correlated variables.

 

Pearson Correlation matrix plotted as a heatmap

dims = (22, 12)
fig, ax = plt.subplots(figsize=dims)
sns.heatmap(corrmatrix, vmax=.8, square=True)
plt.show()
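
If you'd also like to keep a copy of the figure, you can save it to a file by adding a line like the following just before plt.show() (the filename is just an example):

fig.savefig('divorce_corr_heatmap.png', dpi=150, bbox_inches='tight')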
 
 

There are pretty high correlations across the board, but it's possible that only a few columns are needed to make predictions. Try selecting only the columns relating to questions 6, 7, 43, 45, and 46 for your model, as these are the least correlated. In the following code the colon selects all rows, and the bracketed list of numbers selects the columns by zero-based position rather than by name (so position 5 is Atr6, position 6 is Atr7, and so on).

divorced2 = pd.DataFrame(divorced.iloc[:, [5, 6, 42, 44, 45]])
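
You can confirm that the intended columns were selected:

print(list(divorced2.columns))   #['Atr6', 'Atr7', 'Atr43', 'Atr45', 'Atr46']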
 

Training the model

 

Logistic regression is a binary classifier because it places samples into one of two categories, such as True/False or Positive/Negative; in this case, divorced or not divorced. However, logistic regression can also be used as one step in solving multinomial classification problems, and the Scikit-Learn Logistic Regression module used here can implement multinomial logistic regression. If you'd like to learn more about logistic regression, you can find a good overview here.

Before fitting a model it is important to set aside some of the data (usually about 20-30%) as a test set. The test set is used to measure model accuracy and cannot be used to train the model. The Scikit-Learn train_test_split function is very useful for splitting datasets into training and test sets. First import the required function, then declare your target variable (y) and the predictor variables (X) before splitting both into training and test sets.

from sklearn.model_selection import train_test_split
 
#Declare predictor variables.    
X = divorced2
#Declare target variable.
y = divorced['Class']

#Split the data into train and test sets.
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.20)
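
Note that train_test_split shuffles the rows randomly, so the accuracy numbers you get later will likely differ slightly from the ones shown in this tutorial. If you want a reproducible split, you can pass a seed (the value 42 here is arbitrary):

X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.20, random_state=42)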
 

Next import and declare a logistic regression classifier and fit the model to the training data. Fitting the model means finding the coefficients and intercept for the logistic regression equation that let it correctly calculate the value of the target variable for all or most of the samples when the values of the independent variables are plugged in.

The regularization parameter (C), which you see below, is used to prevent the model from overfitting. Regularization helps the model generalize better to data not used for training. In this module a lower value of C means stronger regularization, so C can be lowered (for example, by dividing it by 10) if test set accuracy is much lower than training set accuracy. The default for this Scikit-Learn Logistic Regression module is L2 regularization.

from sklearn.linear_model import LogisticRegression

# Declare a logistic regression classifier.
lr = LogisticRegression(C=1e5)

#Fit the model to the training data.
fit = lr.fit(X_train, Y_train)
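
If the test accuracy computed later were much lower than the training accuracy, one simple way to apply the divide-by-10 rule described above is a small search loop. This is a sketch, not a step in the original tutorial:

#Try successively stronger regularization (smaller C) and compare test accuracy.
for C in [1e5, 1e4, 1e3, 1e2, 1e1, 1e0]:
    model = LogisticRegression(C=C).fit(X_train, Y_train)
    print(C, model.score(X_test, Y_test))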
 

The following code calculates and prints the model coefficients. The coefficients can be useful for assessing the relative importance of each of the features (especially if all of the features are on the same scale).

print('Intercept')
print(fit.intercept_)
print('Coefficients')
print(fit.coef_)
Intercept
[-5.37940024]
Coefficients
[[ 0.74640783 10.84130871  0.90409127  0.42400431  0.04745981]]
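
To see how these numbers are used, recall that logistic regression computes p = 1 / (1 + exp(-(intercept + sum of coefficient * feature))) and predicts class '1' when p > 0.5. You can reproduce the model's probability for a single sample by hand:

#Compute the probability of divorce for the first training sample manually
#and compare it with what the fitted model reports; the two should match.
z = fit.intercept_[0] + np.dot(fit.coef_[0], X_train.iloc[0])
p = 1.0 / (1.0 + np.exp(-z))
print(p)
print(lr.predict_proba(X_train.iloc[[0]])[0, 1])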
 

The coefficient output is in the same order as the columns in the training dataset. Create a DataFrame to make it easier to match the coefficients with the features.

weights = pd.DataFrame({'Attributes': list(X_train.columns), 'Coefficients': fit.coef_[0]})
weights
  Attributes  Coefficients
0       Atr6      0.746408
1       Atr7     10.841309
2      Atr43      0.904091
3      Atr45      0.424004
4      Atr46      0.047460
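
To rank the attributes by influence, you can sort the table by absolute coefficient size (a fair comparison here because these attributes all use the same 0-4 answer scale):

weights.reindex(weights['Coefficients'].abs().sort_values(ascending=False).index)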

The most important attribute was number 7: "We are like two strangers who share the same environment at home rather than family."

Makes sense.

Making predictions

Now use your trained logistic regression model to predict Divorce using both the test and train datasets, and calculate the accuracy of predictions for both.

#Predict Divorce for both the test and training sets.
pred_y_test = lr.predict(X_test)
pred_y_train = lr.predict(X_train)
print('\n Accuracy')
print('Test',lr.score(X_test, Y_test))
print('Train',lr.score(X_train, Y_train))
 Accuracy
Test 0.9117647058823529
Train 0.9044117647058824
 

There are no signs of overfitting, as the test set accuracy is about the same as the training set accuracy. An accuracy around 90% is not bad, but it might be improved by using PCA (Principal Component Analysis) to deal with the correlated variables rather than throwing them out.
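
As a sketch of that idea (untested here, so treat it as a starting point rather than a guaranteed improvement), you could standardize all 54 attributes, project them onto a few principal components with Scikit-Learn's PCA, and fit the same logistic regression on the components:

from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

#Use all 54 attributes this time; scale them, reduce them to 5 principal
#components (an arbitrary choice worth tuning), then fit the classifier.
X_all = divorced.drop(columns='Class')
Xa_train, Xa_test, ya_train, ya_test = train_test_split(
    X_all, divorced['Class'], test_size=0.20, random_state=42)

pca_lr = make_pipeline(StandardScaler(), PCA(n_components=5),
                       LogisticRegression(C=1e5))
pca_lr.fit(Xa_train, ya_train)
print('Test', pca_lr.score(Xa_test, ya_test))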