Decision Trees in Python

Data Cleaning

The dataset being used is the classic Titanic dataset from the seaborn library. The models will attempt to predict whether or not each passenger survived.

The initial uncleaned dataset can be viewed and downloaded below.

The first step is removing duplicate columns.

  • survived is the same as alive
  • embarked is the same as embark_town
  • sex is the same as who
  • pclass is the same as class

So, alive, embark_town, who, and class were removed.
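
A quick way to confirm the redundancy (a minimal sketch, separate from the cleaning script below) is to cross-tabulate each pair; every value of one column should map to exactly one value of the other.

# Sanity check (sketch): cross-tabulate the redundant column pairs
import pandas as pd
import seaborn as sns

df = sns.load_dataset('titanic')

# Each table should have a single nonzero cell per row,
# showing a one-to-one mapping between the two columns
print(pd.crosstab(df['survived'], df['alive']))
print(pd.crosstab(df['embarked'], df['embark_town']))
print(pd.crosstab(df['pclass'], df['class']))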

The second step is dealing with missing values for the deck column. Approximately 77% of the data for this column was missing, so the deck variable was removed altogether.
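
The 77% figure can be reproduced with a quick check of the missing-value share per column (a sketch, separate from the cleaning script below):

# Share of missing values per column (sketch)
import seaborn as sns

df = sns.load_dataset('titanic')
print(df.isnull().mean().sort_values(ascending=False))
# 'deck' sits around 0.77, which motivates dropping the column entirely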

The third step is dealing with missing values for the embarked column. There are only two missing values. Since this is a categorical column, the missing values were replaced with the mode, which was ‘S’.
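
Rather than hard-coding ‘S’, the mode can also be looked up directly (a small sketch, separate from the cleaning script below):

# Fill the two missing 'embarked' values with the most frequent port (sketch)
import seaborn as sns

df = sns.load_dataset('titanic')
most_common_port = df['embarked'].mode()[0]   # 'S' (Southampton)
df['embarked'] = df['embarked'].fillna(most_common_port)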

The fourth step is dealing with missing values for the age column. Since age may have a relationship with other columns, the missing values were imputed with the mean after grouping by pclass and sex.
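
The rationale is that average age varies by class and sex in this dataset, so a group-wise mean is a better fill value than a single overall mean or median. A quick look, plus the imputation itself (a sketch, separate from the cleaning script below):

# Mean age by passenger class and sex (sketch) -- motivates group-wise imputation
import seaborn as sns

df = sns.load_dataset('titanic')
print(df.groupby(['pclass', 'sex'])['age'].mean())

# Fill each missing age with the mean of its (pclass, sex) group
df['age'] = df.groupby(['pclass', 'sex'])['age'].transform(lambda x: x.fillna(x.mean()))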

The cleaned dataset can be viewed and downloaded below.

The code for the data cleaning can be viewed and downloaded below.

# -*- coding: utf-8 -*-
"""
Created on Mon May 15 16:37:51 2023

@author: casey
"""

## LOAD LIBRARIES
# Set seed for reproducibility
import random
random.seed(53)
import pandas as pd
import seaborn as sns

# ------------------------------------------------------------------------------------------------ #
## LOAD DATASET
df = sns.load_dataset('titanic')

# ------------------------------------------------------------------------------------------------ #
## DISPLAY DETAILS OF DATASET

# Number of rows and columns
df.shape

# General details (includes missing values)
df.info()

# Visual look at dataset
df.head(10)

# ------------------------------------------------------------------------------------------------ #
## REMOVE DUPLICATE COLUMNS
# 'survived' is same as 'alive', 
# 'embarked' is abbreviation of 'embark_town', 
# 'sex' is same as 'who' 
# 'pclass' is same as 'class'

df.drop(columns=['alive', 'embark_town', 'who', 'class'], inplace=True)

# ------------------------------------------------------------------------------------------------ #
## DEAL WITH MISSING VALUES
df.isnull().sum()

# 'deck' has 77% of values missing so that column will just be removed
df.drop(columns=['deck'], inplace=True)

# 'embarked' has 2 missing values, fill them with the mode ('S')
df['embarked'] = df['embarked'].fillna('S')

# age may have a relationship with other columns, so impute group-wise
# instead of using a single overall value (e.g. df['age'].fillna(df['age'].median()))
df['age'] = df.groupby(['pclass', 'sex'])['age'].transform(lambda x: x.fillna(x.mean()))

# ------------------------------------------------------------------------------------------------ #
## WRITE CLEANED DATASET TO .CSV

df.to_csv('C:/Users/casey/OneDrive/Documents/Machine_Learning/Supervised_Learning/Data/Clean_Data_Titanic.csv',
          index = False)

The final cleaned dataset contains the following columns.

  • survived: whether the passenger survived or not (1=survived, 0=not survived)
  • pclass: the class the passenger stayed in (1, 2, or 3)
  • sex: the sex of the passenger
  • age: the age of the passenger
  • sibsp: number of siblings/spouses aboard for each passenger
  • parch: number of parents/children aboard for each passenger
  • fare: the fare for each passenger
  • embarked: port each passenger embarked from
  • adult_male: whether the passenger was an adult male or not
  • alone: whether the passenger was traveling alone or not

Modeling Prep

  • Numeric variables only
    • Use one hot encoding for categorical variables
  • Split data into train and test sets

Creating decision tree models in Python requires two key preparation steps. First, all variables need to be numeric, so any categorical variables must be converted to numeric variables using one hot encoding. In this case, pclass, sex, embarked, alone, and adult_male were all converted to numeric indicator variables. Second, since decision trees are a supervised machine learning model, the prepped data must be split into training and testing sets. The training set is used to train the model, and the testing set is used to measure its accuracy. In the following example, the training set is created by randomly selecting 80% of the data and the testing set by randomly selecting the remaining 20%. This 80/20 split is not the only option, just a popular one. The training and testing sets must be kept disjoint (separate) throughout the modeling process; failing to do so will most likely result in overfitting and poor performance on real data outside the training and testing sets.
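
As a small illustration of what one hot encoding does (a toy sketch; the full script below encodes all five categorical columns at once), a single categorical column becomes one indicator column per category:

# One-hot encoding illustration (sketch)
import pandas as pd

toy = pd.DataFrame({'embarked': ['S', 'C', 'Q', 'S']})
# Produces embarked_C, embarked_Q, embarked_S indicator columns,
# with exactly one indicator set per row
print(pd.get_dummies(toy, columns=['embarked']))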

The code for the modeling prep as well as the modeling and model evaluation can be viewed and downloaded below.

# -*- coding: utf-8 -*-
"""
Created on Wed Mar 15 10:16:55 2023

@author: casey
"""

## LOAD LIBRARIES
# Set seed for reproducibility
import random
random.seed(53)
import pandas as pd

# Import all we need from sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.tree import DecisionTreeClassifier

# Import visualization
import scikitplot as skplt
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import tree

# ------------------------------------------------------------------------------------------------ #
## LOAD DATA
dt_df = pd.read_csv('C:/Users/casey/OneDrive/Documents/Machine_Learning/Supervised_Learning/Data/Clean_Data_Titanic.csv')

# ------------------------------------------------------------------------------------------------ #
## CREATE DUMMY VARIABLES FOR CATEGORICAL VARIABLES
dt_onehot = dt_df.copy()
dt_onehot = pd.get_dummies(dt_onehot, columns = ['pclass', 'sex', 'embarked', 'alone', 'adult_male'])

# ------------------------------------------------------------------------------------------------ #
## CREATE TRAIN AND TEST SETS

# X will contain all variables except the label column ('survived')
X = dt_onehot.drop(columns=['survived'])
# y will contain the labels as a 1-D series
y = dt_onehot['survived']

# split the data randomly into 80% train and 20% test
# X_train contains the predictor variables for the training set
# X_test contains the predictor variables for the testing set
# y_train contains the labels for the training set
# y_test contains the labels for the testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=53)

# ------------------------------------------------------------------------------------------------ #
## CREATE FULL TREE (Depth 5)
# Look at below documentation for parameters
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
DT_Classifier = DecisionTreeClassifier(criterion='entropy', max_depth=5)
DT_Classifier.fit(X_train, y_train)

## EVALUATE TREE
y_pred = DT_Classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

## GET FEATURE IMPORTANCE
feat_dict= {}
for col, val in sorted(zip(X_train.columns, DT_Classifier.feature_importances_),key=lambda x:x[1],reverse=True):
  feat_dict[col]=val
  
feat_df = pd.DataFrame({'Feature':feat_dict.keys(),'Importance':feat_dict.values()})

## PLOT FEATURE IMPORTANCE
values = feat_df.Importance    
idx = feat_df.Feature
plt.figure(figsize=(10,8))
clrs = ['green' if (x < max(values)) else 'red' for x in values ]
sns.barplot(y=idx,x=values,palette=clrs).set(title='Important Features to Predict Titanic Passenger Survival')
plt.show()

## VISUALIZE TREE
fig = plt.figure(figsize=(25,20))
_ = tree.plot_tree(DT_Classifier, 
                   feature_names=X.columns,  
                   class_names=['0','1'],
                   filled=True)

fig.savefig('C:/Users/casey/OneDrive/Documents/Machine_Learning/Supervised_Learning/Decision_Trees/Visualizations/Titanic_Tree_Full.pdf')

# ------------------------------------------------------------------------------------------------ #
## PLOT CONFUSION MATRIX

fig = plt.figure(figsize=(15,6))

ax1 = fig.add_subplot(121)
skplt.metrics.plot_confusion_matrix(y_test, y_pred,
                                    title="Confusion Matrix for Full Decision Tree (Depth 5)",
                                    cmap="Oranges",
                                    ax=ax1)

# ------------------------------------------------------------------------------------------------ #
## CREATE REDUCED TREE (Depth 5, Only important features)

# only keep important features in train and test sets
X_train = X_train[['adult_male_False', 'fare', 'pclass_3', 'age', 'pclass_2', 'parch', 'embarked_C']]
X_test = X_test[['adult_male_False', 'fare', 'pclass_3', 'age', 'pclass_2', 'parch', 'embarked_C']]

# Look at below documentation for parameters
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
DT_Classifier = DecisionTreeClassifier(criterion='entropy', max_depth=5)
DT_Classifier.fit(X_train, y_train)

## EVALUATE TREE
y_pred = DT_Classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

## VISUALIZE TREE
fig = plt.figure(figsize=(25,20))
_ = tree.plot_tree(DT_Classifier, 
                   feature_names=X_train.columns,
                   class_names=['0','1'],
                   filled=True)

fig.savefig('C:/Users/casey/OneDrive/Documents/Machine_Learning/Supervised_Learning/Decision_Trees/Visualizations/Titanic_Tree_Reduced_Important.pdf')

# ------------------------------------------------------------------------------------------------ #
## PLOT CONFUSION MATRIX

fig = plt.figure(figsize=(15,6))

ax1 = fig.add_subplot(121)
skplt.metrics.plot_confusion_matrix(y_test, y_pred,
                                    title="Confusion Matrix for Reduced Decision Tree (Important Features)",
                                    cmap="Oranges",
                                    ax=ax1)

# ------------------------------------------------------------------------------------------------ #
## CREATE REDUCED TREE (Depth 3, only important features)
# Look at below documentation for parameters
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
DT_Classifier = DecisionTreeClassifier(criterion='entropy', max_depth=3)
DT_Classifier.fit(X_train, y_train)

## EVALUATE TREE
y_pred = DT_Classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

## VISUALIZE TREE
fig = plt.figure(figsize=(25,20))
_ = tree.plot_tree(DT_Classifier, 
                   feature_names=X_train.columns,
                   class_names=['0','1'],
                   filled=True)

fig.savefig('C:/Users/casey/OneDrive/Documents/Machine_Learning/Supervised_Learning/Decision_Trees/Visualizations/Titanic_Tree_Reduced_Depth.pdf')

# ------------------------------------------------------------------------------------------------ #
## PLOT CONFUSION MATRIX

fig = plt.figure(figsize=(15,6))

ax1 = fig.add_subplot(121)
skplt.metrics.plot_confusion_matrix(y_test, y_pred,
                                    title="Confusion Matrix for Reduced Decision Tree (Depth 3)",
                                    cmap="Oranges",
                                    ax=ax1)

Model Evaluation Key Ideas

  • Simpler is better
    • Begin by fitting a full tree
    • Then reduce the depth and/or number of variables until accuracy is significantly impacted
  • Leaf nodes with 1 sample indicate overfitting
    • Reduce the depth of the tree until there are no leaf nodes containing only 1 sample
  • Decision Trees can be used to determine variable importance
  • Attempt to classify Titanic passengers as survived or not with high accuracy

Modeling (Full Tree)

To begin, a decision tree model was created using all of the variables and a depth of 5. The resulting tree can be viewed below.

The confusion matrix and evaluation metrics can be viewed below.

  • Accuracy: 0.83
  • Precision (0): 0.81
  • Precision (1): 0.85
  • Recall (0): 0.90
  • Recall (1): 0.73
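
As a reminder of how these numbers relate to the confusion matrix (a generic sketch with made-up labels, not the actual predictions):

# How accuracy, precision, and recall come from the confusion matrix (sketch)
from sklearn.metrics import confusion_matrix

y_true_example = [0, 0, 1, 1, 1, 0]   # made-up labels for illustration
y_pred_example = [0, 1, 1, 1, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true_example, y_pred_example).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision_1 = tp / (tp + fp)   # precision for class 1 (survived)
recall_1    = tp / (tp + fn)   # recall for class 1
precision_0 = tn / (tn + fn)   # precision for class 0 (did not survive)
recall_0    = tn / (tn + fp)   # recall for class 0
print(accuracy, precision_0, precision_1, recall_0, recall_1)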

Feature Importance

Using the full decision tree model, the feature importances can be computed and plotted.

Looking at the plot, only 7 variables carry meaningful importance. This means the unimportant variables should be removable without significantly affecting the accuracy of the model. A reduced model using only the important features will be created next.
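
One way to pick those variables out programmatically, instead of typing the column names by hand as in the script above, is to keep only the features with nonzero importance (a sketch; DT_Classifier, X_train, and X_test are the objects from the full-tree fit above, and a small positive cutoff could be used instead of zero):

# Select the features the full tree actually used (sketch)
import pandas as pd

importances = pd.Series(DT_Classifier.feature_importances_, index=X_train.columns)
important_cols = importances[importances > 0].sort_values(ascending=False).index.tolist()

X_train_reduced = X_train[important_cols]
X_test_reduced = X_test[important_cols]
print(important_cols)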

Modeling (Reduced Tree #1: Reducing Features)

Next, a reduced model using only the important features was created. The depth of the tree remains 5. The resulting tree can be viewed below.

The confusion matrix and evaluation metrics can be viewed below.

  • Accuracy: 0.82
  • Precision (0): 0.82
  • Precision (1): 0.83
  • Recall (0): 0.88
  • Recall (1): 0.74

Reducing the number of variables made the model simpler without significantly impacting its accuracy. However, some leaf nodes in the tree contain only 1 or 2 samples, which possibly indicates overfitting. To address this, another model will be created with a reduced tree depth.
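
The tiny leaves can be verified directly from the fitted estimator (a sketch, assuming DT_Classifier holds the reduced-features depth-5 tree from the script above):

# Count how many leaves hold only 1 or 2 training samples (sketch)
t = DT_Classifier.tree_
is_leaf = t.children_left == -1            # leaf nodes have no children
leaf_sizes = t.n_node_samples[is_leaf]     # training samples per leaf
print('smallest leaf:', leaf_sizes.min())
print('leaves with <= 2 samples:', (leaf_sizes <= 2).sum())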

Modeling (Reduced Tree #2: Reducing Depth)

Finally, a tree using only the important features and a depth of 3 was created. The resulting tree can be viewed below.

The confusion matrix and evaluation metrics can be viewed below.

  • Accuracy: 0.84
  • Precision (0): 0.82
  • Precision (1): 0.87
  • Recall (0): 0.91
  • Recall (1): 0.74

Reducing the depth of the tree to 3 resulted in improved accuracy while removing the overfitting issue seen in the previous model.