Neural Network (Regression)

Predicting Medical Insurance Costs

Data Source: Medical Insurance

Project Goal:

We would like to predict the individual medical costs (charges) given the rest of the columns/features. Since charges represent continuous values (in dollars), we’re performing a regression task.

Import Libraries

Import Python libraries and load the dataset

In [1]:
import pandas as pd
import numpy as np

from sklearn.model_selection import train_test_split

# from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

from scikeras.wrappers import KerasClassifier, KerasRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, make_scorer, r2_score

import matplotlib.pyplot as plt
In [2]:
df = pd.read_csv('insurance.csv') #load the dataset
print(df.shape)
df.head(3)
(1338, 7)
Out[2]:
age sex bmi children smoker region charges
0 19 1 27.90 0 1 southwest 16884.9240
1 18 0 33.77 1 0 southeast 1725.5523
2 28 0 33.00 3 0 southeast 4449.4620

Data Cleaning

In [3]:
# inspect categorical features
df.region.unique()
Out[3]:
array(['southwest', 'southeast', '0rthwest', '0rtheast'], dtype=object)
In [4]:
# clean categorical features
df.region = df.region.replace('0', 'no', regex=True)
df.region.unique()
Out[4]:
array(['southwest', 'southeast', 'northwest', 'northeast'], dtype=object)
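Note: replace('0', 'no', regex=True) rewrites every '0' character in the column. The only bad values here are '0rthwest' and '0rtheast', so it works, but an explicit mapping (a sketch of a more targeted alternative) avoids accidental matches elsewhere:

# map only the known bad values instead of a character-level replace
df.region = df.region.replace({'0rthwest': 'northwest', '0rtheast': 'northeast'})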

Define X and y

In [5]:
X = df.iloc[:,0:6]
y = df.iloc[:,-1]

One-Hot Encoding For Categorical Variables

In [6]:
X = pd.get_dummies(X) 
X.head(2)
Out[6]:
age sex bmi children smoker region_northeast region_northwest region_southeast region_southwest
0 19 1 27.90 0 1 0 0 0 1
1 18 0 33.77 1 0 0 0 1 0
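Side note: for linear models one often passes drop_first=True to pd.get_dummies to drop one dummy per category and avoid perfect collinearity; a neural network handles the full encoding above just fine. A hypothetical variant:

# drop one dummy column per categorical feature (k-1 encoding)
X_alt = pd.get_dummies(df.iloc[:, 0:6], drop_first=True)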

Split data

Note:

Train/test/validation splits work a little differently for neural networks. With a traditional ML algorithm, the usual process is to split the dataset into roughly 70% training data and 30% test data. In the training phase, we fit the model on the training data. To evaluate the model (i.e., to check how well it predicts on unseen data), we run it against the test data and compare the predicted results with the known results to measure accuracy. If the accuracy is not at the desired level, we repeat the process (train, test, compare) until it is.

With a neural network we split twice: first into train and test sets with train_test_split, then again during the training/fitting phase, carving a validation set out of the training set. Finally, we test the model on the test set (unseen data) and compare the predicted results to the real results.
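For concreteness, with test_size=0.1 below and validation_split=0.2 during fitting, the effective fractions work out as follows (a quick check, not part of the pipeline):

test_frac  = 0.10          # held out by train_test_split
train_frac = 0.90 * 0.80   # 72% of the rows fit the weights
val_frac   = 0.90 * 0.20   # 18% monitor val_loss each epoch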

In [7]:
x_train, x_test, y_train, y_test = train_test_split(X, y, 
                                                    test_size = 0.1, # 10%
                                                    random_state = 42)

print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
(1204, 9) (134, 9) (1204,) (134,)

Standardize

In [8]:
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train) # fit the scaler on the training set only, to avoid data leakage
x_test = scaler.transform(x_test)       # reuse the training statistics on the test set

Designing Model

In [9]:
# Creating a keras sequential object
model_regr = Sequential()
In [10]:
# fix random seed for reproducibility
seed = 7
tf.random.set_seed(seed)

DEFINE MODEL

In [11]:
############## INPUT LAYER ##########################################
# Keras infers the input shape from the data on the first call to fit();
# units = X.shape[1] gives this first layer one unit per feature.
model_regr.add(Dense(units = X.shape[1], activation = 'relu')) 


############## HIDDEN LAYER 1 ##########################################
# `Note:`
# How do we choose the number of hidden layers and the number of units per layer? That is a tough question and there 
# is no good answer. The rule of thumb is to start with one hidden layer and add as many units as we have features in the
# dataset. However, this might not always work. We need to try things out and observe our learning curve.

# there are a number of activation functions, such as softmax and sigmoid,
# but ReLU (Rectified Linear Unit) is effective in many applications and we'll use it here.
model_regr.add(Dense(128, activation = 'relu'))
# Adding dropout: randomly zeroes 10% of this layer's activations during training to reduce overfitting
model_regr.add(layers.Dropout(0.1))

############## OUTPUT LAYER ##########################################
model_regr.add(Dense(1, activation = 'linear'))  

OPTIMIZER

In [12]:
# There are several optimizers to choose from, such as SGD (Stochastic Gradient Descent), Adam, and RMSprop.
# Adam is a strong default choice because it addresses issues seen with earlier optimizers.
opt = Adam(learning_rate = 0.01) # the default Adam learning rate is 0.001

COMPILE MODEL

In [13]:
# loss/cost function: MSE (alternatives include MAE and Huber loss)
model_regr.compile(loss='mse',  metrics=['mae'], optimizer=opt)  

TRAINING PHASE/FIT THE MODEL

Add early stopping when there is no further improvement.

In [14]:
# reference https://keras.io/api/callbacks/early_stopping/
stop = EarlyStopping(monitor='val_loss', # loss on the 20% validation split
                     mode='min', 
                     patience=30,
                     verbose=1)
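A common refinement (not used in this run) is to have Keras roll the model back to its best epoch when training stops; a sketch:

# variant: restore the weights from the epoch with the lowest val_loss
stop = EarlyStopping(monitor='val_loss',
                     mode='min',
                     patience=30,
                     restore_best_weights=True,
                     verbose=1)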

Here we set validation_split to 20%, splitting the training set in an 80:20 ratio.

In [15]:
h = model_regr.fit(x_train, y_train, 
               validation_split=0.2, # fraction of the training data to be used for validation
               epochs=100, 
               batch_size=1, # update weights after every sample; slow, but workable on a small dataset
               verbose=1,
               callbacks=[stop])
Epoch 1/100
963/963 [==============================] - 2s 1ms/step - loss: 84836376.0000 - mae: 6272.8569 - val_loss: 33498484.0000 - val_mae: 3653.1277
Epoch 2/100
963/963 [==============================] - 1s 1ms/step - loss: 36482180.0000 - mae: 4092.4822 - val_loss: 33837864.0000 - val_mae: 3666.2742
[epochs 3-98 omitted for brevity; loss declines from ~34.1M to ~20.9M while val_loss hovers roughly between 22M and 29M]
Epoch 99/100
963/963 [==============================] - 1s 1ms/step - loss: 20901632.0000 - mae: 2681.7427 - val_loss: 25151392.0000 - val_mae: 2891.1382
Epoch 100/100
963/963 [==============================] - 1s 1ms/step - loss: 20714974.0000 - mae: 2698.6494 - val_loss: 23188818.0000 - val_mae: 2704.0757

Model Summary

In [16]:
# view summary
model_regr.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (1, 9)                    90        
                                                                 
 dense_1 (Dense)             (1, 128)                  1280      
                                                                 
 dropout (Dropout)           (1, 128)                  0         
                                                                 
 dense_2 (Dense)             (1, 1)                    129       
                                                                 
=================================================================
Total params: 1,499
Trainable params: 1,499
Non-trainable params: 0
_________________________________________________________________
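The parameter counts can be verified by hand: each Dense layer has (inputs x units) weights plus one bias per unit:

dense:   9 inputs x 9 units + 9 biases = 90
dense_1: 9 x 128 + 128                 = 1,280
dense_2: 128 x 1 + 1                   = 129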

Visualization

In [17]:
h.history.keys()
Out[17]:
dict_keys(['loss', 'mae', 'val_loss', 'val_mae'])
In [18]:
#plotting

fig, axs = plt.subplots(1,2,
                        figsize=(15, 6),
                        gridspec_kw={'hspace': 0.5, 'wspace': 0.2}) 
(ax1, ax2) = axs
# MSE
ax1.plot(h.history['loss'], label='Train') 
ax1.plot(h.history['val_loss'], label='Validation')
ax1.set_title('learning rate=' + str(0.01))
ax1.legend(loc="upper right")
ax1.set_xlabel("# of epochs")
ax1.set_ylabel("loss (MSE)")

#MAE
ax2.plot(h.history['mae'], label='Train')
ax2.plot(h.history['val_mae'], label='Validation')
ax2.set_title('learning rate=' + str(0.01))
ax2.legend(loc="upper right")
ax2.set_xlabel("# of epochs")
ax2.set_ylabel("MAE")
Out[18]:
Text(0, 0.5, 'MAE')

Evaluation

In [19]:
test_mse, test_mae = model_regr.evaluate(x_test, y_test, verbose = 1) # evaluate() returns [MSE loss, MAE] on the test set
5/5 [==============================] - 0s 1ms/step - loss: 17361822.0000 - mae: 2335.1934
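MSE is in squared dollars, which is hard to interpret; taking the square root gives an error in the same units as charges (a quick derived check, using test_mse from above):

rmse = np.sqrt(test_mse) # about 4,167 dollars of typical prediction error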
In [20]:
y_predict = model_regr.predict(x_test)
5/5 [==============================] - 0s 1ms/step
In [21]:
r2_score(y_test, y_predict) 
Out[21]:
0.8741489208482112

Predicted vs. Actual Charges

In [22]:
# build a comparison table of actual vs. predicted charges
a = y_test.values.reshape(-1,1).flatten()
b = y_predict.flatten()
diff = (b - a)

sim_data={"Actual Charges":a, 'Predicted Charges':b, 'Difference':np.round(diff,2)}

sim_data=pd.DataFrame(sim_data)

# Showing first 5 rows
sim_data.head(5)
Out[22]:
Actual Charges Predicted Charges Difference
0 9095.06825 10037.707031 942.64
1 5272.17580 6469.976074 1197.80
2 29330.98315 31092.773438 1761.79
3 9301.89355 9455.603516 153.71
4 33750.29180 34416.398438 666.11

Visualization

In [23]:
# visualization of actual vs. predicted charges
plt.figure(figsize=(8, 6)) 

plt.scatter(y_test, y_predict, alpha=0.4, color = 'red')
plt.title("Actual Vs. Predicted Charges")
plt.xlabel("Actual Charges")
plt.ylabel("Predicted Charges")
Out[23]:
Text(0, 0.5, 'Predicted Charges')

GridSearchCV

Finding the optimal hyperparameter values.

Function For Designing Model

A function that creates and returns the Keras sequential model (for use with the scikeras wrappers).

In [24]:
def design_model(features):
  # ann model instance  
  model_regr = Sequential()


  #### INPUT LAYER >>>>
  # adding the first layer; use the number of columns in the features passed in
  model_regr.add(Dense(units = features.shape[1], activation = 'relu')) 


  #### HIDDEN LAYER 1 >>>>
  # there are a number of activation functions, such as softmax and sigmoid,
  # but ReLU (Rectified Linear Unit) is effective in many applications and we'll use it here.
  model_regr.add(Dense(128, activation = 'relu'))


  #### OUTPUT LAYER >>>>
  model_regr.add(Dense(1, activation = 'linear'))  


  #### Optimizer
  # several optimizers are available, such as SGD (Stochastic Gradient Descent), Adam, and RMSprop;
  # Adam is a strong default choice because it addresses issues seen with earlier optimizers.
  opt = Adam(learning_rate = 0.01)
  # loss/cost function: MSE (alternatives include MAE and Huber loss)
  model_regr.compile(loss='mse',  metrics=['mae'], optimizer=opt)  


  return model_regr

Invoke our function, passing x_train as the argument, and save the result in a variable.

In [25]:
model_regr2 = design_model(x_train)

Training Phase/Fit The Model

In [26]:
model_regr2.fit(x_train, y_train, 
                validation_split=0.2, 
                verbose=1) # epochs defaults to 1, so this is just a quick sanity-check fit
31/31 [==============================] - 1s 6ms/step - loss: 324137344.0000 - mae: 13391.1689 - val_loss: 321402176.0000 - val_mae: 12848.5938
Out[26]:
<keras.callbacks.History at 0x2709f9757c0>

To use KerasRegressor, we define a function that creates and returns our Keras sequential model (the function above), then pass it to the model argument when constructing the KerasRegressor class. (scikeras also accepts an already-built model instance, which is what we pass below.)

In [27]:
model = KerasRegressor(model = model_regr2)

Setting Up Hyperparameters

Grid search is computationally expensive, so we tune only a small grid (epochs and batch size) here; a sketch of a fuller grid follows the list below.

List of hyperparameters:

  1. the learning rate
  2. number of batches
  3. number of epochs
  4. number of units per hidden layer
  5. activation functions.
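A hypothetical sketch of how the remaining hyperparameters could be exposed: refactor the build function to take arguments, and scikeras routes grid keys with matching names to it. (design_model_tunable, units, and learning_rate are illustrative names, not part of the notebook above.)

# build function with tunable hyperparameters
def design_model_tunable(units=128, learning_rate=0.01):
    model = Sequential()
    model.add(Dense(units, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', metrics=['mae'], optimizer=Adam(learning_rate=learning_rate))
    return model

model_tunable = KerasRegressor(model=design_model_tunable, units=128, learning_rate=0.01)
param_grid_full = dict(epochs=[32, 64],
                       batch_size=[1, 10],
                       units=[64, 128],
                       learning_rate=[0.001, 0.01])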
In [28]:
param_grid = dict(
                  epochs = [32,64],
                  batch_size = [1,10])
In [29]:
grid = GridSearchCV(estimator=model, 
                    param_grid=param_grid,
                    n_jobs=-1, # use all processor cores of our machine (faster!!) 
                    scoring = 'r2',
                    return_train_score = True,
                    cv=3)

grid_result = grid.fit(x_train, y_train)
INFO:tensorflow:Assets written to: C:\Users\Toto\AppData\Local\Temp\tmp1uedlmgb\assets
[similar "Assets written to" messages, one per cross-validation fit, omitted]
Epoch 1/32
1204/1204 [==============================] - 2s 997us/step - loss: 81919792.0000 - mae: 5628.8887
Epoch 2/32
1204/1204 [==============================] - 1s 992us/step - loss: 34698296.0000 - mae: 3983.9568
[epochs 3-30 omitted for brevity; loss declines from ~33.8M to ~24.2M]
Epoch 31/32
1204/1204 [==============================] - 1s 996us/step - loss: 24360174.0000 - mae: 2926.7073
Epoch 32/32
1204/1204 [==============================] - 1s 985us/step - loss: 24195038.0000 - mae: 2917.5176
In [30]:
grid_result.best_score_ , grid_result.best_params_
Out[30]:
(0.8325221689505159, {'batch_size': 1, 'epochs': 32})
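With refit=True (the GridSearchCV default), grid_result.best_estimator_ has already been refit on the full training set with the best parameters, so the tuned model can be evaluated directly; a quick sketch:

best_model = grid_result.best_estimator_ # refit with batch_size=1, epochs=32
print(r2_score(y_test, best_model.predict(x_test)))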
