In the last article, I discussed how to use a sequential neural network to train a digit recogniser, but we didn't add any validation, so the results may vary. Here I will describe some parameters to better tune your model. By the way, I will also add more hidden layers this time; this is another way to improve your score.
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
Define the model:
model = Sequential()  # sequential (feed-forward) model
# input layer
model.add(Dense(64, input_shape=(64,)))  # Dense layer with 64 input features
model.add(Activation("relu"))
# hidden layer 1
model.add(Dense(160))
model.add(Activation("relu"))
# hidden layer 2
model.add(Dense(160))
model.add(Activation("relu"))
# hidden layer 3
model.add(Dense(160))
model.add(Activation("relu"))
# hidden layer 4
model.add(Dense(160))
model.add(Activation("relu"))
# output layer
model.add(Dense(10))  # 10 categories (digits 0-9)
model.add(Activation("softmax"))  # outputs a probability distribution over the 10 classes
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
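To get a feel for how big this network is, you can work out the parameter count of each Dense layer by hand: each layer has inputs * units weights plus one bias per unit. A minimal sketch (the layer widths below match the model above; this is just arithmetic, not Keras):

```python
# Widths of each layer boundary: 64 inputs, 64-unit input layer,
# four 160-unit hidden layers, 10-unit output layer
layer_sizes = [64, 64, 160, 160, 160, 160, 10]

# Parameters per Dense layer = weights (n_in * n_out) + biases (n_out)
params = [n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
print(params)       # per-layer parameter counts
print(sum(params))  # total trainable parameters: 93450
```

This matches what model.summary() would report for the architecture above.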
Read data & train:
from sklearn.datasets import load_digits
from keras.utils import np_utils
digit = load_digits()  # here we use another built-in dataset called digits
data_x = digit.data
data_y = np_utils.to_categorical(digit.target, 10)  # convert labels to one-hot encoding
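If you are unsure what to_categorical actually produces, here is a rough NumPy sketch of the same idea (to_one_hot is a hypothetical helper, not part of Keras):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Roughly what np_utils.to_categorical does: one row per label,
    with a 1 in the label's column and 0 everywhere else."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

print(to_one_hot([3, 0], 10))
# row 0 has a 1 at index 3, row 1 has a 1 at index 0
```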
train_x = data_x
train_y = data_y
# hold out 20% of the data for validation via validation_split
history = model.fit(train_x, train_y, epochs=200, verbose=1, validation_split= 0.2)
print (history.history['acc'])
print (history.history['loss'])
Here we add one parameter called validation_split: it splits the dataset into an 80% training set and a 20% validation set. Note that this is a simple hold-out split, not k-fold cross-validation: Keras just takes the last 20% of the samples (without shuffling) as the validation set. The validation accuracy still gives a more honest estimate of performance than the training accuracy alone.
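The split itself can be sketched in a few lines of NumPy; the toy array below just stands in for 10 samples so you can see which ones Keras would hold out:

```python
import numpy as np

data = np.arange(10)  # stand-in for 10 samples

# validation_split=0.2 keeps the FIRST 80% for training and
# holds out the LAST 20% for validation: no shuffling, no folds
split = int(len(data) * (1 - 0.2))
train, val = data[:split], data[split:]
print(train)  # [0 1 2 3 4 5 6 7]
print(val)    # [8 9]
```

Because the tail of the dataset is always the validation set, make sure your data is not sorted by class before training, or shuffle it yourself first.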
Visualise to tune:
%matplotlib inline
import matplotlib.pyplot as plt

plt.xlabel('epoch')
plt.ylabel('validation accuracy')
plt.title('acc graph')
plt.plot(range(len(history.history['val_acc'])), history.history['val_acc'])
plt.show()
The graph shows the validation accuracy per epoch, which is a more reliable indicator of real performance than the training accuracy.
In the end, you may wonder whether there is a way to persist the model so you can reuse it next time. The answer is yes.
Persist the model:
import h5py
from keras.models import load_model

model.save('model.h5')  # persist the model to an external file
model2 = load_model('model.h5')  # load the persisted model back from the file
Compare the predictions of the loaded model with the original:
y_pred2 = model2.predict(train_x)
y_pred1 = model.predict(train_x)
(y_pred1 == y_pred2).all()  # compare the predictions: no difference
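A small caveat: the predictions are floating-point arrays, so a tolerant comparison with np.allclose is generally safer than exact equality. A minimal sketch with toy arrays standing in for the two prediction matrices:

```python
import numpy as np

a = np.array([0.1, 0.9])  # stand-in for y_pred1
b = a.copy()              # stand-in for y_pred2

print((a == b).all())     # element-wise exact comparison
print(np.allclose(a, b))  # tolerant comparison, safer for floats
```

Here both print True, since the loaded model has identical weights; allclose only matters if any recomputation introduces tiny rounding differences.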
Revert the one-hot vectors back to digits:
import numpy as np

np.argmax(y_pred1[1500], axis=0)  # revert the one-hot vector back to a normal digit
1
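To decode every prediction at once rather than one sample at a time, you can apply argmax along axis 1. A sketch with a hypothetical toy prediction matrix:

```python
import numpy as np

# toy "predictions": one probability row per sample
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

labels = np.argmax(probs, axis=1)  # pick the most likely class per row
print(labels)  # [1 0]
```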
If anything is unclear, please leave a message.