Keras: Save and Load Your Keras Deep Learning Model

Source: https://machinelearningmastery.com/save-load-keras-deep-learning-models/


Keras is a simple yet powerful Python library for deep learning.

Given that deep learning models can take hours, days, or even weeks to train, it is important to know how to save them to disk and load them again.

In this post, we will discover how to save a Keras model to file and load it again to make predictions.

After reading this tutorial, you will know:

  • How to save model weights and model architecture in separate files.
  • How to save the model architecture in both YAML and JSON format.
  • How to save model weights and architecture into a single file for later use.

Tutorial Overview

Keras separates the concerns of saving the model architecture and saving the model weights.

Model weights are saved in HDF5 format. This is a grid format that is ideal for storing multi-dimensional arrays of numbers.

The model structure can be described and saved using two different formats: JSON and YAML.

In this post we will look at two examples of saving a model to file and loading it again:

  • Save the model to JSON.
  • Save the model to YAML.

Each example will also demonstrate how to save and load the model weights to a file in HDF5 format, using the function pairs sketched briefly below.
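
In brief, the two examples rely on the following calls (a minimal sketch, assuming an already-trained, compiled Keras model named model; the file name weights.h5 is only illustrative):

# assuming an already-trained, compiled Keras model called `model`
from keras.models import model_from_json, model_from_yaml

json_string = model.to_json()             # architecture -> JSON string
yaml_string = model.to_yaml()             # architecture -> YAML string
model.save_weights("weights.h5")          # weights -> HDF5 file

new_model = model_from_json(json_string)  # rebuild the architecture from JSON
# or: new_model = model_from_yaml(yaml_string)
new_model.load_weights("weights.h5")      # restore the trained weights into the new model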

The examples will use a simple network trained on the Pima Indians onset of diabetes binary classification dataset. This is a small dataset in which all of the data is numerical, so it is easy to work with. You can download the dataset and place it in your working directory with the filename "pima-indians-diabetes.csv" (download from: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv ).

Confirm that you have the latest version of Keras installed (e.g. v2.2.4 as of May 2019).

Note: Saving models requires that you have the h5py library installed. You can install it easily as follows:

sudo pip install h5py
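
To double-check the setup, a quick sanity check such as the following prints the installed Keras and h5py versions (a minimal sketch; the exact version numbers printed depend on your installation):

# confirm the installed Keras and h5py versions
import keras
import h5py
print("keras version:", keras.__version__)
print("h5py version:", h5py.__version__)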

Save Your Neural Network Model to JSON

JSON is a simple file format for describing data hierarchically.

Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to file and later loaded via the model_from_json() function that will create a new model from the JSON specification.

The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.

The example below trains and evaluates a simple model on the Pima Indians dataset. The model is then converted to JSON format and written to model.json in the local directory. The network weights are written to model.h5 in the local directory.

The model and weight data are loaded from the saved files and a new model is created. It is important to compile the loaded model before it is used, so that predictions made with the model can use the appropriate efficient computation from the Keras backend.

The model is evaluated in the same way, printing the same evaluation score.

# MLP for Pima Indians Dataset Serialize to JSON and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
import numpy
import os
# fix random seed for reproducibility
numpy.random.seed(7)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Running this example provides the output below.

acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%

The JSON format of the model looks like the following:

{  
   "class_name":"Sequential",
   "config":{  
      "name":"sequential_1",
      "layers":[  
         {  
            "class_name":"Dense",
            "config":{  
               "name":"dense_1",
               "trainable":true,
               "batch_input_shape":[  
                  null,
                  8
               ],
               "dtype":"float32",
               "units":12,
               "activation":"relu",
               "use_bias":true,
               "kernel_initializer":{  
                  "class_name":"RandomUniform",
                  "config":{  
                     "minval":-0.05,
                     "maxval":0.05,
                     "seed":null
                  }
               },
               "bias_initializer":{  
                  "class_name":"Zeros",
                  "config":{    
                  }
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         },
         {  
            "class_name":"Dense",
            "config":{  
               "name":"dense_2",
               "trainable":true,
               "units":8,
               "activation":"relu",
               "use_bias":true,
               "kernel_initializer":{  
                  "class_name":"RandomUniform",
                  "config":{  
                     "minval":-0.05,
                     "maxval":0.05,
                     "seed":null
                  }
               },
               "bias_initializer":{  
                  "class_name":"Zeros",
                  "config":{   
                  }
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         },
         {  
            "class_name":"Dense",
            "config":{  
               "name":"dense_3",
               "trainable":true,
               "units":1,
               "activation":"sigmoid",
               "use_bias":true,
               "kernel_initializer":{  
                  "class_name":"RandomUniform",
                  "config":{  
                     "minval":-0.05,
                     "maxval":0.05,
                     "seed":null
                  }
               },
               "bias_initializer":{  
                  "class_name":"Zeros",
                  "config":{   
                  }
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         }
      ]
   },
   "keras_version":"2.2.4",
   "backend":"tensorflow"
}

Save Your Neural Network Model to YAML

This example is much the same as the above JSON example, except the YAML format is used for the model specification.

The model is described using YAML, saved to file model.yaml and later loaded into a new model via the model_from_yaml() function. Weights are handled in the same way as above in HDF5 format as model.h5.

# MLP for Pima Indians Dataset serialize to YAML and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_yaml
import numpy
import os
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100)) 

# serialize model to YAML
model_yaml = model.to_yaml()
with open("model.yaml", "w") as yaml_file:
    yaml_file.write(model_yaml)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load YAML and create model
yaml_file = open('model.yaml', 'r')
loaded_model_yaml = yaml_file.read()
yaml_file.close()
loaded_model = model_from_yaml(loaded_model_yaml)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Running the example displays the following output:

acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%

The model described in YAML format looks like the following:

backend: tensorflow
class_name: Sequential
config:
  layers:
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      batch_input_shape: !!python/tuple [null, 8]
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: RandomUniform
        config: {maxval: 0.05, minval: -0.05, seed: null}
      kernel_regularizer: null
      name: dense_1
      trainable: true
      units: 12
      use_bias: true
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      kernel_constraint: null
      kernel_initializer:
        class_name: RandomUniform
        config: {maxval: 0.05, minval: -0.05, seed: null}
      kernel_regularizer: null
      name: dense_2
      trainable: true
      units: 8
      use_bias: true
  - class_name: Dense
    config:
      activation: sigmoid
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      kernel_constraint: null
      kernel_initializer:
        class_name: RandomUniform
        config: {maxval: 0.05, minval: -0.05, seed: null}
      kernel_regularizer: null
      name: dense_3
      trainable: true
      units: 1
      use_bias: true
  name: sequential_1
keras_version: 2.2.4

Save Model Weights and Architecture Together

Keras also supports a simpler interface to save both the model weights and model architecture together into a single H5 file.

Saving the model in this way includes everything we need to know about the model, including:

  • Model weights.
  • Model architecture.
  • Model compilation details (loss and metrics).
  • Model optimizer state.

This means that we can load and use the model directly, without having to re-compile it as we did in the examples above.
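
Because the optimizer state is saved as well, a loaded model can even continue training from where it left off. A minimal sketch (assuming a model has already been saved to model.h5 as in the example that follows, and that the dataset file is in the working directory):

from numpy import loadtxt
from keras.models import load_model

# load architecture, weights, compile settings and optimizer state in one call
model = load_model('model.h5')
# reload the training data
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
X, Y = dataset[:, 0:8], dataset[:, 8]
# no compile() call is needed; training can resume immediately
model.fit(X, Y, epochs=10, batch_size=10, verbose=0)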

Note: this is the preferred way of saving and loading your Keras model.

How to Save a Keras Model

You can save your model by calling the save() function on the model and specifying the filename.

The example below demonstrates this by first fitting a model, evaluating it and saving it to the file model.h5.

# MLP for Pima Indians Dataset saved to single file
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load pima indians dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# define model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# save model and architecture to single file
model.save("model.h5")
print("Saved model to disk")

Running the example fits the model, summarizes the model's performance on the training dataset, and saves the model to file.

acc: 77.73%
Saved model to disk

We can later load this model from file and use it.

How to Load a Keras Model

Your saved model can then be loaded later by calling the load_model() function and passing the filename. The function returns the model with the same architecture and weights.

In this case, we load the model, summarize the architecture and evaluate it on the same dataset to confirm the weights and architecture are the same.

# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model

# load model
model = load_model('model.h5')
# summarize model.
model.summary()
# load dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# evaluate the model
score = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))

Running the example first loads the model, prints a summary of the model architecture then evaluates the loaded model on the same dataset.

The model achieves the same accuracy score, which in this case is 77.73%.

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 12)                108       
_________________________________________________________________
dense_2 (Dense)              (None, 8)                 104       
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 9         
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________

acc: 77.73%
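
The loaded model can also be used to make predictions on new data, which is the usual reason for saving it in the first place. A minimal sketch (assuming the model.h5 file and dataset from the example above; the first five rows of the dataset stand in for new input data):

from numpy import loadtxt
from keras.models import load_model

# load the single-file model saved earlier
model = load_model('model.h5')
# use the first five rows of the dataset as stand-in "new" inputs
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
X_new = dataset[:5, 0:8]
# predict probabilities and round them to 0/1 class labels
probabilities = model.predict(X_new)
predictions = (probabilities > 0.5).astype("int32")
print(predictions)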


Further Reading

  • How can I save a Keras model? in the Keras documentation.
  • About Keras models in the Keras documentation.

Summary

In this post, you discovered how to serialize your Keras deep learning models.

You learned how you can save your trained models to files and later load them up and use them to make predictions.

You also learned that model weights are easily stored using HDF5 format and that the network structure can be saved in either JSON or YAML format.


References

Interesting Links