Keras: Develop Your First Neural Network in Python Step-By-Step


Source: https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/

Keras is a powerful and easy-to-use free, open source Python library for developing and evaluating deep learning models.

It wraps the efficient numerical computation libraries Theano and TensorFlow and allows us to define and train neural network models in just a few lines of code.

In this tutorial, we will discover how to create our first deep learning neural network model in Python using Keras.


Keras Tutorial Overview

Not much code is required, but we will step through it slowly so that we will know how to create our own models in the future.

The steps we will cover in this tutorial are as follows:

  • Load Data.
  • Define Keras Model.
  • Compile Keras Model.
  • Fit Keras Model.
  • Evaluate Keras Model.
  • Tie It All Together.
  • Make Predictions.

Requirements to run this tutorial:

  • Python 2 or 3
  • SciPy (including NumPy)
  • Keras and a backend (Theano or TensorFlow)

Next, we can create a new file, for example,

keras_first_network.py

and copy and paste the code into it as we go.
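
Before starting, it can be helpful to confirm that the required libraries are installed. A minimal sketch, assuming Keras is installed with either the Theano or TensorFlow backend, simply prints the library versions:

# check that the required libraries are installed and print their versions
import numpy
import scipy
import keras

print('NumPy: %s' % numpy.__version__)
print('SciPy: %s' % scipy.__version__)
print('Keras: %s' % keras.__version__)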

1. Load Data

The first step is to define the functions and classes we intend to use in this tutorial.

We will use the NumPy library to load our dataset, and we will use two classes from the Keras library to define our model.

The required imports are listed below:

# first neural network with keras tutorial
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
...

We can now load our dataset.

In this Keras tutorial, we will use the Pima Indians diabetes dataset. This is a standard machine learning dataset from the UCI Machine Learning Repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years.

As such, it is a binary classification problem (onset of diabetes as 1 or not as 0). All of the input variables that describe each patient are numerical. This makes it easy to use directly with neural networks that expect numerical input and output values, and it is ideal for our first neural network in Keras.

The dataset is available here:

Download the dataset and place it in your local working directory, the same location as your Python file. Save it with the filename:

pima-indians-diabetes.csv

Take a look inside the file; we will see rows of data like the following:

6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
1,89,66,23,94,28.1,0.167,21,0
0,137,40,35,168,43.1,2.288,33,1
...

We can load the file as a matrix of numbers using the NumPy function loadtxt().

There are eight input variables and one output variable (the last column). We will learn a model to map rows of input variables (X) to an output variable (y), which we often summarize as y = f(X).

The variables can be summarized as follows:

Input variables (X):

  • Number of times pregnant
  • Plasma glucose concentration at 2 hours in an oral glucose tolerance test
  • Diastolic blood pressure (mm Hg)
  • Triceps skin fold thickness (mm)
  • 2-Hour serum insulin (mu U/ml)
  • Body mass index (weight in kg/(height in m)^2)
  • Diabetes pedigree function
  • Age (years)

Output variable (y):

  • Class variable (0 or 1)

Once the CSV file is loaded into memory, we can split the columns of data into input and output variables.

The data will be stored in a 2D array where the first dimension is rows and the second dimension is columns, e.g. [rows, columns].

We can split the array into two arrays by selecting subsets of columns using the standard NumPy slice operator ":". We can select the first 8 columns, from index 0 to index 7, via the slice 0:8. We can then select the output column (the 9th variable) via index 8.

...
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
...

We are now ready to define our neural network model.

Note that the dataset has 9 columns, and the range 0:8 selects columns from 0 to 7, stopping before index 8. If this is new to you, you can learn more about array slicing and ranges; a short example is shown below.
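
For example, the following short snippet (not part of the tutorial script) shows how column slicing behaves on a small NumPy array:

# small illustration of NumPy column slicing (not part of the tutorial script)
from numpy import array

data = array([[1, 2, 3],
              [4, 5, 6]])
print(data[:, 0:2])  # all rows, columns 0 and 1 -> [[1 2] [4 5]]
print(data[:, 2])    # all rows, column 2 only   -> [3 6]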

2. Define Keras Model

Models in Keras are defined as a sequence of layers.

We create a Sequential model and add layers one at a time until we are happy with our network architecture.

The first thing to get right is to ensure the input layer has the correct number of input features. This can be specified when creating the first layer with the input_dim argument, for example setting it to 8 for the 8 input variables.

How do we know the number of layers and their types?

This is a very hard question. Only after a long process of trial and error and experimentation will we find the best network structure. Generally, we need a network large enough to capture the structure of the problem.

In this example, we will use a fully-connected network structure with three layers.

Fully connected layers are defined using the Dense class. We can specify the number of neurons or nodes in the layer as the first argument and specify the activation function using the activation argument.

We will use the rectified linear unit activation function referred to as ReLU on the first two layers and the Sigmoid function in the output layer.

In the past, the Sigmoid and Tanh activation functions were preferred for all layers. These days, better performance is achieved using the ReLU activation function. We use a sigmoid on the output layer to ensure our network output is between 0 and 1, making it easy to map to a probability of class 1 or to snap to a hard classification of either class with a default threshold of 0.5.
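
For reference, the two activation functions used in this model are defined as follows (ReLU in the hidden layers, sigmoid in the output layer):

\mathrm{ReLU}(x) = \max(0, x), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}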

We can piece it all together by adding each layer:

  • The model expects rows of data with 8 variables (the input_dim=8 argument).
  • The first hidden layer has 12 nodes and uses the relu activation function.
  • The second hidden layer has 8 nodes and uses the relu activation function.
  • The output layer has one node and uses the sigmoid activation function.
...
# define the keras model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
...


Note that the most confusing thing here is that the shape of the input to the model is defined as an argument on the first hidden layer. This means that the line of code that adds the first Dense layer is doing two things: defining the input or visible layer and the first hidden layer.
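
If you want to double-check the architecture, Keras can print a per-layer summary. A minimal sketch, simply repeating the model definition from above:

# inspect the architecture: an 8-feature input plus three Dense layers
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()  # prints each layer's output shape and number of trainable parameters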

3. Compile Keras Model

Now that the model is defined, we can compile it.

Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow. The backend automatically chooses the best way to represent the network for training and making predictions to run on your hardware, such as CPU or GPU or even distributed.

When compiling, we must specify some additional properties required when training the network. Remember training a network means finding the best set of weights to map inputs to outputs in our dataset.

We must specify the loss function used to evaluate a set of weights, the optimizer used to search through different weights for the network, and any optional metrics we would like to collect and report during training.

In this case, we will use cross entropy as the loss argument. This loss is for binary classification problems and is defined in Keras as “binary_crossentropy“. You can learn more about choosing loss functions based on your problem here:

   How to Choose Loss Functions When Training Deep Learning Neural Networks
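
For reference, binary cross entropy averages the log loss over the N training rows, where y_i is the true class (0 or 1) and \hat{y}_i is the model's predicted probability of class 1:

L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]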

We will define the optimizer as the efficient stochastic gradient descent algorithm “adam“. This is a popular version of gradient descent because it automatically tunes itself and gives good results in a wide range of problems. To learn more about the Adam version of stochastic gradient descent see the post:

   Gentle Introduction to the Adam Optimization Algorithm for Deep Learning

Finally, because it is a classification problem, we will collect and report the classification accuracy, defined via the metrics argument.

...
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
...
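
Passing the string 'adam' uses the optimizer with its default hyperparameters. If you want to control them, you can pass an optimizer object instead. A sketch, assuming the model defined in the previous section, and noting that older standalone Keras releases name the learning-rate argument lr rather than learning_rate:

...
# alternative: pass an explicit Adam object to control its hyperparameters
from keras.optimizers import Adam

opt = Adam(learning_rate=0.001)  # 0.001 is the default; older Keras versions use lr= instead
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
...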

4. Fit Keras Model

We have defined our model and compiled it ready for efficient computation.

Now it is time to execute the model on some data.

We can train or fit our model on our loaded data by calling the fit() function on the model.

Training occurs over epochs and each epoch is split into batches.

   Epoch: One pass through all of the rows in the training dataset.
   Batch: One or more samples considered by the model within an epoch before weights are updated.

One epoch comprises one or more batches, based on the chosen batch size, and the model is fit for many epochs. For more on the difference between epochs and batches, see the post:

   What is the Difference Between a Batch and an Epoch in a Neural Network?

The training process will run for a fixed number of iterations through the dataset called epochs, that we must specify using the epochs argument. We must also set the number of dataset rows that are considered before the model weights are updated within each epoch, called the batch size and set using the batch_size argument.

For this problem, we will run for a small number of epochs (150) and use a relatively small batch size of 10. With 768 rows in the dataset, each epoch will involve about 77 updates to the model weights (768 rows divided by a batch size of 10, rounded up).
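
As a quick check of that arithmetic:

# number of weight updates implied by the dataset size, batch size and epochs
import math

n_rows, batch_size, epochs = 768, 10, 150
updates_per_epoch = math.ceil(n_rows / batch_size)  # 77 updates per epoch
total_updates = updates_per_epoch * epochs          # 11,550 updates over the whole run
print(updates_per_epoch, total_updates)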

These configurations can be chosen experimentally by trial and error. We want to train the model enough so that it learns a good (or good enough) mapping of rows of input data to the output classification. The model will always have some error, but the amount of error will level out after some point for a given model configuration. This is called model convergence.

...
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
...

This is where the work happens on your CPU or GPU.

No GPU is required for this example, but if you’re interested in how to run large models on GPU hardware cheaply in the cloud, see this post:

   How to Setup Amazon AWS EC2 GPUs to Train Keras Deep Learning Models

5. Evaluate Keras Model

We have trained our neural network on the entire dataset and we can evaluate the performance of the network on the same dataset.

This will only give us an idea of how well we have modeled the dataset (e.g. train accuracy), but no idea of how well the algorithm might perform on new data. We have done this for simplicity, but ideally, you could separate your data into train and test datasets for training and evaluation of your model.

You can evaluate your model on your training dataset using the evaluate() function on your model and pass it the same input and output used to train the model.

This will generate a prediction for each input and output pair and collect scores, including the average loss and any metrics you have configured, such as accuracy.

The evaluate() function will return a list with two values. The first will be the loss of the model on the dataset and the second will be the accuracy of the model on the dataset. We are only interested in reporting the accuracy, so we will ignore the loss value.

...
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))


6. Tie It All Together

You have just seen how you can easily create your first neural network model in Keras.

Let’s tie it all together into a complete code example.

# first neural network with keras tutorial
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

You can copy all of the code into your Python file and save it as “keras_first_network.py” in the same directory as your data file “pima-indians-diabetes.csv“. You can then run the Python file as a script from your command line (command prompt) as follows:

python keras_first_network.py


Running this example, you should see a message for each of the 150 epochs printing the loss and accuracy, followed by the final evaluation of the trained model on the training dataset.

It takes about 10 seconds to execute on my workstation running on the CPU.

Ideally, we would like the loss to go to zero and accuracy to go to 1.0 (e.g. 100%). This is not possible for any but the most trivial machine learning problems. Instead, we will always have some error in our model. The goal is to choose a model configuration and training configuration that achieve the lowest loss and highest accuracy possible for a given dataset.

...
768/768 [==============================] - 0s 63us/step - loss: 0.4817 - acc: 0.7708
Epoch 147/150
768/768 [==============================] - 0s 63us/step - loss: 0.4764 - acc: 0.7747
Epoch 148/150
768/768 [==============================] - 0s 63us/step - loss: 0.4737 - acc: 0.7682
Epoch 149/150
768/768 [==============================] - 0s 64us/step - loss: 0.4730 - acc: 0.7747
Epoch 150/150
768/768 [==============================] - 0s 63us/step - loss: 0.4754 - acc: 0.7799
768/768 [==============================] - 0s 38us/step
Accuracy: 76.56


Note, if you try running this example in an IPython or Jupyter notebook you may get an error.

The reason is the output progress bars during training. You can easily turn these off by setting verbose=0 in the call to the fit() and evaluate() functions, for example:

...
# fit the keras model on the dataset without progress bars
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# evaluate the keras model
_, accuracy = model.evaluate(X, y, verbose=0)
...


Note, the accuracy of your model will vary.

Neural networks are stochastic algorithms, meaning that the same algorithm trained on the same data can learn a different model, with different skill, each time the code is run. This is a feature, not a bug. You can learn more about this in the post:

   Embrace Randomness in Machine Learning

The variance in the performance of the model means that to get a reasonable approximation of how well your model is performing, you may need to fit it many times and calculate the average of the accuracy scores. For more on this approach to evaluating neural networks, see the post:

   How to Evaluate the Skill of Deep Learning Models

For example, below are the accuracy scores from re-running the example 5 times:

Accuracy: 75.00
Accuracy: 77.73
Accuracy: 77.60
Accuracy: 78.12
Accuracy: 76.17


We can see that all accuracy scores are around 77% and the average is 76.924%.
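
A sketch of how these repeated runs could be automated, using verbose=0 to suppress the progress bars and 5 repeats to match the listing above (this loop is not part of the original tutorial):

# repeat the train/evaluate cycle several times and report the mean accuracy
from numpy import loadtxt, mean
from keras.models import Sequential
from keras.layers import Dense

dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]

scores = []
for _ in range(5):  # 5 repeats, matching the example above
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=150, batch_size=10, verbose=0)
    _, accuracy = model.evaluate(X, y, verbose=0)
    scores.append(accuracy * 100)

print('Mean accuracy: %.3f' % mean(scores))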

7. Make Predictions

The number one question I get asked is:

   After I train my model, how can I use it to make predictions on new data?

Great question.

We can adapt the above example and use it to generate predictions on the training dataset, pretending it is a new dataset we have not seen before.

Making predictions is as easy as calling the predict() function on the model. We are using a sigmoid activation function on the output layer, so the predictions will be a probability in the range between 0 and 1. We can easily convert them into a crisp binary prediction for this classification task by rounding them.

For example:

...
# make probability predictions with the model
predictions = model.predict(X)
# round predictions 
rounded = [round(x[0]) for x in predictions]


Alternately, we can call the predict_classes() function on the model to predict crisp classes directly, for example:

...
# make class predictions with the model
predictions = model.predict_classes(X)
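
Note that in newer versions of Keras (roughly the versions bundled with TensorFlow 2.6 and later), predict_classes() is no longer available. If your installation does not provide it, thresholding the probabilities from predict() gives the same result for this binary model; a sketch, assuming model and X are defined as above:

...
# make class predictions by thresholding the predicted probabilities at 0.5
predictions = (model.predict(X) > 0.5).astype('int32')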


The complete example below makes predictions for each example in the dataset, then prints the input data, predicted class and expected class for the first 5 examples in the dataset.

# first neural network with keras make predictions
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# make class predictions with the model
predictions = model.predict_classes(X)
# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))

Running the example does not show the progress bar as before as we have set the verbose argument to 0.

After the model is fit, predictions are made for all examples in the dataset, and the input rows and predicted class values for the first 5 examples are printed and compared to the expected class values.

We can see that most rows are correctly predicted. In fact, we would expect about 76.9% of the rows to be correctly predicted based on our estimated performance of the model in the previous section.

[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0] => 0 (expected 1)
[1.0, 85.0, 66.0, 29.0, 0.0, 26.6, 0.351, 31.0] => 0 (expected 0)
[8.0, 183.0, 64.0, 0.0, 0.0, 23.3, 0.672, 32.0] => 1 (expected 1)
[1.0, 89.0, 66.0, 23.0, 94.0, 28.1, 0.167, 21.0] => 0 (expected 0)
[0.0, 137.0, 40.0, 35.0, 168.0, 43.1, 2.288, 33.0] => 1 (expected 1)

If you would like to know more about how to make predictions with Keras models, see the post:

   How to Make Predictions with Keras

Keras Tutorial Summary

In this post, you discovered how to create your first neural network model using the powerful Keras Python library for deep learning.

Specifically, you learned the six key steps in using Keras to create a neural network or deep learning model, step-by-step including:

  • How to load data.
  • How to define a neural network in Keras.
  • How to compile a Keras model using the efficient numerical backend.
  • How to train a model on data.
  • How to evaluate a model on data.
  • How to make predictions with the model.

Keras Tutorial Extensions

Well done, you have successfully developed your first neural network using the Keras deep learning library in Python.

This section provides some extensions to this tutorial that you might want to explore.

   Tune the Model. Change the configuration of the model or training process and see if you can improve the performance of the model, e.g. achieve better than 76% accuracy.
   Save the Model. Update the tutorial to save the model to file, then load it later and use it to make predictions (see this tutorial).
   Summarize the Model. Update the tutorial to summarize the model and create a plot of model layers (see this tutorial).
   Separate Train and Test Datasets. Split the loaded dataset into a train and test set (split based on rows) and use one set to train the model and the other set to estimate the performance of the model on new data. A minimal sketch of this extension is shown after this list.
   Plot Learning Curves. The fit() function returns a history object that summarizes the loss and accuracy at the end of each epoch. Create line plots of this data, called learning curves (see this tutorial).
   Learn a New Dataset. Update the tutorial to use a different tabular dataset, perhaps from the UCI Machine Learning Repository.
   Use Functional API. Update the tutorial to use the Keras Functional API for defining the model (see this tutorial).
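
As a starting point for the "Separate Train and Test Datasets" extension, here is a minimal sketch assuming scikit-learn is installed; the train_test_split helper and the 67/33 split ratio are illustrative choices, not part of the original tutorial:

# sketch: evaluate on rows held out from training (assumes scikit-learn is installed)
from numpy import loadtxt
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]
# hold out 33% of the rows for testing; the split ratio and seed are arbitrary choices
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=150, batch_size=10, verbose=0)

# estimate performance on data the model has not seen during training
_, test_accuracy = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.2f' % (test_accuracy * 100))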




