OpenFace: Installing Deep Learning on Ubuntu 16.04 Server


Source: http://allskyee.blogspot.co.id/2017/03/face-detection-and-recognition-using.html


This guide uses dlib, which is reportedly better than OpenCV's Haar-cascade based classifier.


Installing Supporting Packages

sudo su
apt -y install git \
       libopenblas-dev libopencv-dev libboost-dev \
       libboost-python-dev python-dev \
       build-essential gcc g++ cmake
apt -y install software-properties-common \
       libgraphicsmagick1-dev libfftw3-dev sox libsox-dev \
       libsox-fmt-all python-software-properties \
       build-essential gcc g++ curl \
       cmake libreadline-dev git-core libqt4-dev libjpeg-dev \
       libpng-dev ncurses-dev imagemagick libzmq3-dev gfortran \
       unzip gnuplot gnuplot-x11 ipython \
       gcc-4.9 libgfortran-4.9-dev g++-4.9

Installing dlib Face Landmark Detection

sudo su
apt -y install build-essential cmake libgtk-3-dev \
       python-pip libboost-all-dev libboost-dev
apt -y install libboost-python-dev
pip install numpy
pip install scipy
pip install scikit-image
pip install dlib
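
To make sure dlib is usable before moving on, the short Python sketch below runs dlib's HOG-based face detector and the 68-point landmark predictor on an image. This is not part of the original workflow: the filename test.jpg is a placeholder, and shape_predictor_68_face_landmarks.dat has to be downloaded and extracted first (see the dlib.net link under References).

# verify_dlib.py -- quick sanity check of the dlib install (sketch)
# Assumes test.jpg exists and shape_predictor_68_face_landmarks.dat
# has been downloaded and extracted into the current directory.
import dlib
from skimage import io

detector = dlib.get_frontal_face_detector()   # HOG-based frontal face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = io.imread("test.jpg")                   # image as an RGB numpy array
faces = detector(img, 1)                      # 1 = upsample once to find smaller faces
print("faces found: %d" % len(faces))

for face in faces:
    shape = predictor(img, face)              # 68 facial landmark points
    print("first landmark: (%d, %d)" % (shape.part(0).x, shape.part(0).y))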

Installing Torch

sudo su
cd /usr/local/src
git clone https://github.com/torch/distro.git torch --recursive
cd torch; bash install-deps;
./install.sh
source /usr/local/src/torch/install/bin/torch-activate
source ~/.bashrc
apt install luarocks
# update the common packages to their latest versions
luarocks install torch
luarocks install nn
luarocks install graph
luarocks install cunn
luarocks install cutorch
luarocks install torchnet
luarocks install optnet
luarocks install iterm
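
As a quick smoke test of the Torch install (a sketch I'm adding, not part of the original guide), the snippet below calls the th interpreter from Python and loads the nn rock; run it from a fresh shell so the torch-activate environment is in effect.

# check_torch.py -- smoke test for the Torch install (sketch)
import subprocess

# Ask th to load the nn rock and do a tiny tensor operation.
cmd = ["th", "-e", "require 'nn'; print(torch.Tensor{1, 2, 3}:sum())"]
try:
    print(subprocess.check_output(cmd))       # expected output: 6
except (OSError, subprocess.CalledProcessError) as err:
    print("Torch check failed: %s" % err)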

Installing OpenFace

cd /usr/local/src
git clone https://github.com/cmusatyalab/openface.git openface
cd openface
python setup.py install
./models/get-models.sh
pip install -r requirements.txt
luarocks install csvigo 
luarocks install dpnn
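
After get-models.sh finishes, the models directory should contain the dlib landmark predictor and the nn4.small2.v1 Torch network. The sketch below (my addition) simply checks that the openface Python bindings can load both; run it from /usr/local/src/openface so the relative model paths resolve.

# check_openface.py -- verify the openface bindings and downloaded models (sketch)
# Run from /usr/local/src/openface so the relative model paths resolve.
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)

print("AlignDlib and TorchNeuralNet loaded OK")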

Download Labeled Face Images

wget http://vis-www.cs.umass.edu/lfw/lfw.tgz
tar -zxvf lfw.tgz

From this, I'm going to select only the people who have more than 10 face images and save that list to a file called big_db.

find lfw/ -mindepth 1 -maxdepth 2 -type d -exec bash -c "echo -ne '{} '; ls '{}' | wc -l" \; | awk '$NF>10{print $1}' > big_db

Next, I'm going to select 10 random people out of this list and copy them to another folder called training-images.

mkdir -p training-images
cat big_db | shuf -n 10 | xargs cp -avt training-images/

If you'd like to recognize yourself, add your own folder to training-images. Be sure to include many photos in which you are the only identifiable human face.

We're now going to run face landmark detection on each photo, which will:

  • Detect the biggest face
  • Detect the facial landmarks (here the outer eye corners and the nose)
  • Apply an affine warp to a canonical face pose
  • Save the 96x96 output to a file in an easy-to-access format


To do this, run

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
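
Per photo, this roughly corresponds to the following sketch built on the openface Python API (the paths and the input filename are placeholders): find the largest face, locate its landmarks, and warp it to a canonical 96x96 crop.

# align_one.py -- per-image sketch of what util/align-dlib.py does (placeholder paths)
from skimage import io
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")

rgb = io.imread("training-images/someone/photo1.jpg")   # placeholder input image
bb = align.getLargestFaceBoundingBox(rgb)               # biggest detected face
aligned = align.align(96, rgb, bb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)

if aligned is not None:                                 # None when no face is found
    io.imsave("aligned-face.png", aligned)              # 96x96 RGB crop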

Next, run feature extraction on each of the images.

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
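
This writes labels.csv and reps.csv, one 128-dimensional embedding per aligned image, into generated-embeddings. Per image, it is roughly equivalent to this sketch (my addition, with a placeholder path):

# represent_one.py -- per-image sketch of what batch-represent computes
from skimage import io
import openface

net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)

aligned = io.imread("aligned-images/someone/photo1.png")   # a 96x96 aligned face
rep = net.forward(aligned)                                 # 128-dimensional face embedding

print(len(rep))       # 128
print(rep[:5])        # first few components of the representation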

As a final step, train a classifier from the generated representations.

./demos/classifier.py train ./generated-embeddings/

This will create a file called classifier.pkl in the generated-embeddings folder.
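
As far as I can tell from demos/classifier.py, classifier.pkl is simply a pickled pair of a scikit-learn LabelEncoder and the trained classifier (by default an SVM with probability estimates). A small sketch to peek inside it:

# inspect_classifier.py -- peek inside classifier.pkl (sketch)
import pickle

with open("generated-embeddings/classifier.pkl", "rb") as f:
    le, clf = pickle.load(f)             # (LabelEncoder, classifier) pair

print("people:", list(le.classes_))      # the folder names from training-images/
print("classifier:", clf)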

Running It

To check that everything is working properly before using the model we just trained, let's first try a pre-trained model. OpenFace ships a classifier pre-trained on celebrities in its models folder. To test it on a sample image, run

./demos/classifier.py infer models/openface/celeb-classifier.nn4.small2.v1.pkl images/examples/adams.jpg

Artifacts of the steps above, built on a Raspberry Pi, are on this link. Be sure to extract the archive from /home/pi/ so that it produces /home/pi/fd_fr, as the internal scripts are hardcoded to that location. I've made a couple of changes to the base because this runs in a 32-bit environment; running "git diff" from /home/pi/fd_fr/openface will reveal the changes.

This should correctly predict the sample image as Amy Adams, with about 81% certainty; on the RPi it is only 34%, which I find odd...

To complete the final leg of our journey, I recommend testing on yourself. I saved a photo of myself that I didn't include in the training set as sky_chon.jpg. To test the trained model on me, I run

./demos/classifier.py infer generated-embeddings/classifier.pkl sky_chon.jpg

To run using the webcam (/dev/video0) at VGA resolution:

./demos/classifier_webcam.py --width 640 --height 480 --captureDevice 0 generated-embeddings/classifier.pkl
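
Under the hood, classifier_webcam.py repeats the detect, align, represent, classify steps on every frame. The stripped-down sketch below shows that loop (my simplification, not the script itself; it assumes the OpenCV Python bindings, cv2, are installed and only handles the largest face per frame):

# webcam_loop.py -- stripped-down sketch of the classifier_webcam.py loop
import pickle
import cv2
import numpy as np
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)
with open("generated-embeddings/classifier.pkl", "rb") as f:
    le, clf = pickle.load(f)

cap = cv2.VideoCapture(0)                        # /dev/video0
while True:
    ok, frame = cap.read()                       # BGR frame from the webcam
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    bb = align.getLargestFaceBoundingBox(rgb)
    if bb is not None:
        face = align.align(96, rgb, bb,
                           landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        if face is not None:
            rep = net.forward(face).reshape(1, -1)
            probs = clf.predict_proba(rep)[0]
            name = le.inverse_transform([int(np.argmax(probs))])[0]
            cv2.putText(frame, name, (bb.left(), bb.top() - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("openface", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()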

Works like a charm!!

References

   https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78#.lugw83dgc
   https://cmusatyalab.github.io/openface/
   http://dlib.net/
   http://blog.dlib.net/2014/02/dlib-186-released-make-your-own-object.html
   http://bamos.github.io/2016/01/19/openface-0.2.0/
   https://github.com/davisking/dlib
   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
   https://github.com/torch/torch7/issues/966
   https://wiki.debian.org/RaspberryPi/qemu-user-static
   https://hblok.net/blog/posts/2014/02/06/chroot-to-arm/
   https://lukeplant.me.uk/blog/posts/sharing-internet-connection-to-chroot/
   https://github.com/cmusatyalab/openface/issues/42



