Face detection and recognition using OpenFace on Ubuntu 16.04 PC and Raspbian Jessie Raspberry Pi 3


Source: http://allskyee.blogspot.co.id/2017/03/face-detection-and-recognition-using.html


Having come across an interesting article on Medium titled "Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning" by Adam Geitgey, I decided to try out OpenFace, which uses features extracted by deep learning for face recognition.

An interesting discovery along the way was Dlib, which was used to create a face detector based on HOG features that claims to beat OpenCV's Haar-cascade based classifier. It's written up nicely on their blog here.
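
That claim is easy to sanity-check yourself. Here's an illustrative sketch (mine, not from the post) that runs both detectors on the same image and compares the counts; the test image path and the cascade XML location are assumptions for the example:

# Illustrative sketch: OpenCV's Haar cascade vs. dlib's HOG detector on
# the same image. "test.jpg" and the cascade XML path are hypothetical.
import cv2
import dlib

img = cv2.imread("test.jpg")  # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# OpenCV Haar cascade (the XML ships with OpenCV's data directory)
haar = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
haar_faces = haar.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# dlib HOG + linear SVM detector
hog = dlib.get_frontal_face_detector()
hog_faces = hog(gray, 1)  # 1 = upsample once to catch smaller faces

print("Haar: %d face(s), HOG: %d face(s)" % (len(haar_faces), len(hog_faces)))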

Back to OpenFace. This is a project authored by Brandon Amos, and the theoretical background is very well documented in one of his blog posts. Instructions to install and use it can be found towards the end of Adam's blog post I mentioned above.

You can easily spin up an environment using Docker, but that takes the fun out of it. Plus, things tend to get a little odd when using a GPU for acceleration under Docker anyway, so I'm going to install this in a native environment but keep it compartmentalized as much as possible on both my PC and Raspberry Pi 3.

0. Install necessary packages and setup virtualenv

Some packages to install.

sudo apt-get install virtualenv git \
libopenblas-dev libopencv-dev libboost-dev libboost-python-dev python-dev \
build-essential gcc g++ cmake

Create a compartmentalized Python environment with Python 2 (OpenFace requires this version) and copy the necessary OpenCV Python bindings into it. Let's call the fd_fr directory VENV_ROOT.

mkdir fd_fr; cd fd_fr
export VENV_ROOT=$(pwd)
virtualenv -p /usr/bin/python2.7 .
cp /usr/lib/python2.7/dist-packages/cv* lib/python2.7/site-packages/
source bin/activate
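
To confirm the copied bindings are picked up inside the activated environment, a quick check:

# Quick sanity check that the copied OpenCV bindings resolve in the venv
import cv2
print(cv2.__version__)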

Tip when running on RPI: Since RPI devices have very little memory and the storage device is an SD card, performance can drop dramatically under heavy file I/O. One remedy is to flush the page cache after large file I/O operations by running the following commands periodically during the installation process going forward.

sync
sudo bash -c "echo 3 > /proc/sys/vm/drop_caches"

1. Install and run Dlib face landmark detection

Get Dlib either by cloning its GitHub repository or downloading a released version. At the time of this writing, that is version v19.3, so I'm going to clone the repository and check out that tag.

git clone https://github.com/davisking/dlib.git dlib
cd dlib
git checkout v19.3

If on RPI, do not compile Dlib on the device: some files require 800+ MB of memory to compile, which causes massive swapping to the SD card while CPU utilization falls below 5%. I recommend building on the host PC under ARM emulation using qemu-arm-static and then copying the result over. But if you're feeling lazy, I have done all this (and the subsequent steps), which you can access via this link. More details on this in Section 5.

Tip for internet connectivity under qemu-arm-static: Especially if behind a proxy, be sure to share the contents of /etc/resolv.conf with the chroot environment, either by copying it or with a bind mount:

mount --bind /etc/resolv.conf /rpi/mount/etc/resolv.conf

The build process is done with cmake. If you have a relatively new CPU, it should support AVX (Advanced Vector Extensions), which greatly enhances performance. You can find out whether your CPU supports this by running "cat /proc/cpuinfo | grep avx"; if you get non-empty output, your CPU supports it, and you should include the "-DUSE_AVX..." option below.

mkdir build; cd build
cmake ../tools/python -DUSE_AVX_INSTRUCTIONS=1
cmake --build . --config Release

Copy the generated dlib.so into the virtualenv's site-packages and test it by importing it from Python.

cp dlib.so ${VENV_ROOT}/lib/python2.7/site-packages/
python -c "import dlib"

If this second statement returns an error, you have done something wrong.
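
As a further sanity check, here's a minimal sketch (mine, not the author's gist script) that detects faces in a still image and locates their 68 landmarks; it assumes the shape_predictor_68_face_landmarks.dat model mentioned in the optional step below and a hypothetical test.jpg:

# Minimal sketch: HOG face detection plus 68-point landmarks on a still
# image. Assumes shape_predictor_68_face_landmarks.dat (see the optional
# step below) and a hypothetical test image test.jpg.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
for rect in detector(img, 1):  # 1 = upsample once
    shape = predictor(img, rect)
    print("face at %s with %d landmarks" % (rect, shape.num_parts))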

(Optional) To run Dlib with a face and landmark detector on a webcam feed, first download the model for the latter from here and unzip it. I have written a short script on my gist page which requires a webcam class; you'll need to download both the webcam class and the model to run the script.

2. Install and build Torch

Torch is an open-source machine learning library based on Lua. Detailed instructions to get started are available on their site. They recommend using a script to install some packages, but I've been burned one too many times by erroneous scripts with sudo rights, so I will write out what it's doing in plain English.

In another folder (not inside Dlib), clone the repository.

git clone https://github.com/torch/distro.git torch --recursive
cd torch

To install necessary packages (if on an Ubuntu 16.04 system like moi), run

# for Ubuntu 16.04
sudo apt-get install software-properties-common \
                libgraphicsmagick1-dev libfftw3-dev sox libsox-dev \
                libsox-fmt-all
sudo apt-get install python-software-properties
sudo apt-get install build-essential gcc g++ curl \
            cmake libreadline-dev git-core libqt4-dev libjpeg-dev \
            libpng-dev ncurses-dev imagemagick libzmq3-dev gfortran \
            unzip gnuplot gnuplot-x11 ipython
sudo apt-get install -y gcc-4.9 libgfortran-4.9-dev g++-4.9

For Raspbian

# for Raspbian Jessie
sudo apt-get install -y build-essential gcc g++ curl \
            cmake libreadline-dev git-core libqt4-dev libjpeg-dev \
            libpng-dev ncurses-dev imagemagick libzmq3-dev gfortran \
            unzip gnuplot gnuplot-x11 ipython

Now run the install.sh script (which is more like a build script) and source the environment activation file.

./install.sh
source install/bin/torch-activate

Caveat 1. The necessary paths in "install/bin/torch-activate" are hard-coded absolute paths so if you move the installed directory, be sure to change these as well.

Caveat 2. Older versions of CUDA will not work. If you don't feel like updating, just install without it by adding "path_to_nvcc=" to line 82 of the install.sh file.

Caveat 2-1. On old or Atom-based systems that do not have AVX, the library automatically falls back to SSE, which had a bug in the randperm function. Rest assured, it was fixed recently, so be sure to pull that patch if running on Atom.

3. Install OpenFace

Now for the final piece of the puzzle, clone the repository.

git clone https://github.com/cmusatyalab/openface.git openface
cd openface

The last release was on Feb 26, 2016, roughly a year ago, so I think it's better to just use the tip rather than checking out the last release. Next, install the package and download the necessary artifacts.

python setup.py install
./models/get-models.sh
pip install -r requirements.txt
luarocks install csvigo 
luarocks install dpnn
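
A quick smoke test that the Python package went in cleanly, analogous to the dlib import test earlier:

# Smoke test: the openface package and its dlib-based alignment helper
import openface
print(openface.AlignDlib)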

4. Get labeled faces and train on them

It's now finally time to run OpenFace and see what it can do. To get a good batch of labeled faces on which to run the recognition task, there is the LFW (Labeled Faces in the Wild) dataset. Download and unzip it, which creates a directory lfw containing 5,749 directories, one per person.

wget http://vis-www.cs.umass.edu/lfw/lfw.tgz
tar -zxvf lfw.tgz

I'm going to select from this only the people with more than 10 face images and save the list into a file called big_db.

find lfw/ -mindepth 1 -maxdepth 2 -type d -exec bash -c "echo -ne '{} '; ls '{}' | wc -l" \; | awk '$NF>10{print $1}' > big_db

Next, I'm going to select 10 random people out of this list and copy them to another folder called training-images.

mkdir -p training-images
cat big_db | shuf -n 10 | xargs cp -avt training-images/

If you'd like to recognize yourself, add a folder in training-images. Be sure to include many photos in which you're the single identifiable human face.

We're now going to run face landmark detection on each photo, which will

  • Detect the biggest face
  • Detect the facial landmarks (outer eyes, nose and lower lip)
  • Warp affine to a canonical face
  • Save output (96x96) to a file in an easy to access format


To do this, run

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
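
For reference, here's roughly what that script does per image via OpenFace's Python API; a sketch assuming get-models.sh placed the dlib predictor under models/dlib/, with an illustrative input path:

# Rough per-image sketch of the alignment step (paths are illustrative).
import cv2
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")

bgr = cv2.imread("training-images/SomePerson/img_0001.jpg")  # hypothetical
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

bb = align.getLargestFaceBoundingBox(rgb)  # biggest face; None if no face
aligned = align.align(96, rgb, bb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
# 'aligned' is a 96x96 RGB numpy array, ready for the network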

Next, run feature extraction on each of the images.

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
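
The same 128-dimensional representations can also be computed one image at a time from Python; a sketch continuing from the alignment code above (the .t7 model is fetched by get-models.sh):

# Sketch: one 128-D embedding for an aligned 96x96 face ('aligned' comes
# from the alignment sketch above).
import openface

net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)
rep = net.forward(aligned)
print(rep.shape)  # (128,)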

As a final step, train a classifier from generated representations.

./demos/classifier.py train ./generated-embeddings/

This will create a file called classifier.pkl in generated-embeddings folder.
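
That pickle bundles a scikit-learn LabelEncoder together with the fitted classifier; here is a rough sketch of how inference consumes it, based on my reading of demos/classifier.py (not a verbatim excerpt):

# Rough sketch of inference with classifier.pkl: the file holds a
# (LabelEncoder, classifier) pair; 'rep' is a 128-D embedding from the
# sketch above.
import pickle
import numpy as np

with open("generated-embeddings/classifier.pkl", "r") as f:
    le, clf = pickle.load(f)

probs = clf.predict_proba(rep.reshape(1, -1)).ravel()
best = np.argmax(probs)
print("%s with %.2f confidence" % (le.inverse_transform(best), probs[best]))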


5. Run

To test if everything is working properly before using the model trained in Section 4, let's try a pre-trained model. There is a classifier pre-trained on celebrities in the models/openface/ folder. To test it on one of the bundled example images, run

./demos/classifier.py infer models/openface/celeb-classifier.nn4.small2.v1.pkl images/examples/adams.jpg

Artifacts of Sections 1~4 done on the RPI are available at this link. Be sure to extract it from /home/pi/ to produce /home/pi/fd_fr, as the internal scripts are hard-coded to that location. I've made a couple of changes to the code base since this runs in a 32-bit environment; running "git diff" from /home/pi/fd_fr/openface will reveal the changes.

This should correctly predict the sample image as Amy Adams: on the PC it predicts with 81% certainty, but on the RPI it's 34%, which I find odd...

To complete the final leg of our journey, I recommend testing on yourself. I saved a photo of myself that I didn't include in the training set as sky_chon.jpg. To test the trained model on me, I run

./demos/classifier.py infer generated-embeddings/classifier.pkl sky_chon.jpg

To run using the webcam (/dev/video0) at VGA resolution:

./demos/classifier_webcam.py --width 640 --height 480 --captureDevice 0 generated-embeddings/classifier.pkl

Works like a charm!!

References

   https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78#.lugw83dgc
   https://cmusatyalab.github.io/openface/
   http://dlib.net/
   http://blog.dlib.net/2014/02/dlib-186-released-make-your-own-object.html
   http://bamos.github.io/2016/01/19/openface-0.2.0/
   https://github.com/davisking/dlib
   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
   https://github.com/torch/torch7/issues/966
   https://wiki.debian.org/RaspberryPi/qemu-user-static
   https://hblok.net/blog/posts/2014/02/06/chroot-to-arm/
   https://lukeplant.me.uk/blog/posts/sharing-internet-connection-to-chroot/
   https://github.com/cmusatyalab/openface/issues/42
