Xucong Zhang 2019-01-28 13:41:04 +01:00
## Gaze Estimation Model
OpenGaze uses a full-face model for the gaze estimation task by default. It implements the method from the following research paper:<br/>
**It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.** <br/>
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling<br/>
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
## Training Framework
We trained the gaze estimation model with the [Caffe](http://caffe.berkeleyvision.org/) library.
## Training Data
The provided pre-trained model was trained on both [MPIIFaceGaze](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/its-written-all-over-your-face-full-face-appearance-based-gaze-estimation/) and [EYEDIAP (HD videos)](https://www.idiap.ch/dataset/eyediap).
## How to train your own model
#### Extract face patch
You can run `./bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY` to extract face patch images from your data. Please note that the input directory must contain a list of images. The normalized face patch images are written to the output directory.
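A minimal sketch of driving the extraction step from Python; the directory names are placeholders, and the wrapper only assembles the documented `DataExtraction` invocation:

```python
import shlex
import subprocess

# Hypothetical wrapper around the DataExtraction tool described above;
# the input/output directory names are placeholders.
def extraction_cmd(input_dir, output_dir):
    """Build the command line for extracting normalized face patches."""
    return ["./bin/DataExtraction", "-i", input_dir, "-o", output_dir]

cmd = extraction_cmd("./my_recordings", "./normalized_faces")
print(shlex.join(cmd))           # inspect the exact invocation first
# subprocess.run(cmd, check=True)  # then run the tool for real
```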
#### Training
After successfully extracting the face images, you can feed them, along with the ground-truth gaze labels, to Caffe to train your own model. For detailed instructions, please refer to the [Caffe](http://caffe.berkeleyvision.org/) website.
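One way to organize the images and ground-truth labels before handing them to Caffe is a simple manifest file. This helper is a hypothetical sketch (the file names, label format, and yaw/pitch-in-radians convention are assumptions, not part of OpenGaze); Caffe itself would ingest the data through one of its data layers, e.g. after converting such a listing to HDF5:

```python
import csv

# Hypothetical helper: pair each extracted face patch with its ground-truth
# gaze direction (yaw, pitch), writing an easy-to-check intermediate listing.
def write_label_manifest(pairs, out_path):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "yaw", "pitch"])
        for image, (yaw, pitch) in pairs:
            writer.writerow([image, f"{yaw:.6f}", f"{pitch:.6f}"])

# Example with made-up file names and labels:
write_label_manifest(
    [("face_0001.png", (0.12, -0.05)), ("face_0002.png", (0.08, 0.02))],
    "train_labels.csv",
)
```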
Please note that the final gaze output layer must be named `gaze_output` or `fc8` so that OpenGaze can load the model.
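As a sketch, the final layer of your network definition (prototxt) might look like the fragment below. Only the layer name is required by OpenGaze; the bottom blob name and output size are assumptions for illustration:

```protobuf
layer {
  name: "gaze_output"       # required name so OpenGaze can find the output
  type: "InnerProduct"
  bottom: "fc7"             # assumed name of the preceding feature blob
  top: "gaze_output"
  inner_product_param {
    num_output: 2           # 2D gaze direction, e.g. (yaw, pitch)
  }
}
```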