## Gaze Estimation Model

OpenFace by default uses a full-face model for the gaze estimation task; it implements the method from the following research paper:<br/>
**It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.** <br/>
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling<br/>
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
## Training Framework
We trained the gaze estimation model with the [Caffe](http://caffe.berkeleyvision.org/) library.
## Training Data
The provided pre-trained model was trained on both [MPIIFaceGaze](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/its-written-all-over-your-face-full-face-appearance-based-gaze-estimation/) and [EYEDIAP (HD videos)](https://www.idiap.ch/dataset/eyediap).
## How to train your own model
#### Extract face patch

You can run `./bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY` to extract face patch images from your data. Note that the input directory must contain a list of images; the tool writes the normalized face patch images to the output directory.
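Before invoking the tool it can help to sanity-check that the input directory really does contain images. The sketch below is a hypothetical wrapper, not part of OpenGaze; the extension list and all paths are assumptions:

```python
import os
import subprocess

# Extensions assumed to be accepted; adjust for your data (assumption).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp"}

def list_input_images(input_dir):
    """Return the image files found directly under input_dir."""
    return sorted(
        name for name in os.listdir(input_dir)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    )

def extract_face_patches(input_dir, output_dir):
    """Run the DataExtraction tool described above (paths are placeholders)."""
    if not list_input_images(input_dir):
        raise ValueError(f"no images found in {input_dir}")
    os.makedirs(output_dir, exist_ok=True)
    subprocess.run(
        ["./bin/DataExtraction", "-i", input_dir, "-o", output_dir],
        check=True,
    )
```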
#### Training
After successfully extracting the face images, you can feed them, along with the ground-truth gaze labels, into Caffe to train your own model. For detailed instructions, please refer to the [Caffe](http://caffe.berkeleyvision.org/) website.
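Caffe itself does not prescribe how gaze labels are stored, so as a hedged illustration (the file format and helper name are invented for this sketch, not OpenGaze's actual pipeline), one might first pair each extracted face patch with its ground-truth gaze angles in a plain list file, then convert that list to LMDB or HDF5 for Caffe:

```python
import os
import tempfile

def write_gaze_list(samples, path):
    """Write one 'image_path yaw pitch' line per training sample.

    `samples` is a list of (image_path, (yaw, pitch)) tuples; the two
    angles stand in for the ground-truth gaze direction (assumed format).
    """
    with open(path, "w") as f:
        for image_path, (yaw, pitch) in samples:
            f.write(f"{image_path} {yaw:.6f} {pitch:.6f}\n")

# Example: two extracted face patches with gaze angles in radians.
samples = [
    ("face_0001.png", (0.12, -0.05)),
    ("face_0002.png", (-0.30, 0.18)),
]
list_path = os.path.join(tempfile.gettempdir(), "train_list.txt")
write_gaze_list(samples, list_path)
```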
Please note that the final gaze output layer must be named `gaze_output` or `fc8` for the model to be loadable in OpenGaze.
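As an illustration of that naming constraint, a minimal final-layer definition in Caffe prototxt might look like the following config fragment (the `fc7` bottom blob and the two-dimensional output are assumptions for this sketch, not OpenGaze's actual network definition):

```
layer {
  name: "gaze_output"
  type: "InnerProduct"
  bottom: "fc7"           # assumed preceding feature blob
  top: "gaze_output"
  inner_product_param {
    num_output: 2         # e.g. a 2D gaze direction
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "gaze_output"
  bottom: "label"
  top: "loss"
}
```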