Gaze Estimation Model
By default, OpenGaze uses a full-face model for the gaze estimation task. It is an implementation of the following research paper:
It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
We trained the gaze estimation model with the Caffe library.
How to train your own model
Extract face patch
Run './bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY' to extract face patch images from your data. Please note that the input directory must contain a list of images. The normalized face patch images will be written to the output directory.
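As a concrete sketch, the extraction step above might look like the following. The directory paths are hypothetical examples; substitute your own dataset and output locations. The 'echo' prints the command for inspection; remove it to actually run the tool from the OpenGaze build directory.

```shell
# Hypothetical example paths; replace with your own directories.
INPUT_DIR=./data/my_images       # directory containing your face images
OUTPUT_DIR=./data/face_patches   # normalized face patches are written here

# Print the extraction command; drop 'echo' to execute it for real.
echo ./bin/DataExtraction -i "$INPUT_DIR" -o "$OUTPUT_DIR"
```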
After the face images have been extracted, you can feed them, along with ground-truth gaze labels, to train your own model. For detailed training instructions, please refer to the Caffe website.
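As a rough illustration of the Caffe training setup, a solver configuration might look like the sketch below. All file names and hyperparameter values here are assumptions for illustration, not the settings we used; tune them for your own dataset. Training is then launched with Caffe's command-line tool, e.g. 'caffe train --solver=solver.prototxt'.

```
# solver.prototxt -- hypothetical values; adjust for your data
net: "train_val.prototxt"        # network definition with your data layers
base_lr: 0.00001
lr_policy: "step"
gamma: 0.1
stepsize: 5000
momentum: 0.9
weight_decay: 0.0005
max_iter: 20000
snapshot: 5000
snapshot_prefix: "snapshots/gaze"
solver_mode: GPU
```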
Please note that the last gaze output layer must be named fc8 for the trained model to load in OpenGaze.
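For reference, the final layers of the network definition might look like the following sketch. The layer names and dimensions other than fc8 are assumptions (fc8 is the name OpenGaze requires for the gaze output); the 2-dimensional output reflects a gaze direction expressed as two angles.

```
# Sketch of the end of train_val.prototxt (illustrative only)
layer {
  name: "fc8"                 # must be named fc8 to load in OpenGaze
  type: "InnerProduct"
  bottom: "fc7"               # assumed name of the preceding layer
  top: "fc8"
  inner_product_param {
    num_output: 2             # 2D gaze angle output (e.g. yaw and pitch)
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"       # regression loss against ground-truth gaze
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}
```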