Gaze Estimation Model
By default, OpenFace uses a full-face model for the gaze estimation task. It is an implementation of the following research paper:
It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
Training Framework
We trained the gaze estimation model with the Caffe library.
Run './bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY' to extract face patch images from your data. Please note that the input directory must contain a list of images. The normalized face patch images are written to the output directory.
Training
After successfully extracting the face images, you can feed them, along with ground-truth labels, to train your own model. For detailed instructions, please refer to the Caffe website.
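As a starting point, a Caffe solver file might look like the sketch below. All paths and hyperparameters here are placeholders for illustration, not the settings used to train the released model; adjust them for your data.

```
net: "train_val.prototxt"      # your network definition (placeholder path)
base_lr: 0.0001                # illustrative learning rate
lr_policy: "step"
stepsize: 10000
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000
snapshot: 10000
snapshot_prefix: "gaze_model"  # placeholder prefix for saved weights
solver_mode: GPU
```

Training is then started with Caffe's standard command line, e.g. 'caffe train --solver=solver.prototxt'.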
Please note that the last gaze output layer must be named gaze_output or fc8 for the model to be loadable in OpenGaze.
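For illustration, the tail of the network prototxt could look like the following sketch. The layer name gaze_output is the part required by OpenGaze; the bottom layer name "fc7", the 2D output size (e.g. yaw and pitch angles), and the Euclidean loss are assumptions for this example.

```
layer {
  name: "gaze_output"        # must be "gaze_output" (or "fc8") for OpenGaze to load it
  type: "InnerProduct"
  bottom: "fc7"              # preceding feature layer; name is an assumption
  top: "gaze_output"
  inner_product_param {
    num_output: 2            # assumed 2D gaze angles (yaw, pitch)
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "gaze_output"
  bottom: "label"            # ground-truth gaze labels from your training data
  top: "loss"
}
```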