Model training

Gaze Estimation Model

OpenGaze by default uses a full-face model for the gaze estimation task. It is an implementation of the following research paper:

It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.

Training Framework

We trained the gaze estimation model with the Caffe library.
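For readers unfamiliar with Caffe, training runs are configured through a plain-text solver definition. The sketch below is a hypothetical minimal example; the file names and hyperparameter values are illustrative, not the settings used for the released model:

# Hypothetical minimal solver.prototxt; all values are illustrative.
net: "train_val.prototxt"         # network definition with data, model, and loss layers
base_lr: 0.00001                  # base learning rate
lr_policy: "fixed"                # keep the learning rate constant
momentum: 0.9
max_iter: 100000                  # number of training iterations
snapshot: 10000                   # save a snapshot every 10k iterations
snapshot_prefix: "snapshots/gaze"
solver_mode: GPU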

Training Data

The provided pre-trained model was trained on both MPIIFaceGaze and EYEDIAP (HD videos).

How to train your own model

Extract face patch

You can run the command './bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY' to extract face patch images from your data. Note that the input directory must contain a list of images; the tool writes the normalized face patch images to the output directory.
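For example, with illustrative paths (these directories are not part of the repository):

./bin/DataExtraction -i ./data/raw_images -o ./data/face_patches

The normalized face patches written to ./data/face_patches then serve as the training input for the next step.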

Training

After successfully extracting the face images, you can feed them, together with ground-truth gaze labels, into Caffe to train your own model. For detailed instructions, please refer to the Caffe website. Please note that the final gaze output layer must be named gaze_output or fc8 so that the model can be loaded in OpenGaze, as sketched below.
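As a minimal illustration of this naming convention, the end of a training network definition (prototxt) could look like the following. The preceding blob name fc7, the 2-dimensional output, and the Euclidean loss are assumptions made for this sketch, not the exact definition used for the released model:

# Final regression layer; the layer name "gaze_output" is what OpenGaze expects.
# The bottom blob "fc7" and num_output: 2 are illustrative assumptions.
layer {
  name: "gaze_output"
  type: "InnerProduct"
  bottom: "fc7"
  top: "gaze_output"
  inner_product_param { num_output: 2 }
}
# Regression loss against the ground-truth gaze labels (used during training only).
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "gaze_output"
  bottom: "label"
  top: "loss"
}

When loading the trained model, OpenGaze reads the gaze prediction from the layer with this name.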