# Model training
## Gaze Estimation Model
By default, OpenGaze uses a full-face model for the gaze estimation task. It implements the method from the following research paper:<br/>
**It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation.** <br/>
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling<br/>
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.
## Training Framework
We trained the gaze estimation model with the [Caffe](http://caffe.berkeleyvision.org/) library.
## Training Data
The provided pre-trained model was trained on both [MPIIFaceGaze](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/its-written-all-over-your-face-full-face-appearance-based-gaze-estimation/) and [EYEDIAP (HD videos)](https://www.idiap.ch/dataset/eyediap).
## How to train your own model
#### Extract face patch
You can run `./bin/DataExtraction -i YOUR_INPUT_DIRECTORY -o YOUR_OUTPUT_DIRECTORY` to extract face patch images from your data. Please note that the input directory must contain the list of images to process. The normalized face patch images will be written to the output directory.
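For example, assuming your raw images live in `data/raw_images` and you want the normalized patches written to `data/face_patches` (both directory names are placeholders, not part of the tool), the call would look like:

```sh
# Hypothetical paths; substitute your own directories.
# The input directory must contain the images to process;
# the normalized face patches are written to the output directory.
./bin/DataExtraction -i data/raw_images -o data/face_patches
```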
#### Training
After extracting the face images, you can feed them, together with the ground-truth gaze labels, into Caffe to train your own model. For detailed instructions, please refer to the [Caffe](http://caffe.berkeleyvision.org/) website.
Please note that the last gaze output layer must be named `gaze_output` or `fc8` for OpenGaze to be able to load the model.
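As a minimal sketch, the final layer of your Caffe network definition could look like the following. Only the name `gaze_output` (or `fc8`) is what OpenGaze requires; the preceding `fc7` blob and the 2D output size are illustrative assumptions, not part of the specification:

```
# Hypothetical final layer of a Caffe network definition (prototxt).
# Only the name "gaze_output" (or "fc8") is required by OpenGaze;
# the bottom blob "fc7" and the output size are assumptions.
layer {
  name: "gaze_output"
  type: "InnerProduct"
  bottom: "fc7"
  top: "gaze_output"
  inner_product_param {
    num_output: 2   # e.g. a 2D gaze direction (pitch and yaw)
  }
}
```

For a regression target like this, a `EuclideanLoss` layer comparing `gaze_output` with the ground-truth label blob is the usual choice during training.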