# HAGI++: Head-Assisted Gaze Imputation and Generation

[arXiv]

**HAGI: Head-Assisted Gaze Imputation for Mobile Eye Trackers**

Chuhan Jiao, Zhiming Hu, Andreas Bulling

ACM UIST 2025, Busan, Republic of Korea

[Project] [Paper]
## Updates
- Dec. 11, 2025: Initial release.
## Getting started

- Clone the repository:

```
git clone https://git.hcics.simtech.uni-stuttgart.de/public-projects/HAGI.git
```

- Install general dependencies:

```
cd HAGI
conda env create -f environment.yaml -n hagi
```

- Download the preprocessed datasets from this link (password: `GyBQ1r7'{`). After downloading, extract the folders and place them inside the `datasets` directory.
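Before training, you can quickly verify that the environment is functional. This is a generic sanity check under the assumption that `environment.yaml` installs PyTorch (which the diffusion models require); it is not part of the repository:

```python
import torch

# Minimal environment sanity check (assumes environment.yaml installs PyTorch).
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```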
## Training

- HAGI training:

```
python train.py --loss_type partcial_mix --fourier --xattn
```

- HAGI++ training, using head rotation and translation:

```
python train.py --config base_hagi++.yaml --loss_type partcial_noise --fourier --dt
```

Using head rotation only still achieves competitive performance:

```
python train.py --config base_hagi++.yaml --loss_type partcial_noise --head_type Rotation --fourier --dt
```

- For gaze generation (100% missing values), although the models above also work, training one without observation masks can achieve better performance (see the mask sketch after this list):

```
python train.py --config base_hagi++.yaml --loss_type partcial_noise --fourier --dt --missing_ratio 1 --generation
```

- To train the 3-point HAGI++, which additionally uses wrist motion:

```
python train_wrist.py --config base_hagi++.yaml --loss_type partcial_noise --fourier --generation
```
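For reference, the observation-mask idea behind `--missing_ratio` can be sketched as follows. This is an illustrative example only, not the repository's actual masking code; the helper name and tensor shapes are our assumptions. A mask value of 1 marks an observed gaze sample and 0 a missing one, so `--missing_ratio 1` corresponds to an all-zero mask, i.e. pure generation:

```python
import torch

def make_observation_mask(gaze: torch.Tensor, missing_ratio: float) -> torch.Tensor:
    """Binary mask over a gaze sequence: 1 = observed, 0 = missing.

    `gaze` is assumed to have shape (seq_len, channels). Hypothetical
    helper for illustration; not part of the HAGI/HAGI++ codebase.
    """
    seq_len = gaze.shape[0]
    mask = torch.ones(seq_len, dtype=gaze.dtype)
    n_missing = int(round(seq_len * missing_ratio))
    # Randomly choose which time steps to hide; with missing_ratio == 1
    # every step is hidden and the model must generate gaze from scratch.
    missing_idx = torch.randperm(seq_len)[:n_missing]
    mask[missing_idx] = 0.0
    # Broadcast across the gaze channels so masked steps drop entirely.
    return mask.unsqueeze(-1).expand_as(gaze)
```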
## Inference

We provide several pretrained checkpoints. Checkpoints trained with head movement data only are under `save/head`, and those trained on the Nymeria subset that contains wrist motion are under `save/head_wrists`. Please change the config file, `--modelfolder`, and dataset name accordingly:

```
python evaluate.py --config base_hagi++.yaml --modelfolder head/hagi++_imputation --dataset nymeria
```
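`evaluate.py` computes the metrics reported in the papers. If you want to sanity-check imputed gaze yourself, one common measure in gaze estimation is the mean angular error between direction vectors. The sketch below is a generic implementation under our own assumptions (inputs are `(N, 3)` gaze direction vectors), not the repository's evaluation code:

```python
import numpy as np

def mean_angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean angular error in degrees between (N, 3) gaze direction vectors."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    # Clamp the dot product to avoid NaNs from floating-point drift.
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```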
## Run HAGI/HAGI++ on your own data

We provide an example function in `preprocess_aria.py` that processes data captured by Meta Project Aria glasses (Gen 1) to obtain gaze and head movements. Running it requires the `projectaria_tools` package; the function was tested with projectaria-tools 1.5.5.
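If you are unfamiliar with the projectaria_tools MPS readers that such preprocessing typically builds on, the sketch below shows how eye gaze and the head (device) trajectory can be loaded from Aria MPS outputs. The file paths are placeholders for your own recording's MPS results, and this is a standalone illustration rather than the contents of `preprocess_aria.py`:

```python
from projectaria_tools.core import mps

# Placeholder paths: point these at the MPS outputs of your own recording.
eye_gazes = mps.read_eyegaze("mps_output/eye_gaze/general_eye_gaze.csv")
trajectory = mps.read_closed_loop_trajectory("mps_output/slam/closed_loop_trajectory.csv")

# Each EyeGaze sample carries yaw/pitch angles (radians) and a timestamp.
for gaze in eye_gazes[:5]:
    print(gaze.tracking_timestamp, gaze.yaw, gaze.pitch)

# Each trajectory pose carries the world-from-device SE(3) transform,
# from which head rotation and translation can be read off.
pose = trajectory[0]
print(pose.tracking_timestamp, pose.transform_world_device.to_matrix())
```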
## Citation
If you find our code useful or use it in your own projects, please cite our papers:
```
@inproceedings{jiao25_uist,
  title     = {HAGI: Head-Assisted Gaze Imputation for Mobile Eye Trackers},
  author    = {Jiao, Chuhan and Hu, Zhiming and Bulling, Andreas},
  year      = {2025},
  pages     = {1--14},
  booktitle = {Proc. ACM Symposium on User Interface Software and Technology (UIST)},
  doi       = {10.1145/3746059.3747749}
}

@article{jiao2025hagi++,
  title   = {HAGI++: Head-Assisted Gaze Imputation and Generation},
  author  = {Jiao, Chuhan and Hu, Zhiming and Bulling, Andreas},
  journal = {arXiv preprint arXiv:2511.02468},
  year    = {2025}
}
```
## Acknowledgements

We acknowledge that HAGI and HAGI++ are built upon CSDI. We have also incorporated and adapted components from MotionDiffuse, EgoAllo, and SP-EyeGAN. We thank the developers of these projects for making their code available to the community.