<div align="center">
<h1> DisMouse: Disentangling Information from Mouse Movement Data </h1>
**[Guanhua Zhang][4], &nbsp; [Zhiming Hu][5], &nbsp; [Andreas Bulling][6]** <br>
**ACM UIST 2024**, Pittsburgh, USA <br>
**[[Project][2]]** **[[Paper][7]]** </div>
----------------
# Virtual environment setup
We recommend setting up a virtual environment using Anaconda. <br>
1. Clone our repository to download our code
```shell
git clone this_repo.git
```
2. Create a conda environment and install the dependencies
```shell
conda env create --name dismouse --file=environment.yaml
conda activate dismouse
```
# Run the code to train DisMouse
Our code supports training on GPUs. You can assign a particular GPU via `CUDA_VISIBLE_DEVICES` (e.g., the following command uses GPU no. 3). Simply execute
```shell
CUDA_VISIBLE_DEVICES=3 python main.py
```
## Use your own configurations
Change the parameters in `templates.py -> mouse_autoenc` and `config.py -> TrainConfig`.
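The overall pattern can be sketched as follows: a template function returns a preset config object whose fields you then override. Note that the field names below are illustrative assumptions, not the actual keys of `TrainConfig` in this repository.

```python
from dataclasses import dataclass, replace

# Illustrative stand-in for config.py -> TrainConfig; the real class
# defines many more fields, and these names are assumptions.
@dataclass
class TrainConfig:
    batch_size: int = 64
    lr: float = 1e-4
    total_samples: int = 100_000

def mouse_autoenc() -> TrainConfig:
    # templates.py -> mouse_autoenc returns a preset config in this spirit:
    # start from the defaults and adjust a few fields for the mouse model.
    return replace(TrainConfig(), lr=2e-4)

conf = mouse_autoenc()
conf.batch_size = 128  # override a parameter for your own run
```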
## Train on your mouse dataset
Plug your own data loader function into `dataset.py -> loadDataset` (line 6).
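A replacement loader might look like the following minimal sketch. The function name comes from `dataset.py`, but the signature, sample shape (a sequence of 2-D mouse displacements), and normalisation step are assumptions; here random data stands in for reading your own files.

```python
import numpy as np

# Hypothetical sketch of a loadDataset replacement; the real signature in
# dataset.py may differ -- treat the parameters and shapes as assumptions.
def loadDataset(path=None, seq_len=128, n_samples=10):
    # Each sample: a (seq_len, 2) array of (dx, dy) mouse displacements.
    rng = np.random.default_rng(0)
    data = rng.standard_normal((n_samples, seq_len, 2)).astype(np.float32)
    # Normalise displacements to zero mean, unit variance over the dataset,
    # a common preprocessing step for movement sequences.
    data = (data - data.mean()) / (data.std() + 1e-8)
    return data

train = loadDataset()
```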
# Citation
If you find our code useful or use it in your own projects, please cite our paper:
```
@inproceedings{zhang24_uist,
title = {DisMouse: Disentangling Information from Mouse Movement Data},
author = {Zhang, Guanhua and Hu, Zhiming and Bulling, Andreas},
year = {2024},
pages = {1--13},
booktitle = {Proc. ACM Symposium on User Interface Software and Technology (UIST)},
doi = {10.1145/3654777.3676411}
}
```
# Acknowledgements
Our work is built on the codebase of [Diffusion Autoencoders][1]. Thanks to the authors for sharing their code.
[1]: https://diff-ae.github.io/
[2]: https://perceptualui.org/publications/zhang24_uist/
[4]: https://scholar.google.com/citations?user=NqkK0GwAAAAJ&hl=en
[5]: https://scholar.google.com/citations?hl=en&user=OLB_xBEAAAAJ
[6]: https://www.perceptualui.org/people/bulling/
[7]: https://perceptualui.org/publications/zhang24_uist.pdf