Official code for "Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour" published at CHI'24
# Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour

Guanhua Zhang, Zhiming Hu, Mihai Bâce, Andreas Bulling

ACM CHI 2024, Honolulu, Hawaii

[Project] [Paper]
## Setup

We recommend setting up a virtual environment using Anaconda.

- Create a conda environment and install the dependencies:

```shell
conda env create --name mouse2vec --file=env.yaml
conda activate mouse2vec
```

- Clone our repository to download our code and a pretrained model:

```shell
git clone this_repo.git
```
## Run the code

Our code supports training on GPUs or CPUs. It prioritises a GPU if one is available (line 45 in main.py). You can also assign a particular card via CUDA_VISIBLE_DEVICES (e.g., the following commands use GPU card no. 3).
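As a side note, a minimal sketch of what CUDA_VISIBLE_DEVICES does (this is standard CUDA behaviour, not code from this repository): it restricts which physical GPUs the process can see, and the visible ones are renumbered starting from 0. The helper function below is purely illustrative.

```python
import os

# Restrict this process to physical GPU 3; frameworks such as PyTorch will
# then see it as logical device 0 ("cuda:0").
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

def visible_gpus():
    """Return the physical GPU indices exposed to this process (illustrative helper)."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # unset: all GPUs are visible
    return [int(i) for i in value.split(",") if i.strip()]

print(visible_gpus())  # [3]
```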
### Train the Mouse2Vec Autoencoder

Execute

```shell
CUDA_VISIBLE_DEVICES=3 python main.py --ssl True
```
### Use Mouse2Vec for Downstream Tasks

We provide two ways to use Mouse2Vec on your own datasets for downstream tasks.

- Use the (frozen) pretrained model and train only an MLP-based classifier:

```shell
CUDA_VISIBLE_DEVICES=3 python main.py --ssl True --load True --stage 0 --testDataset [Your Dataset]
```
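Conceptually, the frozen-encoder setting corresponds to the standard PyTorch pattern sketched below. This is not the repository's code: the module shapes and names are illustrative assumptions; only the freezing mechanics are the point.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained encoder (the real Mouse2Vec encoder is defined in the repo).
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False  # freeze: pretrained weights stay fixed

# Small MLP classification head trained from scratch.
classifier = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

# Only the classifier's parameters are given to the optimiser.
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

x = torch.randn(8, 64)            # a batch of input features
with torch.no_grad():
    z = encoder(x)                # embeddings from the frozen encoder
logits = classifier(z)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()                   # gradients flow into the classifier only
optimizer.step()
```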
- Finetune both Mouse2Vec and the classifier:

```shell
CUDA_VISIBLE_DEVICES=3 python main.py --ssl True --load True --stage 1 --testDataset [Your Dataset]
```
## Citation

If you find our code useful or use it in your own projects, please cite our paper:

```bibtex
@inproceedings{zhang24_chi,
  title     = {Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour},
  author    = {Zhang, Guanhua and Hu, Zhiming and B{\^a}ce, Mihai and Bulling, Andreas},
  year      = {2024},
  pages     = {1--17},
  booktitle = {Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)},
  doi       = {10.1145/3613904.3642141}
}
```
## Acknowledgements

Our work builds on the codebase of Cross Reconstruction Transformer. We thank the authors for sharing their code.