Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour

Guanhua Zhang, Zhiming Hu, Mihai Bâce, Andreas Bulling
ACM CHI 2024, Honolulu, Hawaii
[Project] [Paper]

Setup

We recommend setting up a virtual environment using Anaconda.

  1. Create a conda environment and install dependencies
    conda env create --name mouse2vec --file=env.yaml
    conda activate mouse2vec
    
  2. Clone our repository to download our code and a pretrained model
    git clone this_repo.git
    

Run the code

Our code supports training on GPUs or CPUs. It prioritises GPUs if available (line 45 in main.py). You can also assign a particular GPU via CUDA_VISIBLE_DEVICES (e.g., the following commands use GPU no. 3).
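
As a rough sketch of this device selection (illustrative only; the exact code in main.py may differ):

import torch

# Prefer a GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# With CUDA_VISIBLE_DEVICES=3 set in the shell, only physical GPU no. 3 is visible
# to the process, and PyTorch addresses it as cuda:0.
x = torch.randn(8, 128).to(device)
print(device, x.device)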

Train Mouse2Vec Autoencoder

Execute

CUDA_VISIBLE_DEVICES=3 python main.py --ssl True

Use Mouse2Vec for Downstream Tasks

We provide two ways to use Mouse2Vec on your own datasets for downstream tasks (a minimal sketch contrasting the two follows the commands below).

  1. Use the (frozen) pretrained model and only train an MLP-based classifier

Execute

CUDA_VISIBLE_DEVICES=3 python main.py --sl True --load True --stage 0 --testDataset [Your Dataset]

  2. Finetune both Mouse2Vec and the classifier

Execute

CUDA_VISIBLE_DEVICES=3 python main.py --sl True --load True --stage 1 --testDataset [Your Dataset]
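
The following minimal PyTorch sketch contrasts the two modes; the encoder, classifier, and dimensions are placeholders rather than the actual Mouse2Vec architecture. With --stage 0 the pretrained encoder is frozen and only the MLP head is trained; with --stage 1 both are optimised.

import torch
import torch.nn as nn

# Placeholder encoder standing in for the pretrained Mouse2Vec model (hypothetical sizes).
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
# MLP-based classifier head for the downstream task (hypothetical sizes).
classifier = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

stage = 0  # 0: frozen encoder, train the classifier only; 1: finetune both

if stage == 0:
    for p in encoder.parameters():
        p.requires_grad = False          # keep the pretrained weights fixed
    params = list(classifier.parameters())
else:
    params = list(encoder.parameters()) + list(classifier.parameters())

optimizer = torch.optim.Adam(params, lr=1e-3)

# One dummy training step on random data, just to show the flow.
x, y = torch.randn(8, 128), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
loss.backward()
optimizer.step()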

Citation

If you find our code useful or use it in your own projects, please cite our paper:

@inproceedings{zhang24_chi,
  title = {Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour},
  author = {Zhang, Guanhua and Hu, Zhiming and B{\^a}ce, Mihai and Bulling, Andreas},
  year = {2024},
  pages = {1--17},
  booktitle = {Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)},
  doi = {10.1145/3613904.3642141}
}

Acknowledgements

Our work relied on the codebase of Cross Reconstruction Transformer. Thanks to the authors for sharing their code.