# Neural Reasoning about Agents' Goals, Preferences, and Actions

Matteo Bortoletto, Lei Shi, Andreas Bulling

AAAI'24, Vancouver, Canada

[Paper]
## Citation

If you find our code useful or use it in your own projects, please cite our paper:
```bibtex
@inproceedings{bortoletto2024neural,
  title={Neural Reasoning About Agents' Goals, Preferences, and Actions},
  author={Bortoletto, Matteo and Shi, Lei and Bulling, Andreas},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={1},
  pages={456--464},
  year={2024}
}
```
## Setup

This code is based on the original implementation of the BIB benchmark.

### Using virtualenv

```bash
python -m virtualenv /path/to/env
source /path/to/env/bin/activate
pip install -r requirements.txt
```
### Using conda

```bash
conda create --name <env_name> python=3.8.10 pip=20.0.2 cudatoolkit=10.2.89
conda activate <env_name>
pip install -r requirements_conda.txt
pip install dgl-cu102 dglgo -f https://data.dgl.ai/wheels/repo.html
```
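After installing, you can optionally check that the CUDA builds of PyTorch and DGL are working. This quick sanity check is not part of the original setup, just a minimal sketch:

```python
# Optional sanity check: confirm that PyTorch sees the GPU and DGL can build
# a graph and move it onto the device.
import torch
import dgl

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])))
if torch.cuda.is_available():
    g = g.to("cuda:0")
print(g)
```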
## Running the code

### Activate the environment

Run `source bibdgl/bin/activate`.
### Index data

This will create the JSON files with all the indexed frames for each episode in each video:

```bash
python utils/index_data.py
```

You need to manually set `mode` in the dataset class (in `main`). A rough sketch of what this step does is shown after this paragraph.
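The sketch below is illustrative only; the actual class and directory layout in `utils/index_data.py` may differ, and the paths and file extensions here are assumptions:

```python
# Illustrative sketch -- not the repo's actual index_data.py.
# Scans one split directory and writes one JSON file per video, listing the
# frame files belonging to each episode.
import json
from pathlib import Path

def index_split(video_root: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for video in sorted(p for p in video_root.iterdir() if p.is_dir()):
        episodes = {
            episode.name: sorted(f.name for f in episode.glob("*.png"))
            for episode in sorted(p for p in video.iterdir() if p.is_dir())
        }
        (out_dir / f"{video.name}.json").write_text(json.dumps(episodes, indent=2))

if __name__ == "__main__":
    # As in the real script, the split is set manually rather than via a flag.
    mode = "train"  # "train", "val" or "test"
    index_split(Path("data") / mode, Path("index") / mode)
```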
### Generate graphs

This will generate the graphs from the videos:

```bash
python utils/build_graphs.py --mode MODE --cpus NUM_CPUS
```

`MODE` can be `train`, `val` or `test`. NOTE: check `utils/build_graphs.py` to make sure you're loading the correct dataset to generate the graphs you want. A sketch of the general shape of this step follows.
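The sketch below only illustrates the documented interface (`--mode`, `--cpus`) with parallel per-video graph construction; the real `utils/build_graphs.py` will differ, and the file layout, entity count, and graph structure here are assumptions:

```python
# Illustrative sketch -- not the repo's actual build_graphs.py.
# Reads the indexed JSON files for one split and builds one DGL graph per
# frame, fanning the work out over --cpus worker processes.
import argparse
import json
from multiprocessing import Pool
from pathlib import Path

import dgl
import torch

def frame_to_graph(num_entities: int = 4) -> dgl.DGLGraph:
    """Fully connected graph over the entities assumed to be in one frame."""
    src, dst = zip(*[(i, j) for i in range(num_entities)
                     for j in range(num_entities) if i != j])
    return dgl.graph((torch.tensor(src), torch.tensor(dst)))

def process_video(index_file: Path) -> None:
    episodes = json.loads(index_file.read_text())
    graphs = [frame_to_graph() for frames in episodes.values() for _ in frames]
    dgl.save_graphs(str(index_file.with_suffix(".bin")), graphs)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["train", "val", "test"], required=True)
    parser.add_argument("--cpus", type=int, default=1)
    args = parser.parse_args()

    index_files = sorted((Path("index") / args.mode).glob("*.json"))
    with Pool(args.cpus) as pool:
        pool.map(process_video, index_files)
```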
### Training

Use `CUDA_VISIBLE_DEVICES=0 run_train.sh`.
### Testing

Use `CUDA_VISIBLE_DEVICES=0 run_test.sh`.
## Hardware setup

All models are trained on an NVIDIA Tesla V100-SXM2-32GB GPU.