
Neural Reasoning about Agents' Goals, Preferences, and Actions

Matteo Bortoletto, Lei Shi, Andreas Bulling

AAAI'24, Vancouver, Canada
[Paper]

Citation

If you find our code useful or use it in your own projects, please cite our paper:

@inproceedings{bortoletto2024neural,
  author = {Bortoletto, Matteo and Shi, Lei and Bulling, Andreas},
  title = {{Neural Reasoning about Agents' Goals, Preferences, and Actions}},
  booktitle = {Proc. 38th AAAI Conference on Artificial Intelligence (AAAI)},
  year = {2024},
}

Setup

This code is based on the original implementation of the BIB benchmark.

Using virtualenv

python -m virtualenv /path/to/env
source /path/to/env/bin/activate
pip install -r requirements.txt
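
The conda setup below pins Python 3.8.10; if you use virtualenv it is safest to create the environment from a matching interpreter (this is an assumption, the original instructions do not pin a Python version for the virtualenv route):

python3.8 -m virtualenv /path/to/env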

Using conda

conda create --name <env_name> python=3.8.10 pip=20.0.2 cudatoolkit=10.2.89
conda activate <env_name>
pip install -r requirements_conda.txt
pip install dgl-cu102 dglgo -f https://data.dgl.ai/wheels/repo.html
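
As an optional sanity check (not part of the original instructions, and assuming PyTorch is installed by the requirements files), you can verify that PyTorch and DGL were installed with CUDA support:

python -c "import torch, dgl; print(torch.__version__, dgl.__version__, torch.cuda.is_available())"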

Running the code

Activate the environment

Activate the environment you created during setup, e.g. source /path/to/env/bin/activate for virtualenv or conda activate <env_name> for conda.

Index data

This will create the JSON files with all the indexed frames for each episode in each video.

python utils/index_data.py

Note that you need to manually set the mode (train, val, or test) in the dataset class used in the main section of utils/index_data.py.

Generate graphs

This will generate the graphs from the videos:

python utils/build_graphs.py --mode MODE --cpus NUM_CPUS

MODE can be train, val or test. NOTE: check utils/build_graphs.py to make sure you're loading the correct dataset to generate the graphs you want.
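
Since graphs are needed for all three splits, you can loop over the modes in one go (the --cpus value below is just an example; adjust it to your machine):

for MODE in train val test; do
    python utils/build_graphs.py --mode "$MODE" --cpus 8
done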

Training

You can use run_train.sh.
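
For example, to launch training and keep a copy of the console output (a generic shell pattern, not something prescribed by the original instructions):

bash run_train.sh 2>&1 | tee train.log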

Testing

Use run_test.sh.

Hardware setup

All models are trained on an NVIDIA Tesla V100-SXM2-32GB GPU.