Update 'README.md'

Adnen Abdessaied 2023-06-14 21:50:36 +02:00
parent ce9c902570
commit 9cbaf48bd6


@@ -1,29 +1,22 @@
-# NSVD
-This repository contains the official code of the paper:
-## Neuro-Symbolic Visual Dialog [[PDF](https://perceptualui.org/publications/abdessaied22_coling.pdf)]
-[Adnen Abdessaied](https://adnenabdessaied.de), [Mihai Bace](https://perceptualui.org/people/bace/), [Andreas Bulling](https://perceptualui.org/people/bulling/)
-International Conferenc on Computational Linguistics (COLING), 2022 / Gyeongju, Republic of Korea :kr:
-:loudspeaker: **Oral Presentation** :loudspeaker:
+<div align="center">
+<h1> Neuro-Symbolic Visual Dialog </h1>
+**[Adnen Abdessaied][1], &nbsp; [Mihai Bâce][2], &nbsp; [Andreas Bulling][3]** <br>
+**Published at [COLING 2022][4] :kr: [[Paper][5]]** <br>
+:loudspeaker: **Oral Presentation** :loudspeaker:
+</div>
 # Citation
 If you find our code useful or use it in your own projects, please cite our paper:
-```
+```bibtex
 @inproceedings{abdessaied22_coling,
 author = {Abdessaied, Adnen and Bâce, Mihai and Bulling, Andreas},
 title = {{Neuro-Symbolic Visual Dialog}},
-booktitle = {Proceedings of the 29th International Conference on Computational Linguistics (COLING)},
+booktitle = {COLING},
 year = {2022},
-pages = {192--217},
-month = {oct},
-year = {2022},
-address = {Gyeongju, Republic of Korea},
-publisher = {International Committee on Computational Linguistics},
 url = {https://aclanthology.org/2022.coling-1.17},
+pages = "192--217",
 }
 ```
@@ -81,13 +74,13 @@ cd preprocess_dialogs
 For the stack encoder, execute
-```python
+```bash
 python preprocess.py --input_dialogs_json <path_to_raw_dialog_file> --input_vocab_json '' --output_vocab_json <path_where_to_save_the_vocab> --output_h5_file <path_of_the_output_file> --split <train/val/test> --mode stack
 ```
 For the concat encoder, execute
-```python
+```bash
 python preprocess.py --input_dialogs_json <path_to_raw_dialog_file> --input_vocab_json '' --output_vocab_json <path_where_to_save_the_vocab> --output_h5_file <path_of_the_output_file> --split <train/val/test> --mode concat
 ```
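As a side note to this hunk, a filled-in stack-mode preprocessing call might look as follows; the paths are purely illustrative placeholders, not files shipped with the repository, and the concat variant only swaps `--mode stack` for `--mode concat`:

```bash
# Hypothetical example paths -- substitute your own raw dialog file and output locations.
python preprocess.py \
  --input_dialogs_json data/raw/train_dialogs.json \
  --input_vocab_json '' \
  --output_vocab_json output/vocab.json \
  --output_h5_file output/train_stack.h5 \
  --split train \
  --mode stack
```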
@@ -103,7 +96,7 @@ cd ../prog_generator
 To train the caption parser, execute
-```python
+```bash
 python train_caption_parser.py --mode train --run_dir <experiment_dir> --res_path <path_to_store_results> --dataPathTr <path_to_preprocessed_training_data> --dataPathVal <path_to_preprocessed_val_data> --dataPathTest <path_to_preprocessed_test_data> --vocab_path <path_where_to_save_the_vocab>
 ```
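For orientation, a hypothetical caption-parser training run with example values standing in for the placeholders above (the directory names are assumptions, not part of this commit):

```bash
# All paths below are illustrative assumptions; point them at your preprocessed data.
python train_caption_parser.py \
  --mode train \
  --run_dir experiments/caption_parser \
  --res_path results/caption_parser \
  --dataPathTr output/captions_train.h5 \
  --dataPathVal output/captions_val.h5 \
  --dataPathTest output/captions_test.h5 \
  --vocab_path output/vocab.json
```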
@@ -111,13 +104,13 @@ python train_caption_parser.py --mode train --run_dir <experiment_dir> --res_pat
 To train the question program parser with the stack encoder, execute
-```python
+```bash
 python train_question_parser.py --mode train --run_dir <experiment_dir> --text_log_dir <log_dir_path> --dataPathTr <path_to_preprocessed_training_data> --dataPathVal <path_to_preprocessed_val_data> --dataPathTest <path_to_preprocessed_test_data> --scenePath <path_to_derendered_scenes> --vocab_path <path_where_to_save_the_vocab> --encoder_type 2
 ```
 To train the question program parser with the concat encoder, execute
-```python
+```bash
 python train_question_parser.py --mode train --run_dir <experiment_dir> --text_log_dir <log_dir_path> --dataPathTr <path_to_preprocessed_training_data> --dataPathVal <path_to_preprocessed_val_data> --dataPathTest <path_to_preprocessed_test_data> --scenePath <path_to_derendered_scenes> --vocab_path <path_where_to_save_the_vocab> --encoder_type 1
 ```
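A hypothetical stack-encoder training call (encoder type 2, per the command above), again with made-up paths rather than repository defaults:

```bash
# Example paths are assumptions; use the outputs of your own preprocessing step.
python train_question_parser.py \
  --mode train \
  --run_dir experiments/question_parser_stack \
  --text_log_dir logs/question_parser_stack \
  --dataPathTr output/train_stack.h5 \
  --dataPathVal output/val_stack.h5 \
  --dataPathTest output/test_stack.h5 \
  --scenePath data/derendered_scenes \
  --vocab_path output/vocab.json \
  --encoder_type 2
```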
@@ -131,13 +124,13 @@ python train_question_parser.py --mode train --run_dir <experiment_dir> --text_l
 To evaluate using the *Hist+GT* scheme, execute
-```python
+```bash
 python train_question_parser.py --mode test_with_gt --run_dir <experiment_dir> --text_log_dir <log_dir_path> --dataPathTr <path_to_preprocessed_training_data> --dataPathVal <path_to_preprocessed_val_data> --dataPathTest <path_to_preprocessed_test_data> --scenePath <path_to_derendered_scenes> --vocab_path <path_where_to_save_the_vocab> --encoder_type <1/2> --questionNetPath <path_to_pretrained_question_parser> --captionNetPath <path_to_pretrained_caption_parser> --dialogLen <total_number_of_dialog_rounds> --last_n_rounds <number_of_last_rounds_to_considered_in_history>
 ```
 To evaluate using the *Hist+Pred* scheme, execute
-```python
+```bash
 python train_question_parser.py --mode test_with_pred --run_dir <experiment_dir> --text_log_dir <log_dir_path> --dataPathTr <path_to_preprocessed_training_data> --dataPathVal <path_to_preprocessed_val_data> --dataPathTest <path_to_preprocessed_test_data> --scenePath <path_to_derendered_scenes> --vocab_path <path_where_to_save_the_vocab> --encoder_type <1/2> --questionNetPath <path_to_pretrained_question_parser> --captionNetPath <path_to_pretrained_caption_parser> --dialogLen <total_number_of_dialog_rounds> --last_n_rounds <number_of_last_rounds_to_considered_in_history>
 ```
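A hypothetical *Hist+Pred* evaluation call for the stack encoder; the paths, checkpoint names, and the numeric values for `--dialogLen` and `--last_n_rounds` are arbitrary example values, not defaults prescribed by the repository:

```bash
# Paths, checkpoint files, and the values 10 / 4 below are assumptions for illustration.
python train_question_parser.py \
  --mode test_with_pred \
  --run_dir experiments/question_parser_stack \
  --text_log_dir logs/eval_hist_pred \
  --dataPathTr output/train_stack.h5 \
  --dataPathVal output/val_stack.h5 \
  --dataPathTest output/test_stack.h5 \
  --scenePath data/derendered_scenes \
  --vocab_path output/vocab.json \
  --encoder_type 2 \
  --questionNetPath checkpoints/question_parser.pt \
  --captionNetPath checkpoints/caption_parser.pt \
  --dialogLen 10 \
  --last_n_rounds 4
```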
@@ -175,6 +168,12 @@ We thank [Ahmed Shah](https://www.linkedin.com/in/mahmedshah/) for his MAC-XXX i
 # Contributors
-- [Adnen Abdessaied](https://adnenabdessaied.de)
+- [Adnen Abdessaied][1]
 For any questions or enquiries, don't hesitate to contact the above contributor.
+
+[1]: https://adnenabdessaied.de
+[2]: https://perceptualui.org/people/bace/
+[3]: https://perceptualui.org/people/bulling/
+[4]: https://coling2022.org/
+[5]: https://aclanthology.org/2022.coling-1.17.pdf