diff --git a/README.md b/README.md
index 8e4e56c..aa04979 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,14 @@
-# Citation
+# VLCN
+This repository contains the official code of the paper:
+
+## Video Language Co-Attention with Fast-Learning Feature Fusion for VideoQA [[PDF](https://aclanthology.org/2022.repl4nlp-1.15.pdf)]
+
+[Adnen Abdessaied](https://adnenabdessaied.de), [Ekta Sood](https://perceptualui.org/people/sood/), [Andreas Bulling](https://perceptualui.org/people/bulling/)
+**Poster**
+Representation Learning for NLP (RepL4NLP) @ ACL 2022 / Dublin, Ireland.
+
+If you find our code useful or use it in your own projects, please cite our paper:
-This is the official code of the paper **Video Language Co-Attention with Fast-Learning Feature Fusion for VideoQA**.
-If you find our code useful, please cite our paper:
 
 ```
 @inproceedings{abdessaied22_repl4NLP,
 author = {Abdessaied, Adnen and Sood, Ekta and Bulling, Andreas},
@@ -133,3 +140,10 @@ Our pre-trained models are available here [⬇](https://drive.google.com/drive/f
 
 # Acknowledgements
 We thank the Vision and Language Group @ MIL for their open-source [MCAN](https://github.com/MILVLG/mcan-vqa) implementation, [DavideA](https://github.com/DavideA/c3d-pytorch/blob/master/C3D_model.py) for his pretrained C3D model, and finally [ixaxaar](https://github.com/ixaxaar/pytorch-dnc) for his DNC implementation.
+
+# Contributors
+
+- [Adnen Abdessaied](https://adnenabdessaied.de)
+
+For any questions or enquiries, do not hesitate to contact the above contributor.
+