# ConAn

This is the official repository for [ConAn: A Usable Tool for Multimodal Conversation Analysis](https://www.perceptualui.org/publications/penzkofer21_icmi.pdf).
ConAn – our graphical tool for multimodal conversation analysis – takes 360° videos recorded during multi-person group interactions as input. ConAn integrates state-of-the-art models for gaze estimation, active speaker detection, facial action unit detection, and body movement detection. It can output quantitative reports at both the individual and group level, as well as different visualizations that provide qualitative insights into group interaction.

## Installation

The graphical user interface (GUI) requires Python > 3.6. Install the [requirements](requirements.txt) via pip:

```
pip install -r requirements.txt
```

## Get Started

To test the GUI, you can download our example use case videos from [https://www.perceptualui.org/research/datasets/ConAn/](https://www.perceptualui.org/research/datasets/ConAn/),
as well as the respective processed ``.dat`` files, which include all the analyses.
You can then run [main.py](main.py) and import the video file you would like to analyze.

## Processing

If you would like to analyze your own 360° video, you can find the processing pipeline at [processing/](processing). Please note that the processing pipeline requires a GPU.

## Citation

Please cite this paper if you use ConAn or parts of this publication in your research:

```
@inproceedings{penzkofer21_icmi,
  author    = {Penzkofer, Anna and Müller, Philipp and Bühler, Felix and Mayer, Sven and Bulling, Andreas},
  title     = {ConAn: A Usable Tool for Multimodal Conversation Analysis},
  booktitle = {Proc. ACM International Conference on Multimodal Interaction (ICMI)},
  year      = {2021},
  doi       = {10.1145/3462244.3479886},
  video     = {https://www.youtube.com/watch?v=H2KfZNgx6CQ}
}
```