This is the official repository for [ConAn: A Usable Tool for Multimodal <u>Con</u>versation <u>An</u>alysis](https://www.perceptualui.org/publications/penzkofer21_icmi.pdf) <br>
ConAn – our graphical tool for multimodal conversation analysis – takes 360° videos recorded during multi-person group interactions as input. ConAn integrates state-of-the-art models for gaze estimation, active speaker detection,
facial action unit detection, and body movement detection. It can output quantitative reports at both the individual and group
level, as well as different visualizations that provide qualitative insights into group interaction.
To test the GUI, you can download our example use-case videos from [https://perceptualui.org/open-area/conan/](https://perceptualui.org/open-area/conan/). <br>