ConAn: A Usable Tool for Multimodal Conversation Analysis
# ConAn

This is the official repository for *ConAn: A Usable Tool for Multimodal Conversation Analysis*.
ConAn, our graphical tool for multimodal conversation analysis, takes 360° videos recorded during multi-person group interactions as input. ConAn integrates state-of-the-art models for gaze estimation, active speaker detection, facial action unit detection, and body movement detection. It can output quantitative reports at both the individual and the group level, as well as different visualizations that provide qualitative insights into group interaction.

## Installation

For the graphical user interface (GUI) you need Python > 3.6. Install the requirements via pip:

```shell
pip install -r requirements.txt
```
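To keep the GUI's dependencies separate from your system Python, one possible setup uses a virtual environment (the environment name `conan-env` below is arbitrary, not part of the repository):

```shell
# Create and activate an isolated environment (name is arbitrary)
python3 -m venv conan-env
source conan-env/bin/activate

# Install the GUI dependencies listed in requirements.txt
pip install -r requirements.txt
```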

## Get Started

To test the GUI, you can download our example use case videos from Google Drive, as well as the corresponding processed `.dat` files, which include all analyses.
You can then run `main.py` and import the video file you would like to analyze.
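With the requirements installed and the example files downloaded, the GUI starts with a single command from the repository root:

```shell
# Launch the ConAn GUI, then use its import dialog to open an example
# video together with the matching processed .dat file
python main.py
```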

## Processing

If you would like to analyze your own 360° video, you can find the processing pipeline in `processing/`. Please note that the processing pipeline requires a GPU.
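Before starting a long processing run, it can help to verify that a GPU is actually visible. The sketch below is a hedged sanity check, not part of ConAn: it assumes an NVIDIA/CUDA GPU and relies on `nvidia-smi`, which ships with the NVIDIA driver; `cuda_gpu_available` is an illustrative helper name.

```python
import shutil
import subprocess


def cuda_gpu_available() -> bool:
    """Return True if an NVIDIA GPU is visible to the driver.

    Falls back to False when `nvidia-smi` is not installed, so the
    check is safe to run on machines without a GPU.
    """
    if shutil.which("nvidia-smi") is None:
        return False
    # `nvidia-smi -L` lists detected GPUs, one per line
    result = subprocess.run(["nvidia-smi", "-L"],
                            capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout


print("CUDA GPU available:", cuda_gpu_available())
```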

## Citation

Please cite this paper if you use ConAn or parts of this work in your research:

```bibtex
@inproceedings{penzkofer21_icmi,
  author = {Penzkofer, Anna and Müller, Philipp and Bühler, Felix and Mayer, Sven and Bulling, Andreas},
  title = {ConAn: A Usable Tool for Multimodal Conversation Analysis},
  booktitle = {Proc. ACM International Conference on Multimodal Interaction (ICMI)},
  year = {2021},
  doi = {10.1145/3462244.3479886},
  video = {https://www.youtube.com/watch?v=H2KfZNgx6CQ}
}
```