# VisRecall: Quantifying Information Visualisation Recallability via Question Answering

Yao Wang, Chuhan Jiao (Aalto University), Mihai Bâce and Andreas Bulling

Submitted to IEEE Transactions on Visualization and Computer Graphics (TVCG 2022)

This repository contains the dataset and models for predicting visualisation recallability.

```
Root Directory
│
│─ README.md —— this file
│
│─ RecallNet —— source code of the network to predict InfoVis recallability
│  │
│  │─ environment.yaml —— conda environment specification
│  │
│  │─ notebooks
│  │  │
│  │  │─ train_RecallNet.ipynb —— main notebook for training and validation
│  │  │
│  │  └─ massvis_recall.json —— saved recallability scores for the MASSVIS dataset
│  │
│  └─ src
│     │
│     │─ singleduration_models.py —— RecallNet model
│     │
│     │─ sal_imp_utilities.py —— image processing utilities
│     │
│     │─ losses_keras2.py —— loss functions
│     │
│    ...
│
└─ VisRecall —— the dataset
   │
   │─ answer_raw —— raw answers from AMT workers
   │
   │─ merged
   │  │
   │  │─ src —— original images
   │  │
   │  │─ qa —— question annotations
   │  │
   │  └─ image_annotation —— other metadata annotations
   │
   └─ training_data
      │
      │─ all —— averaged over all questions
      │
      └─ X-question —— a specific question type (T-, FE-, F-, RV-, U-)
```
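The saved recallability scores (e.g. `RecallNet/notebooks/massvis_recall.json`) can be read with a few lines of Python. This is a minimal sketch that assumes the JSON file maps image filenames to scalar recallability scores; the actual schema of `massvis_recall.json` may differ, so check the file before relying on this structure.

```python
import json


def load_recall_scores(path):
    """Load per-image recallability scores from a JSON file.

    Assumes the file is a flat mapping of image filename -> score,
    e.g. {"some_chart.png": 0.42, ...}. This schema is an assumption,
    not taken from the repository itself.
    """
    with open(path) as f:
        return json.load(f)
```

A loaded dictionary can then be sorted to inspect, for example, the most and least recallable visualisations in the dataset.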

Contact: yao.wang@vis.uni-stuttgart.de