Updated README
commit 9d77168954 (parent 9e0ac5daa8), 1 changed file with 45 additions and 16 deletions
conda env create -f conan_windows.yml
conda activate conan_windows_env
```

## Usage

Run [ConAn_RunProcessing.ipynb](ConAn_RunProcessing.ipynb) to extract all frames from the video and run the processing models.
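
The notebook drives the full pipeline. As a rough illustration of the frame-extraction step only, a minimal sketch using OpenCV could look like the following; the function name and output layout here are illustrative, not code from the notebook:

```
import os
import cv2  # opencv-python

def extract_frames(video_path, out_dir):
    """Dump every frame of the video as a numbered PNG into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# e.g. extract_frames("session01.mp4", "frames/session01")
```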

### Body Movement

For body movement detection we selected [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose). In our case, we used the 18-keypoint model, which takes the full frame as input and jointly predicts anatomical keypoints and a measure of the degree of association between them.
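
For reference, OpenPose can write its detections to one JSON file per frame (its --write_json option), with each person's keypoints stored as a flat list of (x, y, confidence) triples. A minimal sketch for loading such a file back into an array might look like this; the helper and the (num_people, 18, 3) shape are assumptions based on the 18-keypoint model, not code from this repository:

```
import json
import numpy as np

def load_openpose_frame(json_path):
    """Read one OpenPose JSON file and return an array of shape
    (num_people, num_keypoints, 3) holding (x, y, confidence) per keypoint."""
    with open(json_path) as f:
        data = json.load(f)
    people = [
        np.asarray(p["pose_keypoints_2d"], dtype=float).reshape(-1, 3)
        for p in data.get("people", [])
    ]
    return np.stack(people) if people else np.empty((0, 18, 3))
```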

If you're using this processing step in your research, please cite:

```
@article{8765346,
  author  = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title   = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year    = {2019}
}
```

### Eye Gaze

For eye gaze estimation we selected [RT-GENE](https://github.com/Tobias-Fischer/rt_gene). In addition to feeding each video frame to the model, we also input a version of the frame where the left and right sides are wrapped together. This enables us to detect when a person moves over the edge of the video, as none of the models account for this. Since the estimation works on single frames, we then track all subjects throughout the video using a minimal Euclidean distance heuristic.
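
As a rough sketch of these two ideas (illustrative only; the actual logic lives in the processing scripts), the wrapped frame can be produced with a half-width horizontal roll, and per-frame detections can be linked over time by greedy nearest-neighbour matching:

```
import numpy as np

def wrap_frame(frame):
    """Shift the frame by half its width so the left and right edges meet
    in the middle, catching subjects that straddle the video border."""
    return np.roll(frame, frame.shape[1] // 2, axis=1)

def match_to_tracks(prev_positions, new_positions):
    """Greedily assign new detections to existing tracks by minimal
    Euclidean distance; returns (track_index, detection_index) pairs."""
    dists = [(np.linalg.norm(np.subtract(p, q)), ti, di)
             for ti, p in enumerate(prev_positions)
             for di, q in enumerate(new_positions)]
    pairs, used_t, used_d = [], set(), set()
    for _, ti, di in sorted(dists):
        if ti in used_t or di in used_d:
            continue
        pairs.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return pairs
```

Here prev_positions and new_positions would be, for example, face or body centres from consecutive frames.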

If you're using this processing step in your research, please cite:

```
@inproceedings{FischerECCV2018,
  author    = {Tobias Fischer and Hyung Jin Chang and Yiannis Demiris},
  title     = {{RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments}},
  booktitle = {European Conference on Computer Vision},
  year      = {2018},
  month     = {September},
  pages     = {339--357}
}
```

Notes:

- Before using [process_RTGene.py](process_RTGene.py) you need to run [install_RTGene.py](install_RTGene.py)!
- [OPTIONAL] You can provide a camera calibration file calib.pkl to improve detections.
- You need to provide the maximum number of people in the video for the sorting algorithm.

### Facial Expression

Under construction

### Speaking Activity

Under construction

### Object Tracking

Since you are most likely able to define your own study procedure, we decided to simplify object tracking by employing the visual fiducial system [AprilTag 2](https://github.com/AprilRobotics/apriltag), where the tag positions are extracted with its tailored detector.

Note: For Windows we use [pupil_apriltags](https://github.com/pupil-labs/apriltags).
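
A minimal detection sketch, assuming the pupil_apriltags Python API and tags from the common tag36h11 family (the image path is illustrative):

```
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # match the family of your printed tags

frame = cv2.imread("frames/session01/frame_000000.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector expects a grayscale image

for det in detector.detect(gray):
    cx, cy = det.center  # tag centre in pixel coordinates
    print(f"tag {det.tag_id}: centre=({cx:.1f}, {cy:.1f})")
```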