updated readme

mohsen-mansouryar 2016-03-09 20:20:36 +01:00
parent f34dc653e5
commit cede6b6ca5

published at ETRA 2016
code -> Contains the main scripts. Below is a list of commands and the results they produce:

> `python parallax_analysis.py pts` >> plot of calibration and test points.

> `python parallax_analysis.py 2d3d` >> plot of 2D-to-2D against 3D-to-3D mapping over all numbers of calibration depths.

> `python parallax_analysis.py 2d2d_2d3d` >> plot comparing parallax error over five different test depths for three calibration depths of 1.0m, 1.5m, and 2.0m between 2D-to-2D and 3D-to-3D mapping.

> `python parallax_2D3D_3Cdepths.py` >> plot comparing the average angular error of the two mapping techniques when three calibration depths are used together (depths 1 to 5 correspond to test depths 1.0m to 2.0m).
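
The angular error these scripts report is, in essence, the angle between an estimated and a ground-truth gaze direction. A minimal sketch of that computation (the function name and toy vectors are illustrative, not taken from `parallax_analysis.py`):

```python
import numpy as np

def angular_error_deg(estimated, ground_truth):
    """Angle in degrees between two 3D gaze direction vectors."""
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    cos_angle = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    # clip guards against values falling just outside [-1, 1] numerically
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# roughly a 1-degree offset between estimate and ground truth
print(angular_error_deg([0.0, np.sin(np.radians(1.0)), 1.0], [0.0, 0.0, 1.0]))
```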

> code/pupil -> Modules directly used from the PUPIL source code for the baseline 2D-to-2D mapping and data stream correlation.
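
For intuition, the baseline 2D-to-2D mapping is essentially a polynomial regression from pupil positions in eye-camera coordinates to gaze positions in scene-camera coordinates. A minimal least-squares sketch of that idea, assuming second-order polynomial features (the exact feature set and solver in the PUPIL modules may differ):

```python
import numpy as np

def poly_features(p):
    """Second-order polynomial features of a 2D pupil position (x, y)."""
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_2d_to_2d(pupil_pts, gaze_pts):
    """Fit mapping coefficients by least squares; both inputs are (N, 2) arrays."""
    A = np.array([poly_features(p) for p in pupil_pts])
    coeffs, _, _, _ = np.linalg.lstsq(A, np.asarray(gaze_pts, dtype=float), rcond=None)
    return coeffs  # shape (6, 2), one column per scene-image coordinate

def map_2d_to_2d(coeffs, pupil_pt):
    """Map a single pupil position to a 2D gaze position."""
    return poly_features(pupil_pt).dot(coeffs)
```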

> code/recording -> Scripts related to dataset recording and to marker visualization and tracking. Script dependencies are Python 2's OpenCV bindings and the ArUco library. More information on each module is documented where required.
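
Purely as an illustration of the marker-tracking step (the scripts here use the standalone ArUco library, and the file names below are placeholders), detecting markers with the `cv2.aruco` module from an older opencv-contrib build looks roughly like this:

```python
import cv2
import cv2.aruco as aruco

# placeholder frame; the recording scripts operate on scene-camera video
frame = cv2.imread('scene_frame.png')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detect markers from a standard predefined dictionary
dictionary = aruco.getPredefinedDictionary(aruco.DICT_ARUCO_ORIGINAL)
corners, ids, _rejected = aruco.detectMarkers(gray, dictionary)

# draw detections for visual inspection
aruco.drawDetectedMarkers(frame, corners, ids)
cv2.imwrite('scene_frame_markers.png', frame)
```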

> code/results -> Gaze estimation results for both the 2D-to-2D and 2D-to-3D mapping approaches with multiple calibration depths on data from the participants. Data files in the root directory of each method correspond to single-depth calibration results; the data format is described in the README.txt inside each method's directory. The results are also available via /BS/3D_Gaze_Tracking/work/results

> code/Visualization -> Creation of the figures for the paper:
>> `python 1CalibrationDepth.py` -> 2D-to-2D vs. 2D-to-3D with one calibration depth
>> `python 3CalibrationDepths.py` -> 2D-to-2D vs. 2D-to-3D with three calibration depths
>> `python EffectDistanceDifference1CalibrationDepth.py` -> Effect of different distances to the original calibration depth
>> `python EffectNumberofClusters.py` -> Effect of the number of clusters

### Publication
If you use or extend our code, please cite the following:
> *Mansouryar, Mohsen, et al. "3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers." arXiv preprint arXiv:1601.02644 (2016)*
## Dependencies
* [OpenCV](http://opencv.org/) – a multi-purpose computer vision library
* [ArUco](http://www.uco.es/investiga/grupos/ava/node/26) – minimal library for AR applications based on OpenCV
* [SciPy](http://www.scipy.org/) – for minimization, statistical and matrix operations as well as plotting
* [scikit-learn](http://scikit-learn.org/stable/) – Machine Learning tools in Python
* [Processing](https://processing.org/) – for visualizing AR markers
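
As a generic illustration of the role SciPy's minimization plays here, fitting the few free parameters of a gaze mapping typically means minimizing a misalignment cost over the calibration points. The sketch below fits a single eye-to-scene rotation on toy data; the cost, parameterization, and data are illustrative and not taken from this repository:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def alignment_cost(rotvec, eye_dirs, target_dirs):
    """Smooth surrogate for mean angular error: mean of (1 - cos(angle))."""
    mapped = Rotation.from_rotvec(rotvec).apply(eye_dirs)
    return np.mean(1.0 - np.sum(mapped * target_dirs, axis=1))

# toy calibration data: unit gaze directions in the eye and scene camera frames
rng = np.random.default_rng(0)
eye_dirs = rng.normal(size=(9, 3))
eye_dirs /= np.linalg.norm(eye_dirs, axis=1, keepdims=True)
target_dirs = Rotation.from_rotvec([0.05, -0.02, 0.01]).apply(eye_dirs)

result = minimize(alignment_cost, x0=np.zeros(3), args=(eye_dirs, target_dirs))
print(result.x)  # should land near the rotation vector [0.05, -0.02, 0.01]
```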