# Introduction

OpenGaze has the following modules:

* **opengaze**: the high-level class that calls the other modules, including three common use examples (see below).
* **input_handler**: a basic utility module that handles the input types and data.
* **gaze_estimator**: a module that performs the gaze estimation pipeline, including face detection, head pose estimation, data normalization, and gaze prediction from the input face/eye image patch.
* **face_detector**: a face detection module that uses a third-party library to perform both face and facial landmark detection.
* **normalizer**: the data normalization module, used to extract the face/eye image patch from the input image according to the 3D head pose and target center. It compensates for part of the variation caused by head pose.
* **gaze_predictor**: the core module for gaze estimation. It takes an input eye/face patch and outputs the 3D gaze direction in the (normalized) camera coordinate system.
* **data**: a module that defines the data structures shared across the other modules (a minimal sketch follows this list).
* **personal_calibrator**: a module that allows the user to calibrate the gaze estimation results on the 2D screen plane.
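The examples below read fields such as `gaze_data.gaze3d` and `gaze_data.gaze2d` from `Sample` objects. As a rough orientation, here is a minimal sketch of the layout this implies; the exact types and any further fields of the `data` module are assumptions:

```
#include <opencv2/core.hpp>

// Hypothetical sketch of the per-subject record implied by the examples below;
// the real Sample in the data module may well carry more fields.
struct GazeData {
    cv::Vec3f gaze3d;     // 3D gaze direction in the (normalized) camera coordinate system
    cv::Point2f gaze2d;   // estimated 2D gaze location on the screen plane
};

struct Sample {
    GazeData gaze_data;
};
```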
# Get started

You can familiarize yourself with the API by exploring the main executable class `OpenGaze` in the files `opengaze.hpp` and `opengaze.cpp`.

In the `exe` directory you will find three examples we wrote to demonstrate how to use OpenGaze for gaze visualization, gaze estimation, and personal calibration.
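As a starting point, here is a minimal sketch of such a driver program; the namespace and constructor signature are assumptions, so check the programs in `exe` for the actual usage:

```
#include "opengaze.hpp"

// Hypothetical minimal driver for the high-level OpenGaze class.
int main(int argc, char** argv) {
    opengaze::OpenGaze open_gaze(argc, argv);  // assumed constructor; real arguments may differ
    open_gaze.runGazeEstimation();             // or runGazeOnScreen() / runPersonalCalibration(9)
    return 0;
}
```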
## Gaze Visualization

The main function to call for this example is `OpenGaze::runGazeEstimation()`. After initializing the input with the `InputHandler` class, a gaze estimation task can simply be run as:

```
vector<Sample> output;
Mat input_image = input_handler_.getNextSample();
Mat undist_img;
undistort(input_image, undist_img, input_handler_.camera_matrix_, input_handler_.camera_distortion_);
gaze_estimator_.estimateGaze(undist_img, output);
```

The 3D gaze direction vector is stored in `output[i].gaze_data.gaze3d`, where `i` indicates the i-th user in the scene.
## Gaze Estimation

The main function to call for this example is `OpenGaze::runGazeOnScreen()`.

You still need to estimate a 3D gaze vector first, and then simply run

`input_handler_.projectToDisplay(output, true);`

to calculate the intersection of the gaze vector and the screen plane. This gives you the estimated 2D location on the screen, stored in `output[i].gaze_data.gaze2d`.
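Putting the two steps together, here is a minimal sketch reusing the member names from the visualization example above (error handling omitted):

```
// Estimate 3D gaze in the undistorted frame, then intersect it with the screen.
vector<Sample> output;
Mat undist_img;
undistort(input_handler_.getNextSample(), undist_img,
          input_handler_.camera_matrix_, input_handler_.camera_distortion_);
gaze_estimator_.estimateGaze(undist_img, output);

input_handler_.projectToDisplay(output, true);  // fills output[i].gaze_data.gaze2d

for (const auto &sample : output)
    cout << "On-screen gaze point: " << sample.gaze_data.gaze2d << endl;
```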
Note that before this calculation you have to provide the correct calibration information:

* `calibration.yml` - located in the `OpenGaze/content/calib/` directory; stores the camera intrinsic parameters. These parameters can be obtained with the standard camera calibration procedure provided by [OpenCV](https://docs.opencv.org/3.1.0/dc/dbb/tutorial_py_calibration.html).
* `monitor.yml` - located in the `OpenGaze/content/calib/` directory; stores the rotation and translation between the camera and the screen. This information can be obtained with the [mirror-based camera-screen calibration method](https://dl.acm.org/citation.cfm?id=1888118).
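For reference, intrinsics in a form OpenCV's `FileStorage` can read back can be written as below; the key names expected by OpenGaze's `calibration.yml` are an assumption here, so match them to the shipped file:

```
// Store the outputs of cv::calibrateCamera in YAML form.
Mat camera_matrix, dist_coeffs;
// ... obtain both from cv::calibrateCamera on the checkerboard images ...

FileStorage fs("calibration.yml", FileStorage::WRITE);
fs << "camera_matrix" << camera_matrix;  // 3x3 intrinsic matrix
fs << "dist_coeffs" << dist_coeffs;      // lens distortion coefficients
fs.release();
```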
## Personal Calibration

The main function to call for this example is `OpenGaze::runPersonalCalibration(int num_calibration_point)`.

The main class `PersonalCalibrator` can be initialized by

`PersonalCalibrator m_calibrator(input_handler_.getScreenWidth(), input_handler_.getScreenHeight());`

It needs to know the screen size in pixels in order to show the corresponding stimuli on the screen.

The locations of the stimuli are generated randomly by specifying the number of stimuli:

`m_calibrator.generatePoints(num_calibration_point);`

Each stimulus can then be shown by calling

`m_calibrator.showNextPoint();`

For each stimulus point, the gaze estimation module estimates the corresponding gaze point on the screen. The stimulus locations and estimated gaze points can then be used to generate a personal model by

`m_calibrator.generateModel(prediction, ground_truth, 1);`

where the last parameter indicates the order of the mapping function. A complete run is sketched below.
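This is a minimal end-to-end sketch under the assumption that `prediction` and `ground_truth` are point lists; how the predicted gaze point is obtained per stimulus depends on the estimation loop and is elided:

```
// Hypothetical calibration run using the members from the examples above.
PersonalCalibrator m_calibrator(input_handler_.getScreenWidth(),
                                input_handler_.getScreenHeight());
m_calibrator.generatePoints(num_calibration_point);       // random stimulus locations

vector<Point2f> prediction, ground_truth;
for (int i = 0; i < num_calibration_point; ++i) {
    m_calibrator.showNextPoint();                         // display the next stimulus
    // ... estimate the on-screen gaze point here, then append it to
    //     prediction and the stimulus location to ground_truth ...
}

m_calibrator.generateModel(prediction, ground_truth, 1);  // first-order mapping
```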