GD (Gaze-Detection)
Introduced: 2020-02-03
These images were generated with the UnityEyes simulator, after incorporating essential eyeball-physiology elements and modeling binocular vision dynamics. Each image is annotated with head pose and gaze direction, in addition to 2D and 3D landmarks of the eye's most important features. The images are also distributed across eight classes denoting the gaze direction of a driver's eyes (TopLeft, TopRight, TopCenter, MiddleLeft, MiddleRight, BottomLeft, BottomRight, BottomCenter). This dataset was used to train a DNN model for estimating gaze direction. It contains 61,063 training images, 132,630 testing images, and an additional 72,000 images for improvement.
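The eight class labels correspond to a discretization of continuous gaze direction into vertical (Top/Middle/Bottom) and horizontal (Left/Center/Right) bins; notably, the class list contains no MiddleCenter. The sketch below illustrates one plausible way such a mapping could work. The function name, the angle threshold, and the handling of the missing MiddleCenter cell are all assumptions for illustration, not the dataset's actual labeling rule.

```python
def gaze_class(pitch_deg: float, yaw_deg: float, thresh: float = 10.0) -> str:
    """Map a (pitch, yaw) gaze direction in degrees to one of the eight
    GD classes. Thresholds are hypothetical, not taken from the dataset."""
    vert = "Top" if pitch_deg > thresh else ("Bottom" if pitch_deg < -thresh else "Middle")
    horiz = "Left" if yaw_deg < -thresh else ("Right" if yaw_deg > thresh else "Center")
    if vert == "Middle" and horiz == "Center":
        # The dataset defines no MiddleCenter class; as an assumption,
        # fold near-center middle gazes into MiddleLeft/MiddleRight by yaw sign.
        horiz = "Left" if yaw_deg <= 0 else "Right"
    return vert + horiz


if __name__ == "__main__":
    print(gaze_class(20.0, -20.0))  # an upward-left gaze
    print(gaze_class(-20.0, 0.0))   # a downward, centered gaze
```

Such a discretization keeps the label set small enough for a standard 8-way softmax classifier while still covering the regions of a driver's visual field.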
Related Benchmarks
- GD-VCR / Visual Reasoning / Accuracy
- GD-VCR / Visual Reasoning / Gap (West)
- GDA / Information Extraction / Relation F1
- GDA / Relation Extraction / F1
- GDA / Relation Extraction / Relation F1
- GDELT / Link Prediction / MRR
- GDSC / Drug Discovery / Pearson correlation coefficient (PCC)
- GDSCv2 / Drug Discovery / Pearson correlation coefficient (PCC)
- GDSCv2 / Drug Discovery / mRMSE
- GDSCv2 / Zero-Shot Learning / Pearson correlation coefficient (PCC)