Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GTEA

Georgia Tech Egocentric Activity

Modalities: Images, Videos · License: Unknown · Introduced: 2011-01-01

The Georgia Tech Egocentric Activities (GTEA) dataset contains seven types of daily activities, such as making a sandwich, tea, or coffee. Each activity is performed by four different subjects, for a total of 28 videos. Each video is approximately one minute long and contains about 20 fine-grained action instances, such as take bread and pour ketchup.

Source: TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation
Image Source: http://cbs.ic.gatech.edu/fpv/

Benchmarks

Action Localization: mAP@0.1:0.7, mAP@0.5, F1@50%, F1@25%, F1@10%, Acc, Edit
Action Segmentation: F1@50%, F1@25%, F1@10%, Acc, Edit
Temporal Action Localization: mAP@0.1:0.7, mAP@0.5
Video: mAP@0.1:0.7, mAP@0.5
Weakly Supervised Action Localization: mAP@0.1:0.7, mAP@0.5
Zero-Shot Learning: mAP@0.1:0.7, mAP@0.5
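The F1@k scores above are segmental F1 metrics commonly reported for action segmentation: frame-wise predictions are collapsed into segments, and a predicted segment counts as a true positive when its temporal IoU with an unmatched ground-truth segment of the same class exceeds the threshold k (10%, 25%, or 50%). A minimal sketch, assuming simple frame-wise label sequences as input (function and parameter names here are illustrative, not from any benchmark's official code):

```python
def segments(labels):
    """Collapse a frame-wise label sequence into (label, start, end) runs."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))  # end index is exclusive
            start = i
    return segs

def f1_at_k(pred, gt, k=0.5, background=("background",)):
    """Segmental F1@k: a predicted segment is a true positive when its
    temporal IoU with an unmatched same-class ground-truth segment exceeds k."""
    p_segs = [s for s in segments(pred) if s[0] not in background]
    g_segs = [s for s in segments(gt) if s[0] not in background]
    matched = [False] * len(g_segs)
    tp = 0
    for lbl, ps, pe in p_segs:
        best_iou, best_j = 0.0, -1
        for j, (gl, gs, ge) in enumerate(g_segs):
            if gl != lbl or matched[j]:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            if inter / union > best_iou:
                best_iou, best_j = inter / union, j
        if best_iou > k:
            tp += 1
            matched[best_j] = True
    fp = len(p_segs) - tp
    fn = len(g_segs) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Matching each ground-truth segment at most once penalizes over-segmentation: splitting one true action into many short predicted segments yields at most one true positive plus several false positives.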

Statistics

Papers: 120
Benchmarks: 20

Links

Homepage

Tasks

Action Localization, Action Segmentation, Fine-Grained Action Detection, Temporal Action Localization, Video, Weakly Supervised Action Localization, Zero-Shot Learning