Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

3,275 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

3,275 dataset results

7-Scenes

The 7-Scenes dataset is a collection of tracked RGB-D camera frames. It may be used to evaluate methods for applications such as dense tracking and mapping, and relocalization. All scenes were recorded with a handheld Kinect RGB-D camera at 640×480 resolution. The dataset creators use an implementation of the KinectFusion system to obtain the 'ground truth' camera tracks and a dense 3D model. Several sequences were recorded per scene by different users and split into distinct training and testing sequence sets.

7 papers · 0 benchmarks · Images, RGB-D

Cross-Dataset Testbed

The Cross-Dataset Testbed is a DeCAF7-based cross-dataset image classification dataset containing 40 categories of images from 3 domains: 3,847 images from Caltech-256, 4,000 images from ImageNet, and 2,626 images from SUN. In total there are 10,473 images of 40 categories across these three domains.

7 papers · 0 benchmarks · Images

TUM Kitchen

The TUM Kitchen dataset is an action recognition dataset that contains 20 video sequences captured by 4 cameras with overlapping views. The camera network captures the scene from four viewpoints at 25 fps, and every RGB frame has a resolution of 384×288 pixels. Action labels are provided frame-wise, separately for the left arm, the right arm, and the torso.

7 papers · 0 benchmarks · Images, Videos

SynthHands

The SynthHands dataset is a hand pose estimation dataset consisting of real captured hand motion retargeted to a virtual hand, with natural backgrounds and interactions with different objects. The dataset contains data for male and female hands, both with and without interaction with objects. While the hand and foreground object are synthetically generated using Unity, the motion was obtained from real performances as described in the accompanying paper. In addition, real object textures and background images (depth and color) were used. Ground truth 3D positions are provided for 21 keypoints of the hand.

7 papers · 0 benchmarks · Images, RGB-D, Videos

YUD+ (Additional Vanishing Point Labels for the York Urban Database)

YUD+ is a dataset containing additional Vanishing Point Labels for the York Urban Database.

7 papers · 0 benchmarks · Images

IQUAD (Interactive Question Answering Dataset)

IQUAD is a dataset for Visual Question Answering in interactive environments. It is built upon AI2-THOR, a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration.

7 papers · 0 benchmarks · Images, Texts

CASME II (Chinese Academy of Sciences Micro-Expression II)

The Chinese Academy of Sciences Micro-Expression II dataset (CASME II) consists of 255 videos elicited from 26 participants. The videos were recorded with a Point Grey GRAS-03K2C camera at a frame rate of 200 fps. The average video length is 0.34 s, equivalent to 68 frames. Each video's emotion label is annotated by two coders, with an inter-coder reliability of 0.846.

7 papers · 12 benchmarks · Images

Atari Grand Challenge

The Atari Grand Challenge dataset is a large dataset of human Atari 2600 replays. It consists of replays for 5 different games:

  • Space Invaders (445 episodes, 2M frames)
  • Q*bert (659 episodes, 1.6M frames)
  • Ms. Pacman (384 episodes, 1.7M frames)
  • Video Pinball (211 episodes, 1.5M frames)
  • Montezuma's Revenge (668 episodes, 2.7M frames)

7 papers · 0 benchmarks · Images, Videos

DFW (Disguised Faces in the Wild)

DFW contains over 11,000 images of 1,000 identities with different types of disguise accessories. The dataset was collected from the Internet, resulting in unconstrained face images similar to real-world settings.

7 papers · 0 benchmarks · Images

FPL (First-Person Locomotion)

FPL supports a new task of predicting the future locations of people observed in first-person videos.

7 papers · 0 benchmarks · Images

Grocery Store

Grocery Store is a dataset of natural images of grocery items. All natural images were taken with a smartphone camera in different grocery stores. It contains 5,125 natural images from 81 different classes of fruits, vegetables, and carton items (e.g. juice, milk, yoghurt). The 81 classes are divided into 42 coarse-grained classes, where e.g. the fine-grained classes 'Royal Gala' and 'Granny Smith' belong to the same coarse-grained class 'Apple'. Additionally, each fine-grained class has an associated iconic image and a product description of the item.

7 papers · 0 benchmarks · Images

HandNet

The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. The annotations are generated by a magnetic annotation technique. 6D pose (position and orientation) is available for the center of the hand as well as for each of the five fingertips.

7 papers · 0 benchmarks · Images, RGB-D

Hindi Visual Genome

Hindi Visual Genome is a multimodal dataset of text and images suitable for the English-Hindi multimodal machine translation task and for multimodal research.

7 papers · 0 benchmarks · Images, Texts

Hotels-50K

The Hotels-50K dataset consists of over 1 million images from 50,000 different hotels around the world. These images come both from travel websites and from the TraffickCam mobile application, which allows everyday travelers to submit images of their hotel rooms to help combat trafficking. The TraffickCam images are more visually similar to images from trafficking investigations than the images from travel websites.

7 papers · 0 benchmarks · Images

ISIA Food-500

ISIA Food-500 includes 500 food categories drawn from a Wikipedia list and 399,726 images, making it a more comprehensive food dataset that surpasses existing popular benchmark datasets in category coverage and data volume.

7 papers · 0 benchmarks · Images

Kannada-MNIST

The Kannada-MNIST dataset is a drop-in substitute for the standard MNIST dataset for the Kannada language.

7 papers · 0 benchmarks · Images

MegaAge

MegaAge is a large dataset that consists of 41,941 faces annotated with age posterior distributions.

7 papers · 0 benchmarks · Images

NDD20 (Northumberland Dolphin Dataset 2020)

Northumberland Dolphin Dataset 2020 (NDD20) is a challenging image dataset annotated for both coarse and fine-grained instance segmentation and categorisation. This dataset, the first release of the NDD, was created in response to the rapid expansion of computer vision into conservation research and the production of field-deployable systems suited to extreme environmental conditions, an area with few open-source datasets. NDD20 contains a large collection of above- and below-water images of two different dolphin species for traditional coarse and fine-grained segmentation.

7 papers · 0 benchmarks · Images

OpenEDS2020

OpenEDS2020 is a dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display fitted with two synchronized eye-facing cameras. The dataset, which is anonymized to remove any personally identifiable information on participants, consists of 80 participants of varied appearance performing several gaze-elicited tasks, and is divided into two subsets: 1) the Gaze Prediction Dataset, with up to 66,560 sequences containing 550,400 eye-images and their respective gaze vectors, created to foster research in spatio-temporal gaze estimation and prediction approaches; and 2) the Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, of which 5% carry a semantic segmentation label, devised to encourage the use of temporal information to propagate labels to contiguous frames.

7 papers · 0 benchmarks · Images

PFN-PIC (PFN Picking Instructions for Commodities Dataset)

PFN-PIC is a collection of spoken language instructions for a robotic system to pick and place common objects. Text instructions and corresponding object images are provided. The dataset covers situations where the operator instructs the robot to pick up a specific object and move it to another location: for example, 'Move the blue and white tissue box to the top right bin.' It consists of RGB-D images, bounding box annotations, destination box annotations, and text instructions.

7 papers · 0 benchmarks · Images, Texts
Page 59 of 164