Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

19,997 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

19,997 dataset results

XQLFW (Cross-Quality Labeled Faces in the Wild)

An evaluation protocol for face verification focusing on a large intra-pair image quality difference.

16 papers · 6 benchmarks · Images

COLD

COLDataset is a Chinese offensive language dataset built to facilitate offensive language detection and model evaluation. It contains 37k annotated sentences.

16 papers · 0 benchmarks

IndicGLUE (Indic General Language Understanding Evaluation Benchmark)

IndicGLUE, the Indic General Language Understanding Evaluation Benchmark, is a collection of NLP tasks designed to evaluate the natural language understanding capabilities of NLP models across diverse tasks and multiple Indian languages.

16 papers · 0 benchmarks

Wukong

Wukong is a large-scale Chinese cross-modal dataset for benchmarking multi-modal pre-training methods for Vision-Language Pre-training (VLP). It contains 100 million Chinese image-text pairs collected from the web; the base query list is filtered according to the frequency of Chinese words and phrases.

16 papers · 0 benchmarks · Images, Texts

SKM-TEA (Stanford Knee MRI with Multi-Task Evaluation)

The SKM-TEA dataset pairs raw quantitative knee MRI (qMRI) data, image data, and dense labels of tissues and pathology for end-to-end exploration and evaluation of the MR imaging pipeline. This 1.6TB dataset consists of raw-data measurements of ~25,000 slices (155 patients) of anonymized patient knee MRI scans, the corresponding scanner-generated DICOM images, manual segmentations of four tissues, and bounding box annotations for sixteen clinically relevant pathologies.

16 papers · 0 benchmarks · Images, MRI, Medical

MATRES (Multi-Axis Temporal RElations for Start-points)

MATRES is the Multi-Axis Temporal RElations for Start-points dataset, which annotates temporal relations between the start-points of events in text.

16 papers · 3 benchmarks · Texts

FEMNIST (Federated Extended MNIST)

FEMNIST is a federated learning benchmark built from Extended MNIST by partitioning the handwritten character images according to their writer.

16 papers · 3 benchmarks

SR-RAW

SR-RAW is a raw sensor dataset in which each sequence contains 7 images (a few contain 6), captured in both RAW and JPG at different focal lengths.

16 papers · 0 benchmarks

CholecT45

CholecT45 is a subset of CholecT50 consisting of 45 videos from the Cholec80 dataset, and is the first public release of part of the CholecT50 dataset. CholecT50 is a dataset of 50 endoscopic videos of laparoscopic cholecystectomy surgery, introduced to enable research on fine-grained action recognition in laparoscopic surgery; it is annotated with 100 triplet classes of the form <instrument, verb, target>.

16 papers · 2 benchmarks · Images, Videos
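The <instrument, verb, target> triplet annotation described above can be handled with a minimal parsing sketch; the string format and all helper names here are assumptions for illustration, not part of any official CholecT45 tooling:

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    """One action triplet: which instrument performs which verb on which target."""
    instrument: str
    verb: str
    target: str

def parse_triplet(label: str) -> Triplet:
    """Parse a hypothetical '<instrument, verb, target>' string into its parts."""
    parts = label.strip().lstrip("<").rstrip(">").split(",")
    if len(parts) != 3:
        raise ValueError(f"expected 3 components, got {len(parts)}: {label!r}")
    return Triplet(*(p.strip() for p in parts))

# Example label (illustrative only, not taken from the dataset files):
print(parse_triplet("<grasper, retract, gallbladder>"))
# Triplet(instrument='grasper', verb='retract', target='gallbladder')
```

The official release distributes triplet labels as class indices rather than strings, so a real loader would map indices through the dataset's label tables instead.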

GRIT (General Robust Image Task Benchmark)

The General Robust Image Task (GRIT) Benchmark is an evaluation-only benchmark for measuring the performance and robustness of vision systems across multiple image prediction tasks, concepts, and data sources. GRIT hopes to encourage the research community to pursue several related research directions.

16 papers · 9 benchmarks · Images, Texts

MFR (Ongoing version of the ICCV-2021 Masked Face Recognition Challenge & Workshop)

During the COVID-19 pandemic, nearly universal mask-wearing posed a huge challenge to face recognition: traditional systems may fail to recognize masked faces, while removing the mask for authentication increases the risk of infection. The widespread requirement to wear protective face masks in public places has driven a need to understand how face recognition technology deals with occluded faces, often with only the periocular area and above visible.

16 papers · 36 benchmarks

VideoLQ

VideoLQ consists of videos with a Creative Commons license downloaded from various video hosting sites such as Flickr and YouTube.

16 papers · 3 benchmarks · Images, Videos

Cornell (60%/20%/20% random splits)

Node classification on Cornell with 60%/20%/20% random splits for training/validation/test.

16 papers · 1 benchmark · Graphs
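A 60%/20%/20% random node split like the one described above can be sketched as follows; the helper name and the use of Python's stdlib `random` are assumptions for illustration (published benchmarks typically ship fixed split files rather than regenerating splits):

```python
import random

def random_node_splits(num_nodes, train=0.6, val=0.2, seed=0):
    """Shuffle node indices and cut them into train/val/test index lists."""
    rng = random.Random(seed)  # seeded so a split can be reproduced
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(train * num_nodes)
    n_val = int(val * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Cornell (WebKB) has 183 nodes
train_idx, val_idx, test_idx = random_node_splits(183)
print(len(train_idx), len(val_idx), len(test_idx))  # 109 36 38
```

Results on such benchmarks are usually averaged over several random splits (different seeds), since small graphs like Cornell are sensitive to the particular split.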

TripClick

TripClick is a large-scale dataset of click logs in the health domain, obtained from user interactions of the Trip Database health web search engine.

16 papers · 0 benchmarks · Texts

Texas (60%/20%/20% random splits)

Node classification on Texas with 60%/20%/20% random splits for training/validation/test.

16 papers · 1 benchmark · Graphs

Cornell (48%/32%/20% fixed splits)

Node classification on Cornell with the fixed 48%/32%/20% splits provided by Geom-GCN.

16 papers · 2 benchmarks · Graphs

FaithDial

FaithDial is a benchmark for hallucination-free dialogue, created by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark.

16 papers · 0 benchmarks · Texts

Traffic (Traffic Flow Forecasting Data Set)

The task for this dataset is to forecast spatio-temporal traffic volume based on the historical traffic volume and other features in neighboring locations.

16 papers · 3 benchmarks · Time series

Jester (Gesture Recognition)

The Jester Gesture Recognition dataset includes 148,092 labeled video clips of humans performing basic, pre-defined hand gestures in front of a laptop camera or webcam. It is designed for training machine learning models to recognize hand gestures such as sliding two fingers down, swiping left or right, and drumming fingers.

16 papers · 5 benchmarks · Videos

GSV-Cities

GSV-Cities is a large-scale dataset for training deep neural networks for visual place recognition.

16 papers · 0 benchmarks · Images
Page 119 of 1000