Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

19,997 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

19,997 dataset results

MIMIC-IV-ED

MIMIC-IV-ED is a large, freely available database of emergency department (ED) admissions at the Beth Israel Deaconess Medical Center between 2011 and 2019. As of MIMIC-IV-ED v1.0, the database contains 448,972 ED stays. Vital signs, triage information, medication reconciliation, medication administration, and discharge diagnoses are available. All data are deidentified to comply with the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provision. MIMIC-IV-ED is intended to support a diverse range of education initiatives and research studies.

4 papers · 0 benchmarks · Medical, Tabular
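As a tabular database, MIMIC-IV-ED lends itself to simple derived measures such as ED length of stay. The sketch below computes it from admission and discharge timestamps; the `edstays` table and its `intime`/`outtime` column names are assumptions based on common descriptions of the release, and the rows are synthetic.

```python
import csv
import io
from datetime import datetime

# Synthetic rows in the style of a MIMIC-IV-ED `edstays` table
# (column names are assumed, not guaranteed to match the release).
rows = io.StringIO(
    "stay_id,intime,outtime\n"
    "1,2131-01-07 20:39:00,2131-01-08 04:12:00\n"
    "2,2145-06-02 11:05:00,2145-06-02 14:50:00\n"
)

fmt = "%Y-%m-%d %H:%M:%S"
los_hours = {}
for r in csv.DictReader(rows):
    # Length of stay = discharge time minus arrival time, in hours.
    delta = datetime.strptime(r["outtime"], fmt) - datetime.strptime(r["intime"], fmt)
    los_hours[r["stay_id"]] = delta.total_seconds() / 3600

print(los_hours)  # stay 1: 7.55 h, stay 2: 3.75 h
```

The same pattern extends to joining triage vitals or discharge diagnoses onto stays via `stay_id`.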

RealMCVSR (Real-world Multi-Camera Video Super-Resolution)

Our RealMCVSR dataset provides real-world HD video triplets concurrently recorded by Apple iPhone 12 Pro Max equipped with triple cameras having fixed focal lengths: ultra-wide (30mm), wide-angle (59mm), and telephoto (147mm). To concurrently record video triplets, we built an iOS app that provides full control over exposure parameters (i.e., shutter speed and ISO) of the cameras. For recording each scene, we set the cameras in the auto-exposure mode, where the shutter speeds of the three cameras are synced to avoid varying motion blur across a video triplet. ISOs are adjusted accordingly for each camera to pick up the same exposure. Each video is saved in the MOV format using HEVC/H.265 encoding with the HD resolution (1080 x 1920). The dataset contains triplets of 161 video clips with 23,107 frames in total. The video triplets are split into training, validation, and testing sets, each of which has 137, 8, and 16 triplets with 19,426, 1,141, and 2,540 frames, respectively.

4 papers · 2 benchmarks
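The split sizes quoted for RealMCVSR can be sanity-checked against the stated totals of 161 clips and 23,107 frames. A minimal arithmetic check, using only the numbers from the description:

```python
# (clips, frames) per split, as stated in the dataset description.
splits = {
    "train": (137, 19_426),
    "val": (8, 1_141),
    "test": (16, 2_540),
}

total_clips = sum(clips for clips, _ in splits.values())
total_frames = sum(frames for _, frames in splits.values())

print(total_clips, total_frames)  # 161 23107 — matches the quoted totals
```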

Relative Human

Relative Human (RH) contains multi-person in-the-wild RGB images with rich human annotations.

4 papers · 22 benchmarks · Images

Heavy Snowfall (DENSE)

We introduce an object detection dataset in challenging adverse weather conditions covering 12,000 samples in real-world driving scenes and 1,500 samples in controlled weather conditions within a fog chamber. The dataset includes different weather conditions like fog, snow, and rain and was acquired over 10,000 km of driving in northern Europe. The driven route with cities along the road is shown on the right. In total, 100k objects were labeled with accurate 2D and 3D bounding boxes. The main contributions of this dataset are: - We provide a proving ground for a broad range of algorithms covering signal enhancement, domain adaptation, object detection, or multi-modal sensor fusion, focusing on the learning of robust redundancies between sensors, especially if they fail asymmetrically in different weather conditions. - The dataset was created with the initial intention to showcase methods which learn robust redundancies between the sensors and enable raw-data sensor fusion in cas

4 papers · 6 benchmarks · LiDAR

NILoc (Neural Inertial Localization)

IMU and WiFi data, along with aligned visual SLAM ground-truth locations, from a smartphone carried during natural human motion.

4 papers · 0 benchmarks · Time series

HC-STVG1 (Human-centric Spatio-Temporal Video Grounding)

The newly proposed HC-STVG task aims to localize the target person spatio-temporally in an untrimmed video. For this task, we collect a new benchmark dataset with spatio-temporal annotations of the target persons in complex multi-person scenes, together with full interaction and rich action information.

4 papers · 3 benchmarks

DIBCO 2019

DIBCO 2019 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2019 conference. The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images using performance evaluation measures that are motivated by document image analysis and recognition requirements.

4 papers · 0 benchmarks

VISUELLE2.0

Visuelle 2.0 is a dataset containing real data for 5,355 clothing products of the Italian fast-fashion retailer Nuna Lie. Specifically, Visuelle 2.0 provides data from 6 fashion seasons (partitioned into Autumn-Winter and Spring-Summer) from 2017-2019, right before the Covid-19 pandemic. Each product is accompanied by an HD image, textual tags, and more. The time series data are disaggregated at the shop level and include sales, inventory stock, max-normalized prices (for the sake of confidentiality), and discounts. Exogenous time series data are also provided, in the form of Google Trends based on the textual tags and multivariate weather conditions of the stores' locations. Finally, we also provide purchase data for 667K customers whose identity has been anonymized, to capture personal preferences. With these data, Visuelle 2.0 makes it possible to tackle several problems which characterize the activity of a fast-fashion company: new product demand forecasting, short-observation new pr

4 papers · 4 benchmarks · Images, Texts, Time series
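The "max-normalized prices" mentioned above presumably divide each price series by its maximum so the peak maps to 1.0 and absolute values stay confidential. A minimal sketch of that transform (the exact preprocessing used by the dataset authors is an assumption):

```python
def max_normalize(series):
    """Scale a price series so its maximum becomes 1.0."""
    peak = max(series)
    return [x / peak for x in series]

# Hypothetical weekly prices for one product, including a discount period.
prices = [39.9, 29.9, 19.9, 39.9]
print(max_normalize(prices))  # first and last entries map to 1.0
```

Relative price movements (e.g., the depth of a discount) survive this transform, while the actual prices do not.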

TimeHetNet (Meta Dataset for Time Series with heterogeneous networks)

This meta-dataset is composed of previously known datasets.

4 papers · 0 benchmarks · Time series

Monash

Time Series Forecasting Repository containing datasets of related time series for global forecasting.

4 papers · 0 benchmarks · Time series

SYMON (Synopses of Movie Narratives)

SyMoN contains 5,193 video summaries of popular movies and TV series. It captures naturalistic storytelling videos made by human creators for a human audience, and has higher story coverage and more frequent mental-state references than similar video-language story datasets.

4 papers · 0 benchmarks · Texts, Videos

SemEval-2022 Task-12

Symlink is a SemEval shared task of extracting mathematical symbols and their descriptions from LaTeX source of scientific documents. This is a new task in SemEval 2022, which attracted 180 individual registrations and 59 final submissions from 7 participant teams.

4 papers · 4 benchmarks

ExVo2022 (ICML ExVo 2022 Workshop & Competition Data)

Baseline code and data for the three tracks of the ExVo 2022 competition.

4 papers · 0 benchmarks · Speech

NHA12D (A New Pavement Crack Dataset)

NHA12D is an annotated pavement crack dataset that contains images with different viewpoints and pavement types. The dataset comprises 80 pavement images, including 40 concrete and 40 asphalt pavement images, captured by digital survey vehicles on the A12 network in the UK.

4 papers · 0 benchmarks · Images

TemporalWiki

TemporalWiki is a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark hence allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated/new knowledge at each point in time.

4 papers · 0 benchmarks · Texts
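The core idea behind TemporalWiki is diffing consecutive knowledge snapshots so only new or changed facts enter the update set. A minimal sketch using set difference over fact triples (the triple format here is illustrative, not the benchmark's actual schema):

```python
# Two consecutive snapshots as sets of (subject, relation, object) triples.
old_snapshot = {
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Linux", "latest_version", "5.16"),
}
new_snapshot = {
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Linux", "latest_version", "5.17"),   # changed fact
    ("Wikipedia", "language_editions", "300+"),  # new fact
}

# Facts present in the new snapshot but not the old one: the update set.
updated_or_new = new_snapshot - old_snapshot
print(sorted(updated_or_new))
```

Unchanged facts drop out of the difference, which is what lets the benchmark separately measure retention of old knowledge and acquisition of new knowledge.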

D3 (DBLP Discovery Dataset)

DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research.

4 papers · 0 benchmarks · Texts

CoVERT (A Corpus of Fact-checked Biomedical COVID-19 Tweets)

CoVERT is a fact-checked corpus of tweets with a focus on the domain of biomedicine and COVID-19-related (mis)information. The corpus consists of 300 tweets, each annotated with medical named entities and relations. It employs a novel crowdsourcing methodology to annotate all tweets with fact-checking labels and supporting evidence, which crowdworkers search for online. This methodology results in moderate inter-annotator agreement.

4 papers · 0 benchmarks · Biomedical, Texts

CelebA+masks

The COVID-19 pandemic raises the problem of adapting face recognition systems to the new reality, where people may wear surgical masks to cover their noses and mouths. Traditional data sets (e.g., CelebA, CASIA-WebFace) used for training these systems were released before the pandemic, so they now seem unsuited due to the lack of examples of people wearing masks. We propose a method for enhancing data sets containing faces without masks by creating synthetic masks and overlaying them on faces in the original images. Our method relies on Spark AR Studio, a developer program made by Facebook that is used to create Instagram face filters. In our approach, we use 9 masks of different colors, shapes, and fabrics. We employ our method to generate masks for 196,254 images (96.8%) of the CelebA data set.

4 papers · 6 benchmarks · Images

CASIA-WebFace+masks

The COVID-19 pandemic raises the problem of adapting face recognition systems to the new reality, where people may wear surgical masks to cover their noses and mouths. Traditional data sets (e.g., CelebA, CASIA-WebFace) used for training these systems were released before the pandemic, so they now seem unsuited due to the lack of examples of people wearing masks. We propose a method for enhancing data sets containing faces without masks by creating synthetic masks and overlaying them on faces in the original images. Our method relies on Spark AR Studio, a developer program made by Facebook that is used to create Instagram face filters. In our approach, we use 9 masks of different colors, shapes, and fabrics. We employ our method to generate masks for 445,446 images (90%) of the CASIA-WebFace data set.

4 papers · 6 benchmarks · Images
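The coverage percentages quoted for the two masked data sets can be cross-checked against the commonly cited source sizes (CelebA: 202,599 images; CASIA-WebFace: 494,414 images). A quick arithmetic check:

```python
# Commonly cited source dataset sizes (assumed, not stated in the entries above).
celeba_total, casia_total = 202_599, 494_414
# Masked-sample counts quoted in the two dataset descriptions.
celeba_masked, casia_masked = 196_254, 445_446

pct_celeba = 100 * celeba_masked / celeba_total
pct_casia = 100 * casia_masked / casia_total

# Roughly 96.9% and 90.1% — consistent with the quoted ~96.8% and ~90%.
print(round(pct_celeba, 1), round(pct_casia, 1))
```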

ReMASC

We introduce a new database of voice recordings with the goal of supporting research on vulnerabilities and protection of voice-controlled systems. In contrast to prior efforts, the proposed database contains genuine and replayed recordings of voice commands obtained in realistic usage scenarios and using state-of-the-art voice assistant development kits. Specifically, the database contains recordings from four systems (each with a different microphone array) in a variety of environmental conditions with different forms of background noise and relative positions between speaker and device. To the best of our knowledge, this is the first database that has been specifically designed for the protection of voice controlled systems (VCS) against various forms of replay attacks.

4 papers · 0 benchmarks · Speech