Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

3,148 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

3,148 dataset results

COVID-19 Disinfo (COVID-19 Disinformation Twitter Dataset)

With the emergence of the COVID-19 pandemic, the political and medical aspects of disinformation merged as the problem escalated to become the first global infodemic. Fighting this infodemic has been declared one of the most important focus areas of the World Health Organization, with dangers ranging from promoting fake cures, rumors, and conspiracy theories to spreading xenophobia and panic. Addressing the issue requires solving a number of challenging problems, such as identifying messages containing claims and determining their check-worthiness, factuality, and potential to do harm, as well as the nature of that harm. To address this gap, we release a large dataset of 16K manually annotated tweets for fine-grained disinformation analysis that focuses on COVID-19; combines the perspectives and interests of journalists, fact-checkers, social media platforms, policy makers, and society; and covers Arabic, Bulgarian, Dutch, and English.

0 papers · 0 benchmarks · Texts

EU-ADR

The EU-ADR corpus is a biomedical relation extraction dataset that contains 100 abstracts, with annotated relations between drugs, disorders, and targets.

0 papers · 0 benchmarks · Texts

The Reddit COVID Dataset

The Reddit COVID Dataset contains 4.51M Reddit posts and 17.8M comments, covering all mentions of COVID up to 2021-10-25 across the entire Reddit social network. Both were procured with SocialGrep's export feature and released as part of the SocialGrep Reddit datasets. The posts are labeled with their subreddit, title, creation date, domain, selftext, and score. The comments are labeled with their subreddit, body, creation date, sentiment (computed with a VADER pipeline), and score.

0 papers · 0 benchmarks · Tabular, Texts

ASL-Phono

The ASL-Phono introduces a novel linguistics-based representation, which describes the signs in the ASLLVD dataset in terms of a set of attributes of the American Sign Language phonology.

0 papers · 0 benchmarks · Texts

CVIT PIB

We present sentence-aligned parallel corpora across 10 Indian languages (Hindi, Telugu, Tamil, Malayalam, Gujarati, Urdu, Bengali, Oriya, Marathi, and Punjabi) and English, many of which are categorized as low-resource. The corpora are compiled from online sources that share content across languages, and they significantly extend existing resources, which are either not large enough or are restricted to a specific domain (such as health). We also provide a separate test corpus, compiled from an independent online source, that can be used for validating performance in the 10 Indian languages. Alongside, we report on the methods of constructing such corpora using tools enabled by recent advances in machine translation and deep-neural-network-based cross-lingual retrieval.

0 papers · 0 benchmarks · Texts

SMCOVID19-CT (Contact Tracing Data (from Italian SM-COVID-19 App))

We present a real-data analysis of a contact-tracing (CT) experiment that was conducted in Italy over 8 months and involved more than 100,000 CT app users.

0 papers · 0 benchmarks · Tabular, Texts

SportsSum

SportsSum is a Chinese sports-game summarization dataset that contains live commentaries for 5,428 soccer games together with the corresponding news articles.

0 papers · 0 benchmarks · Texts

STEM-ECR

The STEM-ECR v1.0 dataset ("Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources") provides a benchmark for evaluating scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises annotations of scientific entities in abstracts drawn from 10 disciplines across Science, Technology, Engineering, and Medicine. The annotated entities are further grounded to Wikipedia and Wiktionary.

0 papers · 0 benchmarks · Texts

TRECVID-AVS21 (V3C1)

The dataset has been designed to represent true web videos in the wild, with good visual quality and diverse content characteristics. It is the test video collection for TRECVID-AVS2019 through TRECVID-AVS2021 and contains 1,082,649 web video clips with highly diverse content, no predominant characteristics, and low self-similarity.

0 papers · 0 benchmarks · Texts, Videos

SEN (Sentiment analysis of Entities in News headlines)

SEN is a novel publicly available human-labelled dataset for training and testing machine learning algorithms for the problem of entity level sentiment analysis of political news headlines.

0 papers0 benchmarksTexts

Pistachio Image Dataset

Citation request: OZKAN I.A., KOKLU M., and SARACOGLU R. (2021). Classification of Pistachio Species Using Improved K-NN Classifier. Progress in Nutrition, Vol. 23, No. 2. DOI: 10.23751/pn.v23i2.9686 (open access). https://www.mattioli1885journals.com/index.php/progressinnutrition/article/view/9686/9178

0 papers · 0 benchmarks · Images, Texts

CzechNewsDatasetForSTS

The data originate from the journalistic domain in the Czech language. We describe the data collection and annotation process in detail. The dataset contains 138,556 human annotations divided into train and test sets; in total, 485 journalism students participated in its creation. To increase the reliability of the test set, each test annotation is computed as the average of 9 individual annotations. We evaluate the quality of the dataset by measuring inter- and intra-annotator agreement, and we provide detailed statistics of the collected data. We conclude with a baseline experiment on predicting the semantic similarity of sentences: thanks to the large number of training annotations (116,956), the model performs significantly better than an average annotator (Pearson's correlation coefficient of 0.92 versus 0.86).

0 papers · 0 benchmarks · Texts
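The Pearson correlation used to compare the baseline model against an average annotator can be computed with a short, stdlib-only function. A minimal sketch (the scores below are toy numbers, not taken from the dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Gold similarity scores vs. model predictions (illustrative values only)
gold = [0.1, 0.4, 0.5, 0.8, 0.9]
pred = [0.2, 0.35, 0.55, 0.7, 0.95]
print(round(pearson(gold, pred), 3))
```

On Python 3.10+, `statistics.correlation` computes the same quantity directly.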

DigiLeTs (Digit- and Letter Trajectories)

A dataset with 23,870 digital trajectories (i.e., time series) of handwritten lowercase and uppercase Latin letters and Arabic numerals (a-z, A-Z, 0-9), generated by 77 experts using a Wacom Pen Tablet. An expert is considered a proficient user of the recorded symbols; in this case, adult native German speakers.

0 papers · 0 benchmarks · Texts, Time series

TFH_Annotated_Dataset (Thin_Film_head_relevant_Patent_Annotated_Dataset)

TFH_Annotated_Dataset is an annotated patent dataset pertaining to thin-film head technology in hard-disk drives. To the best of our knowledge, it is only the second publicly available labeled patent dataset in the technology-management domain that annotates both entities and the semantic relations between them; the first is [1].

0 papers · 0 benchmarks · Texts

UMass Citation Field Extraction

The University of Massachusetts Amherst citation field extraction dataset contains labels and segments for citations extracted from articles found on arXiv. Compared to previous standard datasets in citation field extraction, it has four times more data, provides detailed nested labels rather than coarse-grained flat labels, and draws from four academic disciplines rather than one: computer science, mathematics, physics, and quantitative biology.

0 papers · 0 benchmarks · Texts

Coronavirus (COVID-19) Tweets Dataset

This dataset includes CSV files containing the IDs and sentiment scores of tweets related to the COVID-19 pandemic. The real-time Twitter feed is monitored for coronavirus-related tweets using 90+ keywords and hashtags commonly used when referencing the pandemic. The oldest tweets in the dataset date back to October 1, 2019. The dataset was wholly redesigned on March 20, 2020, to comply with Twitter's content-redistribution policy, which restricts sharing of Twitter data other than IDs; therefore, only the tweet IDs are released. You need to hydrate the tweet IDs in order to obtain the complete data.

0 papers · 0 benchmarks · Texts
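Hydrating tweet IDs means re-fetching the full tweet objects from the Twitter API, which accepts a limited number of IDs per lookup request. A minimal sketch of the batching step, assuming the v2 tweet-lookup endpoint and its 100-ID-per-request limit (authentication and the actual HTTP calls are omitted; tools such as twarc or DocNow's Hydrator automate the whole process):

```python
from urllib.parse import urlencode

LOOKUP_URL = "https://api.twitter.com/2/tweets"  # assumed v2 tweet-lookup endpoint
BATCH = 100  # assumed maximum IDs per lookup request

def batch_requests(tweet_ids):
    """Yield one lookup URL per batch of up to BATCH tweet IDs."""
    for i in range(0, len(tweet_ids), BATCH):
        chunk = tweet_ids[i:i + BATCH]
        yield LOOKUP_URL + "?" + urlencode({"ids": ",".join(chunk)})

ids = [str(n) for n in range(250)]  # placeholder IDs, not real tweets
urls = list(batch_requests(ids))
print(len(urls))  # 250 IDs -> 3 requests
```

Each generated URL would then be fetched with a bearer token; responses for deleted or protected tweets simply come back empty, which is why hydrated collections are usually smaller than the released ID lists.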

STVD-FC (Fact-checking dataset)

STVD-FC is the largest public dataset for political content analysis and fact-checking tasks. It consists of more than 1,200 fact-checked claims scraped from a fact-checking service, with associated metadata. For the video counterpart, the dataset contains nearly 6,730 TV programs with metadata, totaling 6,540 hours. These programs were collected during the 2022 French presidential election using a dedicated workstation and protocol. The dataset is delivered in parts, with proper indexes, for accessibility of its 2 TB of data. More information about the STVD-FC dataset can be found in the publication [1].

0 papers · 0 benchmarks · Audio, Texts, Videos

ChatGPT-software-testing (ChatGPT Software Testing)

The dataset contains questions from the well-known software-testing textbook Introduction to Software Testing, 2nd Edition, by Ammann and Offutt. It includes all textbook questions in Chapters 1 to 5 that have solutions available on the book's official website.

0 papers · 0 benchmarks · Texts

ASR-ETeleCSC: An English Telephone Conversational Speech Corpus

This open-source dataset consists of 5.04 hours of transcribed English telephone conversational speech, comprising 13 conversations.

0 papers · 0 benchmarks · Audio, Texts

ChatGPT Paraphrases

This is a dataset of paraphrases created by ChatGPT.

0 papers · 0 benchmarks · Texts
Page 155 of 158