Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

383 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

383 dataset results (modality: 3D)

ClimateSet (ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning)

Climate models are critical tools for analyzing climate change and projecting its future impact. The machine learning (ML) community has taken an increased interest in supporting climate scientists' efforts on tasks such as climate model emulation, downscaling, and prediction. However, traditional datasets based on single climate models are limiting. We thus present ClimateSet, a comprehensive collection of inputs and outputs from 36 climate models sourced from the Input4MIPs and CMIP6 archives, designed for large-scale ML applications.

5 papers · 0 benchmarks · 3D, Time series

FAUST-partial

FAUST-partial is a 3D registration benchmark dataset created to provide a more informative evaluation of 3D registration methods by addressing two main limitations of current 3D registration benchmarks.

5 papers · 0 benchmarks · 3D
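3D registration benchmarks such as FAUST-partial score methods that recover a rigid transform aligning two point sets. For known correspondences, the classical closed-form solution is the Kabsch algorithm; the sketch below is illustrative and not tied to FAUST-partial's evaluation protocol:

```python
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Best-fit rigid transform: R @ src[i] + t ~= dst[i] for (N, 3) point sets."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)   # center both clouds
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)             # SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(U @ Vt))                    # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Recover a known z-axis rotation and translation from noise-free points.
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(src, dst)
```

With noise-free correspondences the estimate matches the ground-truth transform to numerical precision; benchmark methods differ in how they establish the correspondences in the first place.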

HARPER (Exploring 3D Human Pose Estimation and Forecasting from the Robot’s Perspective: The HARPER Dataset)

We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot, the quadruped robot manufactured by Boston Dynamics. The key novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors, which makes 3D body pose analysis challenging: sensors close to the ground capture humans only partially. The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users. The corpus contains not only the recordings of Spot's built-in stereo cameras, but also those of a 6-camera OptiTrack system (all recordings are synchronized), yielding ground-truth skeletal representations with sub-millimeter precision. In addition, the corpus includes reproducible benchmarks on 3D human pose estimation, human pose forecasting, and collision prediction, all based on publicly available baseline approaches.

5 papers · 18 benchmarks · 3D, Images, RGB-D, Videos

DD100

A large-scale and diverse duet interactive dance dataset, recording about 117 minutes of professional dancers' performances.

5 papers · 0 benchmarks · 3D, Music

RefRef (RefRef: A Synthetic Dataset and Benchmark for Reconstructing Refractive and Reflective Objects)

RefRef is a synthetic dataset and benchmark designed for the task of reconstructing scenes with complex refractive and reflective objects. Our dataset consists of 50 objects categorized based on their geometric and material complexity: single-material convex objects, single-material non-convex objects, and multi-material non-convex objects, where the materials have different colors, opacities, and refractive indices. Each object is placed in three distinct bounded environments and one unbounded environment, resulting in 150 unique scenes with diverse geometries, material properties, and backgrounds. Our dataset provides a controlled setting for evaluating and developing 3D reconstruction and novel view synthesis methods that handle complex optical effects.

5 papers · 1 benchmark · 3D, Images

CREMI

MICCAI Challenge on Circuit Reconstruction from Electron Microscopy Images.

4 papers · 2 benchmarks · 3D, Biomedical

Sydney Urban Objects

This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees.

4 papers · 3 benchmarks · 3D, LiDAR, Point cloud

Kitchen Scenes

Kitchen Scenes is a multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments, including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled, and objects are annotated with bounding boxes in both the images and the 3D point cloud.

4 papers · 0 benchmarks · 3D, Images, Videos

SWORD ('Scenes with occluded regions' dataset)

The dataset contains around 1,500 training videos and 290 test videos, with 50 frames per video on average, obtained by processing manually captured video sequences of static real-life urban scenes. Its main property is an abundance of close objects and, consequently, a higher prevalence of occlusions: according to the introduced heuristic, the mean area of occluded image parts in SWORD is approximately five times larger than in RealEstate10k (14% vs. 3%). This motivates the collection and use of SWORD and explains why it allows training more powerful models despite its smaller size.

4 papers · 3 benchmarks · 3D, Images, Videos
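The occlusion statistic quoted for SWORD (mean occluded area, 14% vs. 3%) amounts to averaging the fraction of occluded pixels over frames. A minimal NumPy sketch, where `masks` is a hypothetical stack of per-frame boolean occlusion masks, not SWORD's actual heuristic:

```python
import numpy as np

def mean_occluded_fraction(masks: np.ndarray) -> float:
    """Mean fraction of occluded pixels over a stack of binary masks.

    masks: (N, H, W) boolean array, True where a pixel is occluded.
    """
    # Flatten each frame and take the overall mean of the boolean values;
    # for equal-size frames this equals the average per-frame fraction.
    return float(masks.reshape(masks.shape[0], -1).mean())

# Toy example: one fully occluded 2x2 frame, one fully visible one.
masks = np.array([[[True, True], [True, True]],
                  [[False, False], [False, False]]])
print(mean_occluded_fraction(masks))  # 0.5
```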

Foot3D

A dataset of high resolution, textured scans of articulated left feet, useful for 3D shape representation learning.

4 papers · 0 benchmarks · 3D

HOMER (Household Object Movements from Everyday Routines)

The Household Object Movements from Everyday Routines (HOMER) dataset is composed of routine behaviors for five households, spanning 50 days for the train split and 10 days for the test split. The households are based on an identical apartment setting with four rooms, 108 objects, and 33 atomic actions such as find, grab, etc.

4 papers · 0 benchmarks · 3D, Time series

FLAG3D

FLAG3D is a large-scale 3D fitness activity dataset with language instructions, containing 180K sequences across 60 categories. FLAG3D features three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements, 2) detailed and professional language instructions describing how to perform each activity, and 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments.

4 papers · 0 benchmarks · 3D, Texts

SportsPose (SportsPose - A Dynamic 3D sports pose dataset)

Accurate 3D human pose estimation is essential for sports analytics, coaching, and injury prevention. However, existing datasets for monocular pose estimation do not adequately capture the challenging and dynamic nature of sports movements. In response, we introduce SportsPose, a large-scale 3D human pose dataset consisting of highly dynamic sports movements. With more than 176,000 3D poses from 24 subjects performing 5 different sports activities, SportsPose provides a diverse and comprehensive set of 3D poses that reflect the complex and dynamic nature of sports movements. In contrast to other markerless datasets, we have quantitatively evaluated the precision of SportsPose by comparing our poses with a commercial marker-based system, achieving a mean error of 34.5 mm across all evaluation sequences, comparable to the error reported on the commonly used 3DPW dataset. We further introduce a new metric, local movement, which describes the movement of the wrist and ankle.

4 papers · 0 benchmarks · 3D, Actions, Images, RGB Video, Videos
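The 34.5 mm figure is a mean Euclidean joint-position error against the marker-based reference, i.e., the standard MPJPE metric. A sketch of that metric, not SportsPose's exact evaluation code:

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error, in the units of the inputs (e.g. mm).

    pred, gt: (frames, joints, 3) arrays of 3D joint positions.
    """
    # Euclidean distance per joint, averaged over all joints and frames.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: one frame, two joints, with errors of 5.0 and 0.0.
gt = np.zeros((1, 2, 3))
pred = np.array([[[3.0, 4.0, 0.0], [0.0, 0.0, 0.0]]])
print(mpjpe(pred, gt))  # 2.5
```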

3D MM-Vet

We established a 3D evaluation benchmark, 3D MM-Vet, to assess four levels of capability in embodied interaction scenarios, ranging from basic perception to the generation of control statements.

4 papers · 1 benchmark · 3D, Point cloud

DrivAerNet (A Parametric Car Dataset for Data-driven Aerodynamic Design and Graph-Based Drag Prediction)

DrivAerNet is a large-scale, high-fidelity CFD dataset of 3D industry-standard car shapes designed for data-driven aerodynamic design. It comprises 4000 high-quality 3D car meshes and their corresponding aerodynamic performance coefficients, alongside full 3D flow field information.

4 papers · 1 benchmark · 3D, 3D meshes, Physics, Point cloud, Tabular

PARIS Dataset (PARIS Two-Part Object Dataset)

From PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects (Sec. 5.1, Dataset):

Synthetic dataset. The synthetic 3D models used for evaluation are from the PartNet-Mobility dataset [49, 27, 4], a large-scale dataset of articulated objects across 46 categories; instances from 10 categories are selected for the experiments. For each articulation state, 64-100 views covering the upper hemisphere of the object are randomly sampled to simulate real-world capture, and RGB images, camera parameters, and object masks are rendered with Blender [6] to create the training data.

Real-world dataset. The real data comes from the MultiScan dataset [25], which scans real-world indoor scenes containing articulated objects in multiple states. The reconstructed mesh of an object in two states serves as ground truth for evaluation, and the real RGB frames serve as training data.

4 papers · 0 benchmarks · 3D, Images, RGB-D
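The viewpoint sampling PARIS describes (random views covering the upper hemisphere) can be reproduced by drawing uniform azimuths and a uniform z-coordinate, which yields an area-uniform distribution on the hemisphere. A sketch with an illustrative unit radius, not the paper's rendering code:

```python
import numpy as np

def sample_upper_hemisphere(n: int, radius: float = 1.0, seed: int = 0) -> np.ndarray:
    """Return (n, 3) camera positions uniformly distributed on the upper hemisphere."""
    rng = np.random.default_rng(seed)
    # Area-uniform sampling on the sphere: azimuth uniform in [0, 2*pi),
    # z = cos(polar angle) uniform; restricting z to [0, 1) keeps points above ground.
    azimuth = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(0.0, 1.0, n)
    r_xy = np.sqrt(1.0 - z**2)
    pts = np.stack([r_xy * np.cos(azimuth), r_xy * np.sin(azimuth), z], axis=1)
    return radius * pts

views = sample_upper_hemisphere(100)
```

Each position can then be pointed at the object's center to define a camera pose for rendering.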

Human Optical Flow (Human Optical Flow dataset)

A synthetic dataset of videos of human action sequences with the corresponding optical flow.

3 papers · 0 benchmarks · 3D

A Billion Ways to Grasp

Robot grasping is often formulated as a learning problem. With the increasing speed and quality of physics simulations, generating large-scale grasping data sets that feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these data sets. In this paper, we review, classify, and compare different grasp sampling strategies. Our evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to evaluate the quality and robustness of the corresponding parallel-jaw grasps. Specifically, we consider more than 1 billion grasps for each of the 21 objects from the YCB data set. This dense data set lets us evaluate existing sampling schemes w.r.t. their bias and efficiency. Our experiments show that some popular sampling schemes contain significant bias and do not cover all possible ways an object can be grasped.

3 papers · 0 benchmarks · 3D, 6D
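Sampling candidate grasps over SE(3), the pose space the paper discretizes, combines a 3D translation with a rotation; uniformly random rotations can be drawn as unit quaternions via Shoemake's method. A sketch where the workspace bound is an assumption, not the paper's discretization:

```python
import numpy as np

def random_rotation_quaternion(rng) -> np.ndarray:
    """Uniformly random unit quaternion (Shoemake's method)."""
    u1, u2, u3 = rng.uniform(0.0, 1.0, 3)
    a, b = np.sqrt(1.0 - u1), np.sqrt(u1)
    return np.array([a * np.sin(2 * np.pi * u2), a * np.cos(2 * np.pi * u2),
                     b * np.sin(2 * np.pi * u3), b * np.cos(2 * np.pi * u3)])

def sample_grasp_poses(n: int, workspace: float = 0.1, seed: int = 0):
    """Return n candidate gripper poses as (translations, quaternions).

    Translations are uniform in a cube of half-width `workspace` (meters);
    rotations are uniform over SO(3).
    """
    rng = np.random.default_rng(seed)
    translations = rng.uniform(-workspace, workspace, (n, 3))
    quats = np.stack([random_rotation_quaternion(rng) for _ in range(n)])
    return translations, quats

trans, quats = sample_grasp_poses(1000)
```

Schemes like this are exactly what the paper compares: naive uniform sampling covers SE(3) evenly but wastes most samples on non-grasps, while surface-biased schemes are more efficient but, as the paper shows, can carry significant bias.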

DIPS-Plus (The Enhanced Database of Interacting Protein Structures for Interface Prediction)

How and where proteins interface with one another can ultimately impact the proteins' functions along with a range of other biological processes. As such, precise computational methods for protein interface prediction (PIP) are highly sought after, as they could yield significant advances in drug discovery and design as well as protein function analysis. However, the traditional benchmark dataset for this task, Docking Benchmark 5 (DB5), contains only a paltry 230 complexes for training, validating, and testing different machine learning algorithms. In this work, we expand on a dataset recently introduced for this task, the Database of Interacting Protein Structures (DIPS), to present DIPS-Plus, an enhanced, feature-rich dataset of 42,112 complexes for geometric deep learning of protein interfaces. The previous version of DIPS contains only the Cartesian coordinates and types of the atoms comprising a given protein complex, whereas DIPS-Plus adds a plethora of new residue-level features.

3 papers · 0 benchmarks · 3D, Biomedical, Graphs

PackIt

The ability to jointly understand the geometry of objects and plan actions for manipulating them is crucial for intelligent agents; this ability is referred to as geometric planning. Recently, many interactive environments have been proposed to evaluate intelligent agents on various skills; however, none of them caters to the needs of geometric planning. PackIt is a virtual environment to evaluate and potentially learn geometric planning, where an agent needs to take a sequence of actions to pack a set of objects into a box with limited space.

3 papers · 1 benchmark · 3D
Page 10 of 20