Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

65 machine learning datasets (filtered by modality: 3d meshes)

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3d meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • Midi (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • Cad (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

65 dataset results

BuildingNet

BuildingNet is a large-scale dataset of 3D building models whose exteriors are consistently labeled. The dataset consists of 513K annotated mesh primitives, grouped into 292K semantic part components across 2K building models. It covers several building categories, such as houses, churches, skyscrapers, town halls, libraries, and castles.

11 papers · 0 benchmarks · 3D, 3d meshes
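The labeled-mesh structure BuildingNet describes (mesh faces grouped into semantic part components) can be inspected with standard mesh tooling. Below is a minimal Python sketch of grouping faces by part label; the file names and the one-integer-label-per-face layout are illustrative assumptions, not BuildingNet's actual distribution format.

```python
import numpy as np
import trimesh

# Hypothetical file names and label layout, for illustration only.
mesh = trimesh.load("building_0001.obj", force="mesh")
labels = np.loadtxt("building_0001_labels.txt", dtype=int)  # one label per face

assert len(labels) == len(mesh.faces), "expect one semantic label per face"

# Group face indices by semantic part label.
parts = {}
for face_idx, label in enumerate(labels):
    parts.setdefault(int(label), []).append(face_idx)

for label, face_ids in sorted(parts.items()):
    print(f"part {label}: {len(face_ids)} faces")
```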

CAPE (Clothed Auto Person Encoding)

The CAPE dataset is a 3D dynamic dataset of clothed humans.

10 papers · 3 benchmarks · 3d meshes

Building3D

Building3D is an urban-scale dataset of more than 160 thousand buildings, covering 16 cities in Estonia over roughly 998 km²; each building comes with a mesh model, a real-world LiDAR point cloud, and a wireframe model.

8 papers · 0 benchmarks · 3d meshes, Point cloud

HANDAL (A Dataset of Real-World Manipulable Object Categories with Pose Annotations, Affordances, and Reconstructions)

We present the HANDAL dataset for category-level object pose estimation and affordance prediction. Unlike previous datasets, ours is focused on robotics-ready manipulable objects that are of the proper size and shape for functional grasping by robot manipulators, such as pliers, utensils, and screwdrivers. Our annotation process is streamlined, requiring only a single off-the-shelf camera and semi-automated processing, allowing us to produce high-quality 3D annotations without crowd-sourcing. The dataset consists of 308k annotated image frames from 2.2k videos of 212 real-world objects in 17 categories. We focus on hardware and kitchen tool objects to facilitate research in practical scenarios in which a robot manipulator needs to interact with the environment beyond simple pushing or indiscriminate grasping. We outline the usefulness of our dataset for 6-DoF category-level pose and scale estimation and related tasks, and we also provide 3D reconstructed meshes of all objects.

8 papers · 0 benchmarks · 3D, 3d meshes, Images, Point cloud, RGB Video, RGB-D, Videos

MMBody

The MMBody dataset provides human body data with motion capture, ground-truth meshes, Kinect RGBD, and millimeter-wave sensor data. See the homepage for more details.

7 papers · 0 benchmarks · 3D, 3d meshes, Images, Point cloud, RGB-D

ObjectFolder Real

The ObjectFolder Real dataset contains multisensory data collected from 100 real-world household objects. The visual data for each object include three high-quality 3D meshes at different resolutions and an HD video of the object rotating in a lightbox. The acoustic data include impact sound recordings captured at 30-50 points on the object, each 6 s long and accompanied by the coordinates of the striking location on the object mesh, the ground-truth contact force profile, and a video of the impact. The tactile data include tactile readings at the same 30-50 points, each consisting of a video of the tactile RGB images recording the entire gel deformation process, accompanied by two videos of the contact process from an in-hand camera and a third-view camera.

6 papers · 0 benchmarks · 3d meshes, Audio, Videos
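To make the per-object record structure above concrete, here is a hedged Python sketch of it as dataclasses; the field names are invented for illustration and are not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImpactRecording:
    point_xyz: Tuple[float, float, float]  # striking location on the object mesh
    sound_path: str                        # 6 s impact sound recording
    force_profile_path: str                # ground-truth contact force profile
    impact_video_path: str                 # accompanying video of the impact

@dataclass
class ObjectRecord:
    mesh_paths: List[str]            # three meshes at different resolutions
    lightbox_video_path: str         # HD video of the object rotating in a lightbox
    impacts: List[ImpactRecording]   # recordings at 30-50 points on the object
    tactile_video_paths: List[str]   # gel deformation, in-hand, and third-view videos
```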

Biwi 3D Audiovisual Corpus of Affective Communication - B3D(AC)^2 (BIWI 3D)

The BIWI 3D corpus comprises a total of 1,109 sentences uttered by 14 native English speakers (6 males and 8 females). A real-time 3D scanner and a professional microphone were used to capture the facial movements and speech of the speakers. The dense dynamic face scans were acquired at 25 frames per second, and the RMS error of the 3D reconstruction is about 0.5 mm. To ease automatic speech segmentation, the recordings were carried out in an anechoic room with walls covered by sound-absorbing materials.

5 papers · 14 benchmarks · 3d meshes, Audio, Texts

AMA (Articulated Mesh Animation)

Articulated Mesh Animation (AMA) is a real-world dataset containing 10 mesh sequences depicting 3 different humans performing various actions.

4 papers · 0 benchmarks · 3d meshes, Images

PsyMo (PsyMo: A Dataset for Estimating Self-Reported Psychological Traits from Gait)

Psychological trait estimation from external factors such as movement and appearance is a challenging and long-standing problem in psychology, principally based on the psychological theory of embodiment. To date, attempts to tackle this problem have utilized private small-scale datasets with intrusive body-attached sensors. Potential applications of an automated system for psychological trait estimation include estimation of occupational fatigue and psychology, as well as marketing and advertisement. In this work, we propose PsyMo (Psychological traits from Motion), a novel, multi-purpose and multi-modal dataset for exploring psychological cues manifested in walking patterns. We gathered walking sequences from 312 subjects in 7 different walking variations and 6 camera angles. In conjunction with the walking sequences, participants filled in 6 psychological questionnaires, totalling 17 psychometric attributes related to personality, self-esteem, fatigue, aggressiveness, and mental health.

4 papers · 0 benchmarks · 3d meshes, Images

P2S (Points2Surf)

This dataset was introduced with Points2Surf, a method that reconstructs surface meshes from point clouds.

4 papers · 0 benchmarks · 3d meshes, Point cloud
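Points2Surf itself is a learned method; as a rough illustration of the same task (a point cloud in, a mesh out), the sketch below runs classical Poisson surface reconstruction with Open3D instead. The input file name is a placeholder.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder input point cloud
pcd.estimate_normals()                     # Poisson needs (roughly oriented) normals

# Reconstruct a triangle mesh from the point cloud.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```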

DrivAerNet (A Parametric Car Dataset for Data-driven Aerodynamic Design and Graph-Based Drag Prediction)

DrivAerNet is a large-scale, high-fidelity CFD dataset of 3D industry-standard car shapes designed for data-driven aerodynamic design. It comprises 4,000 high-quality 3D car meshes and their corresponding aerodynamic performance coefficients, alongside full 3D flow-field information.

4 papers · 1 benchmark · 3D, 3d meshes, Physics, Point cloud, Tabular
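As a minimal sketch of the input that graph-based drag prediction typically starts from, the snippet below treats mesh vertices as graph nodes and unique mesh edges as connectivity; the file name is a placeholder, and DrivAerNet's actual loaders and label files may differ.

```python
import numpy as np
import trimesh

mesh = trimesh.load("car_0001.stl", force="mesh")  # placeholder car shape

node_features = np.asarray(mesh.vertices)      # (V, 3) vertex positions
edge_index = np.asarray(mesh.edges_unique).T   # (2, E) graph connectivity

print(f"{len(node_features)} nodes, {edge_index.shape[1]} edges")
# A GNN regressor would map (node_features, edge_index) to a drag coefficient.
```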

HOPE-Image (Household Objects for Pose Estimation)

The NVIDIA HOPE datasets consist of RGBD images and video sequences with labeled 6-DoF poses for 28 toy grocery objects. The toy grocery objects are readily available for purchase and have an ideal size and weight for robotic manipulation. 3D textured meshes for generating synthetic training data are also provided.

3 papers · 0 benchmarks · 3D, 3d meshes, Images
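As a small illustration of what a labeled 6-DoF pose provides, the sketch below composes a 4x4 object-to-camera transform and applies it to mesh vertices; the pose values and vertices are made-up examples, not actual HOPE annotations.

```python
import numpy as np

R = np.eye(3)                    # example object-to-camera rotation
t = np.array([0.1, 0.0, 0.5])    # example translation in meters

T = np.eye(4)                    # 4x4 homogeneous transform
T[:3, :3] = R
T[:3, 3] = t

vertices = np.random.rand(100, 3)                           # stand-in mesh vertices
homog = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
cam_vertices = (T @ homog.T).T[:, :3]                       # vertices in camera frame
```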

HOPE-Video (Household Objects for Pose Estimation)

The HOPE-Video dataset contains 10 video sequences (2,038 frames) with 5-20 objects in a tabletop scene, captured by a RealSense D415 RGBD camera mounted on a robot arm. In each sequence, the camera is moved to capture multiple views of a set of objects in the robotic workspace. COLMAP was first applied to refine the camera poses (keyframes at 6 fps) provided by forward kinematics and the RGB calibration from the RealSense to Baxter's wrist camera. A dense 3D point cloud was then generated via CascadeStereo (included for each sequence as 'scene.ply'). Ground-truth poses for the HOPE object models in the world coordinate system were annotated manually using the CascadeStereo point clouds.

3 papers · 0 benchmarks · 3D, 3d meshes, Videos
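The per-sequence 'scene.ply' mentioned above is a standard point-cloud file; a minimal sketch of inspecting it, assuming Open3D can parse it directly:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")  # per-sequence CascadeStereo cloud
pts = np.asarray(pcd.points)
print(f"{len(pts)} points; bounds {pts.min(axis=0)} to {pts.max(axis=0)}")
```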

DAD-3DHeads (DAD-3DHeads dataset)

The DAD-3DHeads dataset consists of 44,898 images collected from various sources (37,840 in the training set, 4,312 in the validation set, and 2,746 in the test set).

3 papers · 0 benchmarks · 3D, 3d meshes, Images

GarmentCodeData (GarmentCodeData: A Dataset of 3D Made-to-Measure Garments With Sewing Patterns)

GarmentCodeData contains 115,000 data points covering a variety of designs in many common garment categories: tops, shirts, dresses, jumpsuits, skirts, pants, etc. The garments are fitted to a variety of body shapes sampled from a custom statistical body model based on CAESAR, as well as to a standard reference body shape, using three different textile materials.

3 papers · 0 benchmarks · 3D, 3d meshes

Teeth3DS+ (An Extended Benchmark for Intraoral 3D Scans Analysis)

Intraoral 3D scan analysis is a fundamental aspect of Computer-Aided Dentistry (CAD) systems, playing a crucial role in various dental applications, including teeth segmentation, detection, labeling, and dental landmark identification. Accurate analysis of 3D dental scans is essential for orthodontic and prosthetic treatment planning, as it enables automated processing and reduces the need for manual adjustments by dental professionals. However, developing robust automated tools for these tasks remains a significant challenge due to the limited availability of high-quality public datasets and benchmarks. This article introduces Teeth3DS+, the first comprehensive public benchmark designed to advance the field of intraoral 3D scan analysis. Developed as part of the 3DTeethSeg 2022 and 3DTeethLand 2024 MICCAI challenges, Teeth3DS+ aims to drive research in teeth identification, segmentation, labeling, 3D modeling, and dental landmark identification. The dataset includes at least 1,800 intraoral scans.

2 papers · 0 benchmarks · 3d meshes, Point cloud

E.T. the Exceptional Trajectories


2 papers · 6 benchmarks · 3D, 3d meshes, Texts, Videos

Aria Digital Twin Dataset

A real-world dataset with a hyper-accurate digital counterpart and comprehensive ground-truth annotations.

2 papers · 6 benchmarks · 3D, 3d meshes, Point cloud, RGB Video, Videos

The RBO Dataset of Articulated Objects and Interactions

The RBO dataset of articulated objects and interactions is a collection of 358 RGB-D video sequences (67:18 minutes in total) of humans manipulating 14 articulated objects under varying conditions (light, perspective, background, interaction). All sequences are annotated with ground-truth poses of the rigid parts and the kinematic state of the articulated object (joint states), obtained with a motion capture system. We also provide complete kinematic models of these objects (kinematic structure and three-dimensional textured shape models). In 78 sequences, the contact wrenches during the manipulation are also provided.

1 paper · 0 benchmarks · 3d meshes, Point cloud, RGB-D, Time series, Videos

Fast Linking Numbers of Loopy Structures Dataset

Copyright (C) 2021 Ante Qu (antequ@cs.stanford.edu).

1 paper · 0 benchmarks · 3D, 3d meshes
Page 2 of 4