Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

95 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

95 dataset results

Apolloscape Inpainting

The Apolloscape Inpainting dataset consists of synchronized labeled images and LiDAR-scanned point clouds. It was captured with a HESAI Pandora All-in-One Sensing Kit under various lighting conditions and traffic densities in Beijing, China.

1 paper · 2 benchmarks · Images, LiDAR

THÖR

THÖR is a dataset of human motion trajectories and eye-gaze data collected in an indoor environment, with accurate ground truth for position, head orientation, gaze direction, social grouping, obstacle map and goal coordinates. THÖR also contains sensor data collected by a 3D LiDAR and involves a mobile robot navigating the space.

1 paper · 0 benchmarks · LiDAR

Near-Collision

Near-Collision is a large-scale dataset of 13,658 egocentric video snippets of humans navigating indoor hallways. To obtain ground-truth annotations of human pose, the videos are provided with the corresponding 3D point clouds from LiDAR.

1 paper · 0 benchmarks · LiDAR, Point cloud, Videos

EviLOG (Evidential Lidar Occupancy Grid Mapping)

The dataset contains synthetic training, validation and test data for occupancy grid mapping from LiDAR point clouds. Additionally, real-world LiDAR point clouds from a test vehicle with the same LiDAR setup as the simulated sensor are provided. Point clouds are stored as PCD files, and occupancy grid maps are stored as PNG images in which one image channel encodes evidence for the free cell state and another encodes evidence for the occupied cell state.

1 paper · 0 benchmarks · Environment, LiDAR, Point cloud
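The two-channel evidence encoding described for EviLOG can be sketched as follows. The channel assignment (channel 0 for free, channel 1 for occupied) and the 8-bit normalization are illustrative assumptions, not the dataset's documented layout.

```python
import numpy as np

def decode_evidential_grid(img):
    """Decode a grid-map image into evidential belief masses.

    Assumes (hypothetically) that channel 0 holds evidence for the
    free state and channel 1 evidence for the occupied state, each
    stored as an 8-bit intensity. Remaining mass is 'unknown'.
    """
    ev = img.astype(np.float32) / 255.0
    m_free, m_occ = ev[..., 0], ev[..., 1]
    # Mass not assigned to free or occupied stays with the unknown state.
    m_unknown = np.clip(1.0 - m_free - m_occ, 0.0, 1.0)
    return m_free, m_occ, m_unknown

# Synthetic 2x2 "image" standing in for a real PNG grid map.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 0], [128, 64, 0]]], dtype=np.uint8)
m_free, m_occ, m_unknown = decode_evidential_grid(img)
```

In practice the PNG would be loaded with an image library first; the evidential split above is just the Dempster-Shafer-style reading the description suggests.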

CODD (Cooperative Driving Dataset)

The Cooperative Driving dataset is a synthetic dataset generated using CARLA that contains lidar data from multiple vehicles navigating simultaneously through a diverse set of driving scenarios. This dataset was created to enable further research in multi-agent perception (cooperative perception) including cooperative 3D object detection, cooperative object tracking, multi-agent SLAM and point cloud registration. Towards that goal, all the frames have been labelled with ground-truth sensor pose and 3D object bounding boxes.

1 paper · 0 benchmarks · LiDAR, Point cloud

PolyU-BPCoMa (HK PolyU Backpack Colorized Mapping)

PolyU-BPCoMa: A Dataset and Benchmark Towards Mobile Colorized Mapping Using a Backpack Multisensorial System

1 paper · 0 benchmarks · 3D, Images, LiDAR

nuScenes (Cross-City UDA)

A cross-city UDA benchmark built upon nuScenes.

1 paper · 0 benchmarks · 3D, LiDAR, Point cloud

LiPC (LiDAR Point Cloud Clustering Benchmark Suite)

LiPC (LiDAR Point Cloud Clustering Benchmark Suite) is a benchmark suite for point cloud clustering algorithms based on open-source software and open datasets. It aims to provide the community with a collection of methods and datasets that are easy to use and comparable, with experimental results that are traceable and reproducible.

1 paper · 0 benchmarks · 3D, LiDAR

L-CAS 3D Point Cloud People Dataset

The L-CAS 3D Point Cloud People Dataset contains 28,002 Velodyne scan frames acquired in one of the main buildings (Minerva Building) of the University of Lincoln, UK. The total length of the recorded data is about 49 minutes. The data were grouped into two classes according to whether the robot was stationary or moving.

1 paper · 0 benchmarks · 3D, LiDAR

iV2V and iV2I+ (AI4Mobile Industrial Wireless Datasets: iV2V and iV2I+)

This dataset provides wireless measurements from two industrial testbeds: iV2V (industrial Vehicle-to-Vehicle) and iV2I+ (industrial Vehicular-to-Infrastructure plus sensor).

1 paper · 0 benchmarks · LiDAR, Point cloud, Tabular, Time series

LiDAR-CS

LiDAR-CS is a dataset for 3D object detection in real traffic scenarios. It contains 84,000 point cloud frames captured with a hybrid realistic LiDAR simulator, covering 6 groups of different sensors over the same corresponding scenarios.

1 paper · 0 benchmarks · LiDAR

FEE Corridor

The dataset contains point cloud data captured in an indoor environment with precise localization and ground-truth mapping information. Two "stop-and-go" data sequences of a robot with a mounted Ouster OS1-128 LiDAR are provided. This data-capturing strategy allows recording LiDAR scans that do not suffer from errors caused by sensor movement: individual scans are recorded only from static robot positions. Additionally, point clouds recorded with a Leica BLK360 scanner are provided as mapping ground-truth data.

1 paper · 0 benchmarks · 3D, LiDAR, Point cloud, Tracking

Robot@Home2 (Robot@Home2, a robotic dataset of home environments)

Robot@Home2 is an enhanced version of the Robot@Home dataset aimed at improving usability and functionality for developing and testing mobile robotics and computer vision algorithms. Robot@Home2 consists of three main components. First, a relational database that stores the contextual information and data links, compatible with the Structured Query Language (SQL). Second, a Python package for managing the database, including downloading, querying, and interfacing functions. Finally, learning resources in the form of Jupyter notebooks, runnable locally or on the Google Colab platform, enabling users to explore the dataset without local installation. These freely available tools are expected to make the Robot@Home dataset easier to exploit and to accelerate research in computer vision and robotics.

1 paper · 0 benchmarks · 3D, 3D meshes, Images, LiDAR, Point cloud, RGB Video, Videos
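Since the entry highlights an SQL-compatible relational database, a minimal sketch of querying such a dataset might look like the following. The table and column names here are entirely hypothetical placeholders, not the actual Robot@Home2 schema (the dataset's own Python package handles downloading and querying).

```python
import sqlite3

# Build a tiny in-memory stand-in for a relational robotics dataset.
# Schema is invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE observations (id INTEGER PRIMARY KEY, room TEXT, sensor TEXT)"
)
conn.executemany(
    "INSERT INTO observations (room, sensor) VALUES (?, ?)",
    [("kitchen", "lidar"), ("kitchen", "rgbd"), ("bedroom", "lidar")],
)

# Typical contextual query: how many observations were taken per room?
rows = conn.execute(
    "SELECT room, COUNT(*) FROM observations GROUP BY room ORDER BY room"
).fetchall()
```

The point of an SQL-backed dataset is exactly this kind of declarative filtering and joining across sensor modalities, rather than walking raw files.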

IRV2V (IRregular V2V Dataset)

To facilitate research on asynchrony for collaborative perception, we simulate the first collaborative perception dataset with different temporal asynchronies based on CARLA, named IRregular V2V (IRV2V). We set 100 ms as the ideal sampling interval and simulate various asynchronies found in real-world scenarios from two main aspects: i) considering that agents are not synchronized with a unified global clock, we uniformly sample a time shift $\delta_s \sim \mathcal{U}(-50, 50)\,\text{ms}$ for each agent in the same scene, and ii) considering the trigger noise of the sensors, we uniformly sample a time turbulence $\delta_d \sim \mathcal{U}(-10, 10)\,\text{ms}$ for each sampling timestamp. The final asynchronous time interval between adjacent timestamps is the sum of the time shift and the time turbulence. In experiments, we also sample the frame intervals to achieve large-scale and diverse asynchrony. Each scene includes 2 to 5 collaborative agents. Each agent is equipped with

1 paper · 12 benchmarks · Images, LiDAR
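The asynchrony model described above (a per-agent time shift plus a per-timestamp turbulence around the 100 ms nominal interval) can be sketched as follows. The function name and the exact way the offsets combine with the nominal clock are my reading of the description, not reference code from the dataset.

```python
import random

def sample_async_timestamps(num_agents, num_frames, interval_ms=100.0, seed=0):
    """Sample per-agent asynchronous timestamps, IRV2V-style.

    Each agent draws one scene-level shift delta_s ~ U(-50, 50) ms,
    and every sampling timestamp additionally draws a turbulence
    delta_d ~ U(-10, 10) ms around the nominal k * interval_ms clock.
    """
    rng = random.Random(seed)
    scene = []
    for _ in range(num_agents):
        delta_s = rng.uniform(-50.0, 50.0)  # agent is off the global clock
        stamps = [
            k * interval_ms + delta_s + rng.uniform(-10.0, 10.0)  # trigger noise
            for k in range(num_frames)
        ]
        scene.append(stamps)
    return scene

stamps = sample_async_timestamps(num_agents=3, num_frames=5)
```

Under this reading, every timestamp deviates from the ideal clock by at most 60 ms (50 ms shift plus 10 ms turbulence), which is what makes naive frame-index alignment between agents fail.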

ULS labelled data (UAV laser scanning labelled LAS data over tropical moist forest, classified as leaf or wood points)

UAV laser scanning data collected over a neotropical forest (Paracou, French Guiana). Four flights were conducted over a one-hectare plot in 2021 and 2022.

1 paper · 3 benchmarks · 3D, LiDAR, Point cloud

Polarimetric Imaging for Perception

The dataset includes polarimetric, RGB and depth automotive (on the road) data.

1 paper · 0 benchmarks · Images, LiDAR

V2AIX (A Multi-Modal Real-World Dataset of ETSI ITS V2X Messages in Public Road Traffic)

Connectivity is a main driver for the ongoing megatrend of automated mobility: future Cooperative Intelligent Transport Systems (C-ITS) will connect road vehicles, traffic signals, roadside infrastructure, and even vulnerable road users, sharing data and compute for safer, more efficient, and more comfortable mobility. In terms of communication technology for realizing such vehicle-to-everything (V2X) communication, the WLAN-based peer-to-peer approach (IEEE 802.11p, ITS-G5 in Europe) competes with C-V2X based on cellular technologies (4G and beyond). Irrespective of the underlying communication standard, common message interfaces are crucial for a common understanding between vehicles, especially those from different manufacturers. Targeting this issue, the European Telecommunications Standards Institute (ETSI) has been standardizing V2X message formats such as the Cooperative Awareness Message (CAM). In this work, we present V2AIX, a multi-modal real-world dataset of ETSI ITS messages gathered in public road traffic.

1 paper · 0 benchmarks · Images, LiDAR, Point cloud

MuSoHu (Toward human-like social robot navigation: A large-scale, multi-modal, social human navigation dataset)

A large-scale, egocentric, multimodal, and context-aware dataset of human demonstrations of social navigation.

1 paper · 0 benchmarks · 3D, Actions, LiDAR, Point cloud, RGB-D, Stereo, Videos

ConSLAM (Construction Dataset for SLAM)

ConSLAM is a real-world dataset collected periodically on a construction site to measure the accuracy of mobile scanners' SLAM algorithms.

1 paper · 0 benchmarks · 3D, LiDAR, Point cloud, RGB Video, Tracking, Videos

MVX (Multimodal V2X)

MVX combines realistic physical-world simulation with a differentiable, accurate ray-tracing wireless simulation, providing multi-agent, multimodal datasets for AI-driven digital-twin applications in vehicular communication systems.

1 paper · 1 benchmark · Images, LiDAR, Tabular, Videos
Page 4 of 5