Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

95 machine learning datasets

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

95 dataset results

SimBEV

The SimBEV dataset is a collection of 320 scenes spread across all 11 CARLA maps and contains data from a variety of sensors, including five camera types (RGB, semantic segmentation, instance segmentation, depth, and optical flow), lidar, semantic lidar, radar, GNSS, and IMU, along with 3D object bounding boxes and accurate bird's-eye view (BEV) ground truth. With each scene lasting 16 seconds at a frame rate of 20 Hz, the SimBEV dataset contains 102,400 annotated frames, over 8 million 3D object bounding boxes, and more than 2.5 billion BEV ground truth labels.

1 paper · 72 benchmarks · Images, LiDAR, Point cloud
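The frame count quoted above follows directly from the stated scene parameters (assuming one annotated frame per timestep); a quick sanity check:

```python
# Verify SimBEV's quoted frame count from its scene parameters.
scenes = 320       # scenes spread across the 11 CARLA maps
duration_s = 16    # seconds per scene
rate_hz = 20       # capture frame rate

frames = scenes * duration_s * rate_hz
print(frames)  # 102400, matching the 102,400 annotated frames quoted above
```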

DARai (Daily Activity Recordings for AI and ML applications)

Daily Activity Recordings for Artificial Intelligence (DARai, pronounced "Dahr-ree") is a multimodal, hierarchically annotated dataset constructed to understand human activities in real-world settings. DARai consists of continuous scripted and unscripted recordings of 50 participants in 10 different environments, totaling over 200 hours of data from 20 sensors, including multiple camera views, depth and radar sensors, wearable inertial measurement units (IMUs), electromyography (EMG), insole pressure sensors, biomonitor sensors, and a gaze tracker. To capture the complexity of human activities, DARai is annotated at three levels of hierarchy: (i) high-level activities (L1) that are independent tasks, (ii) lower-level actions (L2) that are patterns shared between activities, and (iii) fine-grained procedures (L3) that detail the exact execution steps for actions. The dataset annotations and recordings are designed so that 22.7% of L2 actions are shared between L1 activities and 14.2% of L3

1 paper · 0 benchmarks · Biomedical, Environment, Images, LiDAR, RGB-D, Time series, Videos
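The three-level hierarchy described above can be pictured as nested records; the sketch below uses illustrative labels, not the dataset's actual annotation files:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of DARai's annotation hierarchy: an L1 activity
# contains L2 actions, each broken down into L3 procedures. All names
# here are made up for illustration.

@dataclass
class Action:                 # L2: pattern shared between activities
    name: str
    procedures: list[str] = field(default_factory=list)  # L3 steps

@dataclass
class Activity:               # L1: independent task
    name: str
    actions: list[Action] = field(default_factory=list)

making_tea = Activity("make tea", [
    Action("boil water", ["fill kettle", "turn on kettle"]),
    Action("pour", ["lift kettle", "pour into cup"]),
])

# Because L2 actions like "pour" recur across activities (22.7% are
# shared), they can be indexed independently of their parent L1 label.
shared = {a.name for a in making_tea.actions}
```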

Remote Flash LiDAR Vehicles Dataset

This dataset includes 3D point-cloud data and 2D imagery from a flash LiDAR...

1 paper · 6 benchmarks · 3D, Images, LiDAR, Point cloud, Videos

Matador (Matador: A Material Image Dataset)

The Matador dataset is a material image dataset with hierarchical labels derived from a new taxonomy. For each sample of a material, we collect a local appearance image, a local surface-structure LiDAR scan, and a global context image, and record any camera motion that takes place during the capture sequence. The dataset is intended to grow over time. To date, Matador contains 57 different material categories and a total of ~7,200 images, averaging ~126 samples per category.

1 paper · 0 benchmarks · Images, LiDAR, RGB-D

USYD CAMPUS

USYD CAMPUS is a driving dataset collected by Zhou et al. at the University of Sydney (USyd) campus and its surroundings. The dataset contains more than 60 weeks of drives and is continuously updated. It includes multiple sensor modalities (camera, lidar, GPS, IMU, wheel encoder, steering angle, etc.) and covers varied environmental conditions as well as diverse changes in illumination, scene structure, and pedestrian/vehicle traffic volumes.

0 papers · 0 benchmarks · Images, LiDAR

Robot@Home dataset

The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

0 papers · 0 benchmarks · Images, LiDAR, RGB-D, Videos

Multifog KITTI dataset

Multifog KITTI augments the KITTI dataset with simulated fog for both camera and LiDAR sensors, at visibility ranges from 20 to 80 meters, to best match realistic fog environments.

0 papers · 0 benchmarks · Images, LiDAR, Point cloud

Sparse LiDAR KITTI dataset

Sparse LiDAR point clouds extracted from the Velodyne 64-beam scans in the KITTI dataset. It contains several LiDAR configurations: 2 beams, 4 beams, 8 beams, 16 beams, and 32 beams.

0 papers · 0 benchmarks · LiDAR, Point cloud
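A sparser scan can be derived from a 64-beam cloud by keeping every k-th beam (ring). KITTI's raw `.bin` files do not store a ring index, so the sketch below bins points into 64 rings by elevation angle first; the dataset's actual extraction pipeline may differ:

```python
import numpy as np

def subsample_beams(points: np.ndarray, keep_every: int, n_beams: int = 64):
    """points: (N, 3) xyz array; returns points on every `keep_every`-th ring."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elevation = np.arctan2(z, np.sqrt(x**2 + y**2))
    # Bin elevation angles into n_beams rings between min and max elevation.
    edges = np.linspace(elevation.min(), elevation.max(), n_beams + 1)
    ring = np.clip(np.digitize(elevation, edges) - 1, 0, n_beams - 1)
    return points[ring % keep_every == 0]

# e.g. keep_every=4 approximates a 16-beam LiDAR from a 64-beam scan
```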

MapAI Dataset

MapAI: Precision in Building Segmentation. The dataset comprises 7,500 training images and 1,500 validation images from Denmark. The test set is split into two tasks: the first task (1,368 images) is to segment buildings using aerial images only, while the second task (978 images) allows using both aerial images and lidar data. All data samples have a resolution of 500×500. The aerial images are RGB images, while the lidar data are rasterized. The ground-truth masks have two classes: building and background.

0 papers · 0 benchmarks · Images, LiDAR
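For the second task, a model can consume the RGB image and the rasterized lidar channel as one stacked input. A minimal sketch, assuming the lidar raster aligns pixel-for-pixel with the aerial image (array names are illustrative, not the dataset's loader API):

```python
import numpy as np

H = W = 500
rgb = np.zeros((H, W, 3), dtype=np.uint8)        # aerial RGB image
lidar = np.zeros((H, W, 1), dtype=np.float32)    # rasterized lidar channel

# Normalize RGB to [0, 1] and concatenate into a 4-channel input tensor.
x = np.concatenate([rgb.astype(np.float32) / 255.0, lidar], axis=-1)

# Ground truth is a binary mask: 1 = building, 0 = background.
mask = np.zeros((H, W), dtype=np.uint8)
print(x.shape)  # (500, 500, 4)
```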

KAIST multi-spectral Day/Night 2018

We introduce the KAIST multi-spectral dataset, which covers a greater range of drivable regions, from urban to residential, for autonomous systems. Our dataset provides different perspectives of the world captured in coarse time slots (day and night) in addition to fine time slots (sunrise, morning, afternoon, sunset, night and dawn). For all-day perception of autonomous systems, we propose the use of a different spectral sensor, i.e., a thermal imaging camera. Toward this goal, we develop a multi-sensor platform, which supports the use of a co-aligned RGB/Thermal camera, RGB stereo, 3D LiDAR and inertial sensors (GPS/IMU), together with a related calibration technique. We design a wide range of visual perception tasks including object detection, drivable-region detection, localization, image enhancement, depth estimation and colorization using a single/multi-spectral approach. In this paper, we provide a description of our benchmark with the recording platform, data format and development toolkit.

0 papers · 0 benchmarks · Images, LiDAR, Stereo

Multi-Spectral Stereo Dataset (RGB, NIR, thermal images, LiDAR, GPS/IMU)

We introduce the multi-spectral stereo (MS2) outdoor dataset, including stereo RGB, stereo NIR, stereo thermal, stereo LiDAR data, and GPS/IMU information. Our dataset provides rectified and synchronized 184K data pairs taken from city, residential, road, campus, and suburban areas in the morning, daytime, and nighttime under clear-sky, cloudy, and rainy conditions. We designed the dataset to explore various computer vision algorithms from multi-spectral sensor data to achieve high-level performance, reliability, and robustness against challenging environments.

0 papers · 0 benchmarks · Images, LiDAR, Point cloud, Stereo

InLUT3D (Indoor Lodz University of Technology Point Cloud Dataset)

The Indoor Lodz University of Technology Point Cloud Dataset (InLUT3D) is a point cloud set tailored for real-world object classification and both semantic and instance segmentation tasks. Comprising 321 scans, some areas of the dataset are covered by multiple scans. All scans were captured using the Leica BLK360 scanner.

0 papers · 0 benchmarks · 3D, Graphs, LiDAR, Point cloud

OUTBACK: A Multimodal Synthetic Dataset for Rural Australian Off-road Robot Navigation

One of the most important aspects of robot scene understanding is semantic segmentation of external environments. Urban environment semantic segmentation has been extensively investigated by researchers and many real-world and synthetic datasets have been utilised to develop highly accurate segmentation results. However, the number of off-road datasets available for robot navigation research remains limited. To address this, we introduce a novel framework [1] to generate varied photorealistic synthetic off-road datasets capable of supporting multiple sensor modalities.

0 papers · 0 benchmarks · Images, LiDAR

LiSu (LiSu: A Dataset and Method for LiDAR Surface Normal Estimation)

We present LiSu, a novel synthetic LiDAR dataset targeted for research on surface normal estimation. We leverage CARLA, a versatile simulation environment offering diverse urban and rural landscapes, including downtown areas, small towns, and multi-lane highways. By extending CARLA’s LiDAR sensor to capture not only point locations but also surface normal vectors, we curate an extensive dataset of roughly 50k frames.

0 papers · 0 benchmarks · LiDAR
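A common baseline for the task LiSu targets is to estimate a point's surface normal via PCA over its k nearest neighbors: the eigenvector of the neighborhood covariance with the smallest eigenvalue. This is a generic sketch of that technique, not LiSu's method:

```python
import numpy as np

def estimate_normal(points: np.ndarray, idx: int, k: int = 8) -> np.ndarray:
    """points: (N, 3) xyz array; returns a unit normal for points[idx]."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]          # k nearest neighbors (incl. self)
    cov = np.cov(nbrs.T)                      # 3x3 neighborhood covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, 0]                      # direction of least variance

# Points sampled from a flat surface (z = 0) should yield a normal near ±z;
# note the sign is ambiguous without a viewpoint to orient against.
```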
Page 5 of 5