The SimBEV dataset is a collection of 320 scenes spread across all 11 CARLA maps and contains data from a variety of sensors, including five camera types (RGB, semantic segmentation, instance segmentation, depth, and optical flow), lidar, semantic lidar, radar, GNSS, and IMU, along with 3D object bounding boxes and accurate bird's-eye view (BEV) ground truth. With each scene lasting 16 seconds at a frame rate of 20 Hz, the SimBEV dataset contains 102,400 annotated frames, over 8 million 3D object bounding boxes, and more than 2.5 billion BEV ground truth labels.
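As a quick arithmetic check on the figures above (a minimal sketch; only the frame count is derived here, the box and label totals are taken as reported):

# Sanity-check SimBEV's annotated frame count from its scene parameters.
scenes = 320       # scenes across the 11 CARLA maps
duration_s = 16    # seconds per scene
rate_hz = 20       # frames per second
frames = scenes * duration_s * rate_hz
print(frames)      # 102400, matching the reported number of annotated frames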
Daily Activity Recordings for Artificial Intelligence (DARai, pronounced "Dahr-ree") is a multimodal, hierarchically annotated dataset constructed to understand human activities in real-world settings. DARai consists of continuous scripted and unscripted recordings of 50 participants in 10 different environments, totaling over 200 hours of data from 20 sensors, including multiple camera views, depth and radar sensors, wearable inertial measurement units (IMUs), electromyography (EMG), insole pressure sensors, biomonitor sensors, and a gaze tracker. To capture the complexity of human activities, DARai is annotated at three levels of hierarchy: (i) high-level activities (L1) that are independent tasks, (ii) lower-level actions (L2) that are patterns shared between activities, and (iii) fine-grained procedures (L3) that detail the exact execution steps for actions. The dataset annotations and recordings are designed so that 22.7% of L2 actions are shared between L1 activities and 14.2% of L3 procedures are shared between L2 actions.
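A minimal sketch of how DARai's three-level hierarchy could be represented in code; the class names, field names, and example labels below are hypothetical, not part of the dataset release:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Procedure:        # L3: fine-grained execution step of an action
    name: str

@dataclass
class Action:           # L2: lower-level pattern, possibly shared across activities
    name: str
    procedures: List[Procedure] = field(default_factory=list)

@dataclass
class Activity:         # L1: independent high-level task
    name: str
    actions: List[Action] = field(default_factory=list)

# Hypothetical example: one activity decomposed down to L3 steps.
cooking = Activity("prepare a meal", [
    Action("chop vegetables", [Procedure("pick up knife"), Procedure("slice onion")]),
])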
This dataset includes 3D point-cloud data and 2D imagery from a flash LiDAR...
The Matador dataset is a material image dataset with hierarchical labels derived from a new taxonomy. For each material sample, we collect a local appearance image, a local surface-structure LiDAR scan, and a global context image, and record any camera motion that occurs during the capture sequence. The dataset is intended to grow over time. To date, Matador contains 57 material categories and ~7,200 images in total, averaging 126 samples per category to capture intraclass variance.
USYD CAMPUS is a driving dataset collected by Zhou et al. at the University of Sydney (USyd) campus and surroundings. The dataset contains more than 60 weeks of drives and is continuously updated. It includes multiple sensor modalities (camera, lidar, GPS, IMU, wheel encoder, steering angle, etc.) and covers varied environmental conditions as well as diverse changes in illumination, scene structure, and pedestrian/vehicle traffic volumes.
The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.
We propose the augmented KITTI dataset, which adds simulated fog to both camera and LiDAR data at visibility ranges from 20 to 80 meters to closely match realistic fog environments.
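For illustration, a minimal sketch of how homogeneous fog at a given visibility range is commonly simulated for camera images (the standard Koschmieder model; the airlight value and the 5% contrast threshold are illustrative assumptions, not the exact parameters of this augmented KITTI dataset):

import numpy as np

def add_fog(image, depth, visibility_m=40.0, airlight=0.8):
    # Koschmieder model: I_fog = I * t + A * (1 - t), where the
    # transmission t = exp(-beta * depth) and beta is set so that
    # contrast drops to 5% at the visibility range.
    beta = -np.log(0.05) / visibility_m
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission
    return image * t + airlight * (1.0 - t)

# image: (H, W, 3) floats in [0, 1]; depth: (H, W) metric depth in meters.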
Sparse LiDAR data extracted from the 64-beam Velodyne scans of the KITTI dataset. It contains several sparsified variants: 2-beam, 4-beam, 8-beam, 16-beam, and 32-beam LiDAR.
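A minimal sketch of one common way to extract such sparse variants: bin each point's elevation angle into the scanner's 64 rows and keep every k-th row (the HDL-64E field-of-view limits and the file path are assumptions for illustration; KITTI's raw .bin files store no ring index):

import numpy as np

def sparsify_beams(points, keep_beams=16, total_beams=64,
                   fov_up_deg=2.0, fov_down_deg=-24.8):
    # points: (N, 4) array of x, y, z, reflectance from a KITTI .bin file.
    elev = np.degrees(np.arctan2(points[:, 2],
                                 np.linalg.norm(points[:, :2], axis=1)))
    rows = (fov_up_deg - elev) / (fov_up_deg - fov_down_deg) * total_beams
    rows = np.clip(rows.astype(int), 0, total_beams - 1)
    return points[rows % (total_beams // keep_beams) == 0]

scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
lidar_16 = sparsify_beams(scan, keep_beams=16)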
MapAI: Precision in Building Segmentation is a dataset comprising 7,500 training images and 1,500 validation images from Denmark. The test data are split into two tasks: the first task (1,368 images) is to segment buildings using only aerial images, while the second task (978 images) also allows the use of lidar data. All data samples have a resolution of 500x500 pixels. The aerial images are RGB, while the lidar data are rasterized. The ground truth masks have two classes: building and background.
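A minimal PyTorch-style loading sketch for such a dataset; the directory layout and file naming below are assumptions for illustration, not the official MapAI release structure:

from pathlib import Path
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class MapAIDataset(Dataset):
    # Assumes <root>/{images,lidar,masks}/<name>.tif with matching names.
    def __init__(self, root):
        self.root = Path(root)
        self.names = sorted(p.name for p in (self.root / "images").iterdir())

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        rgb = np.array(Image.open(self.root / "images" / name))   # (500, 500, 3) RGB
        lidar = np.array(Image.open(self.root / "lidar" / name))  # rasterized lidar
        mask = np.array(Image.open(self.root / "masks" / name))   # 0 = background, 1 = building
        return rgb, lidar, mask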
We introduce the KAIST multi-spectral dataset, which covers a greater range of drivable regions, from urban to residential, for autonomous systems. Our dataset provides different perspectives of the world captured in coarse time slots (day and night) in addition to fine time slots (sunrise, morning, afternoon, sunset, night, and dawn). For all-day perception of autonomous systems, we propose the use of a different spectral sensor, i.e., a thermal imaging camera. Toward this goal, we develop a multi-sensor platform, which supports a co-aligned RGB/thermal camera, RGB stereo, 3D LiDAR, and inertial sensors (GPS/IMU), along with a related calibration technique. We design a wide range of visual perception tasks, including object detection, drivable region detection, localization, image enhancement, depth estimation, and colorization, using a single- or multi-spectral approach. In this paper, we provide a description of our benchmark, including the recording platform, data format, and development toolkit.
We introduce the multi-spectral stereo (MS2) outdoor dataset, including stereo RGB, stereo NIR, stereo thermal, stereo LiDAR data, and GPS/IMU information. Our dataset provides rectified and synchronized 184K data pairs taken from city, residential, road, campus, and suburban areas in the morning, daytime, and nighttime under clear-sky, cloudy, and rainy conditions. We designed the dataset to explore various computer vision algorithms from multi-spectral sensor data to achieve high-level performance, reliability, and robustness against challenging environments.
The Indoor Lodz University of Technology Point Cloud Dataset (InLUT3D) is a point cloud set tailored for real-world object classification and both semantic and instance segmentation tasks. It comprises 321 scans, with some areas covered by multiple scans, all captured using the Leica BLK360 scanner.
One of the most important aspects of robot scene understanding is semantic segmentation of external environments. Urban environment semantic segmentation has been extensively investigated by researchers and many real-world and synthetic datasets have been utilised to develop highly accurate segmentation results. However, the number of off-road datasets available for robot navigation research remains limited. To address this, we introduce a novel framework [1] to generate varied photorealistic synthetic off-road datasets capable of supporting multiple sensor modalities.
We present LiSu, a novel synthetic LiDAR dataset targeted for research on surface normal estimation. We leverage CARLA, a versatile simulation environment offering diverse urban and rural landscapes, including downtown areas, small towns, and multi-lane highways. By extending CARLA's LiDAR sensor to capture not only point locations but also surface normal vectors, we curate an extensive dataset of roughly 50k frames.
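For context on the task LiSu targets, a generic per-point normal estimation baseline via PCA over local neighborhoods (a sketch using scikit-learn, not the dataset's own tooling):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_normals(points, k=16):
    # points: (N, 3) LiDAR points in the sensor frame (sensor at origin).
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        q = points[neigh] - points[neigh].mean(axis=0)
        # The right-singular vector of least variance spans the normal.
        normals[i] = np.linalg.svd(q, full_matrices=False)[2][-1]
    # Orient normals toward the sensor, the usual LiDAR convention.
    flip = np.einsum("ij,ij->i", normals, points) > 0
    normals[flip] *= -1
    return normals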