Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Datasets

1,019 machine learning datasets (filtered by modality: Videos)

Filter by Modality

  • Images (3,275)
  • Texts (3,148)
  • Videos (1,019)
  • Audio (486)
  • Medical (395)
  • 3D (383)
  • Time series (298)
  • Graphs (285)
  • Tabular (271)
  • Speech (199)
  • RGB-D (192)
  • Environment (148)
  • Point cloud (135)
  • Biomedical (123)
  • LiDAR (95)
  • RGB Video (87)
  • Tracking (78)
  • Biology (71)
  • Actions (68)
  • 3D meshes (65)
  • Tables (52)
  • Music (48)
  • EEG (45)
  • Hyperspectral images (45)
  • Stereo (44)
  • MRI (39)
  • Physics (32)
  • Interactive (29)
  • Dialog (25)
  • MIDI (22)
  • 6D (17)
  • Replay data (11)
  • Financial (10)
  • Ranking (10)
  • CAD (9)
  • fMRI (7)
  • Parallel (6)
  • Lyrics (2)
  • PSG (2)

1,019 dataset results

MoCap (CMU Graphics Lab Motion Capture Database)

A collection of motion capture recordings (walking, dancing, sports, and others) performed by over 140 subjects. The motions are free to download and use; a zip file of all ASF/AMC files is available on the FAQs page.

0 papers · 0 benchmarks · Images, Videos

BMS-26 (Berkeley Motion Segmentation)

The Berkeley Motion Segmentation Dataset (BMS-26) is a dataset for motion segmentation, consisting of 26 video sequences with pixel-accurate segmentation annotation of moving objects. A total of 189 frames are annotated. 12 of the sequences are taken from the Hopkins 155 dataset, with new annotations added.

0 papers · 0 benchmarks · Images, Videos

Plant Centroids

Plant Centroids is a dataset for stem emerging point (SEP) detection in RGB and NIR image data. The dataset is meant to aid the construction of agricultural robots, for which detecting SEPs is an important perception task (to position weeding or fertilizing tools at the plant's center and to find natural landmarks in the field environment). The dataset contains annotations for ~2000 image sets covering a broad variety of plant species and growth stages.

0 papers · 0 benchmarks · Images, Videos

Freiburg Block Tasks

Freiburg Block Tasks is a dataset for robot skill learning. It consists of two datasets. The first consists of three simulated robot tasks: stacking (A), color pushing (B), and color stacking (C), with 300 multi-view demonstration videos per task, simulated with PyBullet. Of these 300 demonstrations, 150 represent unsuccessful executions of the different tasks. The authors found it helpful to include unsuccessful demonstrations when training the embedding, to enable training RL agents on it; without such fake examples, the distances in the embedding space for states not seen during training might be noisy. The test set contains manipulation of blocks, while in the validation set the blocks are replaced by cylinders of different colors. The second dataset includes real-world human executions of the simulated robot tasks (A, B, and C), as well as demonstrations for a task where one has to first separate blocks in order to stack them (D).

0 papers · 0 benchmarks · Images, Videos

Freiburg Poking

The Freiburg Poking dataset is a dataset for learning intuitive physics from physical interaction. It consists of 40K interactions recorded with a KUKA LBR iiwa manipulator and a fixed Azure Kinect RGB-D camera. The dataset creators built a styrofoam arena with walls to prevent objects from falling off. At any given time, 3-7 objects randomly chosen from a set of 34 distinct objects were present in the arena. The objects differed from each other in shape, appearance, material, mass, and friction.

0 papers · 0 benchmarks · Images, Videos

Couples Therapy (Couples Therapy Corpus)

The Couples Therapy corpus contains audio and video recordings and manual transcriptions of conversations between 134 real-life couples attending marital therapy. In each session, one spouse selected a topic that was discussed with the other spouse for 10 minutes. At the end of the session, both speakers were rated separately on 33 "behavior codes" by multiple annotators, based on the Couples Interaction and Social Support Rating Systems. Each behavior was rated on a Likert scale from 1 (absent) to 9 (strongly present). A session-level rating was obtained for each speaker by averaging the annotator ratings. The process was then repeated with the other spouse selecting a topic, yielding 2 sessions per couple per visit; the total number of sessions per couple varied between 2 and 6.

0 papers · 0 benchmarks · Audio, Texts, Videos
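The session-level rating scheme described above (mean of several annotators' 1-9 Likert ratings per behavior code) can be sketched as follows; the behavior-code names here are illustrative, not taken from the corpus.

```python
# Sketch of session-level rating aggregation: each annotator rates every
# behavior code on a 1-9 Likert scale, and the session-level rating per
# speaker is the mean across annotators. Code names are hypothetical.
from statistics import mean

def session_level_ratings(annotator_ratings):
    """annotator_ratings: one dict per annotator, mapping
    behavior code -> Likert rating in [1, 9]."""
    codes = annotator_ratings[0].keys()
    return {code: mean(r[code] for r in annotator_ratings) for code in codes}

ratings = [
    {"acceptance": 7, "blame": 2},
    {"acceptance": 6, "blame": 3},
    {"acceptance": 8, "blame": 2},
]
print(session_level_ratings(ratings))  # acceptance averages to 7
```

Averaging smooths out individual annotator disagreement, which is why the corpus reports one aggregated score per speaker per session.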

iQIYI-VID-2019

The iQIYI-VID-2019 dataset is the first video dataset for multi-modal person identification, and aims to encourage research on this task. To stay close to real applications, video clips were extracted from real online videos of many types. All clips were labeled by human annotators, with automatic algorithms used to accelerate the collection and labeling process. iQIYI-VID-2019 is more challenging than iQIYI-VID-2018, since most hard examples were carried over from iQIYI-VID-2018 while more person IDs were added. The dataset contains 100K-200K video clips, divided into three parts: 40% for training, 30% for validation, and 30% for testing. It covers about 10,000 identities, 5,000 of which come from the iQIYI celebrity database and are mainly extracted from iQIYI-VID-2018.

0 papers · 0 benchmarks · Videos
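The 40/30/30 train/validation/test partition described above can be sketched as a deterministic split over clip IDs. The shuffling and IDs here are hypothetical; the actual dataset ships with fixed official splits.

```python
# Illustrative 40/30/30 split over clip identifiers, mirroring the
# partition sizes stated for iQIYI-VID-2019. Seeded shuffling keeps
# the split reproducible; the IDs themselves are made up.
import random

def split_clips(clip_ids, seed=0):
    ids = list(clip_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.4 * n), int(0.3 * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_clips(range(1000))
print(len(train), len(val), len(test))  # 400 300 300
```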

BAVL (Blind Audio-Visual Localization (BAVL))

The Blind Audio-Visual Localization (BAVL) dataset consists of 20 audio-visual recordings of sound sources, which are either talking faces or musical instruments. Most of the recordings (19) are videos from YouTube, except V8, which is taken from [1]; video V7 was also used in [2], and V16 in [3]. All 20 videos were annotated by the dataset authors in a uniform manner. Details of the video sequences are listed in Table 1 of the accompanying paper.

0 papers · 0 benchmarks · Audio, Videos

VOT2013 (Visual Object Tracking Challenge 2013)

The dataset comprises 16 short sequences showing various objects against challenging backgrounds. The sequences were chosen from a large pool using a methodology based on clustering visual features of object and background, so that the 16 sequences evenly sample the existing pool. The sequences were annotated by the VOT committee using axis-aligned bounding boxes.

0 papers · 0 benchmarks · Videos

Mouse Embryo Tracking Database

The Mouse Embryo Tracking Database is a dataset for tracking mouse embryos. The dataset contains, for each of the 100 examples: (1) the uncompressed frames, up to the 10th frame after the appearance of the 8th cell; (2) a text file with the trajectories of all the cells, from appearance to division (for cells of generations 1 to 3), where a trajectory is a sequence of pairs (center, radius); (3) a movie file showing the trajectories of the cells.

0 papers · 0 benchmarks · Images, Tracking, Videos
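A trajectory in this database is a sequence of (center, radius) pairs, one per frame. A minimal parsing sketch is below; the "cx cy r" one-line-per-frame layout is an assumption for illustration, not the database's documented file format.

```python
# Minimal sketch of reading a cell trajectory as the sequence of
# (center, radius) pairs described above. The assumed line format is
# "cx cy r" (center x, center y, radius), one frame per line.
def parse_trajectory(lines):
    traj = []
    for line in lines:
        cx, cy, r = map(float, line.split())
        traj.append(((cx, cy), r))
    return traj

sample = ["10.0 12.5 3.2", "10.4 12.7 3.1"]
print(parse_trajectory(sample))
```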

Nagoya University Extremely Low-resolution FIR Image Action Dataset

A pedestrian dataset for Person Re-identification.

0 papers · 0 benchmarks · Videos

Robot@Home dataset

The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

0 papers · 0 benchmarks · Images, LiDAR, RGB-D, Videos

DUS (Daimler Urban Segmentation)

The Daimler Urban Segmentation Dataset is a dataset for semantic segmentation. It consists of video sequences recorded in urban traffic: 5000 rectified stereo image pairs at a resolution of 1024x440. 500 frames (every 10th frame of the sequence) come with pixel-level semantic class annotations into 5 classes: ground, building, vehicle, pedestrian, and sky. Dense disparity maps are provided as a reference; however, these are not manually annotated but computed using semi-global matching (SGM).

0 papers · 0 benchmarks · Images, Videos
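Disparity maps like the SGM reference above relate to metric depth via depth = f * B / d, where f is the focal length in pixels, B the stereo baseline in meters, and d the disparity in pixels. The f and B values in this sketch are placeholders, not the actual Daimler rig calibration.

```python
# Converting a disparity value (pixels) to depth (meters) for a
# calibrated stereo rig: depth = f * B / d. The focal length and
# baseline below are hypothetical defaults, not DUS calibration data.
def disparity_to_depth(d_px, f_px=1000.0, baseline_m=0.25):
    """Return depth in meters for a positive disparity in pixels."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / d_px

print(disparity_to_depth(50.0))  # 1000 * 0.25 / 50 = 5.0 m
```

Small disparities map to large depths, which is why SGM-computed maps are noisiest for distant structure.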

LASIESTA

LASIESTA (Labeled and Annotated Sequences for Integral Evaluation of SegmenTation Algorithms) is a segmentation and detection dataset composed of many real indoor and outdoor sequences organized into categories, each one covering a specific challenge in moving object detection.

0 papers · 0 benchmarks · Images, Videos

PIROPO

The PIROPO database (People in Indoor ROoms with Perspective and Omnidirectional cameras) comprises multiple sequences recorded in two different indoor rooms, using both omnidirectional and perspective cameras. The sequences contain people in a variety of situations, including people walking, standing, and sitting. Both annotated and non-annotated sequences are provided, where ground truth is point-based (each person in the scene is represented by the point located in the center of its head). In total, more than 100,000 annotated frames are available.

0 papers · 0 benchmarks · Videos

OTCBVS

OTCBVS is a benchmark dataset for testing and evaluating novel and state-of-the-art computer vision algorithms. The benchmark contains videos and images recorded in and beyond the visible spectrum and is available for free to all researchers in the international computer vision community.

0 papers · 0 benchmarks · Images, Videos

DogCentric Activity

The DogCentric Activity dataset is composed of dog activity videos taken from a first-person animal viewpoint. The dataset contains 10 different types of activities, including activities performed by the dog itself, interactions between people and the dog, and activities performed by people or cars.

0 papers · 0 benchmarks · Videos
Page 49 of 51