1,019 machine learning datasets
A collection of motion capture recordings (walking, dancing, sports, and others) performed by over 140 subjects. The database contains free motions that you can download and use; a zip file of all ASF/AMC files is available on the FAQs page.
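Since the motions ship as ASF/AMC files, a minimal reader can be handy. The sketch below assumes the standard AMC layout (comment/format lines starting with '#' or ':', then a bare frame number followed by "bone value value ..." lines); the file name is hypothetical, and the skeleton's ASF file is ignored here.

```python
def parse_amc(path):
    """Return a list of frames; each frame maps bone name -> list of floats."""
    frames = []
    current = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", ":")):
                continue  # skip comments and format directives
            if line.isdigit():  # a bare integer starts a new frame
                current = {}
                frames.append(current)
            else:
                name, *values = line.split()
                current[name] = [float(v) for v in values]
    return frames

frames = parse_amc("01_01.amc")  # hypothetical file from the archive
print(len(frames), "frames; root dof:", frames[0]["root"])
```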
The Berkeley Motion Segmentation Dataset (BMS-26) is a dataset for motion segmentation consisting of 26 video sequences with pixel-accurate segmentation annotation of moving objects. A total of 189 frames are annotated. Twelve of the sequences are taken from the Hopkins 155 dataset and given new annotations.
Plant Centroids is a dataset for stem emerging point (SEP) detection in RGB and NIR image data. The dataset is meant to aid the construction of agricultural robots, where detecting SEPs is an important perception task (to position weeding or fertilizing tools at the plant's center and to find natural landmarks in the field environment). The dataset contains annotations for ~2,000 image sets covering a broad variety of plant species and growth stages.
Freiburg Block Tasks is a dataset for robot skill learning consisting of two parts. The first part covers three simulated robot tasks: stacking (A), color pushing (B), and color stacking (C), simulated with PyBullet, with 300 multi-view demonstration videos per task. Of these 300 demonstrations, 150 represent unsuccessful executions of the tasks; the authors found that adding unsuccessful demonstrations when training the embedding helps in training RL agents on it, since without such examples the distances in the embedding space for states not seen during training can be noisy. The test set contains block manipulations, while in the validation set the blocks are replaced by cylinders of different colors. The second part includes real-world human executions of the simulated robot tasks (A, B, and C), as well as demonstrations for a task where one must first separate blocks in order to stack them (D). For each task ...
The Freiburg Poking dataset is a dataset for learning intuitive physics from physical interaction. It consists of 40K interactions with a KUKA LBR iiwa manipulator, recorded by a fixed Azure Kinect RGB-D camera. The dataset creators built a styrofoam arena with walls to prevent objects from falling off. At any given time, 3-7 objects randomly chosen from a set of 34 distinct objects were present in the arena. The objects differed from each other in shape, appearance, material, mass, and friction.
The Couples Therapy corpus contains audio and video recordings, with manual transcriptions, of conversations between 134 real-life couples attending marital therapy. In each session, one person selected a topic that was discussed with the spouse for 10 minutes. At the end of the session, both speakers were rated separately on 33 "behavior codes" by multiple annotators, based on the Couples Interaction and Social Support Rating Systems. Each behavior was rated on a Likert scale from 1 (absence) to 9 (strong presence). A session-level rating was obtained for each speaker by averaging the annotator ratings. The process was then repeated with the other spouse selecting a topic, yielding 2 sessions per couple at a time; the total number of sessions per couple varied between 2 and 6.
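As a rough sketch of the session-level scoring described above, the mean across annotators can be computed per behavior code; the nested-dict layout, code names, and toy values below are illustrative, not the corpus's actual distribution format.

```python
from statistics import mean

# ratings[annotator][behavior_code] -> Likert score in 1..9 (toy values)
ratings = {
    "annotator_1": {"blame": 7, "acceptance": 2},
    "annotator_2": {"blame": 6, "acceptance": 3},
    "annotator_3": {"blame": 8, "acceptance": 2},
}

codes = {code for per_annotator in ratings.values() for code in per_annotator}
session_level = {
    code: mean(per_annotator[code] for per_annotator in ratings.values())
    for code in codes
}
print(session_level)  # e.g. blame -> 7, acceptance -> 2.33...
```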
iQIYI-VID-2019 is the first video dataset for multi-modal person identification, and it aims to encourage research on multi-modal person identification. To stay close to real applications, video clips are extracted from real online videos of many types. All clips were labeled by human annotators, with automatic algorithms used to accelerate the collection and labeling process. The iQIYI-VID-2019 dataset is more challenging than iQIYI-VID-2018, since most hard examples were selected from iQIYI-VID-2018 while more person IDs were added. The dataset contains 100K-200K video clips, divided into three parts: 40% for training, 30% for validation, and 30% for testing. It covers about 10,000 identities, 5,000 of which come from the iQIYI celebrity database and are mainly extracted from iQIYI-VID-2018.
The Blind Audio-Visual Localization (BAVL) dataset consists of 20 audio-visual recordings of sound sources, which can be talking faces or musical instruments. Most of the recordings (19) are videos from YouTube, except V8, which is from [1]; video V7 was also used in [2], and V16 in [3]. All 20 videos were annotated by the dataset authors in a uniform manner; details of the video sequences are listed in Table 1 of the accompanying paper.
The dataset comprises 16 short sequences showing various objects against challenging backgrounds. The sequences were chosen from a large pool using a methodology based on clustering visual features of object and background, so that the 16 sequences sample the existing pool evenly. The sequences were annotated by the VOT committee using axis-aligned bounding boxes.
The Mouse Embryo Tracking Database is a dataset for tracking mouse embryos. The dataset contains, for each of the 100 examples: (1) the uncompressed frames, up to the 10th frame after the appearance of the 8th cell; (2) a text file with the trajectories of all the cells, from appearance to division (for cells of generations 1 to 3), where a trajectory is a sequence of pairs (center, radius); (3) a movie file showing the trajectories of the cells.
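A small container for the per-cell trajectories described above (each a sequence of (center, radius) pairs from appearance to division) might look like the sketch below. The one-observation-per-row text layout ("cell_id frame cx cy radius") is an assumption for illustration, not the database's documented file format.

```python
from collections import defaultdict
from typing import NamedTuple

class Observation(NamedTuple):
    frame: int
    center: tuple[float, float]
    radius: float

def load_trajectories(path):
    """Map cell_id -> list of Observation, ordered by frame."""
    trajectories = defaultdict(list)
    with open(path) as f:
        for line in f:
            cell_id, frame, cx, cy, r = line.split()
            trajectories[int(cell_id)].append(
                Observation(int(frame), (float(cx), float(cy)), float(r))
            )
    for observations in trajectories.values():
        observations.sort(key=lambda o: o.frame)
    return dict(trajectories)
```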
A pedestrian dataset for Person Re-identification.
The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.
The Daimler Urban Segmentation Dataset is a dataset for semantic segmentation consisting of video sequences recorded in urban traffic. It comprises 5,000 rectified stereo image pairs at a resolution of 1024x440. 500 frames (every 10th frame of the sequence) come with pixel-level semantic annotations for 5 classes: ground, building, vehicle, pedestrian, and sky. Dense disparity maps are provided as a reference; however, these are not manually annotated but computed using semi-global matching (SGM).
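Since the reference disparity maps come from semi-global matching rather than manual annotation, a comparable (though not identical) disparity map can be computed from a rectified pair with OpenCV's StereoSGBM implementation; the file names and parameter values below are illustrative, not the dataset's published settings.

```python
import cv2

left = cv2.imread("left_000000.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_000000.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,        # penalty for small disparity changes
    P2=32 * 5 * 5,       # penalty for large disparity changes
)
# compute() returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
```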
LASIESTA (Labeled and Annotated Sequences for Integral Evaluation of SegmenTation Algorithms) is a segmentation and detection dataset composed of many real indoor and outdoor sequences organized into categories, each covering a specific challenge in moving-object detection strategies.
The PIROPO database (People in Indoor ROoms with Perspective and Omnidirectional cameras) comprises multiple sequences recorded in two different indoor rooms, using both omnidirectional and perspective cameras. The sequences contain people in a variety of situations, including walking, standing, and sitting. Both annotated and non-annotated sequences are provided; ground truth is point-based (each person in the scene is represented by a point at the center of their head). In total, more than 100,000 annotated frames are available.
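With point-based ground truth like this, a common way to score a detector is to count a detected point as a true positive if it falls within a fixed pixel radius of an unmatched head-center annotation. The greedy matching and tolerance value below are assumptions for illustration, not PIROPO's official protocol.

```python
import math

def match_points(detections, ground_truth, tol=20.0):
    """Greedily match detections to GT head points; return (TP, FP, FN)."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = min(unmatched, key=lambda g: math.dist(det, g), default=None)
        if best is not None and math.dist(det, best) <= tol:
            unmatched.remove(best)
            tp += 1
    fp = len(detections) - tp
    fn = len(unmatched)
    return tp, fp, fn

print(match_points([(100, 52), (300, 300)], [(98, 50), (210, 40)]))  # (1, 1, 1)
```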
OTCBVS is a benchmark dataset collection for testing and evaluating novel and state-of-the-art computer vision algorithms. The benchmark contains videos and images recorded in and beyond the visible spectrum and is freely available to all researchers in the international computer vision community.
The DogCentric Activity dataset is composed of dog activity videos taken from a first-person animal viewpoint. It contains 10 different types of activities, including activities performed by the dog itself, interactions between people and the dog, and activities performed by people or cars.