CADDY
Introduced 2018-07-12
- An underwater dataset collected during several field trials of the EU FP7 project “Cognitive autonomous diving buddy (CADDY)”, in which an Autonomous Underwater Vehicle (AUV) interacted with divers and monitored their activities.
- Purpose: supporting object classification, segmentation, and human pose estimation tasks in a setting where divers communicate via the CADDIAN gesture-based language.
- Data were recorded under different environmental conditions that cause image distortions typical of underwater scenarios, such as low contrast, color distortion, and haze.
- Dataset characteristics (gesture-related):
  - 9191 annotated stereo pairs covering 16 gesture classes, i.e., 18,382 image samples in total.
  - 7190 true-negative stereo pairs (14,380 samples) containing background scenery and divers who are not gesturing.
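The sample totals above follow from each stereo pair contributing two images (left and right). A minimal sketch that double-checks this arithmetic from the pair counts quoted in the source:

```python
# Sanity-check the CADDY sample counts: each stereo pair
# contributes two image samples (left + right camera).
gesture_pairs = 9191    # annotated stereo pairs across 16 gesture classes
negative_pairs = 7190   # true-negative stereo pairs (no gesturing)

gesture_samples = gesture_pairs * 2    # 18,382
negative_samples = negative_pairs * 2  # 14,380

print(gesture_samples)                     # 18382
print(negative_samples)                    # 14380
print(gesture_samples + negative_samples)  # 32762 images overall
```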