MuSoHu
Toward human-like social robot navigation: A large-scale, multi-modal, social human navigation dataset
3D, Actions, LiDAR, Point cloud, RGB-D, Stereo, Videos
A large-scale, egocentric, multimodal, and context-aware dataset of human demonstrations of social navigation.
The dataset provides:
MuSoHu contains approximately 20 hours, 300 trajectories, and 100 kilometers of socially compliant navigation demonstrations collected by 13 human demonstrators, comprising multimodal data streams from different sensors, in both indoor and outdoor environments within the George Mason University campus and the Washington, DC metropolitan area. MuSoHu also provides annotations of interesting social interaction events and of the navigation context (i.e., "casual", "neutral", and "rush") for each of the trials.
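Since each trial carries a navigation-context label, a common first step for downstream users is to filter trials by context before training or analysis. Below is a minimal sketch of that pattern, assuming a hypothetical per-trial metadata table; the field names (`id`, `context`, `environment`) are illustrative and not the actual MuSoHu release format.

```python
from collections import Counter

# Hypothetical per-trial metadata records; the real release format may differ.
trials = [
    {"id": "trial_001", "context": "casual", "environment": "indoor"},
    {"id": "trial_002", "context": "rush", "environment": "outdoor"},
    {"id": "trial_003", "context": "neutral", "environment": "indoor"},
    {"id": "trial_004", "context": "rush", "environment": "indoor"},
]

def filter_by_context(trials, context):
    """Return the trial records whose navigation context matches `context`."""
    return [t for t in trials if t["context"] == context]

# Select only the "rush" demonstrations.
rush_trials = filter_by_context(trials, "rush")
print([t["id"] for t in rush_trials])

# Summarize how trials are distributed across the three contexts.
print(Counter(t["context"] for t in trials))
```

A split like this lets users study how navigation style varies across the "casual", "neutral", and "rush" contexts annotated for each trial.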