How2Sign

A Large-scale Multimodal Dataset for Continuous American Sign Language

Modalities: 3D, RGB video, RGB-D, Texts
License: Creative Commons Attribution-NonCommercial 4.0 International
Introduced: 2020-08-18

How2Sign is a multimodal and multiview continuous American Sign Language (ASL) dataset: a parallel corpus of more than 80 hours of sign language videos with a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was additionally recorded in the Panoptic studio, enabling detailed 3D pose estimation.
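As a rough illustration of the parallel-corpus structure, the sketch below pairs video clips with their English transcripts. All names here (`How2SignSample`, `pair_modalities`, the `clip_*` identifiers and file-naming scheme) are hypothetical and do not reflect the dataset's actual schema or file layout.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for one parallel sample: a sign-language clip plus
# its aligned modalities. Field names are illustrative only.
@dataclass
class How2SignSample:
    clip_id: str
    rgb_video: str                    # path to the RGB video file
    transcript: str                   # parallel English transcript
    depth_video: Optional[str] = None # depth stream, when available

def pair_modalities(clip_ids, transcripts):
    """Join clip ids with their English transcripts into parallel samples,
    skipping clips that have no transcript."""
    return [
        How2SignSample(cid, f"{cid}.mp4", transcripts[cid])
        for cid in clip_ids
        if cid in transcripts
    ]

samples = pair_modalities(
    ["clip_001", "clip_002", "clip_003"],
    {"clip_001": "Hello, welcome.", "clip_002": "Today we make pasta."},
)
```

The same pairing idea extends to the other modalities (speech audio, RGB-D, and the 3D pose annotations from the Panoptic subset) by adding fields to the record.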