Colin Lea, Michael D. Flynn, Rene Vidal, Austin Reiter, Gregory D. Hager
The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We introduce a new class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns, whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over an order of magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.
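To make the two architectures concrete, below is a minimal PyTorch sketch of an encoder-decoder TCN (temporal convolution plus pooling, then upsampling back to full frame rate) and a dilated TCN (full-resolution temporal convolutions with the dilation rate doubling per layer). It is illustrative only: the framework choice, layer counts, channel widths, and kernel sizes are assumptions and do not reproduce the authors' released implementation, which also includes details such as the paper's normalized ReLU activation.

```python
# Hedged sketch of the two TCN variants described in the abstract.
# All hyperparameters (hidden sizes, kernel sizes, number of layers) are illustrative.
import torch
import torch.nn as nn

class EncoderDecoderTCN(nn.Module):
    """ED-TCN: temporal convolutions with pooling (encoder) and upsampling (decoder)."""
    def __init__(self, in_dim, num_classes, hidden=(64, 96), kernel_size=25):
        super().__init__()
        pad = kernel_size // 2
        # Encoder: each stage halves the temporal resolution.
        self.enc1 = nn.Sequential(nn.Conv1d(in_dim, hidden[0], kernel_size, padding=pad),
                                  nn.ReLU(), nn.MaxPool1d(2))
        self.enc2 = nn.Sequential(nn.Conv1d(hidden[0], hidden[1], kernel_size, padding=pad),
                                  nn.ReLU(), nn.MaxPool1d(2))
        # Decoder: upsample then convolve, restoring the original frame rate.
        self.dec1 = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv1d(hidden[1], hidden[0], kernel_size, padding=pad), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv1d(hidden[0], hidden[0], kernel_size, padding=pad), nn.ReLU())
        self.classifier = nn.Conv1d(hidden[0], num_classes, 1)  # per-frame class scores

    def forward(self, x):              # x: (batch, in_dim, time), time divisible by 4
        x = self.enc2(self.enc1(x))
        x = self.dec2(self.dec1(x))
        return self.classifier(x)      # (batch, num_classes, time)

class DilatedTCN(nn.Module):
    """Dilated TCN: full-resolution convolutions whose dilation doubles at each layer."""
    def __init__(self, in_dim, num_classes, hidden=64, num_layers=4, kernel_size=3):
        super().__init__()
        layers, channels = [], in_dim
        for layer in range(num_layers):
            d = 2 ** layer             # dilation 1, 2, 4, 8, ...
            layers += [nn.Conv1d(channels, hidden, kernel_size, padding=d, dilation=d), nn.ReLU()]
            channels = hidden
        self.backbone = nn.Sequential(*layers)
        self.classifier = nn.Conv1d(hidden, num_classes, 1)

    def forward(self, x):              # x: (batch, in_dim, time)
        return self.classifier(self.backbone(x))

# Usage example: a clip of 448 frames of 128-d per-frame features, 11 action classes.
frames = torch.randn(1, 128, 448)
print(EncoderDecoderTCN(128, 11)(frames).shape)   # torch.Size([1, 11, 448])
print(DilatedTCN(128, 11)(frames).shape)          # torch.Size([1, 11, 448])
```

In both cases the output is a per-frame class score sequence at the input frame rate, so segmentation and detection reduce to labeling every frame; the encoder-decoder variant trades temporal resolution for a long effective receptive field, while the dilated variant keeps full resolution and grows the receptive field exponentially with depth.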
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Localization | GTEA | Acc (%) | 64.0 | ED-TCN |
| Action Localization | GTEA | F1@10% | 72.2 | ED-TCN |
| Action Localization | GTEA | F1@25% | 69.3 | ED-TCN |
| Action Localization | GTEA | F1@50% | 56.0 | ED-TCN |
| Action Segmentation | GTEA | Acc (%) | 64.0 | ED-TCN |
| Action Segmentation | GTEA | F1@10% | 72.2 | ED-TCN |
| Action Segmentation | GTEA | F1@25% | 69.3 | ED-TCN |
| Action Segmentation | GTEA | F1@50% | 56.0 | ED-TCN |