Colin Lea, Rene Vidal, Austin Reiter, Gregory D. Hager
The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with its own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN.
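To make the unified idea concrete, the sketch below shows an encoder-decoder style temporal convolutional network in PyTorch: temporal convolutions with pooling shorten the time axis in the encoder (capturing progressively longer time-scales), and upsampling in the decoder restores full temporal resolution for frame-wise classification. This is a minimal illustration, not the paper's exact configuration; the layer widths, kernel size, and depth are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class EncoderDecoderTCN(nn.Module):
    """Minimal encoder-decoder TCN sketch.

    Input:  (batch, feat_dim, num_frames) per-frame features.
    Output: (batch, num_classes, num_frames) frame-wise class scores.
    """
    def __init__(self, feat_dim, num_classes, hidden=(64, 96), kernel=25):
        super().__init__()
        pad = kernel // 2
        # Encoder: each conv + pool halves the time axis, widening the
        # effective receptive field to longer time-scales.
        self.enc1 = nn.Sequential(
            nn.Conv1d(feat_dim, hidden[0], kernel, padding=pad), nn.ReLU(),
            nn.MaxPool1d(2))
        self.enc2 = nn.Sequential(
            nn.Conv1d(hidden[0], hidden[1], kernel, padding=pad), nn.ReLU(),
            nn.MaxPool1d(2))
        # Decoder: upsample back to the original temporal resolution.
        self.dec1 = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden[1], hidden[0], kernel, padding=pad), nn.ReLU())
        self.dec2 = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(hidden[0], hidden[0], kernel, padding=pad), nn.ReLU())
        # 1x1 conv gives a per-frame score for each action class.
        self.classify = nn.Conv1d(hidden[0], num_classes, 1)

    def forward(self, x):
        x = self.enc2(self.enc1(x))
        x = self.dec2(self.dec1(x))
        return self.classify(x)

# Example: 128-dim per-frame features over 400 frames, 10 action classes.
model = EncoderDecoderTCN(feat_dim=128, num_classes=10)
frames = torch.randn(1, 128, 400)
scores = model(frames)          # (1, 10, 400) frame-wise class scores
labels = scores.argmax(dim=1)   # per-frame labels = an action segmentation
```

Because the whole pipeline is a single convolutional model, every layer's activations for all frames can be computed in parallel, which is why training is much faster than for a step-by-step RNN.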
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | NTU RGB+D | Accuracy (CV) | 83.1 | TCN |
| Action Segmentation | JIGSAWS | Accuracy | 81.4 | TCN |
| Action Segmentation | JIGSAWS | Edit Distance | 83.1 | TCN |