Noureldien Hussein, Efstratios Gavves, Arnold W. M. Smeulders
This paper focuses on the temporal aspect of recognizing human activities in videos, an important visual cue that has long been undervalued. We revisit the conventional definition of activity and restrict it to Complex Action: a set of one-actions with a weak temporal pattern that serves a specific purpose. Related works use spatiotemporal 3D convolutions with fixed kernel sizes, too rigid to capture the variety in temporal extents of complex actions, and too short for long-range temporal modeling. In contrast, we use multi-scale temporal convolutions, and we reduce the complexity of 3D convolutions. The outcome is Timeception convolution layers, which reason about minute-long temporal patterns, a factor of 8 longer than the best related works. As a result, Timeception achieves impressive accuracy in recognizing the human activities of Charades, Breakfast Actions, and MultiTHUMOS. Further, we demonstrate that Timeception learns long-range temporal dependencies and tolerates varying temporal extents of complex actions.
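The core idea of multi-scale temporal convolution can be illustrated with a minimal sketch: several cheap depthwise branches, each convolving the time axis with a different kernel size, concatenated along the channel axis. The function names, averaging weights, and tensor layout below are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def depthwise_temporal_conv(x, kernel_size):
    """Depthwise 1D convolution over the time axis.
    x: (T, C) features per timestep; 'same' length via edge padding.
    Uses a uniform averaging kernel per channel (illustrative weights)."""
    T, C = x.shape
    pad = kernel_size // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros((T, C), dtype=float)
    for t in range(T):
        out[t] = xp[t:t + kernel_size].mean(axis=0)
    return out

def multi_scale_temporal_block(x, kernel_sizes=(3, 5, 7)):
    """One depthwise branch per temporal kernel size, concatenated
    along channels -- a sketch of the multi-scale idea only."""
    branches = [depthwise_temporal_conv(x, k) for k in kernel_sizes]
    return np.concatenate(branches, axis=1)

x = np.random.rand(32, 8)            # 32 timesteps, 8 channels
y = multi_scale_temporal_block(x)
print(y.shape)                       # (32, 24): channels x number of scales
```

Each branch sees the same features at a different temporal receptive field, so the layer tolerates complex actions whose parts vary in duration; stacking such layers grows the effective temporal extent multiplicatively.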
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Understanding | Breakfast | mAP | 61.82 | Timeception (I3D-K400-Pretrain-feature) |
| Video Understanding | Charades | mAP | 41.1 | Timeception (R3D) |
| Video Understanding | Charades | mAP | 37.2 | Timeception (I3D) |
| Video Understanding | Charades | mAP | 31.6 | Timeception (R2D) |
| Video Classification | Breakfast | Accuracy (%) | 71.3 | Timeception |