Videos as Space-Time Region Graphs
Xiaolong Wang, Abhinav Gupta
Abstract
How do humans recognize the action "opening a book"? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs that capture these two important cues. Our graph nodes are defined by object region proposals from different frames in a long-range video. These nodes are connected by two types of relations: (i) similarity relations capturing long-range dependencies between correlated objects, and (ii) spatial-temporal relations capturing interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on both the Charades and Something-Something datasets. On Charades in particular, we obtain a large 4.4% gain when our model is applied in complex environments.
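The similarity relations and GCN reasoning described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration (not the authors' released code) of the similarity-relation branch: pairwise affinities between region-proposal features are computed with learned transforms, softmax-normalized into an adjacency matrix, and used in one graph-convolution update. All class and variable names, the feature dimension, and the single-layer depth are illustrative assumptions.

```python
# Minimal sketch of the similarity-graph branch: N region-proposal
# features are related by a learned pairwise similarity, normalized
# with a softmax, then updated with one GCN step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityGraphConv(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.phi = nn.Linear(dim, dim)        # transform for node i
        self.phi_prime = nn.Linear(dim, dim)  # transform for node j
        self.weight = nn.Linear(dim, dim)     # GCN weight W

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) features of object region proposals pooled
        # from different frames of a long-range video clip.
        # Pairwise affinities f(x_i, x_j) = phi(x_i)^T phi'(x_j).
        affinity = self.phi(x) @ self.phi_prime(x).t()  # (N, N)
        # Softmax over each row yields a normalized adjacency G.
        adj = F.softmax(affinity, dim=1)
        # One graph convolution, Z = G X W, with a residual
        # connection so reasoning refines rather than replaces features.
        return x + self.weight(adj @ x)

# Usage: 50 proposals with 512-d RoI-pooled features.
feats = torch.randn(50, 512)
out = SimilarityGraphConv(512)(feats)
print(out.shape)  # torch.Size([50, 512])
```

The paper additionally builds a spatial-temporal graph from overlaps between proposals in neighboring frames and fuses it with this similarity branch; the sketch omits that second graph for brevity.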
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Classification | Charades | mAP | 39.7 | STRG |
| Action Recognition | Something-Something V1 | Top-1 Accuracy | 46.1 | NL I3D + GCN |