Paritosh Parmar, Brendan Morris
Spatiotemporal representations learned using 3D convolutional neural networks (CNN) are currently used in state-of-the-art approaches for action-related tasks. However, 3D-CNNs are notoriously memory- and compute-intensive compared with simpler 2D-CNN architectures. We propose to hallucinate spatiotemporal representations from a 3D-CNN teacher with a 2D-CNN student. By requiring the 2D-CNN to predict the future and intuit upcoming activity, it is encouraged to gain a deeper understanding of actions and how they evolve. The hallucination task is treated as an auxiliary task, which can be combined with any other action-related task in a multitask learning setting. Thorough experimental evaluation shows that the hallucination task indeed helps improve performance on action recognition, action quality assessment, and dynamic scene recognition tasks. From a practical standpoint, being able to hallucinate spatiotemporal representations without an actual 3D-CNN can enable deployment in resource-constrained scenarios, such as those with limited computing power and/or lower bandwidth. The codebase is available here: https://github.com/ParitoshParmar/HalluciNet.
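The auxiliary hallucination objective described above can be sketched as a regression term added to the main task loss: the 2D-CNN student predicts the teacher's 3D-CNN features, and the distance between the two is minimized alongside the primary objective. The sketch below is a minimal illustration in numpy; the MSE distance and the weight `lam` are assumptions for illustration, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def hallucination_loss(student_feat, teacher_feat):
    # Regression loss between the 2D-CNN student's hallucinated
    # spatiotemporal features and the 3D-CNN teacher's actual features.
    # MSE is an assumed choice of distance for this sketch.
    return np.mean((student_feat - teacher_feat) ** 2)

def multitask_loss(task_loss, student_feat, teacher_feat, lam=0.5):
    # Hallucination is treated as an auxiliary task: its loss is added,
    # weighted by lam (a hypothetical hyperparameter), to the loss of
    # the main action-related task (e.g., action recognition).
    return task_loss + lam * hallucination_loss(student_feat, teacher_feat)

# Example: perfect hallucination leaves only the main task loss.
feat = np.zeros(512)
print(multitask_loss(1.0, feat, feat))  # 1.0
```

At inference time only the 2D-CNN student is kept, so the 3D-CNN teacher's cost is paid only during training.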
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | UCF101 | 3-fold Accuracy (%) | 79.83 | HalluciNet (ResNet-50) |
| Dynamic Scene Recognition | YUP++ | Accuracy (%) | 84.44 | HalluciNet (ResNet-50) |