Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-supervised Object and Action Localization

Krishna Kumar Singh, Yong Jae Lee
We propose `Hide-and-Seek', a weakly-supervised framework that aims to improve object localization in images and action localization in videos. Most existing weakly-supervised methods localize only the most discriminative parts of an object rather than all relevant parts, which leads to suboptimal performance. Our key idea is to hide patches in a training image randomly, forcing the network to seek other relevant parts when the most discriminative part is hidden. Our approach only needs to modify the input image and can work with any network designed for object localization. During testing, we do not need to hide any patches. Our Hide-and-Seek approach obtains superior performance compared to previous methods for weakly-supervised object localization on the ILSVRC dataset. We also demonstrate that our framework can be easily extended to weakly-supervised action localization.
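The core augmentation described above, randomly hiding patches of each training image, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact implementation: the grid `patch_size`, the `hide_prob` of 0.5, and the `fill_value` are assumptions (the paper fills hidden patches with the training-set mean pixel value so the input statistics stay consistent).

```python
import numpy as np

def hide_patches(image, patch_size=56, hide_prob=0.5, fill_value=0.0, rng=None):
    """Randomly hide square patches of a training image.

    image: H x W (x C) array. Each cell of a `patch_size` grid is
    replaced with `fill_value` independently with probability
    `hide_prob`, forcing the network to rely on parts other than
    the most discriminative one. Hyperparameters here are
    illustrative, not the paper's exact settings.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            if rng.random() < hide_prob:
                out[y:y + patch_size, x:x + patch_size] = fill_value
    return out
```

At test time the image is passed through unmodified, matching the paper's setup where hiding is applied only during training.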
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Weakly Supervised Temporal Action Localization | THUMOS 2014 | mAP@0.5 | 6.8 | Hide-and-Seek |