Tan Wang, Jianqiang Huang, Hanwang Zhang, Qianru Sun
We present a novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN), to serve as an improved visual region encoder for high-level tasks such as captioning and VQA. Given a set of detected object regions in an image (e.g., from Faster R-CNN), the proxy training objective of VC R-CNN, like that of other unsupervised feature learning methods (e.g., word2vec), is to predict the contextual objects of a region. However, they are fundamentally different: VC R-CNN makes the prediction with the causal intervention P(Y|do(X)), whereas the others use the conventional likelihood P(Y|X). This is also the core reason why VC R-CNN can learn "sense-making" knowledge, such as that a chair can be sat on, rather than merely "common" co-occurrences, such as that a chair is likely to exist if a table is observed. We extensively apply VC R-CNN features to prevailing models for three popular tasks: Image Captioning, VQA, and VCR, and observe consistent performance boosts across all of them, achieving many new state-of-the-art results. Code and features are available at https://github.com/Wangt-CN/VC-R-CNN.
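The core contrast in the abstract is between predicting a region's context objects with the plain likelihood P(Y|X) and with the intervention P(Y|do(X)) obtained by backdoor adjustment, P(Y|do(X)) = Σ_z P(Y|X, z)P(z), over a dictionary of confounders. The snippet below is only a minimal sketch of that contrast, not the paper's actual implementation: the class name `BackdoorPredictor`, the two linear heads, and the choice of per-class mean RoI features as the confounder dictionary are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BackdoorPredictor(nn.Module):
    """Sketch: predict a context object's class from a region feature x,
    either with the plain likelihood P(Y|X) or with the backdoor-adjusted
    intervention P(Y|do(X)) = sum_z P(Y|X, z) P(z)."""

    def __init__(self, feat_dim, num_classes, confounder_dict, prior):
        super().__init__()
        # confounder_dict: (num_confounders, feat_dim), e.g. per-class mean
        # RoI features (an assumption here); prior: (num_confounders,) P(z).
        self.register_buffer("confounders", confounder_dict)
        self.register_buffer("prior", prior)
        self.likelihood_head = nn.Linear(feat_dim, num_classes)        # P(Y|X)
        self.intervention_head = nn.Linear(feat_dim * 2, num_classes)  # P(Y|X, z)

    def likelihood(self, x):
        # Conventional association: condition only on the observed region x.
        return F.softmax(self.likelihood_head(x), dim=-1)

    def intervention(self, x):
        # Backdoor adjustment: pair x with every confounder z, score P(Y|X, z),
        # then average the class distributions under the prior P(z) instead of
        # the data-driven P(z|X), which cuts the confounding path.
        n = self.confounders.size(0)
        x_rep = x.unsqueeze(1).expand(-1, n, -1)                  # (B, n, D)
        z_rep = self.confounders.unsqueeze(0).expand_as(x_rep)    # (B, n, D)
        logits = self.intervention_head(torch.cat([x_rep, z_rep], dim=-1))
        probs = F.softmax(logits, dim=-1)                         # (B, n, C)
        return (self.prior.view(1, n, 1) * probs).sum(dim=1)      # (B, C)
```

The intervention weights each confounder by its prior P(z) rather than by how often it co-occurs with x, which is what lets the predictor learn "sense-making" relations instead of dataset co-occurrence bias; the paper itself uses a more efficient approximation rather than this explicit sum.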
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VQA v2 test-dev | Overall Accuracy | 71.21 | MCAN+VC |
| Visual Question Answering (VQA) | VQA v2 test-std | Overall Accuracy | 71.49 | MCAN+VC |
| Image Captioning | COCO Captions | BLEU-4 | 39.5 | AoANet + VC |
| Image Captioning | COCO Captions | METEOR | 29.3 | AoANet + VC |
| Image Captioning | COCO Captions | ROUGE-L | 59.3 | AoANet + VC |