Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph

Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

2020-06-30 · Attribute · Referring Expression Comprehension · Prediction · Visual Question Answering (VQA)
Paper · PDF

Abstract

We propose a knowledge-enhanced approach, ERNIE-ViL, which incorporates structured knowledge obtained from scene graphs to learn joint vision-language representations. ERNIE-ViL aims to build detailed semantic connections (objects, attributes of objects, and relationships between objects) across vision and language, which are essential to vision-language cross-modal tasks. Utilizing scene graphs of visual scenes, ERNIE-ViL constructs Scene Graph Prediction tasks, i.e., Object Prediction, Attribute Prediction, and Relationship Prediction, in the pre-training phase. Specifically, these prediction tasks are implemented by predicting nodes of different types in the scene graph parsed from the sentence. Thus, ERNIE-ViL learns joint representations that characterize the alignment of detailed semantics across vision and language. After pre-training on large-scale image-text-aligned datasets, we validate the effectiveness of ERNIE-ViL on 5 cross-modal downstream tasks. ERNIE-ViL achieves state-of-the-art performance on all of these tasks and ranks first on the VCR leaderboard with an absolute improvement of 3.7%.
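
To make the Scene Graph Prediction objective concrete, below is a minimal Python sketch of the masking step, assuming the scene graph has already been parsed from the caption and each node has been mapped to token positions. The toy caption, node positions, MASK token, and masking probability are illustrative assumptions, not the paper's implementation.

import random

# Minimal sketch of ERNIE-ViL-style Scene Graph Prediction masking.
# Assumes the scene graph was already parsed from the caption and each
# node (object / attribute / relationship) maps to token positions.
MASK = "[MASK]"

def build_sgp_targets(tokens, scene_graph, mask_prob=0.3, seed=0):
    """Mask scene-graph node tokens and record them as prediction targets,
    one target list per node type (Object / Attribute / Relationship)."""
    rng = random.Random(seed)
    masked = list(tokens)
    targets = {node_type: [] for node_type in scene_graph}
    for node_type, positions in scene_graph.items():
        for pos in positions:
            if rng.random() < mask_prob:
                targets[node_type].append((pos, tokens[pos]))
                masked[pos] = MASK
    return masked, targets

# Toy caption: "a black cat sits on a wooden chair"
tokens = ["a", "black", "cat", "sits", "on", "a", "wooden", "chair"]
scene_graph = {
    "object": [2, 7],        # cat, chair
    "attribute": [1, 6],     # black, wooden
    "relationship": [3, 4],  # sits, on
}

masked, targets = build_sgp_targets(tokens, scene_graph, mask_prob=0.9)
print(" ".join(masked))  # a [MASK] [MASK] [MASK] [MASK] a [MASK] [MASK]
print(targets)

In the actual pre-training, the masked node positions are predicted by the model conditioned on the image and the remaining text, which is what forces the alignment of fine-grained semantics across modalities.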

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | VCR (Q-AR) test | Accuracy | 70.5 | ERNIE-ViL-large (ensemble of 15 models)
Visual Question Answering (VQA) | VCR (QA-R) test | Accuracy | 86.1 | ERNIE-ViL-large (ensemble of 15 models)
Visual Question Answering (VQA) | VCR (Q-A) test | Accuracy | 81.6 | ERNIE-ViL-large (ensemble of 15 models)
Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (number) | 56.79 | ERNIE-ViL (single model)
Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (other) | 65.24 | ERNIE-ViL (single model)
Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (overall) | 74.93 | ERNIE-ViL (single model)
Visual Question Answering (VQA) | VQA v2 test-std | Accuracy (yes/no) | 90.83 | ERNIE-ViL (single model)
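
Note on the VCR numbers, as a quick sanity check: under the standard VCR protocol, Q→AR counts a prediction as correct only when both the answer (Q→A) and the rationale (QA→R) are correct, so it roughly tracks the product of the two stage accuracies: 0.816 × 0.861 ≈ 0.703, consistent with the reported 70.5 (the joint score can exceed the product when correctness on the two stages is correlated).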

Related Papers

Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Non-Adaptive Adversarial Face Generation (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)