Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment

Zhuo Chen, Lingbing Guo, Yin Fang, Yichi Zhang, Jiaoyan Chen, Jeff Z. Pan, Yangning Li, Huajun Chen, Wen Zhang

2023-07-30 · Knowledge Graphs · Benchmarking · Knowledge Graph Embeddings · Entity Alignment · Multi-modal Entity Alignment
Paper · PDF · Code

Abstract

As a crucial extension of entity alignment (EA), multi-modal entity alignment (MMEA) aims to identify identical entities across disparate knowledge graphs (KGs) by exploiting associated visual information. However, existing MMEA approaches concentrate primarily on how to fuse multi-modal entity features, while neglecting the challenges posed by the pervasive missingness and intrinsic ambiguity of visual images. In this paper, we present a further analysis of visual modality incompleteness, benchmarking the latest MMEA models on our proposed dataset MMEA-UMVM, which covers bilingual and monolingual alignment KGs and evaluates models under both standard (non-iterative) and iterative training paradigms. Our research indicates that, when the visual modality is incomplete, models overfit the modality noise and exhibit performance oscillations or declines at high rates of missing modality. This demonstrates that including additional multi-modal data can sometimes adversely affect EA. To address these challenges, we introduce UMAEA, a robust multi-modal entity alignment approach designed to tackle uncertainly missing and ambiguous visual modalities. It consistently achieves SOTA performance across all 97 benchmark splits, significantly surpassing existing baselines while using limited parameters and training time, and it effectively alleviates the identified limitations of other models. Our code and benchmark data are available at https://github.com/zjukg/UMAEA.
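The benchmark's central manipulation, randomly dropping entities' images at a controlled missing rate, is simple to reproduce in spirit. Below is a minimal Python sketch of that idea; the function and variable names (`mask_visual_modality`, `visual_feats`, `missing_rate`) are hypothetical, and this is not the authors' dataset-construction code.

```python
import numpy as np

def mask_visual_modality(visual_feats: np.ndarray, missing_rate: float,
                         seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Simulate an uncertainly missing visual modality: zero out the image
    embeddings of a random subset of entities.

    visual_feats: (num_entities, dim) image embeddings, e.g. from a
    pretrained vision encoder. missing_rate in [0, 1] controls how many
    entities lose their image.
    """
    rng = np.random.default_rng(seed)
    num_entities = visual_feats.shape[0]
    missing = rng.random(num_entities) < missing_rate  # True = image dropped
    masked = visual_feats.copy()
    masked[missing] = 0.0  # zeroed rows are noise a model can overfit to
    return masked, missing  # the mask lets a model gate the modality

# Example: drop images for roughly 40% of entities
feats = np.random.randn(1000, 512).astype(np.float32)
masked_feats, missing_mask = mask_visual_modality(feats, missing_rate=0.4)
print(missing_mask.mean())  # ~0.4
```

Returning the boolean mask alongside the features reflects a common design choice: a robust model can down-weight or gate the visual modality for masked entities rather than consuming zeroed vectors as if they were real signal.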

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Data Integration | DBP15k zh-en | Hits@1 | 0.856 | UMAEA (w/o surf) |
| Data Integration | DBP15k zh-en | Hits@1 | 0.800 | UMAEA (w/o surf & iter) |
| Data Integration | DBP15k ja-en | Hits@1 | 0.857 | UMAEA (w/o surf) |
| Data Integration | DBP15k ja-en | Hits@1 | 0.801 | UMAEA (w/o surf & iter) |
| Data Integration | DBP15k fr-en | Hits@1 | 0.873 | UMAEA (w/o surf) |
| Data Integration | DBP15k fr-en | Hits@1 | 0.818 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-oea-d-w-v2 | Hits@1 | 0.973 | UMAEA (w/o surf) |
| Data Integration | UMVM-oea-d-w-v2 | Hits@1 | 0.948 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-dbp-fr-en | Hits@1 | 0.873 | UMAEA (w/o surf) |
| Data Integration | UMVM-dbp-fr-en | Hits@1 | 0.818 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-oea-en-fr | Hits@1 | 0.895 | UMAEA (w/o surf) |
| Data Integration | UMVM-oea-en-fr | Hits@1 | 0.848 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-dbp-ja-en | Hits@1 | 0.857 | UMAEA (w/o surf) |
| Data Integration | UMVM-dbp-ja-en | Hits@1 | 0.801 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-dbp-zh-en | Hits@1 | 0.856 | UMAEA (w/o surf) |
| Data Integration | UMVM-dbp-zh-en | Hits@1 | 0.800 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-oea-en-de | Hits@1 | 0.974 | UMAEA (w/o surf) |
| Data Integration | UMVM-oea-en-de | Hits@1 | 0.956 | UMAEA (w/o surf & iter) |
| Data Integration | UMVM-oea-d-w-v1 | Hits@1 | 0.945 | UMAEA (w/o surf) |
| Data Integration | UMVM-oea-d-w-v1 | Hits@1 | 0.904 | UMAEA (w/o surf & iter) |
| Entity Alignment | DBP15k zh-en | Hits@1 | 0.856 | UMAEA (w/o surf) |
| Entity Alignment | DBP15k zh-en | Hits@1 | 0.800 | UMAEA (w/o surf & iter) |
| Entity Alignment | DBP15k ja-en | Hits@1 | 0.857 | UMAEA (w/o surf) |
| Entity Alignment | DBP15k ja-en | Hits@1 | 0.801 | UMAEA (w/o surf & iter) |
| Entity Alignment | DBP15k fr-en | Hits@1 | 0.873 | UMAEA (w/o surf) |
| Entity Alignment | DBP15k fr-en | Hits@1 | 0.818 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-oea-d-w-v2 | Hits@1 | 0.973 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-oea-d-w-v2 | Hits@1 | 0.948 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-dbp-fr-en | Hits@1 | 0.873 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-dbp-fr-en | Hits@1 | 0.818 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-oea-en-fr | Hits@1 | 0.895 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-oea-en-fr | Hits@1 | 0.848 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-dbp-ja-en | Hits@1 | 0.857 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-dbp-ja-en | Hits@1 | 0.801 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-dbp-zh-en | Hits@1 | 0.856 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-dbp-zh-en | Hits@1 | 0.800 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-oea-en-de | Hits@1 | 0.974 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-oea-en-de | Hits@1 | 0.956 | UMAEA (w/o surf & iter) |
| Entity Alignment | UMVM-oea-d-w-v1 | Hits@1 | 0.945 | UMAEA (w/o surf) |
| Entity Alignment | UMVM-oea-d-w-v1 | Hits@1 | 0.904 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-oea-d-w-v2 | Hits@1 | 0.973 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-oea-d-w-v2 | Hits@1 | 0.948 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-dbp-fr-en | Hits@1 | 0.873 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-dbp-fr-en | Hits@1 | 0.818 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-oea-en-fr | Hits@1 | 0.895 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-oea-en-fr | Hits@1 | 0.848 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-dbp-ja-en | Hits@1 | 0.857 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-dbp-ja-en | Hits@1 | 0.801 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-dbp-zh-en | Hits@1 | 0.856 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-dbp-zh-en | Hits@1 | 0.800 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-oea-en-de | Hits@1 | 0.974 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-oea-en-de | Hits@1 | 0.956 | UMAEA (w/o surf & iter) |
| Multi-modal Entity Alignment | UMVM-oea-d-w-v1 | Hits@1 | 0.945 | UMAEA (w/o surf) |
| Multi-modal Entity Alignment | UMVM-oea-d-w-v1 | Hits@1 | 0.904 | UMAEA (w/o surf & iter) |
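All rows above report Hits@1: the fraction of test entities whose true counterpart in the other KG is ranked first by embedding similarity. A minimal sketch of the metric, assuming paired cross-KG embeddings and hypothetical variable names (this is not the UMAEA evaluation code):

```python
import numpy as np

def hits_at_k(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 1) -> float:
    """Hits@k for entity alignment, where src_emb[i] should align to tgt_emb[i].

    Both inputs are (num_pairs, dim). Rows are L2-normalized so the dot
    product is cosine similarity.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                         # (num_pairs, num_pairs)
    gold = np.diag(sim)                       # similarity to the true match
    # rank of the gold target = number of candidates scoring strictly higher
    rank = (sim > gold[:, None]).sum(axis=1)  # 0 means the gold is ranked first
    return float((rank < k).mean())

# Example with toy embeddings: noisy copies of the same vectors
rng = np.random.default_rng(0)
e = rng.standard_normal((100, 64))
print(hits_at_k(e + 0.1 * rng.standard_normal(e.shape), e, k=1))
```

The strict inequality counts ties in the gold entity's favor; evaluation scripts differ on tie handling, so treat this as one reasonable convention rather than the canonical one.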

Related Papers

Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
Training Transformers with Enforced Lipschitz Constants (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
A Multi-View High-Resolution Foot-Ankle Complex Point Cloud Dataset During Gait for Occlusion-Robust 3D Completion (2025-07-15)