Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-modal Contrastive Representation Learning for Entity Alignment

Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, Yefeng Zheng

2022-09-02 | COLING 2022 | Tags: Knowledge Graphs, Representation Learning, Contrastive Learning, Entity Alignment, Multi-modal Entity Alignment
Paper | PDF | Code (official)

Abstract

Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities. Most previous works focus on how to utilize and encode information from different modalities, yet leveraging multi-modal knowledge in entity alignment remains non-trivial because of modality heterogeneity. In this paper, we propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model, to obtain effective joint representations for multi-modal entity alignment. Unlike previous works, MCLEA considers task-oriented modality and models the inter-modal relationships for each entity representation. In particular, MCLEA first learns multiple individual representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions. Extensive experiments show that MCLEA outperforms state-of-the-art baselines on public datasets under both supervised and unsupervised settings.
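The paper's exact objective is not reproduced here, but the contrastive step the abstract describes can be sketched as an InfoNCE-style loss: an entity's counterpart (its aligned entity in the other knowledge graph, or its representation in another modality) is the positive, and the other entities in the batch serve as negatives. The function name and temperature value below are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss over a batch of embeddings.

    anchor, positive: (B, d) arrays; row i of `positive` is the
    positive example for row i of `anchor`, all other rows are
    in-batch negatives.
    """
    # L2-normalise so dot products are cosine similarities
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (B, B) similarity matrix
    # row-wise log-softmax; the diagonal holds the positive pairs
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In this reading, an "intra-modal" loss contrasts the same modality's embeddings across the two graphs, while an "inter-modal" loss contrasts an entity's joint embedding against each single-modality embedding; both reduce to calls of this form with different (anchor, positive) pairs.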

Results

The same results are reported under three benchmark tasks: Data Integration, Entity Alignment, and Multi-modal Entity Alignment.

Dataset         | Metric | MCLEA (w/o surf) | MCLEA (w/o surf & w/o iter)
UMVM-oea-d-w-v2 | Hits@1 | 0.969            | 0.928
UMVM-dbp-fr-en  | Hits@1 | 0.808            | 0.719
UMVM-oea-en-fr  | Hits@1 | 0.888            | 0.819
UMVM-dbp-ja-en  | Hits@1 | 0.805            | 0.719
UMVM-dbp-zh-en  | Hits@1 | 0.811            | 0.726
UMVM-oea-en-de  | Hits@1 | 0.969            | 0.939
UMVM-oea-d-w-v1 | Hits@1 | 0.944            | 0.881
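Hits@1 is the standard entity-alignment metric: the fraction of source entities whose gold counterpart is ranked first among all candidate target entities. A minimal sketch, assuming a similarity matrix in which source entity i is aligned with target entity i (the helper name is illustrative):

```python
import numpy as np

def hits_at_k(sim, k=1):
    """Hits@k for entity alignment.

    sim[i, j] is the similarity between source entity i and target
    entity j; the gold target for source i is assumed to be target i.
    Returns the fraction of sources whose gold target ranks in the
    top k candidates.
    """
    order = np.argsort(-sim, axis=1)  # candidates, most similar first
    # position of the gold target (column i) in each row's ranking
    ranks = np.argmax(order == np.arange(len(sim))[:, None], axis=1)
    return float(np.mean(ranks < k))
```

For example, a perfectly diagonal similarity matrix gives Hits@1 = 1.0, and each misranked gold pair lowers the score by 1/N.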

Related Papers

- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)