Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Exploring Optimal Transport-Based Multi-Grained Alignments for Text-Molecule Retrieval

Zijun Min, Bingshuai Liu, Liang Zhang, Jia Song, Jinsong Su, Song He, Xiaochen Bo

2024-11-04 · Cross-Modal Retrieval · Contrastive Learning · Retrieval

Abstract

The field of bioinformatics has seen significant progress, making the cross-modal text-molecule retrieval task increasingly vital. This task focuses on accurately retrieving molecule structures based on textual descriptions by effectively aligning textual descriptions and molecules, assisting researchers in identifying suitable molecular candidates. However, many existing approaches overlook the details inherent in molecule sub-structures. In this work, we introduce the Optimal TRansport-based Multi-grained Alignments model (ORMA), a novel approach that facilitates multi-grained alignments between textual descriptions and molecules. Our model features a text encoder and a molecule encoder. The text encoder processes textual descriptions to generate both token-level and sentence-level representations, while molecules are modeled as hierarchical heterogeneous graphs, encompassing atom, motif, and molecule nodes to extract representations at these three levels. A key innovation in ORMA is the application of Optimal Transport (OT) to align tokens with motifs, creating multi-token representations that integrate multiple token alignments with their corresponding motifs. Additionally, we employ contrastive learning to refine cross-modal alignments at three distinct scales: token-atom, multi-token-motif, and sentence-molecule, ensuring that the similarities between correctly matched text-molecule pairs are maximized while those of unmatched pairs are minimized. To our knowledge, this is the first attempt to explore alignments at both the motif and multi-token levels. Experimental results on the ChEBI-20 and PCdes datasets demonstrate that ORMA significantly outperforms existing state-of-the-art (SOTA) models.
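The token-motif alignment step described above can be sketched with entropic-regularized optimal transport (Sinkhorn iterations). This is a minimal illustration, not the paper's exact formulation: the cosine cost, uniform marginals, temperature, and the way aligned tokens are pooled into per-motif "multi-token" representations are all assumptions made here for clarity.

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropic-regularized OT: returns a transport plan whose rows and
    columns approximately match uniform marginals (an assumption here)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)              # Gibbs kernel
    a = np.ones(n) / n                   # uniform marginal over tokens
    b = np.ones(m) / m                   # uniform marginal over motifs
    v = np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # transport plan T, shape (n, m)

def align_tokens_to_motifs(tokens, motifs):
    """Soft-assign tokens to motifs via OT, then pool the aligned tokens
    into one 'multi-token' representation per motif (illustrative pooling)."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    m = motifs / np.linalg.norm(motifs, axis=1, keepdims=True)
    cost = 1.0 - t @ m.T                 # cosine distance as transport cost
    T = sinkhorn(cost)
    # normalize columns so each motif representation is a convex
    # combination of token embeddings
    W = T / T.sum(axis=0, keepdims=True)
    return W.T @ tokens                  # (num_motifs, dim)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 64))       # token-level text embeddings
motifs = rng.normal(size=(5, 64))        # motif-level graph embeddings
multi_token = align_tokens_to_motifs(tokens, motifs)
print(multi_token.shape)                 # (5, 64)
```

The transport plan gives each token a soft assignment over motifs, so a motif can absorb several tokens at once, which is the intuition behind the multi-token representations the abstract describes.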
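The contrastive objective at the sentence-molecule scale (and, by the same form, at the token-atom and multi-token-motif scales) can be sketched as a symmetric InfoNCE loss. The temperature, batch construction, and embedding shapes below are illustrative assumptions, not the paper's reported hyperparameters.

```python
import numpy as np

def info_nce(text_emb, mol_emb, tau=0.07):
    """Symmetric InfoNCE: matched text/molecule pairs lie on the diagonal
    of the similarity matrix; their similarity is maximized relative to
    all unmatched pairs in the batch."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    m = mol_emb / np.linalg.norm(mol_emb, axis=1, keepdims=True)
    logits = t @ m.T / tau                      # (B, B) similarity matrix

    def xent(lg):
        # cross-entropy with the diagonal (matched pair) as the target
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average over text->molecule and molecule->text directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(1)
B, d = 8, 32
text = rng.normal(size=(B, d))
loss_random = info_nce(text, rng.normal(size=(B, d)))        # unaligned pairs
loss_matched = info_nce(text, text + 0.01 * rng.normal(size=(B, d)))
print(loss_matched < loss_random)            # aligned pairs give lower loss
```

Well-aligned pairs drive the loss toward zero, while random pairings hover near log(B), which is the mechanism the abstract invokes for pulling matched text-molecule pairs together and pushing unmatched ones apart.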

Results

Task                                    Dataset   Metric     Value  Model
Image Retrieval with Multi-Modal Query  ChEBI-20  Hits@1     66.5   ORMA
Image Retrieval with Multi-Modal Query  ChEBI-20  Hits@10    93.9   ORMA
Image Retrieval with Multi-Modal Query  ChEBI-20  Mean Rank  18.53  ORMA
Image Retrieval with Multi-Modal Query  ChEBI-20  Test MRR   77.2   ORMA
Cross-Modal Information Retrieval       ChEBI-20  Hits@1     66.5   ORMA
Cross-Modal Information Retrieval       ChEBI-20  Hits@10    93.9   ORMA
Cross-Modal Information Retrieval       ChEBI-20  Mean Rank  18.53  ORMA
Cross-Modal Information Retrieval       ChEBI-20  Test MRR   77.2   ORMA
Cross-Modal Retrieval                   ChEBI-20  Hits@1     66.5   ORMA
Cross-Modal Retrieval                   ChEBI-20  Hits@10    93.9   ORMA
Cross-Modal Retrieval                   ChEBI-20  Mean Rank  18.53  ORMA
Cross-Modal Retrieval                   ChEBI-20  Test MRR   77.2   ORMA

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)