Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Automatic Annotation Augmentation Boosts Translation between Molecules and Natural Language

Zhiqiang Zhong, Simon Sataa-Yu Larsen, Haoyu Guo, Tao Tang, Kuangyu Zhou, Davide Mottin

2025-02-10 · Drug Discovery · Text-based de novo Molecule Generation · Molecule Captioning

Abstract

Recent advancements in AI for biological research focus on integrating molecular data with natural language to accelerate drug discovery. However, the scarcity of high-quality annotations limits progress in this area. This paper introduces LA$^3$, a Language-based Automatic Annotation Augmentation framework that leverages large language models to augment existing datasets, thereby improving AI training. We demonstrate the effectiveness of LA$^3$ by creating an enhanced dataset, LaChEBI-20, in which we systematically rewrite the annotations of molecules from an established dataset. These rewritten annotations preserve essential molecular information while providing more varied sentence structures and vocabulary. Using LaChEBI-20, we train LaMolT5 based on a benchmark architecture to learn the mapping between molecular representations and augmented annotations. Experimental results on text-based *de novo* molecule generation and molecule captioning demonstrate that LaMolT5 outperforms state-of-the-art models. Notably, incorporating LA$^3$ leads to improvements of up to 301% over the benchmark architecture. Furthermore, we validate the effectiveness of LA$^3$ on notable applications in *image*, *text*, and *graph* tasks, affirming its versatility and utility.
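The core idea of the framework, as described in the abstract, is to rewrite each existing annotation into several variants that keep the molecular facts but vary phrasing and vocabulary. A minimal sketch of that augmentation loop, with a deterministic template-based rewriter standing in for the actual LLM call (the function names, templates, and example entry are illustrative assumptions, not the paper's implementation):

```python
# Toy sketch of LLM-based annotation augmentation: each original molecule
# annotation is rewritten into several paraphrased variants. In the real
# framework an LLM does the rewriting; here, simple templates stand in
# so the loop structure is visible and runnable.

TEMPLATES = [
    "This molecule is {desc}.",
    "The compound can be described as {desc}.",
    "Chemically, it is {desc}.",
]

def augment_annotations(annotations, n_variants=3):
    """Return n_variants rewritten annotations per molecule (toy rewriter)."""
    augmented = {}
    for mol_id, desc in annotations.items():
        augmented[mol_id] = [t.format(desc=desc) for t in TEMPLATES[:n_variants]]
    return augmented

# Hypothetical example entry in the style of a ChEBI annotation.
original = {"MOL-1": "an organic acid bearing a single carboxyl group"}
print(augment_annotations(original))
```

Each variant preserves the descriptive content while changing sentence structure, which is the property the rewritten annotations in LaChEBI-20 are meant to have.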

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Molecule Captioning | ChEBI-20 | BLEU-2 | 60.2 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | BLEU-4 | 52.1 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | METEOR | 63.4 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | ROUGE-1 | 65.5 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | ROUGE-2 | 51.2 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | ROUGE-L | 59.8 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | Text2Mol | 59.7 | LaMolT5-Large |
| Molecule Captioning | ChEBI-20 | BLEU-2 | 57.4 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | BLEU-4 | 48.5 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | METEOR | 59.6 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | ROUGE-1 | 63.4 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | ROUGE-2 | 47.8 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | ROUGE-L | 56.4 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | Text2Mol | 59.9 | LaMolT5-Base |
| Molecule Captioning | ChEBI-20 | BLEU-2 | 53.9 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | BLEU-4 | 44.6 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | METEOR | 56.6 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | ROUGE-1 | 62 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | ROUGE-2 | 46.9 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | ROUGE-L | 56.3 | LaMolT5-Small |
| Molecule Captioning | ChEBI-20 | Text2Mol | 58.8 | LaMolT5-Small |
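The BLEU-n scores in the table measure n-gram overlap between generated and reference captions. A minimal sketch of the clipped n-gram precision at the heart of BLEU-2 (simplified to a single reference with no brevity penalty or geometric averaging; the example captions are made up for illustration):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram counts at most as
    often as it appears in the reference (the core of BLEU-n)."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

candidate = "the molecule is an organic acid".split()
reference = "the molecule is a carboxylic acid".split()
print(modified_precision(candidate, reference, 2))  # → 0.4
```

Two of the candidate's five bigrams ("the molecule", "molecule is") appear in the reference, giving 2/5 = 0.4; full BLEU-2 combines such precisions over a corpus with a brevity penalty.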

Related Papers

- Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
- A Graph-in-Graph Learning Framework for Drug-Target Interaction Prediction (2025-07-15)
- Graph Learning (2025-07-08)
- Exploring Modularity of Agentic Systems for Drug Discovery (2025-06-27)
- Diverse Mini-Batch Selection in Reinforcement Learning for Efficient Chemical Exploration in de novo Drug Design (2025-06-26)
- Large Language Model Agent for Modular Task Execution in Drug Discovery (2025-06-26)
- PocketVina Enables Scalable and Highly Accurate Physically Valid Docking through Multi-Pocket Conditioning (2025-06-24)
- A standard transformer and attention with linear biases for molecular conformer generation (2025-06-24)