Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


State-of-the-Art Augmented NLP Transformer models for direct and single-step retrosynthesis

Igor V. Tetko, Pavel Karpov, Ruud Van Deursen, Guillaume Godin

2020-03-05 · Data Augmentation · Prediction · Retrosynthesis · Single-step retrosynthesis · Memorization

Paper · PDF · Code (official)

Abstract

We investigated the effect of different training scenarios on predicting the (retro)synthesis of chemical compounds using a text-like representation of chemical reactions (SMILES) and the Natural Language Processing neural network Transformer architecture. We showed that data augmentation, a powerful method widely used in image processing, eliminated the effect of data memorization by neural networks and improved their performance on the prediction of new sequences. This effect was observed when augmentation was applied simultaneously to both the input and the target data. The top-5 accuracy was 84.8% for the prediction of the largest fragment (thus identifying the principal transformation for classical retrosynthesis) on the USPTO-50k test dataset, achieved by a combination of SMILES augmentation and a beam search algorithm. The same approach provided significantly better results for the prediction of direct reactions from the single-step USPTO-MIT test set. Our model achieved 90.6% top-1 and 96.1% top-5 accuracy on its challenging mixed set and 97% top-5 accuracy on the USPTO-MIT separated set. It also significantly improved results on the USPTO-full set for single-step retrosynthesis in both top-1 and top-10 accuracy. The appearance frequency of the most abundantly generated SMILES correlated well with the prediction outcome and can be used as a measure of the quality of reaction prediction.

Results

Task                        Dataset    Metric          Value  Model
Single-step retrosynthesis  USPTO-50k  Top-1 accuracy  53.5   Augmented Transformer (reaction class unknown)
Single-step retrosynthesis  USPTO-50k  Top-5 accuracy  81     Augmented Transformer (reaction class unknown)
Single-step retrosynthesis  USPTO-50k  Top-10 accuracy 85.7   Augmented Transformer (reaction class unknown)
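The metrics above count a prediction as correct if the ground-truth product (or reactant set) appears anywhere in the model's first k ranked candidates. A minimal sketch of that computation (function name and the percentage convention are assumptions, matching the table's values being percentages):

```python
def top_k_accuracy(predictions, targets, k):
    """Percentage of examples whose target appears among the first k
    ranked predictions.

    predictions: list of candidate lists, each ranked best-first.
    targets:     list of ground-truth answers, one per example.
    """
    hits = sum(1 for preds, tgt in zip(predictions, targets) if tgt in preds[:k])
    return 100.0 * hits / len(targets)
```

Note that top-1 <= top-5 <= top-10 by construction, consistent with the 53.5 / 81 / 85.7 ordering in the table.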

Related Papers

Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)
Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)
What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests (2025-07-15)
Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)