Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Simple Recurrent Units for Highly Parallelizable Recurrence

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

2017-09-08 · EMNLP 2018
Tasks: Text Classification, Machine Translation, Question Answering, Translation, General Classification
Links: Paper, PDF, Code (official implementation plus community implementations)

Abstract

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
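The parallelism the abstract describes comes from splitting the layer into heavy, time-independent matrix multiplications (computable for all timesteps at once) and a cheap element-wise recurrence. The sketch below illustrates that structure in plain NumPy, following the SRU formulation (forget gate, light recurrence, highway output); it is a minimal reference sketch, not the paper's optimized CUDA implementation, and the function name and weight layout are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, Wr, vf, vr, bf, br):
    """Run one SRU layer over a sequence.

    x: (seq_len, d) inputs; W, Wf, Wr are (d, d) weight matrices;
    vf, vr, bf, br are (d,) per-dimension gate vectors and biases.
    All matrix multiplies depend only on x, so they are batched
    across timesteps up front; only the element-wise recurrence
    in the loop is sequential, which is what makes SRU fast.
    """
    seq_len, d = x.shape
    # Time-independent projections, parallel across all t.
    xt = x @ W
    fx = x @ Wf
    rx = x @ Wr

    c = np.zeros(d)                 # internal state c_0
    h = np.zeros_like(x)
    for t in range(seq_len):
        f = sigmoid(fx[t] + vf * c + bf)      # forget gate f_t
        c = f * c + (1.0 - f) * xt[t]         # light recurrence c_t
        r = sigmoid(rx[t] + vr * c + br)      # reset gate r_t
        h[t] = r * c + (1.0 - r) * x[t]       # highway connection h_t
    return h, c
```

Because `f`, `r`, and the state update are purely element-wise, the sequential loop costs O(seq_len · d) rather than O(seq_len · d²), and the d² work is done in the three batched matrix multiplies above it.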

Results

Task                 | Dataset                | Metric     | Value | Model
---------------------|------------------------|------------|-------|------------------
Machine Translation  | WMT2014 English-German | BLEU score | 28.4  | Transformer + SRU
Question Answering   | SQuAD1.1 dev           | EM         | 71.4  | SRU
Question Answering   | SQuAD1.1 dev           | F1         | 80.2  | SRU
Question Answering   | SQuAD1.1               | EM         | 71.4  | SRU
Question Answering   | SQuAD1.1               | F1         | 80.2  | SRU

Related Papers

Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)