Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen

Published 2017-08-04 · WS 2017
Tasks: Natural Language Inference, Natural Language Understanding

Abstract

The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha), which is ranked among the top systems in the Shared Task on both the in-domain test set (74.9% accuracy) and the cross-domain test set (also 74.9% accuracy), demonstrating that the model generalizes well to cross-domain data. Our model is equipped with intra-sentence gated-attention composition, which helps it achieve better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
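The core idea the abstract describes is collapsing a sequence of per-token encoder states into one fixed-length sentence vector via intra-sentence gated attention. The sketch below illustrates that pooling step only, in NumPy; the gate parameter `w` and function names are hypothetical illustrations, not the authors' exact formulation (which derives gate values inside the BiLSTM), and the BiLSTM outputs are stubbed with random values.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention_pool(H, w):
    """Collapse per-token hidden states H (T x d) into a fixed-length
    sentence vector (d,), weighting tokens by a learned gate vector w (d,).
    A sketch of intra-sentence gated-attention composition, not the
    paper's exact gating mechanism."""
    scores = H @ w            # (T,) unnormalized per-token gate scores
    alpha = softmax(scores)   # attention distribution over the T tokens
    return alpha @ H          # (d,) weighted sum = sentence representation

# Stubbed example: pretend H holds BiLSTM outputs for a 5-token sentence.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
v = gated_attention_pool(H, w)
print(v.shape)  # (8,)
```

Because the output length depends only on the hidden dimension, not the number of tokens, sentences of any length map to vectors of the same size, which is what RepEval 2017's fixed-length-representation condition requires.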

Results

Task | Dataset | Metric | Value | Model
Natural Language Inference | SNLI | % Test Accuracy | 85.5 | 600D (300+300) Deep Gated Attn. BiLSTM encoders
Natural Language Inference | SNLI | % Train Accuracy | 90.5 | 600D (300+300) Deep Gated Attn. BiLSTM encoders

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
Vision Language Action Models in Robotic Manipulation: A Systematic Review (2025-07-14)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)
A Survey on Vision-Language-Action Models for Autonomous Driving (2025-06-30)
State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation (2025-06-27)
skLEP: A Slovak General Language Understanding Benchmark (2025-06-26)
SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models (2025-06-25)