Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention

Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang

2016-05-30 · Natural Language Inference
Paper · PDF · Code

Abstract

In this paper, we propose a sentence encoding-based model for recognizing textual entailment. In our approach, sentence encoding is a two-stage process. First, average pooling over the word-level bidirectional LSTM (biLSTM) outputs generates a first-stage sentence representation. Second, an attention mechanism replaces average pooling on the same sentence to yield a better representation. Instead of using a target sentence to attend over the words of a source sentence, we use the sentence's own first-stage representation to attend over its own words, a mechanism we call "Inner-Attention". Experiments on the Stanford Natural Language Inference (SNLI) Corpus demonstrate the effectiveness of the Inner-Attention mechanism: with fewer parameters, our model outperforms the best existing sentence encoding-based approach by a large margin.
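The two-stage encoding described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the biLSTM outputs are stand-in random matrices, and the parameter names (`W_y`, `W_h`, `w`) are assumed shapes for the attention projections, chosen to mirror the abstract's description of mean-pooling followed by self-directed attention.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def inner_attention(H, W_y, W_h, w):
    """Two-stage sentence encoding sketched from the abstract.

    H : (T, d) word-level biLSTM outputs for one sentence.
    Stage 1: mean-pool H into a first-stage representation r.
    Stage 2: use r (rather than a target sentence) to attend
             over the words of the same sentence.
    Returns the attended sentence vector and the attention weights.
    """
    r = H.mean(axis=0)              # first-stage representation, shape (d,)
    M = np.tanh(H @ W_y + r @ W_h)  # combined features, shape (T, d)
    alpha = softmax(M @ w)          # attention over the sentence's own words, (T,)
    return alpha @ H, alpha         # attended representation, shape (d,)

# Toy usage with random stand-ins for trained parameters.
rng = np.random.default_rng(0)
T, d = 5, 8                         # 5 words, 8-dim biLSTM outputs (illustrative sizes)
H = rng.standard_normal((T, d))
W_y = rng.standard_normal((d, d))
W_h = rng.standard_normal((d, d))
w = rng.standard_normal(d)
sent_vec, alpha = inner_attention(H, W_y, W_h, w)
print(sent_vec.shape, float(alpha.sum()))
```

The design point the abstract emphasizes is that the query for the attention step comes from the sentence itself (its mean-pooled representation), so no second sentence is needed to compute the weights.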

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Natural Language Inference | SNLI | % Test Accuracy | 85 | 600D (300+300) BiLSTM encoders with intra-attention and symbolic preproc.
Natural Language Inference | SNLI | % Train Accuracy | 85.9 | 600D (300+300) BiLSTM encoders with intra-attention and symbolic preproc.
Natural Language Inference | SNLI | % Test Accuracy | 84.2 | 600D (300+300) BiLSTM encoders with intra-attention
Natural Language Inference | SNLI | % Train Accuracy | 84.5 | 600D (300+300) BiLSTM encoders with intra-attention
Natural Language Inference | SNLI | % Test Accuracy | 83.3 | 600D (300+300) BiLSTM encoders
Natural Language Inference | SNLI | % Train Accuracy | 86.4 | 600D (300+300) BiLSTM encoders

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)
ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation (2025-06-27)
Thunder-NUBench: A Benchmark for LLMs' Sentence-Level Negation Understanding (2025-06-17)
When Does Meaning Backfire? Investigating the Role of AMRs in NLI (2025-06-17)
Explainable Compliance Detection with Multi-Hop Natural Language Inference on Assurance Case Structure (2025-06-10)
Theorem-of-Thought: A Multi-Agent Framework for Abductive, Deductive, and Inductive Reasoning in Language Models (2025-06-08)
A MISMATCHED Benchmark for Scientific Natural Language Inference (2025-06-05)