Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dynamic Self-Attention : Computing Attention over Words Dynamically for Sentence Embedding

Deunsol Yoon, Dongbok Lee, SangKeun Lee

2018-08-22 · Natural Language Inference · Sentence Embedding

Paper · PDF · Code

Abstract

In this paper, we propose Dynamic Self-Attention (DSA), a new self-attention mechanism for sentence embedding. We design DSA by modifying dynamic routing in capsule networks (Sabour et al., 2017) for natural language processing. DSA attends to informative words with a dynamic weight vector. We achieve new state-of-the-art results among sentence encoding methods on the Stanford Natural Language Inference (SNLI) dataset with the fewest parameters, while showing comparable results on the Stanford Sentiment Treebank (SST) dataset.
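As a rough illustration of the idea described above, the sketch below adapts the iterative-agreement loop of dynamic routing to compute attention over word vectors: instead of a fixed, learned query, the attention logits are refined against a sentence summary that changes at each iteration (the "dynamic weight vector"). This is a hypothetical simplification for intuition, not the authors' implementation; the function name, the use of `tanh`, and the fixed iteration count are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_self_attention(H, n_iter=3):
    """Sketch of dynamic-routing-style attention for sentence embedding.

    H: (seq_len, dim) array of word vectors.
    Returns a (dim,) sentence embedding.
    """
    logits = np.zeros(H.shape[0])      # routing logits, one per word, start at zero
    for _ in range(n_iter):
        a = softmax(logits)            # attention weights over words
        z = np.tanh(a @ H)             # current sentence summary (dynamic weight vector)
        logits = logits + H @ z        # raise logits of words that agree with the summary
    return z

# Usage: embed a toy "sentence" of 5 words with 8-dimensional vectors.
embedding = dynamic_self_attention(np.random.randn(5, 8))
```

The key contrast with standard self-attention is that the weights are not produced in a single learned projection but emerge from a few rounds of agreement between words and the evolving summary, mirroring how capsule routing concentrates on consistent parts.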

Results

Task                        Dataset  Metric            Value  Model
Natural Language Inference  SNLI     % Test Accuracy   87.4   2400D Multiple-Dynamic Self-Attention Model
Natural Language Inference  SNLI     % Train Accuracy  89     2400D Multiple-Dynamic Self-Attention Model
Natural Language Inference  SNLI     % Test Accuracy   86.8   600D Dynamic Self-Attention Model
Natural Language Inference  SNLI     % Train Accuracy  87.3   600D Dynamic Self-Attention Model

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)
ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation (2025-06-27)
Intrinsic vs. Extrinsic Evaluation of Czech Sentence Embeddings: Semantic Relevance Doesn't Help with MT Evaluation (2025-06-25)
Thunder-NUBench: A Benchmark for LLMs' Sentence-Level Negation Understanding (2025-06-17)
When Does Meaning Backfire? Investigating the Role of AMRs in NLI (2025-06-17)
Explainable Compliance Detection with Multi-Hop Natural Language Inference on Assurance Case Structure (2025-06-10)
Factors affecting the in-context learning abilities of LLMs for dialogue state tracking (2025-06-10)