Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference

Boyuan Pan, Yazheng Yang, Zhou Zhao, Yueting Zhuang, Deng Cai, Xiaofei He

2019-07-23 · ACL 2018 · Reinforcement Learning · Natural Language Inference · RTE
Paper · PDF · Code (official)

Abstract

Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires inferring the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use discourse markers such as "so" or "but" to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, and thus can be utilized to help improve their representations. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by a property of the NLI datasets, to make full use of the label information. Experiments show that our method achieves state-of-the-art performance on several large-scale datasets.
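The abstract mentions optimizing a new objective with reinforcement learning, where the reward is defined from the dataset labels. The paper itself is the authoritative source for the exact objective; as a rough illustration only, the snippet below sketches a generic REINFORCE-style gradient estimate for a 3-way NLI classifier, with a hypothetical 0/1 reward for sampling the gold label. All names (`reinforce_grad`, the reward definition) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(logits, gold, rng):
    """One REINFORCE-style update direction for a 3-way NLI classifier.

    Treat the softmax over logits as a policy, sample a label from it,
    score the sample with a (hypothetical) reward of +1 if it matches
    the gold label and 0 otherwise, and return the policy-gradient
    estimate: reward * d/dlogits [-log p(sampled label)].
    """
    p = softmax(logits)
    action = rng.choice(len(p), p=p)          # sample a label from the policy
    reward = 1.0 if action == gold else 0.0   # illustrative label-based reward
    onehot = np.zeros_like(p)
    onehot[action] = 1.0
    # Gradient of -log p(action) w.r.t. the logits is (p - onehot).
    return reward * (p - onehot), action, reward

# Example: entailment (class 0) is the gold label.
rng = np.random.default_rng(0)
grad, action, reward = reinforce_grad(np.array([2.0, 0.5, -1.0]), gold=0, rng=rng)
```

When the reward is zero the gradient vanishes, so only sampled labels that earn reward push the policy; the DMAN paper's actual reward is shaped by dataset properties rather than this simple 0/1 match.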

Results

Task                        Dataset  Metric            Value  Model
Natural Language Inference  SNLI     % Test Accuracy   89.6   300D DMAN Ensemble
Natural Language Inference  SNLI     % Train Accuracy  96.1   300D DMAN Ensemble
Natural Language Inference  SNLI     % Test Accuracy   88.8   300D DMAN
Natural Language Inference  SNLI     % Train Accuracy  95.4   300D DMAN

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)