Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction

Yongliang Shen, Xinyin Ma, Yechun Tang, Weiming Lu

2021-01-25 · Reading Comprehension · Relation Extraction · Joint Entity and Relation Extraction

Paper · PDF · Code (official)

Abstract

A joint entity and relation extraction framework constructs a unified model that performs entity recognition and relation extraction simultaneously, exploiting the dependency between the two tasks to mitigate the error propagation suffered by pipeline models. Current efforts on joint entity and relation extraction focus on enhancing the interaction between entity recognition and relation extraction through parameter sharing, joint decoding, or other ad-hoc tricks (e.g., modeling the task as a semi-Markov decision process, or casting it as a multi-round reading comprehension task). However, two issues remain. First, the interaction used by most methods is still weak and uni-directional, and thus cannot model the mutual dependency between the two tasks. Second, most methods ignore relation triggers, the words or phrases that explain why humans would extract a relation from a sentence; these cues are essential for relation extraction but are overlooked. To this end, we present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and relation extraction. We build a memory module to remember category representations learned in the entity recognition and relation extraction tasks, and on top of it we design a multi-level memory flow attention mechanism to enhance the bi-directional interaction between entity recognition and relation extraction. Moreover, without any human annotations, our model can enhance relation trigger information in a sentence through a trigger sensor module, which improves model performance and makes the model's predictions more interpretable. Experimental results show that our proposed framework achieves state-of-the-art results, improving the relation F1 to 52.44% (+3.2%) on SciERC, 66.49% (+4.9%) on ACE05, 72.35% (+0.6%) on CoNLL04, and 80.66% (+2.3%) on ADE.
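The core read step of a memory-flow attention layer can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the memory holds one learned slot per entity/relation category, each token attends over those slots, and the read vector is fused back by a residual add (the paper's multi-level fusion, memory update, and trigger sensor module are omitted, and the residual fusion is an assumption).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_flow_attention(tokens, memory):
    """One memory read: tokens attend over category memory slots.

    tokens: (seq_len, d)  -- token encodings from a shared encoder
    memory: (num_cats, d) -- one slot per entity/relation category
    Returns token representations enriched with category information.
    """
    d = tokens.shape[-1]
    # Scaled dot-product attention of each token over the memory slots.
    scores = tokens @ memory.T / np.sqrt(d)      # (seq_len, num_cats)
    weights = softmax(scores, axis=-1)           # rows sum to 1
    read = weights @ memory                      # (seq_len, d)
    # Fuse the memory read back into the token stream (residual add
    # here; a gate or concatenation would be equally plausible).
    return tokens + read

# Toy usage: 5 tokens, 3 category slots, hidden size 8.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 8))
memory = rng.standard_normal((3, 8))
out = memory_flow_attention(tokens, memory)
print(out.shape)  # (5, 8)
```

Running the same read in both directions, with the entity-task memory feeding the relation encoder and vice versa, is what would make the interaction bi-directional rather than a one-way pipeline.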

Results

Task                   | Dataset  | Metric       | Value | Model
Relation Extraction    | ACE 2005 | NER Micro F1 | 87.61 | TriMF
Relation Extraction    | ACE 2005 | RE Micro F1  | 66.49 | TriMF
Relation Extraction    | ACE 2005 | RE+ Micro F1 | 62.77 | TriMF
Relation Extraction    | CoNLL04  | NER Micro F1 | 90.3  | TriMF
Relation Extraction    | CoNLL04  | RE+ Micro F1 | 72.35 | TriMF
Relation Extraction    | SciERC   | Entity F1    | 70.17 | TriMF
Relation Extraction    | SciERC   | Relation F1  | 52.44 | TriMF
Information Extraction | SciERC   | Entity F1    | 70.17 | TriMF
Information Extraction | SciERC   | Relation F1  | 52.44 | TriMF

Related Papers

DocIE@XLLM25: In-Context Learning for Information Extraction using Fully Synthetic Demonstrations (2025-07-08)
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
Multiple Streams of Relation Extraction: Enriching and Recalling in Transformers (2025-06-25)
Chaining Event Spans for Temporal Relation Grounding (2025-06-17)
S2ST-Omni: An Efficient and Scalable Multilingual Speech-to-Speech Translation Framework via Seamless Speech-Text Alignment and Streaming Speech Generation (2025-06-11)
CoMuMDR: Code-mixed Multi-modal Multi-domain corpus for Discourse paRsing in conversations (2025-06-10)
Summarization for Generative Relation Extraction in the Microbiome Domain (2025-06-10)
Automatic Generation of Inference Making Questions for Reading Comprehension Assessments (2025-06-09)