Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Hierarchically-Refined Label Attention Network for Sequence Labeling

Leyang Cui, Yue Zhang

2019-08-23 · IJCNLP 2019
Tasks: Part-Of-Speech Tagging (POS), CCG Supertagging, Named Entity Recognition (NER)
Links: Paper · PDF · Code (official)

Abstract

CRF has been used as a powerful model for statistical sequence labeling. For neural sequence labeling, however, BiLSTM-CRF does not always lead to better results compared with BiLSTM-softmax local classification. This can be because the simple Markov label transition model of CRF does not give much information gain over strong neural encoding. To better represent label sequences, we investigate a hierarchically-refined label attention network, which explicitly leverages label embeddings and captures potential long-term label dependencies by giving each word incrementally refined label distributions with hierarchical attention. Results on POS tagging, NER and CCG supertagging show that the proposed model not only improves the overall tagging accuracy with a similar number of parameters, but also significantly speeds up training and testing compared to BiLSTM-CRF.
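The refinement step the abstract describes, attending from each word representation over a set of label embeddings and feeding the attended label information back into the next encoder layer, can be sketched as follows. This is a minimal illustrative sketch in numpy, not the authors' implementation: the function name, dimensions, and the scaled dot-product scoring are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def label_attention_layer(H, E):
    """One label-attention inference step (hypothetical sketch).

    H: (n_words, d) word representations from a BiLSTM layer.
    E: (n_labels, d) label embeddings.
    Returns refined word representations (n_words, 2*d) for the next
    BiLSTM layer, and a per-word label distribution (n_words, n_labels).
    """
    d = H.shape[1]
    scores = H @ E.T / np.sqrt(d)         # word-to-label attention scores
    dist = softmax(scores, axis=-1)       # refined label distribution per word
    label_ctx = dist @ E                  # attended label representation
    H_next = np.concatenate([H, label_ctx], axis=-1)
    return H_next, dist

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 words, hidden size 8
E = rng.normal(size=(4, 8))   # 4 labels
H_next, dist = label_attention_layer(H, E)
```

Stacking several such layers is what makes the refinement hierarchical: each layer sees both the word encoding and the previous layer's soft label guesses, so later layers can capture longer-range label dependencies without a CRF's Markov transition matrix.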

Results

Task                            Dataset                 Metric        Value   Model
Part-Of-Speech Tagging          Penn Treebank           Accuracy      97.65   BiLSTM-LAN
Part-Of-Speech Tagging          UD                      Avg accuracy  96.88   BiLSTM-LAN
Named Entity Recognition (NER)  Ontonotes v5 (English)  F1            88.16   BiLSTM-LAN
CCG Supertagging                CCGbank                 Accuracy      94.7    BiLSTM-LAN

Related Papers

Flippi: End To End GenAI Assistant for E-Commerce (2025-07-08)
Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models (2025-06-28)
LingoLoop Attack: Trapping MLLMs via Linguistic Context and State Entrapment into Endless Loops (2025-06-17)
Hybrid Meta-learners for Estimating Heterogeneous Treatment Effects (2025-06-16)
Improving Named Entity Transcription with Contextual LLM-based Revision (2025-06-12)
Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs (2025-06-11)
Better Semi-supervised Learning for Multi-domain ASR Through Incremental Retraining and Data Filtering (2025-06-05)
Efficient Data Selection for Domain Adaptation of ASR Using Pseudo-Labels and Multi-Stage Filtering (2025-06-04)