Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


More Embeddings, Better Sequence Labelers?

Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

2020-09-17 · Findings of the Association for Computational Linguistics 2020 · Word Embeddings · Chunking

Abstract

Recent work proposes a family of contextual embeddings that significantly improves the accuracy of sequence labelers over non-contextual embeddings. However, there is no definite conclusion on whether we can build better sequence labelers by combining different kinds of embeddings in various settings. In this paper, we conduct extensive experiments on 3 tasks over 18 datasets and 8 languages to study the accuracy of sequence labeling with various embedding concatenations and make three observations: (1) concatenating more embedding variants leads to better accuracy in rich-resource and cross-domain settings and some conditions of low-resource settings; (2) concatenating additional contextual sub-word embeddings with contextual character embeddings hurts the accuracy in extremely low-resource settings; (3) based on the conclusion of (1), concatenating additional similar contextual embeddings cannot lead to further improvements. We hope these conclusions can help people build stronger sequence labelers in various settings.
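The core operation the paper studies, concatenating several embedding variants into one feature vector per token, can be sketched as follows. This is a minimal illustration in plain Python, not the authors' implementation; the two embedder functions are hypothetical stand-ins for real contextual character and sub-word embedders, which would produce much higher-dimensional vectors.

```python
# Sketch of per-token embedding concatenation for sequence labeling.
# char_embed and subword_embed are toy stand-ins for real embedders
# (e.g. contextual character embeddings, contextual sub-word embeddings).

def char_embed(tokens):
    # Toy 2-dim vector per token: (token length, vowel count).
    return [[float(len(t)), float(sum(c in "aeiou" for c in t))] for t in tokens]

def subword_embed(tokens):
    # Toy 2-dim vector per token: (capitalized flag, hyphen flag).
    return [[float(t[0].isupper()), float("-" in t)] for t in tokens]

def concat_embeddings(tokens, embedders):
    """Run each embedder over the sentence, then concatenate the
    resulting vectors token by token into a single feature vector."""
    per_embedder = [e(tokens) for e in embedders]
    return [sum(vecs, []) for vecs in zip(*per_embedder)]

tokens = ["Sequence", "labeling"]
features = concat_embeddings(tokens, [char_embed, subword_embed])
# Each token now carries a single 4-dim concatenated vector,
# which a sequence labeler (e.g. a BiLSTM-CRF) would consume.
```

The concatenated vectors are what the paper's observations apply to: adding more embedders widens these vectors, which helps in rich-resource settings but can hurt in extremely low-resource ones.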

Results

Task           | Dataset              | Metric | Value | Model
Chunking       | CoNLL 2003 (German)  | F1     | 94.4  | Wang et al., 2020
Chunking       | CoNLL 2003 (English) | F1     | 92.0  | Wang et al., 2020
Shallow Syntax | CoNLL 2003 (German)  | F1     | 94.4  | Wang et al., 2020
Shallow Syntax | CoNLL 2003 (English) | F1     | 92.0  | Wang et al., 2020

Related Papers

Dynamic Chunking for End-to-End Hierarchical Sequence Modeling (2025-07-10)
Speak2Sign3D: A Multi-modal Pipeline for English Speech to American Sign Language Animation (2025-07-09)
CLI-RAG: A Retrieval-Augmented Framework for Clinically Structured and Context Aware Text Generation with LLMs (2025-07-09)
Computational Detection of Intertextual Parallels in Biblical Hebrew: A Benchmark Study Using Transformer-Based Language Models (2025-06-30)
Can LLMs Replace Humans During Code Chunking? (2025-06-24)
CronusVLA: Transferring Latent Motion Across Time for Multi-Frame Prediction in Manipulation (2025-06-24)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Low-resource keyword spotting using contrastively trained transformer acoustic word embeddings (2025-06-21)