Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


g2pW: A Conditional Weighted Softmax BERT for Polyphone Disambiguation in Mandarin

Yi-Chang Chen, Yu-Chuan Chang, Yen-Cheng Chang, Yi-Ren Yeh

2022-03-20 · Polyphone Disambiguation · Part-Of-Speech (POS) Tagging
Paper · PDF · Code (official)

Abstract

Polyphone disambiguation is the most crucial task in Mandarin grapheme-to-phoneme (g2p) conversion. Previous studies have approached this problem using pre-trained language models, restricted output, and extra information from Part-Of-Speech (POS) tagging. Inspired by these strategies, we propose a novel approach, called g2pW, which adapts learnable softmax-weights to condition the outputs of BERT with the polyphonic character of interest and its POS tagging. Rather than using the hard mask as in previous works, our experiments show that learning a soft-weighting function for the candidate phonemes benefits performance. In addition, our proposed g2pW does not require extra pre-trained POS tagging models while using POS tags as auxiliary features since we train the POS tagging model simultaneously with the unified encoder. Experimental results show that our g2pW outperforms existing methods on the public CPP dataset. All codes, model weights, and a user-friendly package are publicly available.
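The core contrast in the abstract — a hard mask over candidate phonemes versus a learned soft weighting of the softmax — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: in g2pW the weights come from BERT features conditioned on the polyphonic character and its POS tag, whereas here they are simply passed in as inputs, and all function names are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def hard_mask_probs(logits, candidate_ids):
    """Hard masking (prior work): non-candidate phonemes get -inf logits,
    so they receive exactly zero probability."""
    masked = [x if i in candidate_ids else float("-inf")
              for i, x in enumerate(logits)]
    return softmax(masked)

def soft_weighted_probs(logits, weights):
    """Weighted softmax (the g2pW idea, sketched): each phoneme's
    exponentiated logit is scaled by a learned weight w_i, i.e.
    p_i = w_i * exp(z_i) / sum_j w_j * exp(z_j).
    Equivalent to adding log(w_i) to the logit."""
    return softmax([z + math.log(max(w, 1e-12))
                    for z, w in zip(logits, weights)])

# Toy example: 4 phonemes, the character of interest has candidates {0, 2}.
logits = [2.0, 1.0, 0.5, -1.0]
hard = hard_mask_probs(logits, {0, 2})
soft = soft_weighted_probs(logits, [1.0, 0.1, 1.0, 0.1])
```

With uniform weights the soft version reduces to an ordinary softmax; with weights pushed toward 0 for non-candidates it approaches the hard mask, which is why learning the weighting function subsumes masking as a special case.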

Results

Task | Dataset | Metric | Value | Model
Polyphone disambiguation | CPP | Accuracy | 99.08 | g2pW

Related Papers

LingoLoop Attack: Trapping MLLMs via Linguistic Context and State Entrapment into Endless Loops (2025-06-17)
Hybrid Meta-learners for Estimating Heterogeneous Treatment Effects (2025-06-16)
Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs (2025-06-11)
Private MEV Protection RPCs: Benchmark Study (2025-05-26)
FiLLM -- A Filipino-optimized Large Language Model based on Southeast Asia Large Language Model (SEALLM) (2025-05-25)
On Multilingual Encoder Language Model Compression for Low-Resource Languages (2025-05-22)
The taggedPBC: Annotating a massive parallel corpus for crosslinguistic investigations (2025-05-18)
A Comparative Analysis of Static Word Embeddings for Hungarian (2025-05-12)