Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Copy mechanism and tailored training for character-based data-to-text generation

Marco Roberti, Giovanni Bonetta, Rossella Cancelliere, Patrick Gallinari

Published: 2019-04-26
Tasks: Data-to-Text Generation, Text Generation, Transfer Learning
Links: Paper · PDF · Code (official)

Abstract

In the last few years, many methods have focused on using deep recurrent neural networks for natural language generation. The most widely used sequence-to-sequence neural methods are word-based: as such, they need a pre-processing step called delexicalization (with a corresponding post-processing step, relexicalization) to deal with uncommon or unknown words. These forms of processing, however, give rise to models that depend on the vocabulary used and are not completely neural. In this work, we present an end-to-end sequence-to-sequence model with an attention mechanism which reads and generates at the character level, requiring neither delexicalization, tokenization, nor even lowercasing. Moreover, since characters constitute the common "building blocks" of every text, this also allows a more general approach to text generation, opening the possibility of exploiting transfer learning during training. These abilities are obtained thanks to two major features: (i) the ability to alternate between the standard generation mechanism and a copy mechanism, which allows the model to copy input facts directly into the output, and (ii) the use of an original training pipeline that further improves the quality of the generated texts. We also introduce a new dataset called E2E+, a modified version of the well-known E2E dataset used in the E2E Challenge, designed to highlight the copying capabilities of character-based models. We tested our model according to five broadly accepted metrics (including the widely used BLEU), showing that it yields competitive performance with respect to both character-based and word-based approaches.
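The alternation between generating and copying described above resembles the gated mixture used in pointer-generator models: a scalar gate blends a vocabulary distribution with attention mass scattered onto the input symbols. The sketch below is an illustrative assumption, not the paper's exact formulation; the function name and the scalar gate `p_copy` are invented for this example.

```python
import numpy as np

def copy_mechanism_step(p_gen_vocab, attention, input_ids, vocab_size, p_copy):
    """Mix a generation distribution with a copy distribution for one
    decoding step.

    p_gen_vocab : generation probabilities over the character vocabulary
    attention   : attention weights over the input positions
    input_ids   : vocabulary id of the character at each input position
    p_copy      : scalar gate in [0, 1] choosing copy vs. generate
    """
    # Scatter attention mass onto the vocabulary ids of the input characters
    # (np.add.at accumulates correctly when an id occurs at several positions).
    p_copy_vocab = np.zeros(vocab_size)
    np.add.at(p_copy_vocab, input_ids, attention)
    # Convex combination of the two distributions; the result still sums to 1.
    return p_copy * p_copy_vocab + (1.0 - p_copy) * p_gen_vocab

# Tiny example: a 4-symbol vocabulary, two input positions holding ids 1 and 2.
mixed = copy_mechanism_step(
    p_gen_vocab=np.full(4, 0.25),
    attention=np.array([0.5, 0.5]),
    input_ids=np.array([1, 2]),
    vocab_size=4,
    p_copy=0.6,
)
# → [0.1, 0.4, 0.4, 0.1]: the copied ids 1 and 2 receive extra mass.
```

In a trained model, `p_copy` would itself be predicted from the decoder state rather than fixed, so the network learns when to copy a rare input string verbatim and when to generate freely.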

Results

Task                     Dataset            Metric   Value    Model
Text Generation          E2E NLG Challenge  BLEU     67.05    EDA_CS
Text Generation          E2E NLG Challenge  CIDEr    2.2355   EDA_CS
Text Generation          E2E NLG Challenge  METEOR   44.49    EDA_CS
Text Generation          E2E NLG Challenge  NIST     8.515    EDA_CS
Text Generation          E2E NLG Challenge  ROUGE-L  68.94    EDA_CS
Text Generation          E2E NLG Challenge  BLEU     65.8     EDA_CS (TL)
Text Generation          E2E NLG Challenge  CIDEr    2.1803   EDA_CS (TL)
Text Generation          E2E NLG Challenge  METEOR   45.16    EDA_CS (TL)
Text Generation          E2E NLG Challenge  NIST     8.5615   EDA_CS (TL)
Text Generation          E2E NLG Challenge  ROUGE-L  67.4     EDA_CS (TL)
Data-to-Text Generation  E2E NLG Challenge  BLEU     67.05    EDA_CS
Data-to-Text Generation  E2E NLG Challenge  CIDEr    2.2355   EDA_CS
Data-to-Text Generation  E2E NLG Challenge  METEOR   44.49    EDA_CS
Data-to-Text Generation  E2E NLG Challenge  NIST     8.515    EDA_CS
Data-to-Text Generation  E2E NLG Challenge  ROUGE-L  68.94    EDA_CS
Data-to-Text Generation  E2E NLG Challenge  BLEU     65.8     EDA_CS (TL)
Data-to-Text Generation  E2E NLG Challenge  CIDEr    2.1803   EDA_CS (TL)
Data-to-Text Generation  E2E NLG Challenge  METEOR   45.16    EDA_CS (TL)
Data-to-Text Generation  E2E NLG Challenge  NIST     8.5615   EDA_CS (TL)
Data-to-Text Generation  E2E NLG Challenge  ROUGE-L  67.4     EDA_CS (TL)

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)