Dimsum @LaySumm 20
Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung
Abstract
Lay summarization aims to automatically generate lay summaries of scientific papers. It is an essential task that can increase the relevance of science for all of society. In this paper, we build a lay summary generation system based on the BART model. We leverage sentence labels as extra supervision signals to improve the performance of lay summarization. In the CL-LaySumm 2020 shared task, our model achieves a Rouge-1 F1 score of 46.00.
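One way to read "sentence labels as extra supervision signals" is to mark salient sentences in the source document before feeding it to an abstractive model such as BART. The sketch below illustrates this idea under that assumption; the function name `build_input` and the `<hl>`/`</hl>` marker tokens are illustrative choices, not the authors' exact implementation.

```python
# Hypothetical sketch: injecting sentence-level labels into the model input.
# Sentences labeled 1 (salient) are wrapped in highlight tokens so an
# abstractive summarizer can attend to them; this is an assumption about
# how labels might be used, not the paper's confirmed method.

def build_input(sentences, labels, start_tok="<hl>", end_tok="</hl>"):
    """Wrap label-1 sentences in highlight tokens; leave the rest as-is."""
    assert len(sentences) == len(labels)
    parts = []
    for sent, lab in zip(sentences, labels):
        if lab == 1:
            parts.append(f"{start_tok} {sent} {end_tok}")
        else:
            parts.append(sent)
    return " ".join(parts)
```

The resulting string would then be tokenized and passed to the summarizer, so the labels steer generation without changing the model architecture.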
Related Papers
- Explanatory Summarization with Discourse-Driven Planning (2025-04-27)
- BioLay_AK_SS at BioLaySumm: Domain Adaptation by Two-Stage Fine-Tuning of Large Language Models used for Biomedical Lay Summary Generation (2024-08-16)
- Overview of the BioLaySumm 2024 Shared Task on the Lay Summarization of Biomedical Research Articles (2024-08-16)
- WisPerMed at BioLaySumm: Adapting Autoregressive Large Language Models for Lay Summarization of Scientific Articles (2024-05-20)
- Generating Summaries with Controllable Readability Levels (2023-10-16)
- Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of Biomedical Research Articles (2023-09-29)
- Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature (2022-10-18)
- Using Pre-Trained Transformer for Better Lay Summarization (2020-11-01)