Papers With Code 2

Data sourced from the PWC Archive (CC-BY-SA 4.0).

TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering

Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei

2023-11-28 · Large Language Model · Image Generation · Language Modelling

Abstract

The diffusion model has proven to be a powerful generative model in recent years, yet generating visual text remains a challenge. Several methods have alleviated this issue by incorporating explicit text position and content as guidance on where and what text to render. However, these methods still suffer from several drawbacks, such as limited flexibility and automation, constrained capability of layout prediction, and restricted style diversity. In this paper, we present TextDiffuser-2, aiming to unleash the power of language models for text rendering. First, we fine-tune a large language model for layout planning. The large language model can automatically generate keywords for text rendering and also supports layout modification through chatting. Second, we utilize the language model within the diffusion model to encode positions and texts at the line level. Unlike previous methods that employed tight character-level guidance, this approach generates more diverse text images. We conduct extensive experiments and incorporate user studies involving human participants as well as GPT-4V, validating TextDiffuser-2's capacity to achieve more rational text layout and generation with enhanced diversity. The code and model will be available at \url{https://aka.ms/textdiffuser-2}.
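The abstract describes conditioning the diffusion model on text at the line level: each line's content and position are serialized into the prompt rather than encoded with tight character-level masks. The sketch below illustrates that idea; the serialization format, function names, and coordinate convention are assumptions for illustration, not the paper's exact tokenization.

```python
def encode_layout_prompt(caption, lines):
    """Sketch of line-level layout conditioning (format assumed, not the
    paper's actual scheme): the caption is followed by each text line
    serialized as `text [x0,y0,x1,y1]`, giving the language-model encoder
    inside the diffusion model both the content and the position of every
    line in a single sequence."""
    parts = [caption]
    for text, (x0, y0, x1, y1) in lines:
        # Each rendered line contributes its string plus a bounding box.
        parts.append(f"{text} [{x0},{y0},{x1},{y1}]")
    return " | ".join(parts)
```

For example, `encode_layout_prompt("a poster", [("SALE", (10, 20, 90, 40))])` yields `"a poster | SALE [10,20,90,40]"`. Because guidance stays at the line level, the model is free to vary character shapes and styles within each box, which is the flexibility the abstract contrasts with character-level guidance.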

Results

Task              Dataset                       Metric          Value   Model
Image Generation  TextAtlasEvalStyledTextSynth  Clip Score      0.251   TextDiffuser2
Image Generation  TextAtlasEvalStyledTextSynth  FID             114.31  TextDiffuser2
Image Generation  TextAtlasEvalStyledTextSynth  OCR (Accuracy)  0.76    TextDiffuser2
Image Generation  TextAtlasEvalStyledTextSynth  OCR (CER)       0.99    TextDiffuser2
Image Generation  TextAtlasEvalStyledTextSynth  OCR (F1 Score)  1.46    TextDiffuser2
Image Generation  TextAtlasEvalTextScenesHQ     Clip Score      0.2252  TextDiffuser2
Image Generation  TextAtlasEvalTextScenesHQ     FID             84.1    TextDiffuser2
Image Generation  TextAtlasEvalTextScenesHQ     OCR (Accuracy)  0.66    TextDiffuser2
Image Generation  TextAtlasEvalTextScenesHQ     OCR (CER)       0.96    TextDiffuser2
Image Generation  TextAtlasEvalTextScenesHQ     OCR (F1 Score)  1.25    TextDiffuser2

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)