Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Recurrent Topic-Transition GAN for Visual Paragraph Generation

Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, Eric P. Xing

Published: 2017-03-21 · ICCV 2017 · Task: Image Paragraph Captioning

Abstract

A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image also verify the interpretability of RTT-GAN.
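The abstract describes a recurrent generator that produces one sentence topic per step by attending over region features, scored by a sentence-level plausibility discriminator and a paragraph-level topic-transition discriminator. As a rough illustration of that structure only, here is a toy numpy sketch; all function names, dimensions, and scoring rules are illustrative assumptions, not the paper's actual RTT-GAN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, regions):
    # toy dot-product attention over region features (stand-in for the
    # paper's region-based visual attention)
    weights = softmax(regions @ query)
    return weights @ regions

def generate_paragraph(regions, n_sentences=3, dim=8):
    # recurrent topic transition: each step attends over the image regions
    # conditioned on the previous sentence topic (illustrative only)
    topic = np.zeros(dim)
    topics = []
    for _ in range(n_sentences):
        context = attend(topic + rng.normal(scale=0.1, size=dim), regions)
        topic = np.tanh(context + topic)  # transition to the next topic
        topics.append(topic)
    return np.stack(topics)

def sentence_discriminator(topic):
    # toy sentence-level plausibility score in (0, 1)
    return 1.0 / (1.0 + np.exp(-topic.mean()))

def paragraph_discriminator(topics):
    # toy paragraph-level coherence score: penalize abrupt topic jumps
    diffs = np.linalg.norm(np.diff(topics, axis=0), axis=1)
    return float(np.exp(-diffs.mean()))

regions = rng.normal(size=(5, 8))      # 5 hypothetical region features
topics = generate_paragraph(regions)   # one topic vector per sentence
```

In the actual model, both discriminator scores would drive adversarial updates of the generator; here they are just scalar scores showing the two levels of assessment the abstract names.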

Results

Task                        | Dataset                    | Metric | Value | Model
----------------------------|----------------------------|--------|-------|----------------------
Image Paragraph Captioning  | Image Paragraph Captioning | BLEU-4 | 9.21  | RTT-GAN (Semi + Fully)
Image Paragraph Captioning  | Image Paragraph Captioning | CIDEr  | 20.36 | RTT-GAN (Semi + Fully)
Image Paragraph Captioning  | Image Paragraph Captioning | METEOR | 18.39 | RTT-GAN (Semi + Fully)

Related Papers

VLIS: Unimodal Language Models Guide Multimodal Language Generation (2023-10-15)
Enhancing image captioning with depth information using a Transformer-based framework (2023-07-24)
Bypass Network for Semantics Driven Image Paragraph Captioning (2022-06-21)
Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning (2022-06-03)
Matching Visual Features to Hierarchical Semantic Topics for Image Paragraph Captioning (2021-05-10)
Interactive Key-Value Memory-augmented Attention for Image Paragraph Captioning (2020-12-01)
When an Image Tells a Story: The Role of Visual and Semantic Information for Generating Paragraph Descriptions (2020-12-01)
Hierarchical Scene Graph Encoder-Decoder for Image Paragraph Captioning (2020-10-12)