Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GROUNDHOG: Grounding Large Language Models to Holistic Segmentation

Yichi Zhang, Ziqiao Ma, Xiaofeng Gao, Suhaila Shakiah, Qiaozi Gao, Joyce Chai

2024-02-26 · CVPR 2024

Tasks: Generalized Referring Expression Segmentation · Referring Expression Segmentation · Hallucination · Language Modelling

Paper · PDF

Abstract

Most multimodal large language models (MLLMs) learn language-to-object grounding through causal language modeling where grounded objects are captured by bounding boxes as sequences of location tokens. This paradigm lacks pixel-level representations that are important for fine-grained visual understanding and diagnosis. In this work, we introduce GROUNDHOG, an MLLM developed by grounding Large Language Models to holistic segmentation. GROUNDHOG incorporates a masked feature extractor and converts extracted features into visual entity tokens for the MLLM backbone, which then connects groundable phrases to unified grounding masks by retrieving and merging the entity masks. To train GROUNDHOG, we carefully curated M3G2, a grounded visual instruction tuning dataset with Multi-Modal Multi-Grained Grounding, by harvesting a collection of segmentation-grounded datasets with rich annotations. Our experimental results show that GROUNDHOG achieves superior performance on various language grounding tasks without task-specific fine-tuning, and significantly reduces object hallucination. GROUNDHOG also demonstrates better grounding towards complex forms of visual input and provides easy-to-understand diagnosis in failure cases.
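The abstract describes GROUNDHOG's grounding step at a high level: the MLLM backbone links a groundable phrase to the visual entity tokens it refers to, then merges the corresponding entity masks into one unified grounding mask. As a rough illustration of that final merge (not the authors' implementation; the score threshold and pixel-wise union here are assumptions for the sketch):

```python
import numpy as np

def merge_entity_masks(entity_masks, scores, threshold=0.5):
    """Merge per-entity segmentation masks into one grounding mask.

    entity_masks: (N, H, W) binary masks, one per candidate visual entity.
    scores: (N,) phrase-to-entity retrieval scores.
    Entities scoring above `threshold` are selected, and their masks are
    combined by pixel-wise union into a single unified mask.
    """
    entity_masks = np.asarray(entity_masks, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    selected = scores > threshold
    if not selected.any():
        # No entity matched the phrase: return an empty mask.
        return np.zeros(entity_masks.shape[1:], dtype=bool)
    return entity_masks[selected].any(axis=0)

# Hypothetical example: two candidate entities, only the first matches.
masks = np.zeros((2, 4, 4))
masks[0, :2, :2] = 1
masks[1, 2:, 2:] = 1
merged = merge_entity_masks(masks, [0.9, 0.2])
```

A retrieval-then-merge design like this also makes failure cases inspectable: one can look at which entity masks were selected for a phrase, which matches the paper's claim of easy-to-understand diagnosis.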

Results

Task | Dataset | Metric | Value | Model
Instance Segmentation | RefCOCO val | Overall IoU | 78.5 | GROUNDHOG
Instance Segmentation | PhraseCut | Mean IoU | 54.5 | GROUNDHOG
Instance Segmentation | RefCOCOg test | Overall IoU | 74.6 | GROUNDHOG
Instance Segmentation | RefCOCO+ val | Overall IoU | 70.5 | GROUNDHOG
Instance Segmentation | RefCOCO+ testB | Overall IoU | 64.9 | GROUNDHOG
Instance Segmentation | RefCOCO+ testA | Overall IoU | 75.0 | GROUNDHOG
Instance Segmentation | RefCOCOg val | Overall IoU | 74.1 | GROUNDHOG
Instance Segmentation | gRefCOCO | gIoU | 66.7 | GROUNDHOG
Referring Expression Segmentation | RefCOCO val | Overall IoU | 78.5 | GROUNDHOG
Referring Expression Segmentation | PhraseCut | Mean IoU | 54.5 | GROUNDHOG
Referring Expression Segmentation | RefCOCOg test | Overall IoU | 74.6 | GROUNDHOG
Referring Expression Segmentation | RefCOCO+ val | Overall IoU | 70.5 | GROUNDHOG
Referring Expression Segmentation | RefCOCO+ testB | Overall IoU | 64.9 | GROUNDHOG
Referring Expression Segmentation | RefCOCO+ testA | Overall IoU | 75.0 | GROUNDHOG
Referring Expression Segmentation | RefCOCOg val | Overall IoU | 74.1 | GROUNDHOG
Referring Expression Segmentation | gRefCOCO | gIoU | 66.7 | GROUNDHOG
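The table reports two flavors of the IoU metric: Mean IoU averages per-sample intersection-over-union, while Overall IoU (cumulative IoU) pools intersections and unions across the whole split before dividing. A minimal sketch of the distinction on binary masks (the empty-mask convention below is an assumption, not taken from the paper):

```python
import numpy as np

def mask_iou(pred, gt):
    """Pixel-wise intersection-over-union of two binary masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # assumption: both masks empty counts as a perfect match
    return np.logical_and(pred, gt).sum() / union

def mean_iou(preds, gts):
    """Average of per-sample IoU over a dataset split."""
    return float(np.mean([mask_iou(p, g) for p, g in zip(preds, gts)]))

def overall_iou(preds, gts):
    """Cumulative IoU: total intersection over total union across the split."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return inter / union
```

The two aggregations can disagree noticeably when mask sizes vary: Overall IoU weights large objects more heavily, while Mean IoU treats every sample equally.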

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)