Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HPT++: Hierarchically Prompting Vision-Language Models with Multi-Granularity Knowledge Generation and Improved Structure Modeling

Yubin Wang, Xinyang Jiang, De Cheng, Wenli Sun, Dongsheng Li, Cairong Zhao

2024-08-27 · Prompt Engineering · Domain Generalization

Abstract

Prompt learning has become a prevalent strategy for adapting vision-language foundation models (VLMs) such as CLIP to downstream tasks. With the emergence of large language models (LLMs), recent studies have explored the potential of using category-related descriptions to enhance prompt effectiveness. However, conventional descriptions lack the explicit structured information necessary to represent the interconnections among key elements, such as entities and attributes, in relation to a particular category. Since existing prompt tuning methods give little consideration to managing structured knowledge, this paper advocates leveraging LLMs to construct a graph for each description in order to prioritize such structured knowledge. Consequently, we propose a novel approach called Hierarchical Prompt Tuning (HPT), enabling simultaneous modeling of both structured and conventional linguistic knowledge. Specifically, we introduce a relationship-guided attention module to capture pair-wise associations among entities and attributes for low-level prompt learning. In addition, by incorporating high-level and global-level prompts that model overall semantics, the proposed hierarchical structure forges cross-level interlinks and empowers the model to handle more complex and long-term relationships. Finally, by enhancing multi-granularity knowledge generation, redesigning the relationship-driven attention re-weighting module, and incorporating consistency constraints on the hierarchical text encoder, we propose HPT++, which further improves the performance of HPT. Our experiments are conducted across a wide range of evaluation settings, including base-to-new generalization, cross-dataset evaluation, and domain generalization. Extensive results and ablation studies demonstrate the effectiveness of our methods, which consistently outperform existing SOTA methods.
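The relationship-guided attention idea in the abstract can be illustrated with a hedged sketch: assuming the LLM-built description graph is available as an adjacency matrix over entity/attribute tokens, pair-wise attention scores receive an additive bias for connected pairs before the softmax. All names here (`relationship_guided_attention`, `adj`, `rel_weight`) are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relationship_guided_attention(q, k, v, adj, rel_weight=1.0):
    """Scaled dot-product attention whose scores are biased by a
    relationship graph: token pairs connected in `adj` (e.g. an entity
    and one of its attributes from the description graph) receive an
    additive bonus before the softmax, so related pairs attend more."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (n, n) pair-wise scores
    scores = scores + rel_weight * adj   # re-weight related pairs
    return softmax(scores, axis=-1) @ v
```

The additive-bias form is one common way to inject graph structure into attention; HPT++'s redesigned re-weighting module may differ in detail.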

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Prompt Engineering | ImageNet-R | Top-1 accuracy (%) | 77.52 | HPT++ |
| Prompt Engineering | Stanford Cars | Harmonic mean | 75.59 | HPT++ |
| Prompt Engineering | Oxford 102 Flower | Harmonic mean | 85.85 | HPT++ |
| Prompt Engineering | EuroSAT | Harmonic mean | 87.36 | HPT++ |
| Prompt Engineering | Oxford-IIIT Pet Dataset | Harmonic mean | 96.91 | HPT++ |
| Prompt Engineering | ImageNet-S | Top-1 accuracy (%) | 49.28 | HPT++ |
| Prompt Engineering | DTD | Harmonic mean | 74.23 | HPT++ |
| Prompt Engineering | UCF101 | Harmonic mean | 83.81 | HPT++ |
| Prompt Engineering | Food-101 | Harmonic mean | 91.09 | HPT++ |
| Prompt Engineering | Caltech-101 | Harmonic mean | 96.96 | HPT++ |
| Prompt Engineering | ImageNet | Harmonic mean | 74.24 | HPT++ |
| Prompt Engineering | FGVC-Aircraft | Harmonic mean | 41.33 | HPT++ |
| Prompt Engineering | SUN397 | Harmonic mean | 81.11 | HPT++ |
| Prompt Engineering | ImageNet-A | Top-1 accuracy (%) | 51.18 | HPT++ |
| Prompt Engineering | ImageNet V2 | Top-1 accuracy (%) | 65.31 | HPT++ |
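The "Harmonic mean" metric in base-to-new generalization benchmarks combines accuracy on base (seen) classes with accuracy on new (unseen) classes; unlike an arithmetic mean, it penalizes a large gap between the two. A minimal illustration (the per-dataset base/new accuracies behind the values above are not listed here, so the inputs below are made up):

```python
def harmonic_mean(base_acc: float, new_acc: float) -> float:
    """Harmonic mean of base-class and new-class accuracy (in %),
    the standard summary metric for base-to-new generalization."""
    if base_acc <= 0 or new_acc <= 0:
        return 0.0  # harmonic mean collapses to 0 if either term is 0
    return 2 * base_acc * new_acc / (base_acc + new_acc)
```

For example, a model scoring 80% on base classes but only 60% on new classes gets a harmonic mean of about 68.57, below the arithmetic mean of 70.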

Related Papers

- Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
- Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
- Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
- InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
- Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges (2025-07-13)
- From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)