Yubin Wang, Xinyang Jiang, De Cheng, Wenli Sun, Dongsheng Li, Cairong Zhao
Prompt learning has become a prevalent strategy for adapting vision-language foundation models (VLMs) such as CLIP to downstream tasks. With the emergence of large language models (LLMs), recent studies have explored the potential of using category-related descriptions to enhance prompt effectiveness. However, conventional descriptions lack the explicit structured information needed to represent the interconnections among key elements, such as entities or attributes, relevant to a particular category. Since existing prompt tuning methods give little consideration to managing such structured knowledge, this paper advocates leveraging LLMs to construct a graph for each description so that this structured knowledge can be prioritized. Consequently, we propose a novel approach called Hierarchical Prompt Tuning (HPT), which enables simultaneous modeling of both structured and conventional linguistic knowledge. Specifically, we introduce a relationship-guided attention module to capture pairwise associations among entities and attributes for low-level prompt learning. In addition, by incorporating high-level and global-level prompts that model overall semantics, the proposed hierarchical structure forges cross-level interlinks and empowers the model to handle more complex and long-term relationships. Finally, by enhancing multi-granularity knowledge generation, redesigning the relationship-driven attention re-weighting module, and incorporating consistency constraints on the hierarchical text encoder, we propose HPT++, which further improves the performance of HPT. Our experiments are conducted across a wide range of evaluation settings, including base-to-new generalization, cross-dataset evaluation, and domain generalization. Extensive results and ablation studies demonstrate the effectiveness of our methods, which consistently outperform existing state-of-the-art methods.
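The relationship-guided attention module described above can be sketched as attention re-weighting biased by a description graph: pairwise relations among entities and attributes (extracted by an LLM) are added to the attention logits before the softmax. This is a minimal illustrative sketch, not the paper's implementation; the function name, the use of a plain adjacency matrix as the bias, and the single-head formulation are all assumptions.

```python
import numpy as np

def relationship_guided_attention(tokens, relation_graph):
    """Sketch of graph-biased self-attention (hypothetical simplification).

    tokens:         (n, d) embeddings of entities/attributes parsed from an
                    LLM-generated description.
    relation_graph: (n, n) matrix; nonzero where the LLM marked a pairwise
                    relation between two elements, zero otherwise.
    """
    n, d = tokens.shape
    logits = tokens @ tokens.T / np.sqrt(d)   # standard scaled dot-product scores
    logits = logits + relation_graph          # bias scores toward related pairs
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens                   # relation-aware token mixing
```

With all-zero token embeddings, the dot-product scores vanish and the relation graph alone determines the attention pattern, which makes the effect of the structural bias easy to inspect in isolation.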
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Prompt Engineering | ImageNet | Harmonic mean (%) | 74.24 | HPT++ |
| Prompt Engineering | Caltech-101 | Harmonic mean (%) | 96.96 | HPT++ |
| Prompt Engineering | Oxford-IIIT Pet Dataset | Harmonic mean (%) | 96.91 | HPT++ |
| Prompt Engineering | Stanford Cars | Harmonic mean (%) | 75.59 | HPT++ |
| Prompt Engineering | Oxford 102 Flower | Harmonic mean (%) | 85.85 | HPT++ |
| Prompt Engineering | Food-101 | Harmonic mean (%) | 91.09 | HPT++ |
| Prompt Engineering | FGVC-Aircraft | Harmonic mean (%) | 41.33 | HPT++ |
| Prompt Engineering | SUN397 | Harmonic mean (%) | 81.11 | HPT++ |
| Prompt Engineering | DTD | Harmonic mean (%) | 74.23 | HPT++ |
| Prompt Engineering | EuroSAT | Harmonic mean (%) | 87.36 | HPT++ |
| Prompt Engineering | UCF101 | Harmonic mean (%) | 83.81 | HPT++ |
| Prompt Engineering | ImageNet V2 | Top-1 accuracy (%) | 65.31 | HPT++ |
| Prompt Engineering | ImageNet-S | Top-1 accuracy (%) | 49.28 | HPT++ |
| Prompt Engineering | ImageNet-A | Top-1 accuracy (%) | 51.18 | HPT++ |
| Prompt Engineering | ImageNet-R | Top-1 accuracy (%) | 77.52 | HPT++ |
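The harmonic-mean rows above follow the standard base-to-new generalization protocol, where the reported value combines accuracy on base (seen) classes and new (unseen) classes. A minimal helper computing this metric, assuming both accuracies are given as percentages (this helper is illustrative, not code from the paper):

```python
def harmonic_mean(base_acc, new_acc):
    """Harmonic mean of base-class and new-class accuracy (both in %)."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)
```

Because the harmonic mean is dominated by the smaller of the two values, a model cannot score well by overfitting to base classes at the expense of new ones.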