Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


OVMR: Open-Vocabulary Recognition with Multi-Modal References

Zehong Ma, Shiliang Zhang, Longhui Wei, Qi Tian

2024-06-07 · CVPR 2024 · Open Vocabulary Object Detection
Paper · PDF · Code (official)

Abstract

The challenge of open-vocabulary recognition lies in the model having no clue about the new categories it is applied to. Existing works have proposed different methods to embed category cues into the model, e.g., through few-shot fine-tuning or by providing category names or textual descriptions to Vision-Language Models. Fine-tuning is time-consuming and degrades the generalization capability, while textual descriptions can be ambiguous and fail to depict visual details. This paper tackles open-vocabulary recognition from a different perspective by referring to multi-modal clues composed of textual descriptions and exemplar images. Our method, named OVMR, adopts two innovative components to pursue a more robust embedding of category cues. A multi-modal classifier is first generated by dynamically complementing textual descriptions with image exemplars. A preference-based refinement module is then applied to fuse the uni-modal and multi-modal classifiers, aiming to alleviate issues caused by low-quality exemplar images or textual descriptions. The proposed OVMR is a plug-and-play module and works well with exemplar images randomly crawled from the Internet. Extensive experiments demonstrate the promising performance of OVMR; e.g., it outperforms existing methods across various scenarios and setups. Code is publicly available at https://github.com/Zehong-Ma/OVMR.
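The two-stage idea in the abstract can be sketched in a few lines. This is a minimal illustrative stand-in, not OVMR's actual implementation: the mean-pooling fusion, the fixed scalar `preference`, and all function names below are assumptions; the paper's components (the dynamic multi-modal classifier generator and the preference-based refinement module) are learned, not hand-coded like this.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x, axis=-1):
    """Normalize vectors to unit length along `axis`."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def multimodal_class_weight(text_emb, exemplar_embs):
    """Build one category's multi-modal classifier weight by complementing
    the text embedding with its exemplar image embeddings.
    (Illustrative stand-in for OVMR's learned dynamic fusion.)"""
    visual = l2norm(exemplar_embs).mean(axis=0)  # pool K exemplar embeddings
    return l2norm(l2norm(text_emb) + visual)

def preference_fused_logits(query, text_weights, mm_weights, preference):
    """Mix logits from the uni-modal (text-only) and multi-modal classifiers.
    `preference` in [0, 1] stands in for the preference-based refinement
    weights; here it is a fixed scalar rather than a learned, per-sample one."""
    uni = query @ l2norm(text_weights).T
    multi = query @ mm_weights.T
    return preference * multi + (1.0 - preference) * uni

# Toy setup: 5 categories, 512-d embeddings, 4 exemplar images per category.
C, D, K = 5, 512, 4
text = rng.standard_normal((C, D))            # one text embedding per category
exemplars = rng.standard_normal((C, K, D))    # K exemplar embeddings per category
mm = np.stack([multimodal_class_weight(text[c], exemplars[c]) for c in range(C)])
query = l2norm(rng.standard_normal((1, D)))   # a query region/image embedding
logits = preference_fused_logits(query, text, mm, preference=0.6)
```

Because the fused classifier is just a weight matrix applied to query embeddings, it can be dropped into an existing detector's classification head, which is consistent with the abstract's claim that OVMR is plug-and-play.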

Results

Task                             | Dataset   | Metric                        | Value | Model
---------------------------------|-----------|-------------------------------|-------|------
Object Detection                 | LVIS v1.0 | AP novel (LVIS base training) | 34.4  | OVMR
Open Vocabulary Object Detection | LVIS v1.0 | AP novel (LVIS base training) | 34.4  | OVMR

Related Papers

ATAS: Any-to-Any Self-Distillation for Enhanced Open-Vocabulary Dense Prediction (2025-06-10)
Gen-n-Val: Agentic Image Data Generation and Validation (2025-06-05)
From Data to Modeling: Fully Open-vocabulary Scene Graph Generation (2025-05-26)
FG-CLIP: Fine-Grained Visual and Textual Alignment (2025-05-08)
VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model (2025-04-10)
An Iterative Feedback Mechanism for Improving Natural Language Class Descriptions in Open-Vocabulary Object Detection (2025-03-21)
Superpowering Open-Vocabulary Object Detectors for X-ray Vision (2025-03-21)
Fine-Grained Open-Vocabulary Object Detection with Fined-Grained Prompts: Task, Dataset and Benchmark (2025-03-19)