Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Webly Supervised Concept Expansion for General Purpose Vision Models

Amita Kamath, Christopher Clark, Tanmay Gupta, Eric Kolve, Derek Hoiem, Aniruddha Kembhavi

2022-02-04 · Object Categorization · Human-Object Interaction Detection · Referring Expression Comprehension · Object Localization · Visual Question Answering (VQA) · Image Retrieval
Paper · PDF

Abstract

General Purpose Vision (GPV) systems are models that are designed to solve a wide array of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data to learn each concept for every skill quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from supervised datasets, learn concepts from web image search, and leverage a key characteristic of GPVs: the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10k+ visual concepts to demonstrate webly-supervised concept expansion for two existing GPVs (GPV-1 and VL-T5) on 3 benchmarks: 5 COCO-based datasets (80 primary concepts), a newly curated series of 5 datasets based on the OpenImages and VisualGenome repositories (~500 concepts), and the Web-derived dataset (10k+ concepts). We also propose a new architecture, GPV-2, which supports a variety of tasks -- from vision tasks like classification and localization to vision+language tasks like QA and captioning, to more niche ones like human-object interaction detection. GPV-2 benefits hugely from web data and outperforms GPV-1 and VL-T5 across these benchmarks. Our data, code, and web demo are available at https://prior.allenai.org/projects/gpv2.
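The abstract's recipe — skills from supervised prompt templates, concepts from web image search, paired so that knowledge transfers across skills — can be sketched as follows. This is a minimal illustration, not the authors' code: the template strings, function name, and data layout are all assumptions introduced here for clarity.

```python
# Illustrative sketch of webly-supervised concept expansion:
# each web-searchable concept is crossed with each skill prompt,
# producing training examples whose images would come from web
# image search rather than a fully supervised dataset.
from itertools import product

# Skill prompts of the kind learned from supervised datasets
# (hypothetical wording, for illustration only).
SKILL_TEMPLATES = {
    "categorization": "What object is in this image?",
    "localization": "Locate the {concept} in the image.",
    "captioning": "Describe the image of the {concept}.",
}

def build_web_supervision(concepts, templates=SKILL_TEMPLATES):
    """Pair each concept with each skill prompt.

    Returns dicts with a web image-search query for the concept and
    a skill prompt; the model learns the concept from web images and
    transfers the skill learned from supervised data.
    """
    examples = []
    for concept, (skill, template) in product(concepts, templates.items()):
        prompt = template.format(concept=concept) if "{concept}" in template else template
        examples.append({
            "search_query": concept,  # query sent to a web image search engine
            "skill": skill,
            "prompt": prompt,
        })
    return examples

examples = build_web_supervision(["zebra", "accordion"])
# 2 concepts x 3 skills = 6 webly-supervised examples
```

The key point the sketch makes concrete is the cost structure: supervised data grows with concepts × skills, while web supervision only needs one image-search query per concept, with skills transferred for free.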

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | GRIT | VQA (ablation) | 63.5 | GPV-2
Visual Question Answering (VQA) | GRIT | VQA (test) | 63.2 | GPV-2
Visual Question Answering (VQA) | A-OKVQA | DA VQA Score | 40.7 | GPV-2
Visual Question Answering (VQA) | A-OKVQA | MC Accuracy | 53.7 | GPV-2
Object Localization | GRIT | Localization (ablation) | 53.6 | GPV-2
Object Localization | GRIT | Localization (test) | 53.6 | GPV-2
Object Categorization | GRIT | Categorization (ablation) | 54.7 | GPV-2
Object Categorization | GRIT | Categorization (test) | 55.1 | GPV-2

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
RoHOI: Robustness Benchmark for Human-Object Interaction Detection (2025-07-12)
RadiomicsRetrieval: A Customizable Framework for Medical Image Retrieval Using Radiomics Features (2025-07-11)
Bilateral Collaboration with Large Vision-Language Models for Open Vocabulary Human-Object Interaction Detection (2025-07-09)