Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model

Fabian Deuser, Konrad Habel, Philipp J. Rösch, Norbert Oswald

Published: 2022-06-10 · Tags: Question Answering · Visual Question Answering (VQA) · Visual Question Answering · Task 2

Links: Paper · PDF

Abstract

Current architectures for multi-modality tasks such as visual question answering suffer from their high complexity. As a result, these architectures are difficult to train and require high computational resources. To address these problems we present a CLIP-based architecture that does not require any fine-tuning of the feature extractors. A simple linear classifier is used on the concatenated features of the image and text encoder. During training an auxiliary loss is added which operates on the answer types. The resulting classification is then used as an attention gate on the answer class selection. On the VizWiz 2022 Visual Question Answering Challenge we achieve 60.15% accuracy on Task 1: Predict Answer to a Visual Question and an AP score of 83.78% on Task 2: Predict Answerability of a Visual Question.
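The architecture described above can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the authors' code: the feature dimension, number of answer classes, number of answer types, and the answer-to-type mapping are all assumed for demonstration. The key ideas from the abstract are reproduced: a single linear classifier on the concatenated (frozen) CLIP image and text features, an auxiliary head over answer types, and the predicted type probabilities used as an attention gate on the answer class scores.

```python
import torch
import torch.nn as nn

class CLIPLinearVQA(nn.Module):
    """Sketch of the paper's idea: frozen CLIP features -> linear heads.

    All sizes (feat_dim, num_answers, num_answer_types) and the
    answer-to-type mapping are illustrative assumptions.
    """
    def __init__(self, feat_dim=512, num_answers=1000, num_answer_types=4):
        super().__init__()
        # Linear classifier on the concatenated image+text features.
        self.answer_head = nn.Linear(2 * feat_dim, num_answers)
        # Auxiliary head predicting the answer type; during training it
        # would receive its own classification loss.
        self.type_head = nn.Linear(2 * feat_dim, num_answer_types)
        # Assumed lookup from each answer class to its answer type,
        # used to gate each answer logit by its type's probability.
        self.register_buffer(
            "answer_to_type",
            torch.randint(0, num_answer_types, (num_answers,)),
        )

    def forward(self, img_feat, txt_feat):
        x = torch.cat([img_feat, txt_feat], dim=-1)
        type_logits = self.type_head(x)              # auxiliary-loss output
        type_probs = type_logits.softmax(dim=-1)
        gate = type_probs[:, self.answer_to_type]    # (batch, num_answers)
        answer_scores = self.answer_head(x).softmax(dim=-1) * gate
        return answer_scores, type_logits

# Usage with stand-in "CLIP" features (the encoders stay frozen):
model = CLIPLinearVQA()
img, txt = torch.randn(2, 512), torch.randn(2, 512)
scores, type_logits = model(img, txt)
```

Because only the two linear heads are trained, the model is cheap to fit compared with end-to-end multimodal transformers, which is the paper's central point.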

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | VizWiz 2020 VQA | overall | 61.64 | CLIP-Ensemble
Visual Question Answering (VQA) | VizWiz 2020 VQA | overall | 60.66 | CLIP-Single
Visual Question Answering (VQA) | VizWiz 2020 Answerability | average_precision | 84.13 | CLIP-Ensemble
Visual Question Answering (VQA) | VizWiz 2020 Answerability | average_precision | 82.86 | CLIP-Single

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)