Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FETA: Towards Specializing Foundation Models for Expert Task Applications

Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig, Eli Schwartz, Rameswar Panda, Michele Dolfi, Christoph Auer, Kate Saenko, Peter W. J. Staar, Rogerio Feris, Leonid Karlinsky

2022-09-08 · Few-Shot Learning · Image-Text Retrieval · Text Retrieval · Image-to-Text · Domain Generalization · One-Shot Learning · Image-to-Text Retrieval · Retrieval · Zero-Shot Learning · Zero-Shot Image Retrieval · Image Retrieval

Paper · PDF · Code (official)

Abstract

Foundation Models (FMs) have demonstrated unprecedented capabilities including zero-shot learning, high fidelity data synthesis, and out of domain generalization. However, as we show in this paper, FMs still have poor out-of-the-box performance on expert tasks (e.g. retrieval of car manuals technical illustrations from language queries), data for which is either unseen or belonging to a long-tail part of the data distribution of the huge datasets used for FM pre-training. This underlines the necessity to explicitly evaluate and finetune FMs on such expert tasks, arguably ones that appear the most in practical real-world applications. In this paper, we propose a first of its kind FETA benchmark built around the task of teaching FMs to understand technical documentation, via learning to match their graphical illustrations to corresponding language descriptions. Our FETA benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales catalogue brochures. FETA is equipped with a procedure for completely automatic annotation extraction (code would be released upon acceptance), allowing easy extension of FETA to more documentation types and application domains in the future. Our automatic annotation leads to an automated performance metric shown to be consistent with metrics computed on human-curated annotations (also released). We provide multiple baselines and analysis of popular FMs on FETA leading to several interesting findings that we believe would be very valuable to the FM community, paving the way towards real-world application of FMs for practical expert tasks currently 'overlooked' by standard benchmarks focusing on common objects.
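The core task the abstract describes, matching a technical illustration to its language description (and vice versa), reduces to nearest-neighbor search over paired embeddings. Below is a minimal sketch of that retrieval step, assuming pre-computed, CLIP-style query and gallery embeddings; the function name and the NumPy implementation are illustrative and not taken from the FETA codebase.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=5):
    """Return the indices of the top-k gallery items for one query,
    ranked by cosine similarity of L2-normalized embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery item
    return np.argsort(-sims)[:k]      # highest similarity first
```

For text-to-image retrieval the query would be a text embedding and the gallery the illustration embeddings; for image-to-text retrieval the roles are swapped.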

Results

Task                      Dataset            Metric   Value   Model
Image Retrieval           FETA Car-Manuals   R@1      29      FETA's CLIP-MIL (Many-Shot Image-to-text)
Image Retrieval           FETA Car-Manuals   R@5      59.9    FETA's CLIP-MIL (Many-Shot Image-to-text)
Image Retrieval           FETA Car-Manuals   R@10     72.6    FETA's CLIP-MIL (Many-Shot Image-to-text)
Image-to-Text Retrieval   FETA Car-Manuals   R@1      35.5    FETA's CLIP-MIL (Many-Shot Image-to-text)
Image-to-Text Retrieval   FETA Car-Manuals   R@5      58.3    FETA's CLIP-MIL (Many-Shot Image-to-text)
Image-to-Text Retrieval   FETA Car-Manuals   R@10     67      FETA's CLIP-MIL (Many-Shot Image-to-text)
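The R@K (Recall at K) values above measure the fraction of queries whose ground-truth match appears among the top-K retrieved items. A minimal sketch of that metric, assuming a square similarity matrix where query i's correct gallery item is index i; this is a standard formulation, not code from the paper:

```python
import numpy as np

def recall_at_k(sim, k):
    """sim[i, j] is the similarity of query i to gallery item j;
    the ground-truth match for query i is gallery item i.
    Returns the fraction of queries whose match is in the top-k."""
    ranks = np.argsort(-sim, axis=1)                  # gallery indices, best first
    targets = np.arange(sim.shape[0])[:, None]        # ground-truth index per query
    hits = (ranks[:, :k] == targets).any(axis=1)      # match within top-k?
    return hits.mean()
```

By this definition R@1 <= R@5 <= R@10, which the table's rows are consistent with for both retrieval directions.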

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval (2025-07-17)