Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Robust Visual-Semantic Embeddings

Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov

2017-03-17 · ICCV 2017 · Representation Learning · Generalized Few-Shot Learning · Retrieval

Paper · PDF

Abstract

Many of the existing methods for learning joint embeddings of images and text use only supervised information from paired images and their textual attributes. Taking advantage of the recent success of unsupervised learning in deep neural networks, we propose an end-to-end learning framework that is able to extract more robust multi-modal representations across domains. The proposed method combines representation learning models (i.e., auto-encoders) together with cross-domain learning criteria (i.e., Maximum Mean Discrepancy loss) to learn joint embeddings for semantic and visual features. A novel technique of unsupervised-data adaptation inference is introduced to construct more comprehensive embeddings for both labeled and unlabeled data. We evaluate our method on the Animals with Attributes and Caltech-UCSD Birds 200-2011 datasets with a wide range of applications, including zero- and few-shot image recognition and retrieval, from inductive to transductive settings. Empirically, we show that our framework improves over the current state of the art on many of the considered tasks.
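The cross-domain criterion named in the abstract, the Maximum Mean Discrepancy (MMD) loss, compares two sets of embeddings via kernel mean statistics: MMD²(X, Y) = E[k(x, x′)] + E[k(y, y′)] − 2·E[k(x, y)]. Below is a minimal NumPy sketch of a biased RBF-kernel MMD² estimate; it is an illustration of the general technique, not the authors' implementation, and the `gamma` bandwidth is an assumed hyperparameter.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances via broadcasting,
    # then the Gaussian (RBF) kernel matrix exp(-gamma * d^2).
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    samples x (n, d) and y (m, d): mean k(x,x) + mean k(y,y) - 2 mean k(x,y)."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

Minimizing this quantity between the visual and semantic embedding batches pulls the two distributions together in the shared space; the estimate is zero when both samples are identical and grows as the distributions separate.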

Results

Task                          | Dataset | Metric                      | Value | Model
Generalized Few-Shot Learning | AwA2    | Per-Class Accuracy (1-shot)  | 56.1  | REVISE
Generalized Few-Shot Learning | AwA2    | Per-Class Accuracy (2-shots) | 60.3  | REVISE
Generalized Few-Shot Learning | AwA2    | Per-Class Accuracy (5-shots) | 64.1  | REVISE
Generalized Few-Shot Learning | AwA2    | Per-Class Accuracy (10-shots)| 67.8  | REVISE
Generalized Few-Shot Learning | CUB     | Per-Class Accuracy (1-shot)  | 36.3  | REVISE
Generalized Few-Shot Learning | CUB     | Per-Class Accuracy (2-shots) | 41.1  | REVISE
Generalized Few-Shot Learning | CUB     | Per-Class Accuracy (5-shots) | 44.6  | REVISE
Generalized Few-Shot Learning | CUB     | Per-Class Accuracy (10-shots)| 50.9  | REVISE
Generalized Few-Shot Learning | SUN     | Per-Class Accuracy (1-shot)  | 27.4  | REVISE
Generalized Few-Shot Learning | SUN     | Per-Class Accuracy (2-shots) | 33.4  | REVISE
Generalized Few-Shot Learning | SUN     | Per-Class Accuracy (5-shots) | 37.4  | REVISE
Generalized Few-Shot Learning | SUN     | Per-Class Accuracy (10-shots)| 40.8  | REVISE
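The metric reported above, per-class accuracy, averages the accuracy computed within each class so that rare classes weigh as much as frequent ones (unlike plain sample-level accuracy). A short sketch of how such a metric is typically computed, assuming integer class labels:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred):
    # For each class present in y_true, compute the accuracy on that
    # class's samples alone, then average across classes so every
    # class contributes equally regardless of its test-set size.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(accs))
```

For example, predicting the majority class everywhere on a 3:1 imbalanced two-class set yields 0.5 per-class accuracy (1.0 on the majority class, 0.0 on the minority class), while plain accuracy would report 0.75.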

Related Papers

- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)