Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Symmetry and Group in Attribute-Object Compositions

Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu

2020-04-01 · CVPR 2020
Tasks: Attribute · Zero-Shot Learning · Compositional Zero-Shot Learning
Links: Paper · PDF · Code (official)

Abstract

Attributes and objects can be composed into diverse compositions. To model the compositional nature of these general concepts, it is a good choice to learn them through transformations, such as coupling and decoupling. However, complex transformations need to satisfy specific principles to guarantee rationality. In this paper, we first propose a previously ignored principle of attribute-object transformation: Symmetry. For example, coupling peeled-apple with the attribute peeled should still result in peeled-apple, and decoupling peeled from apple should still output apple. Incorporating the symmetry principle, a transformation framework inspired by group theory is built, i.e. SymNet. SymNet consists of two modules, a Coupling Network and a Decoupling Network. With the group axioms and the symmetry property as objectives, we adopt deep neural networks to implement SymNet and train it in an end-to-end paradigm. Moreover, we propose a Relative Moving Distance (RMD) based recognition method that utilizes the attribute change, rather than the attribute pattern itself, to classify attributes. Our symmetry learning can be utilized for the Compositional Zero-Shot Learning task and outperforms the state-of-the-art on widely-used benchmarks. Code is available at https://github.com/DirtyHarryLYL/SymNet.
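The abstract's two key ideas — the symmetry objective for coupling/decoupling, and RMD-based attribute recognition — can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the affine `couple`/`decouple` maps, the L2 losses, and the RMD sign convention below are all assumptions; in SymNet both transformations are deep networks trained end-to-end with the group axioms as additional objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding dimension (assumption)

# Stand-ins for SymNet's Coupling Network (CoN) and Decoupling Network
# (DecoN): each maps an (object embedding, attribute embedding) pair to a
# new object embedding. Random affine maps here, purely for illustration.
W_c = rng.normal(size=(2 * D, D)) * 0.1
W_d = rng.normal(size=(2 * D, D)) * 0.1

def couple(obj, attr):
    """CoN: attach attribute `attr` to object embedding `obj`."""
    return np.tanh(np.concatenate([obj, attr]) @ W_c)

def decouple(obj, attr):
    """DecoN: remove attribute `attr` from object embedding `obj`."""
    return np.tanh(np.concatenate([obj, attr]) @ W_d)

def symmetry_loss(obj, attr):
    """Symmetry principle from the abstract, as two L2 penalties:
    - coupling `attr` onto an object that already has it (peeled-apple)
      should change nothing;
    - decoupling `attr` from an object that lacks it (apple) should
      change nothing."""
    with_attr = couple(obj, attr)
    keep = np.linalg.norm(couple(with_attr, attr) - with_attr)
    stay = np.linalg.norm(decouple(obj, attr) - obj)
    return keep + stay

def rmd_score(img_emb, attr):
    """Relative Moving Distance: classify an attribute by how far the
    embedding moves under coupling vs. decoupling, rather than by the
    attribute pattern itself. A larger decoupling shift suggests the
    attribute is present (sign convention is an assumption here)."""
    d_plus = np.linalg.norm(couple(img_emb, attr) - img_emb)
    d_minus = np.linalg.norm(decouple(img_emb, attr) - img_emb)
    return d_minus - d_plus

obj = rng.normal(size=D)
attr = rng.normal(size=D)
print(symmetry_loss(obj, attr), rmd_score(obj, attr))
```

In training, `symmetry_loss` would be minimized jointly with the group-axiom objectives over the network weights; at test time, `rmd_score` would be evaluated per candidate attribute and the highest-scoring composition predicted.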

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Zero-Shot Learning | MIT-States (generalized split) | H-Mean | 16.1 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Seen accuracy | 24.4 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Test AUC top 1 | 3.0 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Test AUC top 2 | 7.6 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Test AUC top 3 | 12.3 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Unseen accuracy | 25.2 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Val AUC top 1 | 4.3 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Val AUC top 2 | 9.8 | SymNet |
| Zero-Shot Learning | MIT-States (generalized split) | Val AUC top 3 | 14.8 | SymNet |
| Zero-Shot Learning | MIT-States | Top-1 accuracy (%) | 19.9 | SymNet |
| Zero-Shot Learning | MIT-States | Top-2 accuracy (%) | 28.2 | SymNet |
| Zero-Shot Learning | MIT-States | Top-3 accuracy (%) | 33.8 | SymNet |
| Zero-Shot Learning | UT-Zappos | Top-1 accuracy (%) | 52.1 | SymNet |
| Zero-Shot Learning | UT-Zappos | Top-2 accuracy (%) | 67.8 | SymNet |
| Zero-Shot Learning | UT-Zappos | Top-3 accuracy (%) | 76.0 | SymNet |

Related Papers

- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
- Non-Adaptive Adversarial Face Generation (2025-07-16)
- Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
- COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
- DEARLi: Decoupled Enhancement of Recognition and Localization for Semi-supervised Panoptic Segmentation (2025-07-14)
- Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models (2025-07-13)
- Model Parallelism With Subnetwork Data Parallelism (2025-07-11)