Description
Pretext-Invariant Representation Learning (PIRL, pronounced "pearl") learns representations that are invariant to the transformations applied in a pretext task. It is commonly instantiated with the Jigsaw puzzle pretext task: rather than predicting properties of the applied transformation, PIRL constructs image representations that are similar to the representations of transformed versions of the same image and dissimilar to the representations of other images.
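The invariance objective above can be sketched as a noise-contrastive loss: the representation of an image and that of its jigsaw-transformed version form the positive pair, while cached representations of other images act as negatives. The following is a minimal, simplified sketch in NumPy; the function name `pirl_nce_loss`, the use of a plain array as the memory bank, and the default temperature are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def l2_normalize(x):
    # Project each row onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def pirl_nce_loss(v_image, v_transformed, memory_bank, temperature=0.07):
    """Simplified noise-contrastive loss in the spirit of PIRL (illustrative sketch).

    v_image:       (B, D) representations of the original images
    v_transformed: (B, D) representations of the jigsaw-transformed images
    memory_bank:   (N, D) cached representations of other images, used as negatives
    """
    v_image = l2_normalize(v_image)
    v_transformed = l2_normalize(v_transformed)
    negatives = l2_normalize(memory_bank)

    # Similarity of each image to its own transformed version (the positive) ...
    pos = np.sum(v_image * v_transformed, axis=1, keepdims=True) / temperature  # (B, 1)
    # ... and to every memory-bank entry (the negatives).
    neg = v_image @ negatives.T / temperature                                   # (B, N)
    logits = np.concatenate([pos, neg], axis=1)

    # Cross-entropy with the positive placed in column 0 (numerically stabilized).
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[:, 0].mean())
```

Minimizing this loss pulls an image's representation toward that of its transformed version and pushes it away from the memory-bank negatives, which is the invariance property the description refers to.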
Papers Using This Method
- Physics-Guided Actor-Critic Reinforcement Learning for Swimming in Turbulence (2024-06-05)
- A Survey on Physics Informed Reinforcement Learning: Review and Open Problems (2023-09-05)
- Synthesizing Programmatic Policies with Actor-Critic Algorithms and ReLU Networks (2023-08-04)
- Digital Twin-Enhanced Wireless Indoor Navigation: Achieving Efficient Environment Sensing with Zero-Shot Reinforcement Learning (2023-06-11)
- Self-Supervised Learning for Fine-Grained Visual Categorization (2021-05-18)
- How Well Do Self-Supervised Models Transfer? (2020-11-26)
- Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases (2020-07-28)
- Self-Supervised Learning of Pretext-Invariant Representations (2019-12-04)