Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Towards Multi-pose Guided Virtual Try-on Network

Haoye Dong, Xiaodan Liang, Bochao Wang, Hanjiang Lai, Jia Zhu, Jian Yin

2019-02-28 · ICCV 2019
Tasks: Virtual Try-on · Human Parsing · Fashion Synthesis · Image-to-Image Translation
Paper · PDF

Abstract

A virtual try-on system that works under arbitrary human poses has huge application potential, yet raises many challenges, e.g. self-occlusion, heavy misalignment among diverse poses, and diverse clothes textures. Existing methods that aim at fitting new clothes onto a person can only transfer clothes at a fixed human pose, and still show unsatisfactory performance: they often fail to preserve identity, lose texture details, and decrease the diversity of poses. In this paper, we make the first attempt towards a multi-pose guided virtual try-on system, which enables transferring clothes onto a person image under diverse poses. Given an input person image, a desired clothes image, and a desired pose, the proposed Multi-pose Guided Virtual Try-on Network (MG-VTON) generates a new person image by fitting the desired clothes onto the input image and manipulating the human pose. MG-VTON is constructed in three stages: 1) a desired human parsing map of the target image is synthesized to match both the desired pose and the desired clothes shape; 2) a deep Warping Generative Adversarial Network (Warp-GAN) warps the desired clothes appearance onto the synthesized human parsing map and alleviates the misalignment between the input and desired human poses; 3) a refinement renderer utilizing multi-pose composition masks recovers the texture details of the clothes and removes artifacts. Extensive experiments on well-known datasets and on our newly collected benchmark, the largest for virtual try-on, demonstrate that MG-VTON significantly outperforms all state-of-the-art methods both qualitatively and quantitatively, with promising multi-pose virtual try-on performance.
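The three-stage pipeline described in the abstract can be sketched as a simple orchestration of stage functions. The sketch below is purely illustrative: the stage functions are hypothetical stubs standing in for the trained networks (parsing synthesis, Warp-GAN, refinement render), not the authors' implementation; shapes and the 18-channel pose-heatmap convention are assumptions.

```python
import numpy as np

def synthesize_parsing_map(person, clothes, target_pose):
    # Stage 1 (hypothetical stub): synthesize a human parsing map that
    # matches both the desired pose and the desired clothes shape.
    h, w = target_pose.shape[:2]
    return np.zeros((h, w), dtype=np.int64)  # one segmentation label per pixel

def warp_gan(clothes, parsing_map, src_pose, target_pose):
    # Stage 2 (hypothetical stub): warp the clothes appearance onto the
    # synthesized parsing map, alleviating source/target pose misalignment.
    h, w = parsing_map.shape
    return np.zeros((h, w, 3), dtype=np.float32)  # coarse RGB result

def refinement_render(coarse, clothes, target_pose):
    # Stage 3 (hypothetical stub): blend via a multi-pose composition mask
    # to recover clothes texture details and remove artifacts.
    mask = np.ones(coarse.shape[:2], dtype=np.float32)[..., None]
    return mask * coarse + (1.0 - mask) * coarse

def mg_vton(person, clothes, src_pose, target_pose):
    # Run the three stages in sequence.
    parsing = synthesize_parsing_map(person, clothes, target_pose)
    coarse = warp_gan(clothes, parsing, src_pose, target_pose)
    return refinement_render(coarse, clothes, target_pose)

# Toy inputs: 256x192 RGB images, 18 pose-keypoint heatmaps (assumed convention).
person = np.random.rand(256, 192, 3).astype(np.float32)
clothes = np.random.rand(256, 192, 3).astype(np.float32)
src_pose = np.zeros((256, 192, 18), dtype=np.float32)
tgt_pose = np.zeros((256, 192, 18), dtype=np.float32)
out = mg_vton(person, clothes, src_pose, tgt_pose)
print(out.shape)  # (256, 192, 3)
```

The point of the sketch is the data flow: the parsing map conditions the warping stage, and the warped coarse result is refined by mask-based composition rather than regenerated from scratch.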

Results

Task                    | Dataset      | Metric | Value | Model
Virtual Try-on          | Deep-Fashion | IS     | 3.03  | MG-VTON
Virtual Try-on          | Deep-Fashion | SSIM   | 0.744 | MG-VTON
1 Image, 2*2 Stitchi    | Deep-Fashion | IS     | 3.03  | MG-VTON
1 Image, 2*2 Stitchi    | Deep-Fashion | SSIM   | 0.744 | MG-VTON
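The SSIM values above measure structural similarity between the generated image and the ground truth (higher is better, maximum 1.0). Reported benchmark numbers use the standard windowed SSIM; as a minimal illustration of the formula, here is a simplified single-window (global) variant, not the evaluation code used for the table:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    # Simplified global SSIM over the whole image (one window).
    # Standard stabilizing constants from the SSIM definition.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.random.rand(64, 64)
print(round(ssim_global(a, a), 4))  # identical images score 1.0
b = np.clip(a + 0.1 * np.random.rand(64, 64), 0.0, 1.0)
print(ssim_global(a, b) < 1.0)  # a perturbed copy scores below 1.0
```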

Related Papers

TalkFashion: Intelligent Virtual Try-On Assistant Based on Multimodal Large Language Model (2025-07-08)
CycleVAR: Repurposing Autoregressive Model for Unsupervised One-Step Image Translation (2025-06-29)
Video Virtual Try-on with Conditional Diffusion Transformer Inpainter (2025-06-26)
ThermalDiffusion: Visual-to-Thermal Image-to-Image Translation for Autonomous Navigation (2025-06-26)
Transforming H&E images into IHC: A Variance-Penalized GAN for Precision Oncology (2025-06-23)
Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments (2025-06-14)
Low-Barrier Dataset Collection with Real Human Body for Interactive Per-Garment Virtual Try-On (2025-06-12)
Optimal Transport Driven Asymmetric Image-to-Image Translation for Nuclei Segmentation of Histological Images (2025-06-08)