Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ScaleNAS: One-Shot Learning of Scale-Aware Representations for Visual Recognition

Hsin-Pai Cheng, Feng Liang, Meng Li, Bowen Cheng, Feng Yan, Hai Li, Vikas Chandra, Yiran Chen

2020-11-30 · Semantic Segmentation · Neural Architecture Search · Pose Estimation · Multi-Person Pose Estimation · One-Shot Learning

Paper · PDF

Abstract

Scale variance among different sizes of body parts and objects is a challenging problem for visual recognition tasks. Existing works usually design a dedicated backbone or apply Neural Architecture Search (NAS) for each task to tackle this challenge. However, such approaches impose significant limitations on the design or search space. To solve these problems, we present ScaleNAS, a one-shot learning method for exploring scale-aware representations. ScaleNAS solves multiple tasks at a time by searching multi-scale feature aggregation. ScaleNAS adopts a flexible search space that allows an arbitrary number of blocks and cross-scale feature fusions. To cope with the high search cost incurred by the flexible space, ScaleNAS employs one-shot learning for a multi-scale supernet, driven by grouped sampling and evolutionary search. Without further retraining, the resulting ScaleNet models can be directly deployed for different visual recognition tasks with superior performance. We use ScaleNAS to create high-resolution models for two different tasks: ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation. ScaleNet-P and ScaleNet-S outperform existing manually crafted and NAS-based methods in both tasks. When applied to bottom-up human pose estimation, ScaleNet-P surpasses the state-of-the-art HigherHRNet. In particular, ScaleNet-P4 achieves 71.6% AP on COCO test-dev, setting a new state-of-the-art result.
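
As a rough illustration of the recipe the abstract describes (a supernet over variable block counts and cross-scale fusions, trained with grouped sampling, then searched evolutionarily), here is a minimal, framework-free Python sketch. The encoding, function names, and the toy fitness are illustrative assumptions, not the paper's actual implementation.

import random

BRANCHES = 4      # parallel resolution branches, as in HRNet-style backbones
MAX_BLOCKS = 4    # illustrative cap on blocks per branch (an assumption)

def random_subnet(rng):
    # Encode a subnet as per-branch block counts plus a BRANCHES x BRANCHES
    # matrix of cross-scale fusion flags (fuse branch j into branch k).
    return {
        "blocks": [rng.randint(1, MAX_BLOCKS) for _ in range(BRANCHES)],
        "fusions": [[rng.random() < 0.5 for _ in range(BRANCHES)]
                    for _ in range(BRANCHES)],
    }

def group_of(subnet):
    # Bucket subnets by total depth; yields groups 1..MAX_BLOCKS.
    return sum(subnet["blocks"]) // BRANCHES

def sample_in_group(depth_group, rng):
    # Grouped sampling: draw a subnet whose depth falls in the requested
    # group, so each training step sees an architecture of similar capacity.
    while True:
        subnet = random_subnet(rng)
        if group_of(subnet) == depth_group:
            return subnet

def train_supernet(steps=8, seed=0):
    rng = random.Random(seed)
    for step in range(steps):
        group = 1 + step % MAX_BLOCKS   # cycle over depth groups 1..4
        subnet = sample_in_group(group, rng)
        # A real system would run one weight-sharing forward/backward
        # pass through exactly this path of the supernet here.
        print(f"step {step}: trained a group-{group} subnet")

def fitness(subnet):
    # Toy stand-in; the paper instead evaluates weight-shared subnets
    # on validation metrics (e.g. AP or mIoU).
    return 2.0 * sum(subnet["blocks"]) - 0.5 * sum(map(sum, subnet["fusions"]))

def mutate(parent, rng):
    child = {"blocks": list(parent["blocks"]),
             "fusions": [list(row) for row in parent["fusions"]]}
    i = rng.randrange(BRANCHES)
    child["blocks"][i] = rng.randint(1, MAX_BLOCKS)
    j, k = rng.randrange(BRANCHES), rng.randrange(BRANCHES)
    child["fusions"][j][k] = not child["fusions"][j][k]
    return child

def evolutionary_search(generations=10, pop_size=16, seed=0):
    rng = random.Random(seed)
    pop = [random_subnet(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]   # keep the top quarter as parents
        pop = parents + [mutate(rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    train_supernet()
    print("best subnet found:", evolutionary_search())

Cycling through depth groups keeps weight-sharing updates balanced across subnet capacities, which is roughly the role the abstract attributes to grouped sampling in taming the flexible search space.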

Results

Task                         | Dataset       | Metric        | Value | Model
Pose Estimation              | COCO test-dev | AP            | 71.6  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | AP50          | 90.3  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | AP75          | 78.2  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | APL           | 77.2  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | APM           | 67.5  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | AR            | 76    | HigherHRNet (ScaleNet_P4)
Pose Estimation              | COCO test-dev | AR50          | 92.3  | HigherHRNet (ScaleNet_P4)
Pose Estimation              | CrowdPose     | mAP @0.5:0.95 | 71.3  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | AP            | 71.6  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | AP50          | 90.3  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | AP75          | 78.2  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | APL           | 77.2  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | APM           | 67.5  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | AR            | 76    | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | COCO test-dev | AR50          | 92.3  | HigherHRNet (ScaleNet_P4)
Multi-Person Pose Estimation | CrowdPose     | mAP @0.5:0.95 | 71.3  | HigherHRNet (ScaleNet_P4)

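For reference on the metrics in the table above: COCO keypoint AP averages precision over object keypoint similarity (OKS) thresholds from 0.50 to 0.95 in steps of 0.05; AP50 and AP75 fix the threshold at 0.50 and 0.75, while APM and APL restrict evaluation to medium and large instances. Below is a minimal sketch of the OKS computation itself; the function signature is an assumption for illustration, not the official pycocotools implementation.

import math

def oks(pred, gt, visible, area, k):
    # Object keypoint similarity between a predicted and a ground-truth
    # pose. pred/gt: (x, y) pairs; visible: 0/1 flags; area: object
    # segment area (the scale term s^2); k: per-keypoint falloff constants.
    num, den = 0.0, 0
    for (px, py), (gx, gy), v, ki in zip(pred, gt, visible, k):
        if not v:
            continue
        d2 = (px - gx) ** 2 + (py - gy) ** 2
        num += math.exp(-d2 / (2 * area * ki ** 2))
        den += 1
    return num / den if den else 0.0

# A prediction matches a ground truth at threshold t when oks(...) >= t;
# AP is then the precision averaged over t in {0.50, 0.55, ..., 0.95}.
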
Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)