Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, Philip Torr
Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, which constructs hierarchical residual-like connections within a single residual block. Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of Res2Net over state-of-the-art baseline methods. The source code and trained models are available at https://mmcheng.net/res2net/.
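The hierarchical residual-like connections inside a Res2Net block can be sketched as follows. After the first 1x1 convolution, the feature maps are split channel-wise into `scale` groups x_1..x_s; the first group passes through unchanged, and each subsequent group is convolved after adding the previous group's output (y_1 = x_1, y_2 = K_2(x_2), y_i = K_i(x_i + y_{i-1})), so later groups see progressively larger receptive fields. This is a minimal NumPy sketch: the fixed 3x3 box filter stands in for the learned 3x3 convolution K_i, which is an illustrative simplification, not the trained operator.

```python
import numpy as np

def box3x3(x):
    # Stand-in for the learned 3x3 convolution K_i: a 3x3 box filter,
    # stride 1, zero ("same") padding. x has shape (C, H, W).
    _, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="constant")
    return sum(p[:, i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def res2net_split(x, scale=4):
    # x: feature maps of shape (C, H, W) after the leading 1x1 conv;
    # channels are split into `scale` groups x_1..x_s.
    c = x.shape[0]
    assert c % scale == 0, "channel count must divide evenly into groups"
    xs = np.split(x, scale, axis=0)
    ys = [xs[0]]                                  # y_1 = x_1 (identity, no conv)
    for i in range(1, scale):
        inp = xs[i] if i == 1 else xs[i] + ys[-1] # hierarchical residual-like connection
        ys.append(box3x3(inp))                    # y_i = K_i(x_i + y_{i-1})
    # Outputs are concatenated and, in the real block, fed to a final 1x1 conv.
    return np.concatenate(ys, axis=0)
```

Because y_3 already contains a filtered version of y_2, and y_4 of y_3, the equivalent receptive field grows by group even though every K_i is only 3x3 -- this is the "granular level" multi-scale behavior the abstract describes.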
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Object Detection | COCO minival | AP50 | 66.5 | Res2Net-101+HTC |
| Object Detection | COCO minival | AP75 | 51.3 | Res2Net-101+HTC |
| Object Detection | COCO minival | APL | 62.1 | Res2Net-101+HTC |
| Object Detection | COCO minival | APM | 51.6 | Res2Net-101+HTC |
| Object Detection | COCO minival | APS | 28.6 | Res2Net-101+HTC |
| Object Detection | COCO minival | box AP | 47.5 | Res2Net-101+HTC |
| Object Detection | COCO minival | AP50 | 53.6 | Faster R-CNN (Res2Net-50) |
| Object Detection | COCO minival | APL | 51.1 | Faster R-CNN (Res2Net-50) |
| Object Detection | COCO minival | APM | 38.3 | Faster R-CNN (Res2Net-50) |
| Object Detection | COCO minival | APS | 14 | Faster R-CNN (Res2Net-50) |
| Object Detection | COCO minival | box AP | 33.7 | Faster R-CNN (Res2Net-50) |
| Image Classification | GasHisSDB | Accuracy | 98.68 | Res2Net-50 |
| Image Classification | GasHisSDB | F1-Score | 99.29 | Res2Net-50 |
| Image Classification | GasHisSDB | Precision | 99.91 | Res2Net-50 |
| Image Classification | CIFAR-100 | Percentage correct | 83.44 | Res2NeXt-29 |
| Instance Segmentation | COCO minival | mask AP | 41.3 | Res2Net-101+HTC |
| Instance Segmentation | COCO minival | AP50 | 57.6 | Faster R-CNN (Res2Net-50) |
| Instance Segmentation | COCO minival | APL | 53.7 | Faster R-CNN (Res2Net-50) |
| Instance Segmentation | COCO minival | APM | 37.9 | Faster R-CNN (Res2Net-50) |
| Instance Segmentation | COCO minival | APS | 15.7 | Faster R-CNN (Res2Net-50) |
| Instance Segmentation | COCO minival | mask AP | 35.6 | Faster R-CNN (Res2Net-50) |
| RGB Salient Object Detection | ECSSD | F-measure | 0.926 | DSS (Res2Net-50) |
| RGB Salient Object Detection | ECSSD | MAE | 0.056 | DSS (Res2Net-50) |
| RGB Salient Object Detection | PASCAL-S | F-measure | 0.841 | DSS (Res2Net-50) |
| RGB Salient Object Detection | PASCAL-S | MAE | 0.099 | DSS (Res2Net-50) |
| RGB Salient Object Detection | HKU-IS | F-measure | 0.905 | DSS (Res2Net-50) |
| RGB Salient Object Detection | HKU-IS | MAE | 0.05 | DSS (Res2Net-50) |
| RGB Salient Object Detection | DUT-OMRON | F-measure | 0.8 | DSS (Res2Net-50) |
| RGB Salient Object Detection | DUT-OMRON | MAE | 0.071 | DSS (Res2Net-50) |
| Medical Image Classification | NCT-CRC-HE-100K | Accuracy (%) | 93.37 | Res2Net-50 |
| Medical Image Classification | NCT-CRC-HE-100K | F1-Score | 96.25 | Res2Net-50 |
| Medical Image Classification | NCT-CRC-HE-100K | Precision | 99.93 | Res2Net-50 |
| Medical Image Classification | NCT-CRC-HE-100K | Specificity | 99.17 | Res2Net-50 |