Yuhui Yuan, Xiaokang Chen, Xilin Chen, Jingdong Wang
In this paper, we address the semantic segmentation problem with a focus on the context aggregation strategy. Our motivation is that the label of a pixel is the category of the object that the pixel belongs to. We present a simple yet effective approach, object-contextual representations, which characterizes a pixel by exploiting the representation of the corresponding object class. First, we learn object regions under the supervision of the ground-truth segmentation. Second, we compute the object region representation by aggregating the representations of the pixels lying in the object region. Last, we compute the relation between each pixel and each object region, and augment the representation of each pixel with the object-contextual representation, which is a weighted aggregation of all the object region representations according to their relations with the pixel. We empirically demonstrate that the proposed approach achieves competitive performance on various challenging semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Our submission "HRNet + OCR + SegFix" achieved 1st place on the Cityscapes leaderboard at the time of submission. Code is available at https://git.io/openseg and https://git.io/HRNet.OCR. We also rephrase the object-contextual representation scheme using the Transformer encoder-decoder framework; the details are presented in Section 3.3.
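The three steps of the abstract — soft object regions, object region representations, and relation-weighted aggregation — can be sketched with plain matrix operations. This is a minimal, hedged illustration, not the paper's exact module: the function name, shapes, and the use of unnormalized dot-product similarity are assumptions for clarity (the paper uses learned transforms before the similarity).

```python
import numpy as np

def object_contextual_representations(feats, region_logits):
    """Sketch of the OCR pipeline.

    feats:         (N, C) pixel representations.
    region_logits: (N, K) coarse per-class scores (the learned object regions,
                   supervised by ground-truth segmentation during training).
    Returns:       (N, 2C) pixel representations augmented with the
                   object-contextual representation.
    """
    # Step 1: soft object regions — normalize each class map over the pixels.
    m = np.exp(region_logits - region_logits.max(axis=0, keepdims=True))
    m = m / m.sum(axis=0, keepdims=True)              # (N, K), columns sum to 1

    # Step 2: object region representation — weighted sum of pixel features.
    f_obj = m.T @ feats                               # (K, C)

    # Step 3: pixel-region relations (softmax over regions), then
    # aggregate the region representations with those weights.
    w = feats @ f_obj.T                               # (N, K) similarities
    w = np.exp(w - w.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)              # relations per pixel
    ocr = w @ f_obj                                   # (N, C) contextual rep

    # Augment each pixel representation with its object-contextual one.
    return np.concatenate([feats, ocr], axis=1)       # (N, 2C)
```

In the full model these operations run per image over the spatial feature map, with 1x1 convolutions transforming the features before and after the aggregation.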
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | Cityscapes val | mIoU | 83.6 | HRNetV2 + OCR + RMI (PaddleClas pretrained) |
| Semantic Segmentation | Cityscapes val | mIoU | 80.6 | OCR (ResNet-101-FCN) |
| Semantic Segmentation | BDD100K val | mIoU | 60.1 | OCRNet |
| Semantic Segmentation | ADE20K val | mIoU | 47.98 | HRNetV2 + OCR + RMI (PaddleClas pretrained) |
| Semantic Segmentation | ADE20K val | mIoU | 45.66 | OCR (HRNetV2-W48) |
| Semantic Segmentation | ADE20K val | mIoU | 45.28 | OCR (ResNet-101) |
| Semantic Segmentation | PASCAL Context | mIoU | 59.6 | HRNetV2 + OCR + RMI (PaddleClas pretrained) |
| Semantic Segmentation | PASCAL Context | mIoU | 56.2 | OCR (HRNetV2-W48) |
| Semantic Segmentation | PASCAL Context | mIoU | 54.8 | OCR (ResNet-101) |