Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
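The core transfer idea — scoring each predicted object embedding against text-query embeddings, CLIP-style — can be illustrated with a minimal sketch. All dimensions and values below are made up for illustration; the real model uses a learned temperature and embeddings produced by the Vision Transformer and text encoder.

```python
import numpy as np

# Toy sketch of open-vocabulary classification of detected objects:
# each per-object image embedding is compared to the embeddings of
# free-text queries, giving per-object scores over an open vocabulary.
rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

num_objects, num_queries, dim = 4, 3, 8  # made-up sizes
object_embeds = l2_normalize(rng.normal(size=(num_objects, dim)))
query_embeds = l2_normalize(rng.normal(size=(num_queries, dim)))

# Cosine-similarity logits; the real model scales these by a learned temperature.
logits = object_embeds @ query_embeds.T          # (num_objects, num_queries)

# Per-object probability over the text queries (softmax across queries).
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
best_query = probs.argmax(axis=-1)               # most likely query per object
print(best_query.shape)                          # (4,)
```

Because the queries are arbitrary text (or, for one-shot detection, image-derived embeddings), the vocabulary is fixed at inference time rather than at training time.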
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Object Detection | LVIS v1.0 | AP novel (LVIS base training) | 25.6 | OWL-ViT (CLIP-L/14) |
| Object Detection | LVIS v1.0 | AP novel (unrestricted open-vocabulary training) | 31.2 | OWL-ViT (CLIP-L/14) |
| Object Detection | COCO (Common Objects in Context) | AP@0.5 | 41.8 | OWL-ViT (R50+H/32) |
| Object Detection | Description Detection Dataset | Intra-scenario ABS mAP | 8.8 | OWL-ViT-base |
| Object Detection | Description Detection Dataset | Intra-scenario FULL mAP | 8.6 | OWL-ViT-base |
| Object Detection | Description Detection Dataset | Intra-scenario PRES mAP | 8.5 | OWL-ViT-base |