Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao
This paper presents a grounded language-image pre-training (GLIP) model for learning object-level, language-aware, and semantic-rich visual representations. GLIP unifies object detection and phrase grounding for pre-training. The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data, improving both tasks and bootstrapping a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representations semantic-rich. In our experiments, we pre-train GLIP on 27M grounding examples, including 3M human-annotated and 24M web-crawled image-text pairs. The learned representations demonstrate strong zero-shot and few-shot transferability to various object-level recognition tasks. 1) When directly evaluated on COCO and LVIS (without seeing any COCO images during pre-training), GLIP achieves 49.8 AP and 26.9 AP, respectively, surpassing many supervised baselines. 2) After fine-tuning on COCO, GLIP achieves 60.8 AP on val and 61.5 AP on test-dev, surpassing prior SoTA. 3) When transferred to 13 downstream object detection tasks, a 1-shot GLIP rivals a fully supervised Dynamic Head. Code is released at https://github.com/microsoft/GLIP.
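The unification works by casting detection as grounding: the fixed C-way box classifier is replaced with alignment scores between region features and the token features of a text prompt, where for plain detection the prompt is simply the class names strung together. Below is a minimal sketch of that reformulation, not the official GLIP code; it assumes PyTorch, and the `encode_image_regions` / `encode_prompt_tokens` encoders are hypothetical stand-ins for GLIP's actual deep-fused image and language backbones.

```python
# Minimal sketch of detection-as-grounding: classification logits become
# region-word alignment scores S = O @ P.T, where O are per-box visual
# features and P are contextual token features from a text encoder.

import torch

def grounding_logits(region_feats: torch.Tensor,
                     token_feats: torch.Tensor) -> torch.Tensor:
    """Alignment scores between N candidate regions and M prompt tokens.

    region_feats: (N, d) visual features, one per candidate box.
    token_feats:  (M, d) token features from the language model.
    Returns:      (N, M) logits; row i scores box i against every token.
    """
    return region_feats @ token_feats.T

# Zero-shot detection prompt: concatenate the category names, so each
# class maps to its token span in one text string (truncated here).
coco_prompt = "person. bicycle. car. motorcycle. airplane."

# With d=256, 100 candidate boxes, and a 20-token prompt:
O = torch.randn(100, 256)   # stand-in for encode_image_regions(image)
P = torch.randn(20, 256)    # stand-in for encode_prompt_tokens(coco_prompt)
S = grounding_logits(O, P)  # (100, 20) word-region alignment matrix
print(S.shape)
```

Because the "classifier" is now just text features, swapping the prompt swaps the label space, which is what makes the zero-shot COCO, LVIS, and ODinW transfers in the table below possible without retraining.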
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Phrase Grounding | Flickr30k Entities Test | R@1 | 87.1 | GLIP |
| Phrase Grounding | Flickr30k Entities Test | R@5 | 96.9 | GLIP |
| Phrase Grounding | Flickr30k Entities Test | R@10 | 98.1 | GLIP |
| Object Detection | COCO test-dev | box mAP | 61.5 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO test-dev | AP50 | 79.5 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO test-dev | AP75 | 67.7 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO test-dev | APS | 45.3 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO test-dev | APM | 64.9 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO test-dev | APL | 75.0 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO minival | box AP | 60.8 | GLIP (Swin-L, multi-scale) |
| Object Detection | COCO-O | Average mAP | 48.0 | GLIP-L (Swin-L) |
| Object Detection | COCO-O | Effective Robustness | 24.89 | GLIP-L (Swin-L) |
| Object Detection | COCO-O | Average mAP | 29.1 | GLIP-T (Swin-T) |
| Object Detection | COCO-O | Effective Robustness | 8.11 | GLIP-T (Swin-T) |
| Object Detection | LVIS v1.0 minival | AP | 37.3 | GLIP-L |
| Object Detection | LVIS v1.0 val | AP | 26.9 | GLIP-L |
| Object Detection | ODinW Full-Shot 13 Tasks | AP | 68.9 | GLIP |
| Object Detection | ODinW-13 | Average Score | 50.7 | GLIP-T |
| Object Detection | ODinW-35 | Average Score | 38.9 | GLIP-T |
| Object Detection | Description Detection Dataset | Intra-scenario FULL mAP | 19.1 | GLIP-T |
| Object Detection | Description Detection Dataset | Intra-scenario PRES mAP | 18.3 | GLIP-T |
| Object Detection | Description Detection Dataset | Intra-scenario ABS mAP | 21.5 | GLIP-T |
| Few-Shot Object Detection | ODinW-35 | Average Score | 38.9 | GLIP-T |
| Few-Shot Object Detection | ODinW-13 | Average Score | 50.7 | GLIP-T |
| 2D Object Detection | RF100 | Average mAP | 0.112 | GLIP |
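One metric in the table that may be unfamiliar: the COCO-O rows report "Effective Robustness", which, following the usual effective-robustness convention, measures how far a model's out-of-distribution mAP sits above the value predicted from its clean COCO mAP by a linear fit over standard detectors. The sketch below uses only numbers from the table; the linear-baseline interpretation comes from the COCO-O benchmark's definition and is an assumption about this page's data, not something restated here.

```python
# Hedged sketch of COCO-O's "Effective Robustness" bookkeeping: the reported
# value is OOD mAP minus a baseline predicted from clean COCO accuracy, so
# subtracting it back out recovers the implied in-distribution baseline.

def implied_baseline(map_ood: float, effective_robustness: float) -> float:
    """Back out the in-distribution baseline the benchmark subtracted."""
    return map_ood - effective_robustness

print(round(implied_baseline(29.1, 8.11), 2))    # GLIP-T -> 20.99
print(round(implied_baseline(48.0, 24.89), 2))   # GLIP-L -> 23.11
```

Read this way, the two COCO-O rows say that GLIP-L's out-of-distribution performance exceeds what its clean-COCO accuracy alone would predict by a much wider margin (24.89) than GLIP-T's (8.11).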