Metric: Average Accuracy (higher is better)
| # | Model | Average Accuracy | Augmentations | Paper | Date |
|---|---|---|---|---|---|
| 1 | GPT-4o | 76.22 | No | Benchmarking Vision-Language Models on Optical C... | 2025-02-10 |
| 2 | Gemini-1.5 Pro | 76.13 | No | Benchmarking Vision-Language Models on Optical C... | 2025-02-10 |
| 3 | Claude-3 Sonnet | 67.71 | No | Benchmarking Vision-Language Models on Optical C... | 2025-02-10 |
| 4 | RapidOCR | 56.98 | No | Benchmarking Vision-Language Models on Optical C... | 2025-02-10 |
| 5 | EasyOCR | 49.30 | No | Benchmarking Vision-Language Models on Optical C... | 2025-02-10 |
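
As a rough illustration of the metric, a minimal sketch of how an "Average Accuracy" score like the ones above could be computed, assuming the metric is the unweighted mean of per-task accuracies (the task names and counts here are hypothetical, not taken from the benchmark):

```python
def average_accuracy(per_task_correct: dict[str, tuple[int, int]]) -> float:
    """Mean of per-task accuracies (in percent).

    per_task_correct maps task name -> (num_correct, num_total).
    """
    accuracies = [correct / total * 100 for correct, total in per_task_correct.values()]
    return sum(accuracies) / len(accuracies)

# Illustrative numbers only — not results from the benchmark.
score = average_accuracy({
    "printed_text": (95, 100),
    "handwritten": (60, 100),
    "scene_text": (74, 100),
})
print(round(score, 2))  # → 76.33
```

Note that an unweighted mean gives each task equal influence regardless of its sample count; a benchmark could instead pool all samples, which would weight larger tasks more heavily.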