Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT, which incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation from answer inference. In this way, answer inference can leverage higher-quality rationales that are generated from multimodal information. Experimental results on the ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model with under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.
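The two-stage pipeline described above can be illustrated with a minimal Python sketch. Everything below (the `Example` dataclass, `VisionEncoder`, `Seq2SeqModel`, and the prompt formats) is a hypothetical stand-in rather than the released mm-cot implementation; it only shows how Stage 1 (rationale generation) feeds Stage 2 (answer inference), with both stages conditioned on image features.

```python
"""Minimal sketch of a two-stage Multimodal-CoT pipeline (hypothetical stubs).

Stage 1 generates a rationale from the question, textual context, and image
features; Stage 2 generates the answer from the same inputs plus the rationale.
"""

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    question: str
    context: str          # textual context (may be empty)
    options: List[str]    # answer choices
    image_path: str       # path to the associated image (may be empty)


class VisionEncoder:
    """Placeholder for a vision backbone that extracts image features."""

    def encode(self, image_path: str) -> list:
        # A real pipeline would return patch-level image features here;
        # an empty list keeps the sketch runnable without model weights.
        return []


class Seq2SeqModel:
    """Placeholder for a text generator conditioned on text + image features."""

    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str, image_features: list) -> str:
        # A real model would fuse `image_features` with the encoded prompt
        # and decode a sequence; a fixed string is returned for illustration.
        return f"[{self.name} output for: {prompt[:40]}...]"


def multimodal_cot(example: Example,
                   vision: VisionEncoder,
                   rationale_model: Seq2SeqModel,
                   answer_model: Seq2SeqModel) -> str:
    """Run the two stages: rationale generation, then answer inference."""
    image_features = vision.encode(example.image_path)
    base = (f"Question: {example.question}\n"
            f"Context: {example.context}\n"
            f"Options: {' '.join(example.options)}")

    # Stage 1: generate an intermediate reasoning chain (the rationale).
    rationale = rationale_model.generate(base + "\nSolution:", image_features)

    # Stage 2: infer the answer, conditioning on the generated rationale.
    return answer_model.generate(base + f"\n{rationale}\nAnswer:", image_features)


if __name__ == "__main__":
    ex = Example(
        question="Which property do these objects have in common?",
        context="Select the best answer.",
        options=["(A) hard", "(B) soft"],
        image_path="example.png",
    )
    print(multimodal_cot(ex, VisionEncoder(),
                         Seq2SeqModel("rationale"), Seq2SeqModel("answer")))
```

The key design point the sketch captures is the separation of stages: the rationale is produced first and then appended to the input of a second inference pass, rather than asking a single pass to produce rationale and answer jointly.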
| Task | Dataset | Question Category | Accuracy (%) | Model |
|---|---|---|---|---|
| Question Answering | ScienceQA | Average | 91.68 | Multimodal-CoT |
| Question Answering | ScienceQA | Grades 1-6 | 92.44 | Multimodal-CoT |
| Question Answering | ScienceQA | Grades 7-12 | 90.31 | Multimodal-CoT |
| Question Answering | ScienceQA | Image Context | 88.80 | Multimodal-CoT |
| Question Answering | ScienceQA | Language Science | 90.82 | Multimodal-CoT |
| Question Answering | ScienceQA | Natural Science | 95.91 | Multimodal-CoT |
| Question Answering | ScienceQA | No Context | 92.89 | Multimodal-CoT |
| Question Answering | ScienceQA | Social Science | 82.00 | Multimodal-CoT |
| Question Answering | ScienceQA | Text Context | 95.26 | Multimodal-CoT |