Ankit Pal, Logesh Kumar Umapathi, Malaikannan Sankarasubbu
This paper introduces MedMCQA, a new large-scale Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS and NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected, with an average token length of 12.77 and high topical diversity. Each sample contains a question, the correct answer(s), and distractor options, and requires deeper language understanding, testing more than ten reasoning abilities of a model across a wide range of medical subjects and topics. A detailed explanation of the solution, along with the above information, is provided in this study.
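The sample structure described above can be sketched as a minimal Python record. The field names below are illustrative assumptions for exposition, not necessarily the dataset's actual schema:

```python
# Illustrative MedMCQA-style sample; field names and content are assumptions.
sample = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "answer_idx": 2,            # index of the correct option
    "subject": "Biochemistry",  # one of the 21 medical subjects
    "explanation": "Scurvy results from vitamin C (ascorbic acid) deficiency.",
}

def format_prompt(s):
    """Render a sample as a multiple-choice prompt string (A/B/C/D options)."""
    lines = [s["question"]]
    for i, opt in enumerate(s["options"]):
        lines.append(f"{chr(65 + i)}. {opt}")
    return "\n".join(lines)

print(format_prompt(sample))
```

Rendering options with letter labels mirrors how such questions typically appear in entrance exams, but any consistent encoding of question, options, and gold answer would do.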
| Task | Dataset | Model | Dev Set Accuracy | Test Set Accuracy |
|---|---|---|---|---|
| Question Answering | MedMCQA | PubMedBERT (Gu et al., 2022) | 0.40 | 0.41 |
| Question Answering | MedMCQA | SciBERT (Beltagy et al., 2019) | 0.39 | 0.39 |
| Question Answering | MedMCQA | BioBERT (Lee et al., 2020) | 0.38 | 0.37 |
| Question Answering | MedMCQA | BERT-Base (Devlin et al., 2019) | 0.35 | 0.33 |
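The accuracy figures reported above are the fraction of questions for which the model selects the correct option. A minimal sketch of that metric, using made-up predicted and gold option indices:

```python
def accuracy(predictions, gold):
    """Fraction of questions where the predicted option index matches the gold index."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy example: hypothetical option indices for five questions.
preds = [2, 0, 1, 3, 2]
gold = [2, 0, 3, 3, 1]
print(accuracy(preds, gold))  # → 0.6 (3 of 5 correct)
```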