Metric: Accuracy (higher is better)
| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | Golden Transformer | 0.917 | No | - | - | - |
| 2 | Human Benchmark | 0.915 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
| 3 | ruRoberta-large finetune | 0.82 | No | - | - | - |
| 4 | ruBert-large finetune | 0.773 | No | - | - | - |
| 5 | ruT5-base-finetune | 0.732 | No | - | - | - |
| 6 | ruBert-base finetune | 0.712 | No | - | - | - |
| 7 | ruT5-large-finetune | 0.711 | No | - | - | - |
| 8 | SBERT_Large_mt_ru_finetuning | 0.697 | No | - | - | - |
| 9 | SBERT_Large | 0.675 | No | - | - | - |
| 10 | MT5 Large | 0.657 | No | mT5: A massively multilingual pre-trained text-t... | 2020-10-22 | Code |
| 11 | heuristic majority | 0.642 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 12 | RuBERT plain | 0.639 | No | - | - | - |
| 13 | YaLM 1.0B few-shot | 0.637 | No | - | - | - |
| 14 | RuGPT3Medium | 0.634 | No | - | - | - |
| 15 | Multilingual Bert | 0.624 | No | - | - | - |
| 16 | Baseline TF-IDF1.1 | 0.621 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
| 17 | RuGPT3Small | 0.61 | No | - | - | - |
| 18 | RuBERT conversational | 0.606 | No | - | - | - |
| 19 | RuGPT3Large | 0.604 | No | - | - | - |
| 20 | RuGPT3XL few-shot | 0.59 | No | - | - | - |
| 21 | Random weighted | 0.52 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 22 | majority_class | 0.503 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
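The accuracy metric above is simply the fraction of predictions that exactly match the gold labels. A minimal sketch (the label values are hypothetical, not taken from the benchmark):

```python
from typing import Sequence

def accuracy(preds: Sequence[str], labels: Sequence[str]) -> float:
    """Fraction of predictions that exactly match the gold labels."""
    if len(preds) != len(labels):
        raise ValueError("preds and labels must have the same length")
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# Toy example: 3 of 4 predictions match, so accuracy is 0.75
print(accuracy(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))
```

Under this metric, a majority-class baseline on a balanced binary task lands near 0.5, which matches the bottom rows of the table.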