Metric: Average F1 (higher is better)
| # | Model | Average F1 | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | Human Benchmark | 0.680 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
| 2 | RuBERT conversational | 0.452 | No | - | - | - |
| 3 | RuGPT3Large | 0.417 | No | - | - | - |
| 4 | YaLM 1.0B few-shot | 0.408 | No | - | - | - |
| 5 | Golden Transformer | 0.406 | No | - | - | - |
| 6 | heuristic majority | 0.400 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 7 | RuGPT3Medium | 0.372 | No | - | - | - |
| 8 | SBERT_Large | 0.371 | No | - | - | - |
| 9 | RuBERT plain | 0.367 | No | - | - | - |
| 10 | Multilingual BERT | 0.367 | No | - | - | - |
| 11 | MT5 Large | 0.366 | No | mT5: A massively multilingual pre-trained text-t... | 2020-10-22 | Code |
| 12 | ruRoberta-large finetune | 0.357 | No | - | - | - |
| 13 | ruBert-large finetune | 0.356 | No | - | - | - |
| 14 | RuGPT3Small | 0.356 | No | - | - | - |
| 15 | SBERT_Large_mt_ru_finetuning | 0.351 | No | - | - | - |
| 16 | ruBert-base finetune | 0.333 | No | - | - | - |
| 17 | Random weighted | 0.319 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 18 | ruT5-base-finetune | 0.307 | No | - | - | - |
| 19 | ruT5-large-finetune | 0.306 | No | - | - | - |
| 20 | RuGPT3XL few-shot | 0.302 | No | - | - | - |
| 21 | Baseline TF-IDF1.1 | 0.301 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
| 22 | majority_class | 0.217 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
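The Average F1 scores above are an unweighted mean of per-task F1 values. As a minimal sketch (the counts below are hypothetical and only illustrate the arithmetic, not any benchmark task), F1 is the harmonic mean of precision and recall, and the leaderboard-style aggregate is a plain average across tasks:

```python
def f1(tp, fp, fn):
    # F1 = harmonic mean of precision and recall; defined as 0 when tp == 0.
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def average_f1(task_scores):
    # Unweighted mean of per-task F1 scores.
    return sum(task_scores) / len(task_scores)

# Hypothetical confusion counts for two tasks, for illustration only.
scores = [f1(40, 10, 15), f1(30, 5, 20)]
print(round(average_f1(scores), 3))  # → 0.734
```

Note that an unweighted average treats every task equally regardless of its test-set size, which is the usual convention for multi-task benchmark leaderboards of this kind.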