Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Natural Language Inference on TERRa

Metric: Accuracy (higher is better)


Results

| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|-------|----------|------------|-------|------|------|
| 1 | Human Benchmark | 0.92 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
| 2 | Golden Transformer | 0.871 | No | - | - | - |
| 3 | ruRoberta-large finetune | 0.801 | No | - | - | - |
| 4 | ruT5-large-finetune | 0.747 | No | - | - | - |
| 5 | ruBert-large finetune | 0.704 | No | - | - | - |
| 6 | ruBert-base finetune | 0.703 | No | - | - | - |
| 7 | ruT5-base-finetune | 0.692 | No | - | - | - |
| 8 | RuGPT3Large | 0.654 | No | - | - | - |
| 9 | RuBERT plain | 0.642 | No | - | - | - |
| 10 | RuBERT conversational | 0.64 | No | - | - | - |
| 11 | SBERT_Large_mt_ru_finetuning | 0.637 | No | - | - | - |
| 12 | SBERT_Large | 0.637 | No | - | - | - |
| 13 | Multilingual Bert | 0.617 | No | - | - | - |
| 14 | YaLM 1.0B few-shot | 0.605 | No | - | - | - |
| 15 | RuGPT3XL few-shot | 0.573 | No | - | - | - |
| 16 | MT5 Large | 0.561 | No | mT5: A massively multilingual pre-trained text-t... | 2020-10-22 | Code |
| 17 | heuristic majority | 0.549 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 18 | majority_class | 0.513 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 19 | RuGPT3Medium | 0.505 | No | - | - | - |
| 20 | RuGPT3Small | 0.488 | No | - | - | - |
| 21 | Random weighted | 0.483 | No | Unreasonable Effectiveness of Rule-Based Heurist... | 2021-05-03 | - |
| 22 | Baseline TF-IDF1.1 | 0.471 | No | RussianSuperGLUE: A Russian Language Understandi... | 2020-10-29 | Code |
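The leaderboard metric above (accuracy) is the fraction of examples where a model's predicted label matches the gold label. A minimal sketch of that computation, using illustrative labels rather than real TERRa data (TERRa is a two-class entailment task, so the label names below are assumptions):

```python
def accuracy(gold, predicted):
    """Fraction of examples where the predicted label matches the gold label."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must have the same length")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Illustrative labels only -- not drawn from the actual TERRa test set.
gold = ["entailment", "not_entailment", "entailment", "not_entailment"]
pred = ["entailment", "not_entailment", "not_entailment", "not_entailment"]

print(f"Accuracy: {accuracy(gold, pred):.3f}")  # -> Accuracy: 0.750
```

Since accuracy weights every example equally, a constant-prediction baseline (like the majority_class row above) scores roughly the frequency of the most common label.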