CLUE
Chinese Language Understanding Evaluation Benchmark
CLUE is a Chinese Language Understanding Evaluation benchmark consisting of several natural language understanding (NLU) datasets. It is a community-driven project that brings together 9 tasks, spanning several well-established single-sentence and sentence-pair classification tasks as well as machine reading comprehension, all on original Chinese text.
Related Benchmarks
CLUE (AFQMC) / Language Modelling / Accuracy
CLUE (C3) / Language Modelling / Accuracy
CLUE (CMNLI) / Language Modelling / Accuracy
CLUE (CMRC2018) / Language Modelling / Accuracy
CLUE (DRCD) / Language Modelling / Accuracy
CLUE (OCNLI_50K) / Language Modelling / Accuracy
CLUE (WSC1.1) / Language Modelling / Accuracy
ClueWeb09-B / Ad-Hoc Information Retrieval / ERR@20
ClueWeb09-B / Ad-Hoc Information Retrieval / nDCG@20
ClueWeb09-B / Document Ranking / ERR@20
ClueWeb09-B / Document Ranking / nDCG@20