Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FarsTail: A Persian Natural Language Inference Dataset

Hossein Amirkhani, Mohammad AzariJafari, Zohreh Pourjafari, Soroush Faridan-Jahromi, Zeinab Kouhkan, Azadeh Amirak

2020-09-18 · Natural Language Inference · Multiple-choice
Paper · PDF · Code (official)

Abstract

Natural language inference (NLI) is one of the central tasks in natural language processing (NLP), encapsulating many fundamental aspects of language understanding. With the considerable achievements of data-hungry deep learning methods on NLP tasks, a great amount of effort has been devoted to developing more diverse datasets for different languages. In this paper, we present a new dataset for the NLI task in the Persian language, also known as Farsi, one of the dominant languages of the Middle East. This dataset, named FarsTail, includes 10,367 samples, provided both in the Persian language and in an indexed format useful to non-Persian researchers. The samples are generated from 3,539 multiple-choice questions with minimal annotator intervention, in a way similar to the SciTail dataset. A carefully designed multi-step process is adopted to ensure the quality of the dataset. We also present the results of traditional and state-of-the-art methods on FarsTail, including different embedding methods such as word2vec, fastText, ELMo, BERT, and LASER, as well as different modeling approaches such as DecompAtt, ESIM, HBMP, and ULMFiT, to provide a solid baseline for future research. The best obtained test accuracy is 83.38%, which shows that there is considerable room for improving current methods before they are useful for real-world NLP applications in different languages. We also investigate the extent to which the models exploit superficial clues, also known as dataset biases, in FarsTail, and partition the test set into easy and hard subsets according to the success of biased models. The dataset is available at https://github.com/dml-qom/FarsTail.
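To make the task format concrete: each NLI example pairs a premise with a hypothesis and one of three labels (entailment, contradiction, neutral), and a common sanity-check baseline is majority-class accuracy. The sketch below uses invented English rows purely to illustrate the sample structure; it is not FarsTail data (the real dataset is at the GitHub link above), and the label codes are an assumption for the example.

```python
from collections import Counter

# Illustrative NLI samples in the premise/hypothesis/label format described
# in the abstract. Labels: "e" = entailment, "c" = contradiction,
# "n" = neutral. These rows are invented for illustration only.
samples = [
    {"premise": "The cat sleeps on the mat.", "hypothesis": "An animal is resting.", "label": "e"},
    {"premise": "The cat sleeps on the mat.", "hypothesis": "The mat is empty.",     "label": "c"},
    {"premise": "The cat sleeps on the mat.", "hypothesis": "The cat is old.",       "label": "n"},
    {"premise": "It rained all day.",         "hypothesis": "The ground got wet.",   "label": "e"},
]

def majority_baseline_accuracy(data):
    """Accuracy of a trivial classifier that always predicts the most
    frequent label in the data -- a floor any real model must beat."""
    counts = Counter(ex["label"] for ex in data)
    _, majority_count = counts.most_common(1)[0]
    return majority_count / len(data)

acc = majority_baseline_accuracy(samples)
print(f"majority-class baseline accuracy: {acc:.2f}")  # 2 of 4 labels are "e" -> 0.50
```

Comparing model accuracy against this floor (and against the easy/hard test splits the paper constructs) helps separate genuine inference ability from exploitation of dataset biases.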

Results

Task                       | Dataset  | Metric          | Value | Model
Natural Language Inference | FarsTail | % Test Accuracy | 83.38 | mBERT
Natural Language Inference | FarsTail | % Test Accuracy | 82.99 | ParsBERT
Natural Language Inference | FarsTail | % Test Accuracy | 78.13 | Translate-Source + fastText
Natural Language Inference | FarsTail | % Test Accuracy | 75.83 | LSTM + BERT (concat)
Natural Language Inference | FarsTail | % Test Accuracy | 74.62 | ESIM + BERT (FarsTail, MultiNLI)
Natural Language Inference | FarsTail | % Test Accuracy | 72.44 | ULMFiT
Natural Language Inference | FarsTail | % Test Accuracy | 71.16 | ESIM + fastText
Natural Language Inference | FarsTail | % Test Accuracy | 70.46 | Translate-Target + fastText
Natural Language Inference | FarsTail | % Test Accuracy | 66.62 | Decomposable Attention Model + word2vec
Natural Language Inference | FarsTail | % Test Accuracy | 66.04 | HBMP + word2vec

Related Papers

The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
HATS: Hindi Analogy Test Set for Evaluating Reasoning in Large Language Models (2025-07-17)
LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)
MateInfoUB: A Real-World Benchmark for Testing LLMs in Competitive, Multilingual, and Multimodal Educational Tasks (2025-07-03)
Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III (2025-06-29)
ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation (2025-06-27)
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs (2025-06-26)