BEAR-probe

Benchmark for Evaluating Associative Reasoning

Modality: Texts · License: CC BY-SA · Introduced: 2024-04-05

The BEAR dataset and its larger variant, BEAR-big, are benchmarks for evaluating the common factual knowledge contained in language models.
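The probing idea can be sketched as follows: each instance pairs a templated statement with a set of answer options, the model scores every completed statement, and the top-ranked option is checked against the gold answer. This is a minimal toy sketch of that ranking scheme, not the actual lm-pub-quiz API; the `score_statement` heuristic stands in for a real model's (pseudo-)log-likelihood so the example runs without one.

```python
def score_statement(text: str) -> float:
    """Stand-in for a language model's (pseudo-)log-likelihood of `text`.

    Toy heuristic: prefer statements containing "Paris", mimicking a model
    that knows the capital-of relation for France.
    """
    return float("Paris" in text)


def rank_options(template: str, subject: str, options: list[str]) -> str:
    """Fill the template with each option and return the highest-scoring one."""
    scores = {
        option: score_statement(template.format(subject=subject, answer=option))
        for option in options
    }
    return max(scores, key=scores.get)


# One hypothetical relation instance (illustrative, not taken from BEAR).
template = "The capital of {subject} is {answer}."
instances = [
    {"subject": "France", "options": ["Paris", "Berlin", "Madrid"], "answer": "Paris"},
]

# Accuracy = fraction of instances where the top-ranked option is correct.
correct = sum(
    rank_options(template, inst["subject"], inst["options"]) == inst["answer"]
    for inst in instances
)
accuracy = correct / len(instances)
print(accuracy)  # 1.0 with the toy scorer above
```

Because every option is scored under the same template, the scheme works for both causal and masked language models; only the scoring function changes.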

This dataset was created as part of the paper "BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models".

For more information visit the LM Pub Quiz website.

Citation

When using the dataset or library, please cite the following paper:

@misc{wilandBEARUnifiedFramework2024,
  title = {{{BEAR}}: {{A Unified Framework}} for {{Evaluating Relational Knowledge}} in {{Causal}} and {{Masked Language Models}}},
  shorttitle = {{{BEAR}}},
  author = {Wiland, Jacek and Ploner, Max and Akbik, Alan},
  year = {2024},
  number = {arXiv:2404.04113},
  eprint = {2404.04113},
  publisher = {arXiv},
  url = {http://arxiv.org/abs/2404.04113},
}