COPEN

COnceptual knowledge Probing bENchmark

Introduced 2022-11-08

COPEN is a benchmark that analyzes the conceptual understanding capabilities of Pre-trained Language Models (PLMs). Specifically, COPEN consists of three tasks:

  1. Conceptual Similarity Judgment (CSJ). Given a query entity and several candidate entities, the CSJ task requires selecting the candidate entity that is most conceptually similar to the query entity.
  2. Conceptual Property Judgment (CPJ). Given a statement describing a property of a concept, PLMs need to judge whether the statement is true.
  3. Conceptualization in Contexts (CiC). Given a sentence, an entity mentioned in the sentence, and several concept chains of the entity, PLMs need to select the concept that best fits the entity in that context.
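To make the three task formats concrete, here is a minimal sketch of what an instance of each task might look like. The field names and example values are illustrative assumptions for demonstration, not the actual COPEN data schema.

```python
# Hypothetical instances for the three COPEN tasks.
# All field names and values are illustrative assumptions, not the real schema.

# CSJ: pick the candidate most conceptually similar to the query entity.
csj_instance = {
    "query": "Beethoven",
    "candidates": ["Mozart", "Germany", "piano"],
    "answer": "Mozart",  # both are composers, i.e. share the closest concept
}

# CPJ: judge whether a statement about a concept's property is true.
cpj_instance = {
    "statement": "Birds can lay eggs.",
    "label": True,
}

# CiC: choose the concept chain that best fits the entity in this context.
cic_instance = {
    "sentence": "Apple released a new phone this year.",
    "entity": "Apple",
    "concept_chains": [
        ["organization", "company"],
        ["food", "fruit"],
    ],
    "answer": ["organization", "company"],
}

def has_fields(instance, required_keys):
    """Check that an instance contains every required field."""
    return all(key in instance for key in required_keys)
```

A loader for such data would typically validate each instance before evaluation, e.g. `has_fields(csj_instance, ["query", "candidates", "answer"])`.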