Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth
The Winograd Schema (WS) has been proposed as a test for measuring the commonsense capabilities of models. Recently, pre-trained language-model-based approaches have boosted performance on some WS benchmarks, but the source of this improvement remains unclear. This paper suggests that the apparent progress on WS may not necessarily reflect progress in commonsense reasoning. To support this claim, we first show that the current WS evaluation method is sub-optimal and propose a modification that uses twin sentences for evaluation. We also propose two new baselines that indicate the existence of artifacts in WS benchmarks. We then develop a method for evaluating WS-like sentences in a zero-shot setting to account for the commonsense reasoning abilities acquired during pretraining, and observe that popular language models perform at chance in this setting under our stricter evaluation. We conclude that the observed progress is mostly due to the use of supervision in training WS models, which is unlikely to successfully support all the required commonsense reasoning skills and knowledge.
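The zero-shot setup and the twin-sentence evaluation can be illustrated with a short sketch: substitute each candidate referent into the pronoun slot, score the resulting sentences with a language model, and credit the model only when it resolves *both* twins correctly. This is a minimal sketch, not the paper's exact procedure; GPT-2 (via HuggingFace `transformers`) stands in for the models in the tables below, and the `resolve` helper and the example twin pair are our own illustrative choices.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean per-token NLL over the (len - 1) predicted positions
    return -out.loss.item() * (ids.size(1) - 1)

def resolve(template: str, candidates):
    """Fill the pronoun slot with each candidate; return the likelier one."""
    scores = [sentence_logprob(template.format(c)) for c in candidates]
    return max(zip(scores, candidates))[1]

candidates = ["the trophy", "the suitcase"]
twin_a = "The trophy doesn't fit in the suitcase because {} is too big."    # gold: the trophy
twin_b = "The trophy doesn't fit in the suitcase because {} is too small."  # gold: the suitcase

# Twin (paired) evaluation: credit only when BOTH twins are resolved
# correctly, which neutralizes candidate-specific artifacts a model
# might otherwise exploit on a single sentence.
paired_correct = (resolve(twin_a, candidates) == "the trophy"
                  and resolve(twin_b, candidates) == "the suitcase")
print("paired correct:", paired_correct)
```

Note that under this paired metric a random guesser succeeds on a twin pair only 0.5 × 0.5 = 25% of the time, versus the per-sentence 50% chance baselines listed in the tables below.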
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Question Answering | COPA | Accuracy | 50.0 | Random chance baseline |
| Question Answering | PIQA | Accuracy | 50.0 | Random chance baseline |
| Common Sense Reasoning | WinoGrande | Accuracy | 58.7 | ALBERT-xxlarge 235M |
| Common Sense Reasoning | WinoGrande | Accuracy | 56.3 | RoBERTa-base 125M |
| Common Sense Reasoning | WinoGrande | Accuracy | 55.6 | BERT-large 340M |
| Common Sense Reasoning | WinoGrande | Accuracy | 54.9 | RoBERTa-large 355M |
| Common Sense Reasoning | WinoGrande | Accuracy | 53.1 | BERT-base 110M |
| Common Sense Reasoning | WinoGrande | Accuracy | 52.8 | ALBERT-base 12M |
| Common Sense Reasoning | WinoGrande | Accuracy | 50.0 | Random chance baseline |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 78.8 | ALBERT-xxlarge 235M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 73.9 | RoBERTa-large 355M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 63.0 | RoBERTa-base 125M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 61.4 | BERT-large 340M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 56.5 | BERT-base 110M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 55.4 | ALBERT-base 12M |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 50.0 | Random chance baseline |