Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning
Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference (SNLI) corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
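To make the "lexicalized classifiers" mentioned above concrete, the sketch below shows one minimal way to build a unigram-and-bigram baseline like the one in the results table: indicator features over the tokens of each sentence in the pair, fed to a linear classifier. The sentence-tagging scheme, the scikit-learn pipeline, and the toy training pairs are all illustrative assumptions, not the authors' exact feature set or implementation.

```python
# Minimal sketch of a unigram + bigram lexicalized NLI baseline.
# Assumptions: scikit-learn, whitespace tokenization, toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_text(premise, hypothesis):
    # Prefix tokens by sentence role so "dog" in the premise and "dog"
    # in the hypothesis become distinct features.
    p = " ".join("P_" + t for t in premise.lower().split())
    h = " ".join("H_" + t for t in hypothesis.lower().split())
    return p + " " + h

train_pairs = [  # toy examples, one per SNLI label
    ("a man is sleeping", "a person rests", "entailment"),
    ("a man is sleeping", "a man runs a marathon", "contradiction"),
    ("a man is sleeping", "a man dreams about work", "neutral"),
]
X = [pair_text(p, h) for p, h, _ in train_pairs]
y = [label for _, _, label in train_pairs]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), binary=True),  # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
print(model.predict([pair_text("a man is sleeping", "a person rests")]))
```

With enough training data, feature-rich linear models of this kind can memorize lexical cues aggressively, which is consistent with the near-perfect train accuracy and much lower test accuracy reported below.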
Natural language inference accuracy on SNLI:

| Model | Train Accuracy (%) | Test Accuracy (%) |
|---|---|---|
| Unlexicalized features | 49.4 | 50.4 |
| + Unigram and bigram features | 99.7 | 78.2 |
| 100D LSTM encoders | 84.8 | 77.6 |
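The "100D LSTM encoders" row corresponds to a sentence-embedding approach: encode the premise and the hypothesis each into a fixed vector with an LSTM, concatenate the two vectors, and classify the pair. The PyTorch sketch below follows that general recipe; aside from the 100D hidden size taken from the table, the embedding size, the classifier depth, and the choice of a single shared encoder are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of an LSTM sentence-encoder model for NLI.
# Assumptions: PyTorch, shared encoder, illustrative layer sizes.
import torch
import torch.nn as nn

class LSTMEncoderNLI(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, 200), nn.Tanh(),
            nn.Linear(200, num_labels),
        )

    def encode(self, tokens):
        # tokens: (batch, seq_len) integer ids; keep the final hidden state.
        _, (h_n, _) = self.encoder(self.embed(tokens))
        return h_n[-1]  # (batch, hidden_dim)

    def forward(self, premise, hypothesis):
        pair = torch.cat([self.encode(premise), self.encode(hypothesis)], dim=-1)
        return self.classifier(pair)

model = LSTMEncoderNLI(vocab_size=10000)
premise = torch.randint(0, 10000, (2, 12))   # toy batch of 2 token-id sequences
hypothesis = torch.randint(0, 10000, (2, 9))
logits = model(premise, hypothesis)          # (2, 3) scores over the labels
```

Because the classifier only sees two fixed-size sentence vectors, the encoder must compress each sentence into 100 dimensions; the smaller gap between train and test accuracy for this model, relative to the lexicalized baseline, is consistent with that bottleneck limiting memorization.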