Runqi Yang, Jianhai Zhang, Xing Gao, Feng Ji, Haiqing Chen
In this paper, we present a fast and strong neural approach for general-purpose text matching applications. We explore what is sufficient to build a fast and well-performing text matching model, and propose to keep three key features available for inter-sequence alignment: original point-wise features, previously aligned features, and contextual features, while simplifying all the remaining components. We conduct experiments on four well-studied benchmark datasets across the tasks of natural language inference, paraphrase identification, and answer selection. The performance of our model is on par with the state of the art on all datasets, with far fewer parameters, and its inference speed is at least 6 times faster than that of models with comparable performance.
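The three feature groups kept available for alignment can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy tensors, variable names, and the plain dot-product scoring are assumptions for illustration (RE2's actual blocks also include convolutional encoders, fusion layers, and stacked residual connections). It shows only the core idea: concatenate point-wise embeddings, previously aligned features, and contextual features, then align the two sequences with attention over the concatenated representations.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align(a_feats, b_feats):
    # Dot-product inter-sequence alignment: each token in one sequence
    # attends over all tokens of the other sequence.
    scores = a_feats @ b_feats.T                      # (len_a, len_b)
    a_aligned = softmax(scores, axis=1) @ b_feats     # (len_a, dim)
    b_aligned = softmax(scores.T, axis=1) @ a_feats   # (len_b, dim)
    return a_aligned, b_aligned

# Hypothetical toy inputs: a 3-token and a 4-token sequence, 8 dims each.
rng = np.random.default_rng(0)
emb_a, prev_a, ctx_a = (rng.standard_normal((3, 8)) for _ in range(3))
emb_b, prev_b, ctx_b = (rng.standard_normal((4, 8)) for _ in range(3))

# The three feature groups kept available for alignment.
a_feats = np.concatenate([emb_a, prev_a, ctx_a], axis=-1)  # (3, 24)
b_feats = np.concatenate([emb_b, prev_b, ctx_b], axis=-1)  # (4, 24)

a_aligned, b_aligned = align(a_feats, b_feats)
print(a_aligned.shape, b_aligned.shape)  # (3, 24) (4, 24)
```

Because alignment sees the original embeddings and the contextual features side by side, later blocks need not re-derive lexical identity from the encoded representation, which is what allows the remaining components to stay simple.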
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | WikiQA | MAP | 0.7452 | RE2 |
| Question Answering | WikiQA | MRR | 0.7618 | RE2 |
| Natural Language Inference | SciTail | Accuracy (%) | 86.0 | RE2 |
| Natural Language Inference | SNLI | Test Accuracy (%) | 88.9 | RE2 |
| Natural Language Inference | SNLI | Train Accuracy (%) | 94.0 | RE2 |
| Paraphrase Identification | Quora Question Pairs | Accuracy (%) | 89.2 | RE2 |