Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum
In conversational question answering (ConvQA), users express their information needs through a series of utterances with incomplete context. Typical ConvQA methods rely on a single source (a knowledge base (KB), a text corpus, or a set of tables), and thus cannot benefit from the increased answer coverage and redundancy of multiple sources. Our method EXPLAIGNN overcomes these limitations by integrating information from a mixture of sources, together with user-comprehensible explanations for answers. It constructs a heterogeneous graph from entities and evidence snippets retrieved from a KB, a text corpus, web tables, and infoboxes. This large graph is then iteratively reduced via graph neural networks that incorporate question-level attention, until the best answers and their explanations are distilled. Experiments show that EXPLAIGNN improves performance over state-of-the-art baselines. A user study demonstrates that the derived answers are understandable to end users.
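The iterative graph reduction described above can be illustrated with a minimal sketch. This is not the authors' implementation: the GNN with question-level attention is replaced here by a toy term-overlap relevance score, and all function names and parameters (`score`, `iterative_reduction`, `keep_per_round`) are hypothetical. The point is only the control flow: score every evidence node against the question, keep the top-k, and repeat with a smaller k until few candidates remain.

```python
def score(question_terms, node_text):
    """Toy relevance score: term overlap between question and an evidence node.
    (Stand-in for the GNN's question-level attention weights.)"""
    node_terms = set(node_text.lower().split())
    return len(question_terms & node_terms)

def iterative_reduction(question, nodes, keep_per_round=3, rounds=2):
    """Repeatedly keep only the top-scoring nodes, shrinking the budget
    each round, until the best candidates are distilled."""
    q_terms = set(question.lower().split())
    for _ in range(rounds):
        ranked = sorted(nodes, key=lambda n: score(q_terms, n), reverse=True)
        nodes = ranked[:keep_per_round]
        keep_per_round = max(1, keep_per_round // 2)  # tighter cut each round
    return nodes

evidence = [
    "Steve Jobs founded Apple in 1976",
    "Apple is a fruit",
    "Microsoft was founded by Bill Gates",
    "The Eiffel Tower is in Paris",
]
answers = iterative_reduction("who founded Apple", evidence)
```

In EXPLAIGNN, the surviving evidence nodes double as the explanation shown to the user, which is what the sketch's final list corresponds to.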
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | TimeQuestions | P@1 | 52.5 | EXPLAIGNN |
| Question Answering | TIQ | P@1 | 44.6 | EXPLAIGNN |