Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston
Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of engaging humans in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its predecessor (text-only) BlenderBot (Roller et al., 2020) in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to engagingness metrics.
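The abstract mentions studying "different image fusion schemes" for combining a vision encoder's output with a text-based dialogue transformer. As a minimal illustrative sketch (not the paper's actual architecture), an early-fusion scheme can project pre-extracted image features into the model's embedding space and prepend them to the token embeddings; the function and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def early_fuse(image_feats, token_embeds, proj):
    """Project image features into the token-embedding space and
    prepend them to the text sequence (an early-fusion sketch)."""
    img_tokens = image_feats @ proj                      # (n_img, d_model)
    return np.concatenate([img_tokens, token_embeds], axis=0)

d_img, d_model = 2048, 512
proj = rng.standard_normal((d_img, d_model)) * 0.02      # learned in practice
image_feats = rng.standard_normal((4, d_img))            # e.g. 4 region features
token_embeds = rng.standard_normal((16, d_model))        # 16 subword embeddings

fused = early_fuse(image_feats, token_embeds, proj)
print(fused.shape)  # (20, 512): image tokens followed by text tokens
```

The fused sequence would then be consumed by the transformer encoder like any other token sequence; late-fusion variants instead combine image and text representations deeper in the network.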
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Dialogue | BlendedSkillTalk | BLEU-4 | 1.0 | Multi-Modal BlenderBot |
| Dialogue | BlendedSkillTalk | F1 | 17.8 | Multi-Modal BlenderBot |
| Dialogue | BlendedSkillTalk | ROUGE-L | 19.3 | Multi-Modal BlenderBot |
| Dialogue | EmpatheticDialogues | BLEU-4 | 1.5 | Multi-Modal BlenderBot |
| Dialogue | EmpatheticDialogues | F1 | 19.2 | Multi-Modal BlenderBot |
| Dialogue | EmpatheticDialogues | ROUGE-L | 24.5 | Multi-Modal BlenderBot |
| Dialogue | Image-Chat | BLEU-4 | 40.0 | Multi-Modal BlenderBot |
| Dialogue | Image-Chat | F1 | 13.1 | Multi-Modal BlenderBot |
| Dialogue | Image-Chat | ROUGE-L | 18.0 | Multi-Modal BlenderBot |
| Dialogue | ConvAI2 | BLEU-4 | 1.1 | Multi-Modal BlenderBot |
| Dialogue | ConvAI2 | F1 | 18.4 | Multi-Modal BlenderBot |
| Dialogue | ConvAI2 | ROUGE-L | 22.6 | Multi-Modal BlenderBot |
| Dialogue | Wizard of Wikipedia | BLEU-4 | 2.2 | Multi-Modal BlenderBot |
| Dialogue | Wizard of Wikipedia | F1 | 18.6 | Multi-Modal BlenderBot |
| Dialogue | Wizard of Wikipedia | ROUGE-L | 17.4 | Multi-Modal BlenderBot |
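The F1 scores in the table are word-overlap F1 between the model's generated response and the gold response, a standard automatic metric for open-domain dialogue. A minimal sketch of that computation (tokenization here is a plain whitespace split, which is a simplification):

```python
from collections import Counter

def unigram_f1(prediction, reference):
    """Word-overlap F1 between a predicted and a reference response,
    as commonly used to score open-domain dialogue models."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

score = unigram_f1("i love hiking with my dog", "i love walking my dog")
print(round(score, 3))  # 0.727
```

The tabled values are this score averaged over a dataset's test examples and reported as a percentage (e.g. 17.8 on BlendedSkillTalk).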