Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Ensemble of MRR and NDCG models for Visual Dialog

Idan Schwartz

2021-04-15 · NAACL 2021 · AI Agent · Visual Dialog

Paper · PDF · Code (official)

Abstract

Assessing an AI agent that can converse in human language and understand visual content is challenging. Generation metrics, such as the BLEU score, favor correct syntax over semantics. Hence, a discriminative approach is often used, where an agent ranks a set of candidate options. The mean reciprocal rank (MRR) metric evaluates model performance by taking into account the rank of a single human-derived answer. This approach, however, raises a new challenge: the ambiguity and synonymy of answers, for instance, semantic equivalence (e.g., `yeah' and `yes'). To address this, the normalized discounted cumulative gain (NDCG) metric has been used to capture the relevance of all the correct answers via dense annotations. However, the NDCG metric favors broadly applicable but uncertain answers such as `I don't know'. Crafting a model that excels on both the MRR and NDCG metrics is challenging. Ideally, an AI agent should give a human-like reply and be able to validate the correctness of any answer. To address this issue, we describe a two-step non-parametric ranking approach that can merge strong MRR and NDCG models. Using our approach, we retain most of the state-of-the-art MRR performance (70.41% vs. 71.24%) and NDCG performance (72.16% vs. 75.35%). Moreover, our approach won the recent Visual Dialog 2020 challenge. Source code is available at https://github.com/idansc/mrr-ndcg.
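The two metrics discussed in the abstract are standard ranking measures. A minimal sketch of how they are typically computed is below; note that the exact gain and discount conventions of the official VisDial evaluation may differ slightly, so this is illustrative rather than a reproduction of the benchmark code.

```python
import math

def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank, where each rank is the position of the
    single human-derived answer in the model's candidate ranking."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def ndcg(relevances_in_ranked_order, k=None):
    """NDCG: discounted cumulative gain of the predicted ordering,
    normalized by the DCG of the ideal (relevance-sorted) ordering.
    Uses raw relevance as the gain and log2(position + 1) discounts."""
    rels = relevances_in_ranked_order[:k] if k else relevances_in_ranked_order
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))
    ideal = sorted(relevances_in_ranked_order, reverse=True)
    ideal = ideal[:k] if k else ideal
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

MRR rewards placing the one reference answer high, while NDCG rewards placing all densely-annotated relevant answers high, which is why a model can score well on one metric and poorly on the other.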

Results

| Task          | Dataset                    | Metric      | Value  | Model                                     |
|---------------|----------------------------|-------------|--------|-------------------------------------------|
| Dialogue      | VisDial v1.0 test-std      | MRR         | 0.7124 | 5xFGA + LS*+                              |
| Dialogue      | VisDial v1.0 test-std      | Mean Rank   | 2.96   | 5xFGA + LS*+                              |
| Dialogue      | VisDial v1.0 test-std      | R@1         | 58.28  | 5xFGA + LS*+                              |
| Dialogue      | VisDial v1.0 test-std      | R@10        | 94.45  | 5xFGA + LS*+                              |
| Dialogue      | VisDial v1.0 test-std      | R@5         | 87.55  | 5xFGA + LS*+                              |
| Dialogue      | VisDial v1.0 test-std      | MRR         | 0.7041 | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | Mean Rank   | 3.66   | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | NDCG        | 72.16  | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | R@1         | 58.18  | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | R@10        | 90.83  | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | R@5         | 83.85  | Two-Step                                  |
| Dialogue      | VisDial v1.0 test-std      | NDCG        | 64.04  | 5xFGA + LS                                |
| Dialogue      | Visual Dialog v1.0 test-std | MRR (x 100) | 69.92 | 2 Step: Factor Graph Attention + VD-Bert  |
| Dialogue      | Visual Dialog v1.0 test-std | Mean        | 3.84  | 2 Step: Factor Graph Attention + VD-Bert  |
| Dialogue      | Visual Dialog v1.0 test-std | NDCG (x 100) | 72.83 | 2 Step: Factor Graph Attention + VD-Bert |
| Dialogue      | Visual Dialog v1.0 test-std | R@1         | 58.3  | 2 Step: Factor Graph Attention + VD-Bert  |
| Dialogue      | Visual Dialog v1.0 test-std | R@10        | 89.6  | 2 Step: Factor Graph Attention + VD-Bert  |
| Dialogue      | Visual Dialog v1.0 test-std | R@5         | 81.55 | 2 Step: Factor Graph Attention + VD-Bert  |
| Visual Dialog | VisDial v1.0 test-std      | MRR         | 0.7124 | 5xFGA + LS*+                              |
| Visual Dialog | VisDial v1.0 test-std      | Mean Rank   | 2.96   | 5xFGA + LS*+                              |
| Visual Dialog | VisDial v1.0 test-std      | R@1         | 58.28  | 5xFGA + LS*+                              |
| Visual Dialog | VisDial v1.0 test-std      | R@10        | 94.45  | 5xFGA + LS*+                              |
| Visual Dialog | VisDial v1.0 test-std      | R@5         | 87.55  | 5xFGA + LS*+                              |
| Visual Dialog | VisDial v1.0 test-std      | MRR         | 0.7041 | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | Mean Rank   | 3.66   | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | NDCG        | 72.16  | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | R@1         | 58.18  | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | R@10        | 90.83  | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | R@5         | 83.85  | Two-Step                                  |
| Visual Dialog | VisDial v1.0 test-std      | NDCG        | 64.04  | 5xFGA + LS                                |
| Visual Dialog | Visual Dialog v1.0 test-std | MRR (x 100) | 69.92 | 2 Step: Factor Graph Attention + VD-Bert |
| Visual Dialog | Visual Dialog v1.0 test-std | Mean        | 3.84  | 2 Step: Factor Graph Attention + VD-Bert  |
| Visual Dialog | Visual Dialog v1.0 test-std | NDCG (x 100) | 72.83 | 2 Step: Factor Graph Attention + VD-Bert |
| Visual Dialog | Visual Dialog v1.0 test-std | R@1         | 58.3  | 2 Step: Factor Graph Attention + VD-Bert  |
| Visual Dialog | Visual Dialog v1.0 test-std | R@10        | 89.6  | 2 Step: Factor Graph Attention + VD-Bert  |
| Visual Dialog | Visual Dialog v1.0 test-std | R@5         | 81.55 | 2 Step: Factor Graph Attention + VD-Bert  |
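The page does not spell out the two-step merging procedure itself. Purely as an illustration of what a non-parametric merge of an MRR-oriented and an NDCG-oriented ranker could look like, here is a hypothetical sketch (the function name, the top-k cutoff, and the merge rule are all assumptions, not the authors' published method):

```python
def two_step_merge(mrr_scores, ndcg_scores, top_k=3):
    """Hypothetical non-parametric merge of two rankers (illustrative only):
    keep the MRR model's top-k candidates in their original order, then
    order the remaining candidates by the NDCG model's scores."""
    n = len(mrr_scores)
    mrr_order = sorted(range(n), key=lambda i: -mrr_scores[i])
    head = mrr_order[:top_k]                      # MRR model decides the top
    rest = [i for i in range(n) if i not in head] # NDCG model orders the tail
    rest.sort(key=lambda i: -ndcg_scores[i])
    return head + rest
```

A rule of this shape would preserve most of the MRR model's precision at the top of the list while letting the NDCG model's dense-relevance view govern the rest, which is consistent with the trade-off the abstract reports.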

Related Papers

- Token Compression Meets Compact Vision Transformers: A Survey and Comparative Evaluation for Edge AI (2025-07-13)
- OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety (2025-07-08)
- STELLA: Self-Evolving LLM Agent for Biomedical Research (2025-07-01)
- Prover Agent: An Agent-based Framework for Formal Mathematical Proofs (2025-06-24)
- AI Agents-as-Judge: Automated Assessment of Accuracy, Consistency, Completeness and Clarity for Enterprise Documents (2025-06-23)
- Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues (2025-06-19)
- xbench: Tracking Agents Productivity Scaling with Profession-Aligned Real-World Evaluations (2025-06-16)
- IndoorWorld: Integrating Physical Task Solving and Social Simulation in A Heterogeneous Multi-Agent Environment (2025-06-14)