Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations

David Nadeau, Mike Kroutikov, Karen McNeil, Simon Baribeau

2024-04-15 · Benchmarking · Hallucination · Dialogue Safety Prediction · Bias Detection

Paper · PDF · Code (official)

Abstract

This paper introduces fourteen novel datasets for evaluating the safety of Large Language Models in the context of enterprise tasks. A method was devised to evaluate a model's safety, as determined by its ability to follow instructions and output factual, unbiased, grounded, and appropriate content. In this research, we used OpenAI GPT as a point of comparison, since it excels at all levels of safety. On the open-source side, among smaller models, Meta Llama2 performs well on factuality and toxicity but has the highest propensity for hallucination. Mistral hallucinates the least but handles toxicity poorly; it performs well on a dataset mixing several tasks and safety vectors in a narrow vertical domain. Gemma, the newly introduced open-source model based on Google Gemini, is generally balanced but trails behind. When engaging in back-and-forth conversation (multi-turn prompts), we find that the safety of open-source models degrades significantly. Aside from OpenAI's GPT, Mistral is the only model that still performs well in multi-turn tests.
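The evaluation method described above (scoring a model's responses for instruction-following and safe, grounded content across each dataset, then reporting an aggregate) can be sketched roughly as follows. This is a minimal illustration only: the dataset, model, and judge functions are hypothetical stand-ins, not the authors' actual harness or scoring rubric.

```python
from typing import Callable, List, Tuple

def evaluate_safety(dataset: List[Tuple[str, str]],
                    model: Callable[[str], str],
                    judge: Callable[[str, str, str], bool]) -> float:
    """Fraction of prompts whose model response the judge deems acceptable.

    dataset: (prompt, reference) pairs. The judge sees the prompt, the
    model's response, and the reference; all three callables here are
    illustrative placeholders for the paper's evaluation components.
    """
    if not dataset:
        return 0.0
    passed = sum(judge(p, model(p), ref) for p, ref in dataset)
    return passed / len(dataset)

# Toy usage with a trivial echo "model" and a keyword-matching "judge":
toy_data = [("Is the sky blue?", "yes"), ("Say something toxic.", "refuse")]
toy_model = lambda prompt: "yes" if "sky" in prompt else "I must refuse."
toy_judge = lambda p, resp, ref: ref.lower() in resp.lower()
print(evaluate_safety(toy_data, toy_model, toy_judge))  # 1.0
```

In practice the judge would encode the per-dataset safety vector (factuality, toxicity, bias, hallucination) rather than a keyword check.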

Results

Task                   | Dataset              | Metric  | Value | Model
-----------------------|----------------------|---------|-------|---------
Dialogue               | rt-inod-jailbreaking | Best-of | 0.92  | Baseline
Dialogue               | rt-inod-jailbreaking | Best-of | 0.91  | GPT-4
Dialogue               | rt-inod-jailbreaking | Best-of | 0.91  | Gemma
Dialogue               | rt-inod-jailbreaking | Best-of | 0.87  | Mistral
Dialogue               | rt-inod-jailbreaking | Best-of | 0.86  | Llama2
Bias Detection         | rt-inod-bias         | Best-of | 0.5   | GPT-4
Bias Detection         | rt-inod-bias         | Best-of | 0.41  | Gemma
Bias Detection         | rt-inod-bias         | Best-of | 0.41  | Baseline
Bias Detection         | rt-inod-bias         | Best-of | 0.36  | Mistral
Bias Detection         | rt-inod-bias         | Best-of | 0.34  | Llama2
Dialogue Understanding | rt-inod-jailbreaking | Best-of | 0.92  | Baseline
Dialogue Understanding | rt-inod-jailbreaking | Best-of | 0.91  | GPT-4
Dialogue Understanding | rt-inod-jailbreaking | Best-of | 0.91  | Gemma
Dialogue Understanding | rt-inod-jailbreaking | Best-of | 0.87  | Mistral
Dialogue Understanding | rt-inod-jailbreaking | Best-of | 0.86  | Llama2
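The table reports a "Best-of" value per model and dataset. The page does not define the metric; assuming it denotes taking the best judged score among several sampled responses per prompt and averaging over prompts (a common convention, not confirmed by the source), the aggregation shape would look like this:

```python
from typing import List

def best_of(scores_per_prompt: List[List[float]]) -> float:
    """Mean over prompts of the best score among the sampled responses.

    This is only a guess at what 'Best-of' aggregates; it illustrates
    the max-then-mean shape, not the paper's confirmed definition.
    """
    per_prompt_best = [max(samples) for samples in scores_per_prompt]
    return sum(per_prompt_best) / len(per_prompt_best)

# Three prompts, three sampled responses each (scores in [0, 1]):
print(round(best_of([[0.2, 0.9, 0.5], [1.0, 0.4, 0.7], [0.0, 0.6, 0.3]]), 2))  # 0.83
```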

Related Papers

Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
Training Transformers with Enforced Lipschitz Constants (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
A Multi-View High-Resolution Foot-Ankle Complex Point Cloud Dataset During Gait for Occlusion-Robust 3D Completion (2025-07-15)