Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PieAPP: Perceptual Image-Error Assessment through Pairwise Preference

Ekta Prashnani, Hong Cai, Yasamin Mostofi, Pradeep Sen

2018-06-06 · CVPR 2018 · Video Quality Assessment
Paper · PDF · Code (official)

Abstract

The ability to estimate the perceptual error between images is an important problem in computer vision with many applications. Although it has been studied extensively, no method currently exists that can robustly predict visual differences like humans. Some previous approaches used hand-coded models, but they fail to capture the complexity of the human visual system. Others used machine learning to train models on human-labeled datasets, but creating large, high-quality datasets is difficult because people are unable to assign consistent error labels to distorted images. In this paper, we present a new learning-based method that is the first to predict perceptual image error like human observers. Since it is much easier for people to compare two given images and identify the one more similar to a reference than to assign quality scores to each, we propose a new, large-scale dataset labeled with the probability that humans will prefer one image over another. We then train a deep-learning model using a novel, pairwise-learning framework to predict the preference of one distorted image over the other. Our key observation is that the trained network can then be used separately, with only one distorted image and a reference, to predict that image's perceptual error, without ever being trained on explicit human perceptual-error labels. The perceptual error estimated by our new metric, PieAPP, is well-correlated with human opinion. Furthermore, it significantly outperforms existing algorithms, beating the state-of-the-art by almost 3x on our test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning-based methods.
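The pairwise-learning idea in the abstract can be sketched in a few lines: a per-image error score s(image, reference) is trained only through a logistic (Bradley-Terry-style) preference probability between two distorted images, yet at test time the score function can be applied to a single image on its own. The sketch below is a toy stand-in, not the paper's method: PieAPP's score function is a deep CNN, whereas here a hypothetical weighted L1 difference plays that role so the pairwise logic is runnable.

```python
import numpy as np

def perceptual_error(img, ref, w):
    """Per-image error score s(img, ref). In PieAPP this is a deep
    network; a toy weighted L1 difference stands in here."""
    return float(np.sum(w * np.abs(img - ref)))

def preference_probability(img_a, img_b, ref, w):
    """Probability that a human prefers A over B, modeled as a logistic
    function of the score difference. Lower error means more preferred,
    so the logit is s_A - s_B with a sigmoid favoring the smaller score."""
    s_a = perceptual_error(img_a, ref, w)
    s_b = perceptual_error(img_b, ref, w)
    return 1.0 / (1.0 + np.exp(s_a - s_b))

# Toy example: B is a much heavier distortion of the reference than A,
# so A should be preferred with probability above 0.5.
rng = np.random.default_rng(0)
ref = rng.random((8, 8))
img_a = ref + 0.01 * rng.standard_normal((8, 8))  # mild distortion
img_b = ref + 0.50 * rng.standard_normal((8, 8))  # heavy distortion
w = np.ones((8, 8))  # hypothetical uniform weights

p_a_over_b = preference_probability(img_a, img_b, ref, w)
```

During training, only `preference_probability` is compared against human preference labels; the key observation in the abstract is that the inner score function, here `perceptual_error`, then serves directly as the single-image quality metric at test time.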

Results

Task | Dataset | Metric | Value | Model
Video Understanding | MSU SR-QA Dataset | KLCC | 0.61945 | PieAPP
Video Understanding | MSU SR-QA Dataset | PLCC | 0.75743 | PieAPP
Video Understanding | MSU SR-QA Dataset | SROCC | 0.75215 | PieAPP
Video Quality Assessment | MSU SR-QA Dataset | KLCC | 0.61945 | PieAPP
Video Quality Assessment | MSU SR-QA Dataset | PLCC | 0.75743 | PieAPP
Video Quality Assessment | MSU SR-QA Dataset | SROCC | 0.75215 | PieAPP
Video | MSU SR-QA Dataset | KLCC | 0.61945 | PieAPP
Video | MSU SR-QA Dataset | PLCC | 0.75743 | PieAPP
Video | MSU SR-QA Dataset | SROCC | 0.75215 | PieAPP

Related Papers

Bridging Video Quality Scoring and Justification via Large Multimodal Models (2025-06-26)
EyeSim-VQA: A Free-Energy-Guided Eye Simulation Framework for Video Quality Assessment (2025-06-13)
TDVE-Assessor: Benchmarking and Evaluating the Quality of Text-Driven Video Editing with LMMs (2025-05-26)
NTIRE 2025 Challenge on Video Quality Enhancement for Video Conferencing: Datasets, Methods and Results (2025-05-25)
CP-LLM: Context and Pixel Aware Large Language Model for Video Quality Assessment (2025-05-21)
Semantically-Aware Game Image Quality Assessment (2025-05-16)
Breaking Annotation Barriers: Generalized Video Quality Assessment via Ranking-based Self-Supervision (2025-05-06)
DiffVQA: Video Quality Assessment Using Diffusion Feature Extractor (2025-05-06)