Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training

Dingquan Li, Tingting Jiang, Ming Jiang

2020-11-09 · Video Quality Assessment
Paper · PDF · Code (official)

Abstract

Video quality assessment (VQA) is an important problem in computer vision. The videos in computer vision applications are usually captured in the wild. We focus on automatically assessing the quality of in-the-wild videos, which is a challenging problem due to the absence of reference videos, the complexity of distortions, and the diversity of video contents. Moreover, the video contents and distortions among existing datasets are quite different, which leads to poor performance of data-driven methods in the cross-dataset evaluation setting. To improve the performance of quality assessment models, we borrow intuitions from human perception, specifically, content dependency and temporal-memory effects of human visual system. To face the cross-dataset evaluation challenge, we explore a mixed datasets training strategy for training a single VQA model with multiple datasets. The proposed unified framework explicitly includes three stages: relative quality assessor, nonlinear mapping, and dataset-specific perceptual scale alignment, to jointly predict relative quality, perceptual quality, and subjective quality. Experiments are conducted on four publicly available datasets for VQA in the wild, i.e., LIVE-VQC, LIVE-Qualcomm, KoNViD-1k, and CVD2014. The experimental results verify the effectiveness of the mixed datasets training strategy and prove the superior performance of the unified model in comparison with the state-of-the-art models. For reproducible research, we make the PyTorch implementation of our method available at https://github.com/lidq92/MDTVSFA.
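The three stages named in the abstract (relative quality assessor → nonlinear mapping → dataset-specific perceptual scale alignment) can be sketched as a simple pipeline. This is an illustrative assumption, not the paper's implementation: the logistic nonlinearity and the per-dataset scale/shift values below are placeholder stand-ins for parameters that MDTVSFA learns jointly with the network.

```python
import numpy as np

def perceptual_mapping(q, alpha=1.0, beta=0.0):
    """Stage 2 (sketch): squash relative quality onto (0, 1) with a
    logistic curve. alpha/beta are illustrative, not learned values."""
    return 1.0 / (1.0 + np.exp(-(alpha * np.asarray(q, dtype=float) + beta)))

def align_to_dataset(p, scale, shift):
    """Stage 3 (sketch): dataset-specific linear rescaling, mapping
    perceptual quality onto one dataset's own subjective score range."""
    return scale * p + shift

# Example: the same relative scores aligned to two hypothetical MOS scales,
# which is what lets one model train on datasets with different score ranges.
rel = [-2.0, 0.0, 2.0]                    # stage-1 output (hypothetical)
perc = perceptual_mapping(rel)            # perceptual quality in (0, 1)
mos_a = align_to_dataset(perc, scale=4.0, shift=1.0)    # e.g. MOS in [1, 5]
mos_b = align_to_dataset(perc, scale=100.0, shift=0.0)  # e.g. MOS in [0, 100]
```

Because the alignment is per-dataset while the assessor and mapping are shared, a single model can be supervised with subjective scores from all four datasets at once, which is the core of the mixed-datasets training strategy.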

Results

Task                     | Dataset             | Metric | Value   | Model
Video Understanding      | MSU NR VQA Database | KLCC   | 0.7883  | MDTVSFA
Video Understanding      | MSU NR VQA Database | PLCC   | 0.9431  | MDTVSFA
Video Understanding      | MSU NR VQA Database | SRCC   | 0.9289  | MDTVSFA
Video Understanding      | MSU SR-QA Dataset   | KLCC   | 0.48406 | MDTVSFA
Video Understanding      | MSU SR-QA Dataset   | PLCC   | 0.61821 | MDTVSFA
Video Understanding      | MSU SR-QA Dataset   | SROCC  | 0.60193 | MDTVSFA
Video Quality Assessment | MSU NR VQA Database | KLCC   | 0.7883  | MDTVSFA
Video Quality Assessment | MSU NR VQA Database | PLCC   | 0.9431  | MDTVSFA
Video Quality Assessment | MSU NR VQA Database | SRCC   | 0.9289  | MDTVSFA
Video Quality Assessment | MSU SR-QA Dataset   | KLCC   | 0.48406 | MDTVSFA
Video Quality Assessment | MSU SR-QA Dataset   | PLCC   | 0.61821 | MDTVSFA
Video Quality Assessment | MSU SR-QA Dataset   | SROCC  | 0.60193 | MDTVSFA
Video                    | MSU NR VQA Database | KLCC   | 0.7883  | MDTVSFA
Video                    | MSU NR VQA Database | PLCC   | 0.9431  | MDTVSFA
Video                    | MSU NR VQA Database | SRCC   | 0.9289  | MDTVSFA
Video                    | MSU SR-QA Dataset   | KLCC   | 0.48406 | MDTVSFA
Video                    | MSU SR-QA Dataset   | PLCC   | 0.61821 | MDTVSFA
Video                    | MSU SR-QA Dataset   | SROCC  | 0.60193 | MDTVSFA
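The metrics in the table above (KLCC, PLCC, SRCC/SROCC) are the standard Kendall, Pearson, and Spearman correlations between predicted and subjective quality scores. A minimal NumPy sketch of how they are computed, using toy scores rather than the benchmark data:

```python
import numpy as np
from itertools import combinations

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def srcc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks
    (valid for distinct values; tie handling omitted for brevity)."""
    rank = lambda a: np.argsort(np.argsort(a))
    return plcc(rank(x), rank(y))

def klcc(x, y):
    """Kendall correlation: fraction of concordant minus discordant pairs."""
    n = len(x)
    s = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i, j in combinations(range(n), 2))
    return float(s) / (n * (n - 1) / 2)

# Toy predicted scores vs. subjective MOS (hypothetical values).
pred = np.array([1.2, 2.5, 3.1, 4.0, 4.8])
mos  = np.array([1.5, 2.4, 3.3, 3.9, 4.7])
p_val, s_val, k_val = plcc(pred, mos), srcc(pred, mos), klcc(pred, mos)
```

Since the toy predictions are perfectly monotone in the MOS, SRCC and KLCC are exactly 1.0 here, while PLCC additionally measures how linear the relationship is. In practice `scipy.stats.pearsonr`, `spearmanr`, and `kendalltau` are the usual implementations.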

Related Papers

Bridging Video Quality Scoring and Justification via Large Multimodal Models (2025-06-26)
EyeSim-VQA: A Free-Energy-Guided Eye Simulation Framework for Video Quality Assessment (2025-06-13)
TDVE-Assessor: Benchmarking and Evaluating the Quality of Text-Driven Video Editing with LMMs (2025-05-26)
NTIRE 2025 Challenge on Video Quality Enhancement for Video Conferencing: Datasets, Methods and Results (2025-05-25)
CP-LLM: Context and Pixel Aware Large Language Model for Video Quality Assessment (2025-05-21)
Semantically-Aware Game Image Quality Assessment (2025-05-16)
Breaking Annotation Barriers: Generalized Video Quality Assessment via Ranking-based Self-Supervision (2025-05-06)
DiffVQA: Video Quality Assessment Using Diffusion Feature Extractor (2025-05-06)