Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Deep Learning based No-reference Quality Assessment Model for UGC Videos

Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai

2022-04-29 · Video Quality Assessment · Image Quality Assessment

Paper · PDF · Code (official)

Abstract

Quality assessment of User Generated Content (UGC) videos plays an important role in ensuring the viewing experience of end users. Previous UGC video quality assessment (VQA) studies use either image recognition models or image quality assessment (IQA) models to extract frame-level features of UGC videos for quality regression, which is sub-optimal because of the domain shift between these tasks and the UGC VQA task. In this paper, we propose a very simple but effective UGC VQA model that addresses this problem by training an end-to-end spatial feature extraction network to learn quality-aware spatial feature representations directly from the raw pixels of video frames. We also extract motion features to measure temporal distortions that the spatial features cannot model. The proposed model uses very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and therefore has low computational complexity. Given these quality-aware features, we use only a simple multi-layer perceptron (MLP) network to regress them into chunk-level quality scores, and then adopt a temporal average pooling strategy to obtain the video-level quality score. We further introduce a multi-scale quality fusion strategy to handle VQA across different spatial resolutions, where the multi-scale weights are derived from the contrast sensitivity function of the human visual system. Experimental results show that the proposed model achieves the best performance on five popular UGC VQA databases, demonstrating its effectiveness. The code will be publicly available.
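The abstract describes two aggregation steps: temporal average pooling of chunk-level scores into a video-level score, and a weighted fusion of scores predicted at several spatial resolutions. The following is a minimal sketch of those two steps only; the function names and the example weights are illustrative placeholders, not the paper's actual CSF-derived values or implementation.

```python
import numpy as np

def video_score(chunk_scores):
    """Temporal average pooling: mean of the chunk-level quality scores."""
    return float(np.mean(np.asarray(chunk_scores, dtype=float)))

def multiscale_fusion(scores_per_scale, csf_weights):
    """Weighted average of per-resolution video scores.

    In the paper the weights come from the contrast sensitivity function
    of the human visual system; here they are arbitrary placeholders.
    """
    w = np.asarray(csf_weights, dtype=float)
    s = np.asarray(scores_per_scale, dtype=float)
    return float((w @ s) / w.sum())

# Example: three chunk scores pooled, then two resolutions fused.
v = video_score([3.0, 4.0, 5.0])          # -> 4.0
q = multiscale_fusion([2.0, 4.0], [1, 1])  # -> 3.0
```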

Results

Task                     | Dataset      | Metric | Value | Model
Video Understanding      | YouTube-UGC  | PLCC   | 0.856 | SimpleVQA
Video Understanding      | KoNViD-1k    | PLCC   | 0.86  | SimpleVQA
Video Understanding      | LIVE-FB LSVQ | PLCC   | 0.861 | SimpleVQA
Video Quality Assessment | YouTube-UGC  | PLCC   | 0.856 | SimpleVQA
Video Quality Assessment | KoNViD-1k    | PLCC   | 0.86  | SimpleVQA
Video Quality Assessment | LIVE-FB LSVQ | PLCC   | 0.861 | SimpleVQA
Video                    | YouTube-UGC  | PLCC   | 0.856 | SimpleVQA
Video                    | KoNViD-1k    | PLCC   | 0.86  | SimpleVQA
Video                    | LIVE-FB LSVQ | PLCC   | 0.861 | SimpleVQA
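The metric in the table, PLCC, is the Pearson linear correlation coefficient between predicted quality scores and ground-truth mean opinion scores (MOS); values closer to 1 indicate better agreement. A minimal reference computation (standard definition, not code from the paper):

```python
import numpy as np

def plcc(pred, mos):
    """Pearson linear correlation coefficient between predictions and MOS."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    pc = pred - pred.mean()   # center both series
    mc = mos - mos.mean()
    return float((pc @ mc) / (np.linalg.norm(pc) * np.linalg.norm(mc)))

# Perfectly linearly related scores give PLCC = 1.0.
r = plcc([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # -> 1.0
```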

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
DeQA-Doc: Adapting DeQA-Score to Document Image Quality Assessment (2025-07-17)
Text-Visual Semantic Constrained AI-Generated Image Quality Assessment (2025-07-14)
4KAgent: Agentic Any Image to 4K Super-Resolution (2025-07-09)
Bridging Video Quality Scoring and Justification via Large Multimodal Models (2025-06-26)
FundaQ-8: A Clinically-Inspired Scoring Framework for Automated Fundus Image Quality Assessment (2025-06-25)
MS-IQA: A Multi-Scale Feature Fusion Network for PET/CT Image Quality Assessment (2025-06-25)
Enhanced Dermatology Image Quality Assessment via Cross-Domain Training (2025-06-19)