Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HVS Revisited: A Comprehensive Video Quality Assessment Framework

Ao-Xiang Zhang, Yuan-Gen Wang, Weixuan Tang, Leida Li, Sam Kwong

2022-10-09 · Video Quality Assessment · Visual Question Answering (VQA)
Paper · PDF

Abstract

Video quality is a primary concern for video service providers. In recent years, techniques for video quality assessment (VQA) based on deep convolutional neural networks (CNNs) have developed rapidly. Although existing works attempt to introduce knowledge of the human visual system (HVS) into VQA, they still exhibit limitations that prevent the full exploitation of the HVS, including incomplete modeling that covers only a few characteristics and insufficient connections among those characteristics. To overcome these limitations, this paper revisits the HVS with five representative characteristics and further reorganizes their connections. Based on the revisited HVS, a no-reference VQA framework called HVS-5M (an NR-VQA framework with five modules simulating five HVS characteristics) is proposed. It works in a domain-fusion design paradigm with advanced network structures. On the spatial side, the visual saliency module applies SAMNet to obtain a saliency map. The content-dependency and edge masking modules then use ConvNeXt to extract spatial features, which are attentively weighted by the saliency map to highlight the regions that human beings are likely to attend to. On the temporal side, to supplement the static spatial features, the motion perception module uses SlowFast to obtain dynamic temporal features. In addition, the temporal hysteresis module applies TempHyst to simulate the human memory mechanism and evaluates the quality score from the fused spatial and temporal features. Extensive experiments show that HVS-5M outperforms state-of-the-art VQA methods. Ablation studies further verify the effectiveness of each module within the proposed framework.
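The abstract's domain-fusion design (saliency-weighted spatial features concatenated with temporal features, then regressed to a quality score) can be sketched in a few lines. This is a minimal illustration only: the array shapes, the random feature tensors, and the linear regression head are hypothetical stand-ins for the real SAMNet, ConvNeXt, SlowFast, and TempHyst components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature shapes (stand-ins for the real backbone outputs).
H, W, C_S, C_T = 7, 7, 768, 256

saliency = rng.random((H, W))          # saliency map (visual saliency module)
spatial = rng.random((H, W, C_S))      # spatial features (content-dependency / edge masking)
temporal = rng.random(C_T)             # temporal features (motion perception module)

# Attentively weight spatial features by the normalized saliency map, then pool.
weights = saliency / saliency.sum()
pooled_spatial = (spatial * weights[..., None]).sum(axis=(0, 1))   # shape (C_S,)

# Fuse spatial and temporal features into one vector.
fused = np.concatenate([pooled_spatial, temporal])                 # shape (C_S + C_T,)

# A regression head maps the fused feature to a scalar quality score
# (here just a random linear map, standing in for the learned evaluator).
head = rng.random(C_S + C_T)
score = float(head @ fused)
```

The key design point mirrored here is that the saliency map acts as an attention weighting over spatial locations before pooling, so salient regions dominate the spatial representation that gets fused with the temporal branch.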

Results

Task                     | Dataset      | Metric | Value  | Model
Video Understanding      | LIVE-VQC     | PLCC   | 0.8422 | HVS-5M
Video Understanding      | YouTube-UGC  | PLCC   | 0.8451 | HVS-5M
Video Understanding      | KoNViD-1k    | PLCC   | 0.8562 | HVS-5M
Video Understanding      | LIVE-FB LSVQ | PLCC   | 0.8723 | HVS-5M
Video Quality Assessment | LIVE-VQC     | PLCC   | 0.8422 | HVS-5M
Video Quality Assessment | YouTube-UGC  | PLCC   | 0.8451 | HVS-5M
Video Quality Assessment | KoNViD-1k    | PLCC   | 0.8562 | HVS-5M
Video Quality Assessment | LIVE-FB LSVQ | PLCC   | 0.8723 | HVS-5M
Video                    | LIVE-VQC     | PLCC   | 0.8422 | HVS-5M
Video                    | YouTube-UGC  | PLCC   | 0.8451 | HVS-5M
Video                    | KoNViD-1k    | PLCC   | 0.8562 | HVS-5M
Video                    | LIVE-FB LSVQ | PLCC   | 0.8723 | HVS-5M

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder (2025-06-28)
Bridging Video Quality Scoring and Justification via Large Multimodal Models (2025-06-26)
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images (2025-06-26)