Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency

S. Alireza Golestaneh, Saba Dadsetan, Kris M. Kitani

2021-08-16 · Image Quality Assessment · No-Reference Image Quality Assessment

Paper · PDF · Code (official)

Abstract

The goal of No-Reference Image Quality Assessment (NR-IQA) is to estimate perceptual image quality in accordance with subjective evaluations; it is a complex and unsolved problem due to the absence of a pristine reference image. In this paper, we propose a novel model that addresses the NR-IQA task with a hybrid approach benefiting from Convolutional Neural Networks (CNNs) and the self-attention mechanism of Transformers to extract both local and non-local features from the input image. We capture local structural information of the image via CNNs; then, to circumvent the locality bias of the extracted CNN features and obtain a non-local representation of the image, we feed those features to a Transformer as a sequential input. Furthermore, to improve the monotonicity correlation between the subjective and objective scores, we utilize the relative distance information among the images within each batch and enforce the relative ranking among them. Last but not least, we observe that the performance of NR-IQA models degrades when equivariant transformations (e.g., horizontal flipping) are applied to the inputs. We therefore propose a method that leverages self-consistency as a source of self-supervision to improve the robustness of NR-IQA models. Specifically, we enforce self-consistency between the outputs of our quality assessment model for each image and its transformation (horizontal flip) to utilize the rich self-supervisory information and reduce the uncertainty of the model. To demonstrate the effectiveness of our work, we evaluate it on seven standard IQA datasets (both synthetic and authentic) and show that our model achieves state-of-the-art results on various datasets.
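The abstract names three concrete ingredients: a CNN whose spatial feature map is flattened into a token sequence for a Transformer encoder, a ranking term over relative distances within a batch, and a self-consistency term between an image and its horizontal flip. The sketch below illustrates all three in PyTorch under stated assumptions: the ResNet-50 backbone, the mean-pooled scalar head, the pairwise hinge form of the ranking loss, the squared-error form of the self-consistency term, and the 0.5 loss weights are illustrative choices, not the authors' exact formulation (see the official code for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as tvm

class HybridIQA(nn.Module):
    """CNN + Transformer quality model in the spirit of TReS (illustrative sizes)."""
    def __init__(self, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        resnet = tvm.resnet50(weights=None)
        # Keep everything up to the last conv stage: output is (B, 2048, H/32, W/32).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)  # channel projection
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # scalar quality score

    def forward(self, x):
        f = self.proj(self.backbone(x))               # local CNN features
        tokens = f.flatten(2).transpose(1, 2)         # (B, H*W, d_model) token sequence
        z = self.encoder(tokens)                      # non-local features via self-attention
        return self.head(z.mean(dim=1)).squeeze(-1)   # pooled -> predicted score

def relative_ranking_loss(pred, mos, margin=0.05):
    """Pairwise hinge: whenever mos_i > mos_j, ask for pred_i > pred_j + margin.
    (A generic stand-in for the paper's relative-ranking term; ties are skipped.)"""
    sign = torch.sign(mos.unsqueeze(1) - mos.unsqueeze(0))  # (B, B) in {-1, 0, +1}
    hinge = F.relu(margin - sign * (pred.unsqueeze(1) - pred.unsqueeze(0)))
    mask = sign != 0
    return hinge[mask].mean() if mask.any() else pred.new_zeros(())

def self_consistency_loss(model, x):
    """Penalize disagreement between an image's score and its horizontal flip's score."""
    return torch.mean((model(x) - model(torch.flip(x, dims=[-1]))) ** 2)

# One illustrative training step combining regression, ranking, and self-consistency.
model = HybridIQA()
x = torch.randn(8, 3, 224, 224)   # a batch of images
mos = torch.rand(8)               # subjective scores (e.g., normalized MOS)
pred = model(x)
loss = (F.l1_loss(pred, mos)
        + 0.5 * relative_ranking_loss(pred, mos)    # weights here are placeholders
        + 0.5 * self_consistency_loss(model, x))
loss.backward()
```

Note that both auxiliary terms need no extra labels: the ranking term reuses the batch's subjective scores only for their ordering, and the consistency term compares the model against itself on a flipped input, which is what makes it a self-supervised signal.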

Results

MSU SR-QA Dataset (Video Understanding / Video Quality Assessment)

Model | KLCC | PLCC | SROCC
TReS trained on KONIQ | 0.49004 | 0.56226 | 0.62578
TReS | 0.48901 | 0.56277 | 0.62496
TReS trained on FLIVE | 0.39398 | 0.50005 | 0.48882

Image Quality Assessment / No-Reference Image Quality Assessment (model: TReS)

Dataset | PLCC | SRCC
KADID-10k | 0.858 | 0.859
TID2013 | 0.883 | 0.863
CSIQ | 0.942 | 0.922

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Language Integration in Fine-Tuning Multimodal Large Language Models for Image-Based Regression (2025-07-20)
DeQA-Doc: Adapting DeQA-Score to Document Image Quality Assessment (2025-07-17)
Text-Visual Semantic Constrained AI-Generated Image Quality Assessment (2025-07-14)
4KAgent: Agentic Any Image to 4K Super-Resolution (2025-07-09)
FundaQ-8: A Clinically-Inspired Scoring Framework for Automated Fundus Image Quality Assessment (2025-06-25)
MS-IQA: A Multi-Scale Feature Fusion Network for PET/CT Image Quality Assessment (2025-06-25)
Enhanced Dermatology Image Quality Assessment via Cross-Domain Training (2025-06-19)