Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Image Quality Assessment: Unifying Structure and Texture Similarity

Keyan Ding, Kede Ma, Shiqi Wang, Eero P. Simoncelli

Published: 2020-04-16
Tasks: Video Quality Assessment, Image Quality Assessment, Retrieval

Abstract

Objective measures of image quality generally operate by comparing pixels of a "degraded" image to those of the original. Relative to human observers, these measures are overly sensitive to resampling of texture regions (e.g., replacing one patch of grass with another). Here, we develop the first full-reference image quality model with explicit tolerance to texture resampling. Using a convolutional neural network, we construct an injective and differentiable function that transforms images to multi-scale overcomplete representations. We demonstrate empirically that the spatial averages of the feature maps in this representation capture texture appearance, in that they provide a set of sufficient statistical constraints to synthesize a wide variety of texture patterns. We then describe an image quality method that combines correlations of these spatial averages ("texture similarity") with correlations of the feature maps ("structure similarity"). The parameters of the proposed measure are jointly optimized to match human ratings of image quality, while minimizing the reported distances between subimages cropped from the same texture images. Experiments show that the optimized method explains human perceptual scores, both on conventional image quality databases, as well as on texture databases. The measure also offers competitive performance on related tasks such as texture classification and retrieval. Finally, we show that our method is relatively insensitive to geometric transformations (e.g., translation and dilation), without use of any specialized training or data augmentation. Code is available at https://github.com/dingkeyan93/DISTS.
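The abstract describes combining a "texture similarity" term, computed from the spatial averages of feature maps, with a "structure similarity" term, computed from correlations of the feature maps themselves. The following is a minimal illustrative sketch of that SSIM-style combination in NumPy; it is not the official DISTS implementation (which uses learned per-channel weights on VGG features, available at the linked repository), and the uniform weights and stability constants here are assumptions.

```python
import numpy as np

def dists_like_score(feats_x, feats_y, alpha=0.5, beta=0.5, c1=1e-6, c2=1e-6):
    """Illustrative DISTS-style quality score.

    feats_x, feats_y: lists of 2-D feature maps (H, W) for the reference
    and the degraded image. Uniform weights (alpha, beta) are an
    assumption; the paper jointly optimizes per-channel weights to match
    human quality ratings.
    """
    score = 0.0
    for x, y in zip(feats_x, feats_y):
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov_xy = ((x - mu_x) * (y - mu_y)).mean()
        # "Texture" term: compares spatial averages (global statistics),
        # so resampled texture with matching statistics scores highly.
        l = (2 * mu_x * mu_y + c1) / (mu_x**2 + mu_y**2 + c1)
        # "Structure" term: compares normalized feature correlations.
        s = (2 * cov_xy + c2) / (var_x + var_y + c2)
        score += (alpha * l + beta * s) / len(feats_x)
    return score
```

With identical feature maps the score is 1 (perfect quality); lower values indicate larger perceptual distance.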

Results

Task                     | Dataset            | Metric | Value   | Model
-------------------------|--------------------|--------|---------|------
Video Understanding      | MSU SR-QA Dataset  | KLCC   | 0.4232  | DISTS
Video Understanding      | MSU SR-QA Dataset  | PLCC   | 0.55042 | DISTS
Video Understanding      | MSU SR-QA Dataset  | SROCC  | 0.53346 | DISTS
Video Quality Assessment | MSU SR-QA Dataset  | KLCC   | 0.4232  | DISTS
Video Quality Assessment | MSU SR-QA Dataset  | PLCC   | 0.55042 | DISTS
Video Quality Assessment | MSU SR-QA Dataset  | SROCC  | 0.53346 | DISTS
Video                    | MSU SR-QA Dataset  | KLCC   | 0.4232  | DISTS
Video                    | MSU SR-QA Dataset  | PLCC   | 0.55042 | DISTS
Video                    | MSU SR-QA Dataset  | SROCC  | 0.53346 | DISTS
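The metrics above (PLCC, SROCC, and KLCC, i.e. Kendall rank correlation) all measure agreement between objective quality scores and human mean opinion scores. A minimal sketch of how they are typically computed with SciPy, assuming paired 1-D arrays of model scores and human ratings:

```python
from scipy import stats

def correlation_metrics(predicted, mos):
    """PLCC, SROCC, and Kendall correlation between model scores and
    human mean opinion scores (both 1-D sequences of equal length)."""
    plcc = stats.pearsonr(predicted, mos)[0]    # linear correlation
    srocc = stats.spearmanr(predicted, mos)[0]  # rank-order correlation
    klcc = stats.kendalltau(predicted, mos)[0]  # Kendall rank correlation
    return plcc, srocc, klcc
```

PLCC is often reported after a nonlinear (e.g. logistic) regression of predictions onto ratings; that fitting step is omitted here for brevity.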

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- DeQA-Doc: Adapting DeQA-Score to Document Image Quality Assessment (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
- Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)