Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


The Devil Is in the Details: Window-based Attention for Image Compression

Renjie Zou, Chunfeng Song, Zhaoxiang Zhang

2022-03-16 · CVPR 2022 · Image Compression
Paper · PDF · Code (official) · Code

Abstract

Learned image compression methods have exhibited superior rate-distortion performance to classical image compression standards. Most existing learned image compression models are based on Convolutional Neural Networks (CNNs). Despite their great contributions, a main drawback of CNN-based models is that their structure is not designed to capture local redundancy, especially non-repetitive textures, which severely affects reconstruction quality. Therefore, how to make full use of both the global structure and local textures becomes the core problem for learning-based image compression. Inspired by recent progress on the Vision Transformer (ViT) and Swin Transformer, we find that combining a local-aware attention mechanism with global-related feature learning could meet this expectation in image compression. In this paper, we first extensively study the effects of multiple kinds of attention mechanisms for local feature learning, then introduce a more straightforward yet effective window-based local attention block. The proposed window-based attention is very flexible: it can work as a plug-and-play component to enhance CNN and Transformer models. Moreover, we propose a novel Symmetrical TransFormer (STF) framework with absolute transformer blocks in the down-sampling encoder and up-sampling decoder. Extensive experimental evaluations show that the proposed method is effective and outperforms state-of-the-art methods. The code is publicly available at https://github.com/Googolxx/STF.
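The key structural idea behind window-based attention is to restrict attention to non-overlapping local windows of the feature map, so each window attends only within itself. A minimal sketch of that window partition step is below; the helper name and the plain list-of-lists representation are illustrative assumptions (the official code at https://github.com/Googolxx/STF operates on tensors instead):

```python
def window_partition(feature_map, window_size):
    """Split an H x W grid (list of lists) into non-overlapping
    window_size x window_size blocks, returned in row-major order.

    Attention computed independently inside each block is what makes
    the mechanism "local-aware": its cost grows with the window size,
    not with the full image resolution.
    """
    h = len(feature_map)
    w = len(feature_map[0])
    assert h % window_size == 0 and w % window_size == 0, \
        "sketch assumes the grid divides evenly into windows"
    windows = []
    for top in range(0, h, window_size):
        for left in range(0, w, window_size):
            block = [row[left:left + window_size]
                     for row in feature_map[top:top + window_size]]
            windows.append(block)
    return windows
```

For a 4x4 grid and `window_size=2`, this yields four 2x2 windows; self-attention would then run within each window independently, which is why the block can be dropped into either a CNN or a Transformer backbone as a plug-and-play component.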

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Compression | Kodak | BD-Rate over VTM-17.0 (%) | -2.95 | WACNN |
| Image Compression | Kodak | BD-Rate over VTM-17.0 (%) | -2.48 | STF |
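BD-Rate measures the average bitrate difference between two codecs at equal quality, so a negative value means bitrate savings over the VTM-17.0 anchor. The standard Bjøntegaard metric fits a cubic polynomial to each log-rate/PSNR curve and integrates; the sketch below substitutes piecewise-linear interpolation for the cubic fit, so it is a simplified approximation, not the exact metric, and the function names and sample data are hypothetical:

```python
import math

def _interp(x, xs, ys):
    # Piecewise-linear interpolation; xs must be ascending.
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside the data range")

def bd_rate_approx(rates_ref, psnr_ref, rates_test, psnr_test, samples=100):
    """Approximate BD-Rate (%) of a test codec against a reference.

    Averages the log10-rate gap over the overlapping PSNR range, then
    converts the average back to a percentage bitrate difference.
    Negative result = the test codec needs fewer bits at equal PSNR.
    """
    log_ref = [math.log10(r) for r in rates_ref]
    log_test = [math.log10(r) for r in rates_test]
    lo = max(psnr_ref[0], psnr_test[0])   # overlap of the two curves
    hi = min(psnr_ref[-1], psnr_test[-1])
    total = 0.0
    for k in range(samples + 1):
        q = lo + (hi - lo) * k / samples
        total += _interp(q, psnr_test, log_test) - _interp(q, psnr_ref, log_ref)
    avg = total / (samples + 1)
    return (10 ** avg - 1) * 100
```

As a sanity check, a codec that needs exactly half the bitrate of the reference at every PSNR point comes out at -50%.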

Related Papers

- Perception-Oriented Latent Coding for High-Performance Compressed Domain Semantic Inference (2025-07-02)
- Explicit Residual-Based Scalable Image Coding for Humans and Machines (2025-06-24)
- NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis (2025-06-23)
- LVPNet: A Latent-variable-based Prediction-driven End-to-end Framework for Lossless Compression of Medical Images (2025-06-22)
- DiffO: Single-step Diffusion for Image Compression at Ultra-Low Bitrates (2025-06-19)
- Fast Training-free Perceptual Image Compression (2025-06-19)
- ABC: Adaptive BayesNet Structure Learning for Computational Scalable Multi-task Image Compression (2025-06-18)
- Breaking the Multi-Enhancement Bottleneck: Domain-Consistent Quality Enhancement for Compressed Images (2025-06-17)