Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression

Wei Jiang, Jiayu Yang, Yongqi Zhai, Feng Gao, Ronggang Wang

2023-07-28 · Image Compression
Paper · PDF · Code (official)

Abstract

Recently, learned image compression has achieved impressive performance. The entropy model, which estimates the distribution of the latent representation, plays a crucial role in enhancing rate-distortion performance. However, existing global context modules rely on computationally intensive quadratic-complexity computations to capture global correlations. This quadratic complexity imposes limitations on the potential of high-resolution image coding. Moreover, effectively capturing local, global, and channel-wise contexts with acceptable, ideally linear, complexity within a single entropy model remains a challenge. To address these limitations, we propose the Linear Complexity Multi-Reference Entropy Model (MEM++). MEM++ effectively captures the diverse range of correlations inherent in the latent representation. Specifically, the latent representation is first divided into multiple slices. When compressing a particular slice, the previously compressed slices serve as its channel-wise contexts. To capture local contexts without sacrificing performance, we introduce a novel checkerboard attention module. Additionally, to capture global contexts, we propose linear-complexity attention-based global-correlation capturing by leveraging the decomposition of the softmax operation. The attention map of the previously decoded slice is implicitly computed and employed to predict global correlations in the current slice. Based on MEM++, we propose the image compression model MLIC++. Extensive experimental evaluations demonstrate that our MLIC++ achieves state-of-the-art performance, reducing BD-rate by 13.39% on the Kodak dataset compared to VTM-17.0 in PSNR. Furthermore, MLIC++ exhibits linear GPU memory consumption with resolution, making it highly suitable for high-resolution image coding. Code and pre-trained models are available at https://github.com/JiangWeibeta/MLIC.
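The abstract's key complexity claim rests on decomposing the softmax so that attention can be computed in linear rather than quadratic time. A minimal sketch of that general idea follows; the `elu(x)+1`-style feature map and all function names here are illustrative assumptions, not the paper's actual MEM++ module.

```python
# Sketch of linear-complexity attention via a kernel feature map phi,
# replacing softmax(QK^T)V (O(N^2 d)) with phi(Q)(phi(K)^T V) (O(N d^2)).
# The feature map below (elu(x)+1) is a common illustrative choice, NOT
# necessarily the decomposition used by MLIC++.
import numpy as np

def feature_map(x):
    # A positive feature map phi(x); here elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    Qf, Kf = feature_map(Q), feature_map(K)      # (N, d) each
    kv = Kf.T @ V                                # (d, d) summary, O(N d^2)
    z = Qf @ Kf.sum(axis=0, keepdims=True).T     # (N, 1) normalizer
    return (Qf @ kv) / (z + 1e-6)                # (N, d), linear in N

rng = np.random.default_rng(0)
N, d = 64, 8
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (64, 8)
```

Because the `(d, d)` summary `kv` is independent of sequence length, memory and compute grow linearly with the number of tokens (here, latent positions), which matches the linear GPU memory behavior the abstract reports for high resolutions.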

Results

Task | Dataset | Metric | Value | Model
Image Compression | Kodak | BD-Rate over VTM-17.0 | -13.39% | MLIC++
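The reported metric is the Bjøntegaard delta rate: a negative BD-rate means the model needs less bitrate than the anchor (here VTM-17.0) at equal PSNR. A hedged sketch of the standard BD-rate computation is below; the rate-distortion points are made-up illustrative numbers, not results from the paper.

```python
# Sketch of the Bjontegaard delta-rate (BD-rate) metric: fit log-rate as a
# cubic polynomial of PSNR for anchor and test codecs, then average the
# log-rate gap over the overlapping PSNR range. RD points are illustrative.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    pa = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    pt = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_log_diff = (it - ia) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100.0  # percent rate change

# Made-up RD points (bpp, PSNR dB): test uses 10% less rate at each quality.
anchor = ([0.2, 0.4, 0.8, 1.6], [30.0, 33.0, 36.0, 39.0])
test = ([0.18, 0.36, 0.72, 1.44], [30.0, 33.0, 36.0, 39.0])
print(round(bd_rate(*anchor, *test), 2))  # -10.0
```

A uniform 10% rate reduction at every quality point yields a BD-rate of exactly -10%, which is why the table's -13.39% reads as roughly 13.4% average bitrate savings over VTM-17.0.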

Related Papers

Perception-Oriented Latent Coding for High-Performance Compressed Domain Semantic Inference (2025-07-02)
Explicit Residual-Based Scalable Image Coding for Humans and Machines (2025-06-24)
NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis (2025-06-23)
LVPNet: A Latent-variable-based Prediction-driven End-to-end Framework for Lossless Compression of Medical Images (2025-06-22)
DiffO: Single-step Diffusion for Image Compression at Ultra-Low Bitrates (2025-06-19)
Fast Training-free Perceptual Image Compression (2025-06-19)
ABC: Adaptive BayesNet Structure Learning for Computational Scalable Multi-task Image Compression (2025-06-18)
Breaking the Multi-Enhancement Bottleneck: Domain-Consistent Quality Enhancement for Compressed Images (2025-06-17)