Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MLIC: Multi-Reference Entropy Model for Learned Image Compression

Wei Jiang, Jiayu Yang, Yongqi Zhai, Peirong Ning, Feng Gao, Ronggang Wang

2022-11-14 · Image Compression
Paper · PDF · Code (official)

Abstract

Recently, learned image compression has achieved remarkable performance. The entropy model, which estimates the distribution of the latent representation, plays a crucial role in boosting rate-distortion performance. However, most entropy models only capture correlations in one dimension, while the latent representation contains channel-wise, local spatial, and global spatial correlations. To tackle this issue, we propose the Multi-Reference Entropy Model (MEM) and its advanced version, MEM$^+$. These models capture the different types of correlations present in the latent representation. Specifically, we first divide the latent representation into slices. When decoding the current slice, we use previously decoded slices as context and employ the attention map of the previously decoded slice to predict global correlations in the current slice. To capture local contexts, we introduce two enhanced checkerboard context capturing techniques that avoid performance degradation. Based on MEM and MEM$^+$, we propose the image compression models MLIC and MLIC$^+$. Extensive experimental evaluations demonstrate that our MLIC and MLIC$^+$ models achieve state-of-the-art performance, reducing BD-rate by $8.05\%$ and $11.39\%$ on the Kodak dataset compared to VTM-17.0 when measured in PSNR. Our code is available at https://github.com/JiangWeibeta/MLIC.
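The "checkerboard context" the abstract refers to splits each latent slice spatially into two interleaved groups: anchor positions are decoded first (without local spatial context), then non-anchor positions are decoded conditioned on their already-decoded neighbors, so a whole slice needs only two passes instead of a sequential raster scan. The sketch below is illustrative only, not MLIC's implementation (which lives in the linked repository); the function name and shapes are hypothetical.

```python
import numpy as np

def checkerboard_masks(h, w):
    """Build the two complementary spatial masks of a checkerboard
    context model (hypothetical helper, not from the MLIC repo):
    'anchor' positions are decoded in pass 1 without local spatial
    context; 'non-anchor' positions are decoded in pass 2, each
    conditioned on its four decoded anchor neighbors."""
    grid = np.indices((h, w)).sum(axis=0) % 2  # (i + j) parity
    anchor = grid == 0
    non_anchor = ~anchor
    return anchor, non_anchor

anchor, non_anchor = checkerboard_masks(4, 4)
# The two passes partition the slice: every position is decoded exactly once.
assert np.all(anchor ^ non_anchor)
```

Because the two masks partition the spatial grid, the decoder's latency is two parallel passes per slice rather than one serial step per position, which is why checkerboard-style contexts are attractive despite using less spatial context than autoregressive models.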

Results

Task              | Dataset | Metric                 | Value  | Model
Image Compression | Kodak   | BD-Rate over VTM-17.0  | -11.39 | MLIC+
Image Compression | Kodak   | BD-Rate over VTM-17.0  | -8.05  | MLIC
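BD-rate (Bjøntegaard delta rate) is the metric in the table above: the average bitrate change, in percent, of a codec relative to a reference (here VTM-17.0) at equal PSNR, computed by fitting log-rate as a cubic polynomial of PSNR for each codec and comparing the integrals over the overlapping quality range. A minimal sketch of the standard computation, with made-up rate-distortion points for the usage check:

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Bjontegaard delta rate: average % bitrate difference of the
    test codec vs. the reference at equal PSNR. Fits log(rate) as a
    cubic polynomial of PSNR and integrates over the shared range."""
    p_ref = np.polyfit(psnr_ref, np.log(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))   # overlapping PSNR interval
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

# Hypothetical RD points: the test codec spends 10% fewer bits at every PSNR,
# so the BD-rate should come out to -10%.
psnr = [30.0, 33.0, 36.0, 39.0]
rate_ref = [0.20, 0.40, 0.65, 1.00]          # bits per pixel (made up)
rate_test = [0.9 * r for r in rate_ref]
print(round(bd_rate(rate_ref, psnr, rate_test, psnr), 2))
```

A negative BD-rate means fewer bits for the same quality, so the -11.39 for MLIC+ in the table reads as an 11.39% average bitrate saving over VTM-17.0 on Kodak.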

Related Papers

Perception-Oriented Latent Coding for High-Performance Compressed Domain Semantic Inference (2025-07-02)
Explicit Residual-Based Scalable Image Coding for Humans and Machines (2025-06-24)
NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis (2025-06-23)
LVPNet: A Latent-variable-based Prediction-driven End-to-end Framework for Lossless Compression of Medical Images (2025-06-22)
DiffO: Single-step Diffusion for Image Compression at Ultra-Low Bitrates (2025-06-19)
Fast Training-free Perceptual Image Compression (2025-06-19)
ABC: Adaptive BayesNet Structure Learning for Computational Scalable Multi-task Image Compression (2025-06-18)
Breaking the Multi-Enhancement Bottleneck: Domain-Consistent Quality Enhancement for Compressed Images (2025-06-17)