Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Mesoscopic Insights: Orchestrating Multi-scale & Hybrid Architecture for Image Manipulation Localization

Xuekang Zhu, Xiaochen Ma, Lei Su, Zhuohang Jiang, Bo Du, Xiwen Wang, Zeyu Lei, Wentao Feng, Chi-Man Pun, Jizhe Zhou

2024-12-18 · Image Manipulation Localization · Image Manipulation
Paper · PDF · Code (official)

Abstract

The mesoscopic level serves as a bridge between the macroscopic and microscopic worlds, addressing gaps overlooked by both. Image manipulation localization (IML), a crucial technique to pursue truth from fake images, has long relied on low-level (microscopic-level) traces. However, in practice, most tampering aims to deceive the audience by altering image semantics. As a result, manipulation commonly occurs at the object level (macroscopic level), which is equally important as microscopic traces. Therefore, integrating these two levels into the mesoscopic level presents a new perspective for IML research. Inspired by this, our paper explores how to simultaneously construct mesoscopic representations of micro and macro information for IML and introduces the Mesorch architecture to orchestrate both. Specifically, this architecture i) combines Transformers and CNNs in parallel, with Transformers extracting macro information and CNNs capturing micro details, and ii) explores across different scales, assessing micro and macro information seamlessly. Additionally, based on the Mesorch architecture, the paper introduces two baseline models aimed at solving IML tasks through mesoscopic representation. Extensive experiments across four datasets have demonstrated that our models surpass the current state-of-the-art in terms of performance, computational complexity, and robustness.
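The parallel two-branch idea from the abstract can be illustrated with a toy sketch. This is not the authors' code: the 2x2 block averaging standing in for the Transformer's macro (object-level) view, the high-frequency residual standing in for the CNN's micro (trace-level) view, the fusion weights, and the 4x4 "image" are all illustrative assumptions. The point is only the orchestration: two branches run in parallel on the same input and their outputs are fused into one mesoscopic prediction map.

```python
# Schematic sketch of the Mesorch orchestration idea (illustrative only).
# "Macro" branch: coarse, object-level structure (here: 2x2 block means,
# standing in for a Transformer's semantic view).
# "Micro" branch: fine, low-level residue (here: pixel minus block mean,
# standing in for a CNN's trace-level view).
# The two run in parallel and are fused elementwise.

def macro_branch(img):
    """Coarse semantics: replace each pixel with the mean of its 2x2 block."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = [img[i + di][j + dj] for di in range(2) for dj in range(2)]
            mean = sum(block) / 4.0
            for di in range(2):
                for dj in range(2):
                    out[i + di][j + dj] = mean
    return out

def micro_branch(img):
    """Fine residue: high-frequency component = pixel minus its block mean."""
    macro = macro_branch(img)
    return [[img[i][j] - macro[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

def mesoscopic_fuse(img, w_macro=0.5, w_micro=0.5):
    """Weighted elementwise fusion of the two parallel branches."""
    ma, mi = macro_branch(img), micro_branch(img)
    return [[w_macro * ma[i][j] + w_micro * mi[i][j]
             for j in range(len(img[0]))]
            for i in range(len(img))]

img = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.9, 1.0, 1.0],  # 0.9: a subtle local edit the micro branch flags
    [0.0, 0.0, 1.0, 1.0],
]
fused = mesoscopic_fuse(img)
```

In the real architecture both branches are learned and operate at multiple scales, and the fusion is adaptive rather than fixed weights; the sketch only shows how macro and micro evidence are combined per pixel rather than used in isolation.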

Results

Task | Dataset | Metric | Value | Model
Image Manipulation Localization | Columbia (Protocol-CAT) | Pixel Binary F1 | 0.9224 | Mesorch
Image Manipulation Localization | NIST16 (Protocol-CAT) | Pixel Binary F1 | 0.3921 | Mesorch
Image Manipulation Localization | CASIAv1 (Protocol-CAT) | Pixel Binary F1 | 0.8397 | Mesorch
Image Manipulation Localization | COVERAGE (Protocol-CAT) | Pixel Binary F1 | 0.5862 | Mesorch

Related Papers

- Beyond Fully Supervised Pixel Annotations: Scribble-Driven Weakly-Supervised Framework for Image Manipulation Localization (2025-07-17)
- Towards Reliable Identification of Diffusion-based Image Manipulations (2025-06-05)
- UniWorld-V1: High-Resolution Semantic Encoders for Unified Visual Understanding and Generation (2025-06-03)
- Weakly-supervised Localization of Manipulated Image Regions Using Multi-resolution Learned Features (2025-05-29)
- RBench-V: A Primary Assessment for Visual Reasoning Models with Multi-modal Outputs (2025-05-22)
- My Face Is Mine, Not Yours: Facial Protection Against Diffusion Model Face Swapping (2025-05-21)
- Visual Agentic Reinforcement Fine-Tuning (2025-05-20)
- Emerging Properties in Unified Multimodal Pretraining (2025-05-20)