Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution

Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Andreea-Iuliana Miron, Olivian Savencu, Nicolae-Catalin Ristea, Nicolae Verga, Fahad Shahbaz Khan

2022-04-08 · Super-Resolution · Computed Tomography (CT) · Image Super-Resolution
Paper | PDF | Code (official)

Abstract

Super-resolving medical images can help physicians in providing more accurate diagnostics. In many situations, computed tomography (CT) or magnetic resonance imaging (MRI) techniques capture several scans (modes) during a single investigation, which can jointly be used (in a multimodal fashion) to further boost the quality of super-resolution results. To this end, we propose a novel multimodal multi-head convolutional attention module to super-resolve CT and MRI scans. Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple concatenated input tensors, where the kernel (receptive field) size controls the reduction rate of the spatial attention, and the number of convolutional filters controls the reduction rate of the channel attention. We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention. We integrate our multimodal multi-head convolutional attention (MMHCA) into two deep neural architectures for super-resolution and conduct experiments on three data sets. Our empirical results show the superiority of our attention module over the state-of-the-art attention mechanisms used in super-resolution. Moreover, we conduct an ablation study to assess the impact of the components involved in our attention module, e.g. the number of inputs or the number of heads. Our code is freely available at https://github.com/lilygeorgescu/MHCA.
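The mechanism the abstract describes can be sketched in a toy form. The snippet below is an illustrative simplification under stated assumptions, not the authors' implementation (see their repository for that): it concatenates the modality tensors along the channel axis; for each head it computes a spatial attention map with a head-specific kernel size (a fixed averaging kernel stands in for a learned convolution) and a squeeze-and-expand channel attention (random fixed weights stand in for learned ones); the heads are then averaged into a gate that multiplies the input. The function and parameter names (`mmhca_toy`, `reduction`) are hypothetical.

```python
import numpy as np


def same_conv2d(x, k):
    """k x k averaging convolution with 'same' padding (toy stand-in
    for a learned convolution)."""
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p)), mode="edge")
    out = np.zeros_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def mmhca_toy(inputs, kernel_sizes=(3, 5, 7), reduction=4):
    """Toy multimodal multi-head convolutional attention.

    inputs: list of (C, H, W) arrays, one per modality; they are
    concatenated on the channel axis. Each head uses a distinct kernel
    size for its spatial attention; channel attention squeezes the
    channel descriptor by `reduction` and re-expands it. Random fixed
    weights replace the learned ones of the real module.
    """
    x = np.concatenate(inputs, axis=0)            # (C_total, H, W)
    C, H, W = x.shape
    rng = np.random.default_rng(0)
    gate = np.zeros_like(x)
    for k in kernel_sizes:
        # Spatial attention: convolve the channel-mean map with a k x k kernel,
        # so k controls the spatial receptive field of this head.
        spatial = sigmoid(same_conv2d(x.mean(axis=0), k))      # (H, W)
        # Channel attention: squeeze C -> C // reduction -> C.
        desc = x.mean(axis=(1, 2))                             # (C,)
        W1 = rng.standard_normal((C // reduction, C)) / np.sqrt(C)
        W2 = rng.standard_normal((C, C // reduction)) / np.sqrt(C // reduction)
        channel = sigmoid(W2 @ np.maximum(W1 @ desc, 0.0))     # (C,)
        gate += channel[:, None, None] * spatial[None, :, :]
    gate /= len(kernel_sizes)                     # average the heads
    return x * gate                               # gated (attended) features


# Two fake 4-channel modalities of an 8 x 8 scan patch.
out = mmhca_toy([np.ones((4, 8, 8)), np.ones((4, 8, 8))])
print(out.shape)  # (8, 8, 8): channels of both modalities, jointly attended
```

In the real module the attention output is fed back into the super-resolution backbone (EDSR in the paper's experiments); here the point is only how kernel size and filter count set the spatial and channel reduction rates per head.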

Results

| Task                   | Dataset | Metric      | Value  | Model      |
|------------------------|---------|-------------|--------|------------|
| Super-Resolution       | IXI     | PSNR 2x T2w | 40.43  | EDSR+MMHCA |
| Super-Resolution       | IXI     | PSNR 4x T2w | 32.7   | EDSR+MMHCA |
| Super-Resolution       | IXI     | SSIM 4x T2w | 0.9469 | EDSR+MMHCA |
| Super-Resolution       | IXI     | SSIM 2x T2w | 0.9877 | EDSR+MMHCA |
| Image Super-Resolution | IXI     | PSNR 2x T2w | 40.43  | EDSR+MMHCA |
| Image Super-Resolution | IXI     | PSNR 4x T2w | 32.7   | EDSR+MMHCA |
| Image Super-Resolution | IXI     | SSIM 4x T2w | 0.9469 | EDSR+MMHCA |
| Image Super-Resolution | IXI     | SSIM 2x T2w | 0.9877 | EDSR+MMHCA |

Related Papers

SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)
IM-LUT: Interpolation Mixing Look-Up Tables for Image Super-Resolution (2025-07-14)
PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution (2025-07-12)
HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation (2025-07-10)
4KAgent: Agentic Any Image to 4K Super-Resolution (2025-07-09)
Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration (2025-07-08)