Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets

Kenji Enomoto, Ken Sakurada, Weimin Wang, Hiroshi Fukui, Masashi Matsuoka, Ryosuke Nakamura, Nobuo Kawaguchi

2017-10-13 · Cloud Removal
Paper · PDF

Abstract

In this paper, we propose a method for cloud removal from visible light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images have been widely utilized for various purposes, such as natural environment monitoring (pollution, forests, or rivers), transportation improvement, and prompt emergency response to disasters. However, the obscurity caused by clouds makes it unreliable to monitor the situation on the ground with a visible light camera. Images captured at longer wavelengths have been introduced to reduce the effects of clouds; Synthetic Aperture Radar (SAR) is one such example, improving visibility even when clouds are present. On the other hand, spatial resolution decreases as wavelength increases, and images captured at long wavelengths differ considerably from those captured by visible light in their appearance. Therefore, we propose a network that removes clouds and generates visible light images from multispectral images taken as inputs. This is achieved by extending the input channels of cGANs to be compatible with multispectral images. The networks are trained to output images close to the ground truth, using as inputs images synthesized by overlaying clouds on the ground truth. In the available dataset, the proportion of forest and sea images is very high, which introduces bias if the training set is uniformly sampled from the original data. Thus, we utilize t-Distributed Stochastic Neighbor Embedding (t-SNE) to mitigate this bias in the training dataset. Finally, we confirm the feasibility of the proposed network on a dataset of four-band images, comprising three visible light bands and one near-infrared (NIR) band.
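The abstract's core architectural change is widening the cGAN's input from 3 RGB channels to 4 (RGB + NIR). A minimal numpy sketch of that idea is below; it is not the authors' implementation, and the function name and mean-filter initialization are illustrative assumptions. It shows how a first convolution layer's weight tensor can be extended with an extra input channel so a network designed for RGB accepts a multispectral input.

```python
import numpy as np

def extend_input_channels(weights: np.ndarray, extra: int) -> np.ndarray:
    """Widen a conv layer's input from C to C + extra channels.

    `weights` is shaped (out_channels, in_channels, kH, kW). The new
    channels are initialized with the mean of the existing per-channel
    filters, so the layer's initial response stays on a similar scale.
    """
    mean_filter = weights.mean(axis=1, keepdims=True)   # (out, 1, kH, kW)
    new = np.repeat(mean_filter, extra, axis=1)         # (out, extra, kH, kW)
    return np.concatenate([weights, new], axis=1)       # (out, C+extra, kH, kW)

# First conv layer of a generator built for RGB (3-channel) input.
rgb_weights = np.random.randn(64, 3, 4, 4).astype(np.float32)

# Extend it to accept a 4-band (RGB + NIR) multispectral input.
mcgan_weights = extend_input_channels(rgb_weights, extra=1)
print(mcgan_weights.shape)  # (64, 4, 4, 4)
```

The same one-line change to the discriminator's input layer makes the whole cGAN pair compatible with multispectral conditioning images.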
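The abstract also notes that forest and sea patches dominate the dataset, and that t-SNE is used to mitigate this sampling bias. One plausible reading, sketched below under stated assumptions, is to embed image patches into 2-D (the t-SNE step itself is omitted here; the coordinates are assumed precomputed), bin the embedding into a grid, and cap the number of samples drawn per cell so dense clusters cannot dominate the training set. The function name and grid-capping scheme are illustrative, not the paper's exact procedure.

```python
import numpy as np

def balanced_sample(embedding: np.ndarray, n_bins: int = 10,
                    per_bin: int = 5, seed: int = 0) -> np.ndarray:
    """Return indices sampled roughly uniformly over a 2-D embedding.

    `embedding` is an (N, 2) array, e.g. t-SNE coordinates of image
    patches. Over-represented regions (dense clusters such as forest
    or sea patches) contribute at most `per_bin` samples per grid cell.
    """
    rng = np.random.default_rng(seed)
    mins, maxs = embedding.min(axis=0), embedding.max(axis=0)
    # Map each point to a grid cell index (the 1e-9 keeps max points in range).
    cells = np.floor((embedding - mins) / (maxs - mins + 1e-9) * n_bins).astype(int)
    cell_ids = cells[:, 0] * n_bins + cells[:, 1]
    chosen = []
    for cid in np.unique(cell_ids):
        idx = np.flatnonzero(cell_ids == cid)
        take = min(per_bin, idx.size)
        chosen.extend(rng.choice(idx, size=take, replace=False))
    return np.array(sorted(chosen))

# A dense cluster (e.g. sea patches) plus a few scattered points: the
# cluster is capped at a few samples per cell, the outliers all survive.
emb = np.vstack([np.random.default_rng(1).normal(0, 0.1, (200, 2)),
                 np.array([[5.0, 5.0], [-5.0, 5.0], [5.0, -5.0]])])
print(len(balanced_sample(emb)))
```

Because every occupied cell contributes at most `per_bin` indices, rare terrain types are retained while the dominant cluster is downsampled.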

Results

Task             | Dataset    | Metric | Value  | Model
-----------------|------------|--------|--------|------
Image Generation | SEN12MS-CR | MAE    | 0.048  | McGAN
Image Generation | SEN12MS-CR | PSNR   | 25.14  | McGAN
Image Generation | SEN12MS-CR | SAM    | 15.676 | McGAN
Image Generation | SEN12MS-CR | SSIM   | 0.744  | McGAN
Image Inpainting | SEN12MS-CR | MAE    | 0.048  | McGAN
Image Inpainting | SEN12MS-CR | PSNR   | 25.14  | McGAN
Image Inpainting | SEN12MS-CR | SAM    | 15.676 | McGAN
Image Inpainting | SEN12MS-CR | SSIM   | 0.744  | McGAN

Related Papers

Image Restoration via Multi-domain Learning (2025-05-07)
DGMR: Diffusion Guided Masked Reconstruction Framework for Multimodal Cloud Removal (2025-05-01)
When Cloud Removal Meets Diffusion Model in Remote Sensing (2025-04-21)
SAR-to-RGB Translation with Latent Diffusion for Earth Observation (2025-04-15)
Cross-Frequency Implicit Neural Representation with Self-Evolving Parameters (2025-04-15)
MIMRS: A Survey on Masked Image Modeling in Remote Sensing (2025-04-04)
Multimodal Diffusion Bridge with Attention-Based SAR Fusion for Satellite Image Cloud Removal (2025-04-04)
Effective Cloud Removal for Remote Sensing Images by an Improved Mean-Reverting Denoising Model with Elucidated Design Space (2025-03-31)