FloodNet-to-FloodGAN: Generating Flood Scenes in Aerial Images

09 November 2023

Abstract

A global rise in the occurrence of natural disasters and human-caused conflicts has put a spotlight on the need for Earth Observation (EO) data in designing practical Humanitarian Assistance and Disaster Relief (HADR) interventions. Novel techniques that leverage remotely sensed data are leading to a paradigm shift in our understanding of such situations and improving the efficacy of our response. Aerial flood maps can provide localized insight into the extent of flood-related damage and the degree to which communities' access to shelter, clean water, and communication channels has been compromised. Unfortunately, such insights typically emerge only hours or days after a flooding event has occurred. Moreover, a dearth of available historical data restricts the development of practical machine-learning-based methods. This work examines the use of Generative Adversarial Networks (GANs) for simulating flooding in aerial images. We first introduce the Houston UAV dataset, an extension of the FloodNet dataset, which accommodates more well-defined semantic classes and significantly reduces the label noise in semantic masks. We then propose a GAN-based pipeline that renders flood conditions in non-flooded regions, producing synthetic flooding scenes for predictive mapping. Code and dataset are available at https://github.com/granularai/flood-synthesis.
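To make the idea of such a pipeline concrete, below is a minimal PyTorch sketch of a conditional image-to-image GAN that takes a non-flooded aerial tile together with its semantic mask and synthesizes a flooded scene. All class names, channel counts, and architectural choices here are illustrative assumptions, not the implementation in the linked repository.

```python
# Hypothetical sketch of a conditional GAN that translates a non-flooded
# aerial image (plus its semantic mask) into a flooded scene.
# Names and architecture details are illustrative, not the paper's code.
import torch
import torch.nn as nn

class FloodGenerator(nn.Module):
    """Encoder-decoder generator: (RGB image + semantic mask) -> flooded RGB."""
    def __init__(self, in_ch=4, out_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(width * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.InstanceNorm2d(width),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic scoring the local realism of (input, output) pairs."""
    def __init__(self, in_ch=7, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width * 2, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, mask, output):
        return self.net(torch.cat([image, mask, output], dim=1))

# Usage: synthesize flood scenes for a random batch of dry tiles.
g, d = FloodGenerator(), PatchDiscriminator()
dry = torch.randn(2, 3, 256, 256)   # non-flooded aerial tiles
sem = torch.randn(2, 1, 256, 256)   # single-channel semantic mask
flooded = g(dry, sem)               # synthetic flooded scenes
print(flooded.shape)                # torch.Size([2, 3, 256, 256])
print(d(dry, sem, flooded).shape)   # per-patch realism logits
```

In a pix2pix-style setup, the generator is trained against the patch discriminator's adversarial loss, optionally combined with a pixel-wise reconstruction term on regions the semantic mask marks as unaffected.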

Contributed by

Shubham Goswami, Sagar Verma, Kavya Gupta, Siddharth Gupta

Related Research

Post Wildfire Burnt-up Detection using Siamese UNet

In this article, we present an approach for detecting burnt areas due to wildfires in Sentinel-2 images by leveraging the power of Siamese neural networks. A Siamese network lets us efficiently encode the feature extraction process for pairs of images: two branches capture and combine information at different resolutions to make predictions, and the weights are shared between these branches. This design allows us to effectively analyze the changes between two remote sensing images, enabling precise identification of areas impacted by forest wildfires in the state of California as part of the ChaBuD challenge, thereby assisting local authorities in monitoring the impacted regions and facilitating the restoration process. We experimented with various model architectures trained on the ChaBuD dataset and carefully evaluated their performance. Through rigorous testing and analysis, we achieved promising results, ultimately obtaining a final private score (IoU) of 0.7495 on the hidden test dataset. The code is available at https://github.com/kavyagupta/chabud. We also deploy the final model as a point solution for anyone to use at https://firemap.io.
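The shared-weight idea reads naturally in code: one encoder processes both the pre- and post-fire tiles, and a decoder predicts the burnt-area mask from their feature difference. The sketch below is a minimal PyTorch illustration under those assumptions; layer sizes and names are hypothetical, not the ChaBuD submission itself.

```python
# Minimal sketch of a shared-weight Siamese segmentation network.
# Architecture details are illustrative, not the actual ChaBuD model.
import torch
import torch.nn as nn

class SiameseBurntAreaNet(nn.Module):
    def __init__(self, in_ch=12, width=32):
        super().__init__()
        # A single encoder module => weights are shared across both branches.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),  # per-pixel burnt / not-burnt logit
        )

    def forward(self, pre, post):
        f_pre, f_post = self.encoder(pre), self.encoder(post)  # shared weights
        return self.decoder(torch.abs(f_post - f_pre))         # change features

net = SiameseBurntAreaNet()
pre = torch.randn(1, 12, 128, 128)   # 12 Sentinel-2 bands, pre-fire tile
post = torch.randn(1, 12, 128, 128)  # same tile, post-fire
print(net(pre, post).shape)          # torch.Size([1, 1, 128, 128])
```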

09 November 2023

Detecting Urban Changes with Recurrent Neural Networks from Multitemporal Sentinel-2 Data

The advent of multitemporal high-resolution data, such as Copernicus Sentinel-2, has significantly enhanced the potential for monitoring the Earth's surface and environmental dynamics. In this paper, we present a novel deep learning framework for urban change detection that combines state-of-the-art fully convolutional networks (similar to U-Net) for feature representation with powerful recurrent networks (such as LSTMs) for temporal modeling. We report our results on the recently released bi-temporal Onera Satellite Change Detection (OSCD) Sentinel-2 dataset, enhancing the temporal information with additional images of the same regions on different dates. Moreover, we evaluate the performance of the recurrent networks, as well as the use of the additional dates, on the unseen test set using an ensemble cross-validation strategy. All models developed during the validation phase scored an overall accuracy of more than 95%, while the use of LSTMs and further temporal information boosts the F1 score of the change class by an additional 1.5%.
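One way to combine fully convolutional features with recurrent temporal modeling is to extract per-date feature maps with a shared encoder and run an LSTM over each pixel's feature sequence before classifying change. The following PyTorch sketch illustrates that pattern; all sizes, names, and design choices are assumptions for exposition, not the paper's exact architecture.

```python
# Hedged sketch: shared FCN features per date + per-pixel LSTM over time,
# followed by a change / no-change classification head.
import torch
import torch.nn as nn

class ConvLSTMChangeDetector(nn.Module):
    def __init__(self, in_ch=13, feat=32, hidden=64):
        super().__init__()
        self.fcn = nn.Sequential(  # shared across all acquisition dates
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        self.head = nn.Conv2d(hidden, 2, 1)  # change vs. no-change logits

    def forward(self, seq):  # seq: (B, T, C, H, W) multitemporal stack
        b, t, c, h, w = seq.shape
        feats = self.fcn(seq.view(b * t, c, h, w)).view(b, t, -1, h, w)
        # Treat every pixel as an independent temporal sequence for the LSTM.
        pix = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, -1)
        out, _ = self.lstm(pix)
        last = out[:, -1].reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return self.head(last)

model = ConvLSTMChangeDetector()
series = torch.randn(1, 4, 13, 64, 64)  # 4 Sentinel-2 dates, 13 bands each
print(model(series).shape)              # torch.Size([1, 2, 64, 64])
```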

09 November 2023