Unlocking the Potential: Multi-task Deep Learning for Spaceborne Quantitative Monitoring of Fugitive Methane Plumes
- URL: http://arxiv.org/abs/2401.12870v2
- Date: Mon, 15 Jul 2024 08:49:24 GMT
- Title: Unlocking the Potential: Multi-task Deep Learning for Spaceborne Quantitative Monitoring of Fugitive Methane Plumes
- Authors: Guoxin Si, Shiliang Fu, Wei Yao
- Abstract summary: Methane concentration inversion, plume segmentation, and emission rate estimation are three subtasks of methane emission monitoring.
We introduce a novel deep learning-based framework for quantitative methane emission monitoring from remote sensing images.
We train a U-Net network for methane concentration inversion, a Mask R-CNN network for methane plume segmentation, and a ResNet-50 network for methane emission rate estimation.
- Score: 0.7970333810038046
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As global warming intensifies, increased attention is being paid to monitoring fugitive methane emissions and detecting gas plumes from landfills. We have divided methane emission monitoring into three subtasks: methane concentration inversion, plume segmentation, and emission rate estimation. Traditional algorithms face certain limitations: methane concentration inversion typically employs the matched filter, which is sensitive to the global spectrum distribution and prone to significant noise. There is scant research on plume segmentation, with many studies depending on manual segmentation, which can be subjective. The estimation of methane emission rate frequently uses the IME algorithm, which necessitates meteorological measurement data. Utilizing the WENT landfill site in Hong Kong along with PRISMA hyperspectral satellite imagery, we introduce a novel deep learning-based framework for quantitative methane emission monitoring from remote sensing images that is grounded in physical simulation. We create simulated methane plumes using large eddy simulation (LES) and various concentration maps of fugitive emissions using the radiative transfer equation (RTE), while applying augmentation techniques to construct a simulated PRISMA dataset. We train a U-Net network for methane concentration inversion, a Mask R-CNN network for methane plume segmentation, and a ResNet-50 network for methane emission rate estimation. All three deep networks yield higher validation accuracy compared to traditional algorithms. Furthermore, we combine the first two subtasks and the last two subtasks to design multi-task learning models, MTL-01 and MTL-02, both of which outperform single-task models in terms of accuracy. Our research exemplifies the application of multi-task deep learning to quantitative methane monitoring and can be generalized to a wide array of methane monitoring tasks.
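For context, the matched filter that the abstract cites as the traditional concentration-inversion baseline can be sketched roughly as below. This is a minimal illustrative implementation, not the paper's code; the synthetic cube, band count, and gas signature are assumptions made up for the example:

```python
import numpy as np

def matched_filter(cube, target):
    """Per-pixel matched filter score for a gas absorption signature.

    cube:   (n_pixels, n_bands) array of radiance spectra
    target: (n_bands,) gas absorption signature
    Returns a (n_pixels,) array of enhancement scores.
    """
    mu = cube.mean(axis=0)                  # global background mean spectrum
    X = cube - mu                           # mean-centred spectra
    cov = np.cov(X, rowvar=False)           # background covariance estimate
    q = np.linalg.pinv(cov) @ target        # whitened target direction
    return (X @ q) / (target @ q)           # normalised per-pixel score

# Illustrative synthetic scene: inject the signature into the first 50 pixels.
rng = np.random.default_rng(0)
target = rng.normal(size=20)
cube = rng.normal(size=(500, 20))
cube[:50] += 0.5 * target
scores = matched_filter(cube, target)
```

Note that the background mean and covariance are estimated from the entire scene, which is exactly why the abstract describes the matched filter as sensitive to the global spectrum distribution and prone to noise.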
Related papers
- Gasformer: A Transformer-based Architecture for Segmenting Methane Emissions from Livestock in Optical Gas Imaging [0.0]
Methane emissions from livestock, particularly cattle, significantly contribute to climate change.
We introduce Gasformer, a novel semantic segmentation architecture for detecting low-flow rate methane emissions from livestock.
We present two unique datasets captured with a FLIR GF77 OGI camera.
arXiv Detail & Related papers (2024-04-16T18:38:23Z) - Modeling State Shifting via Local-Global Distillation for Event-Frame Gaze Tracking [61.44701715285463]
This paper tackles the problem of passive gaze estimation using both event and frame data.
We reformulate gaze estimation as the quantification of the state shifting from the current state to several prior registered anchor states.
To improve the generalization ability, instead of learning a large gaze estimation network directly, we align a group of local experts with a student network.
arXiv Detail & Related papers (2024-03-31T03:30:37Z) - Autonomous Detection of Methane Emissions in Multispectral Satellite Data Using Deep Learning [73.01013149014865]
Methane is one of the most potent greenhouse gases.
Current methane emission monitoring techniques rely on approximate emission factors or self-reporting.
Deep learning methods can be leveraged to automatize the detection of methane leaks in Sentinel-2 satellite multispectral data.
arXiv Detail & Related papers (2023-08-21T19:36:50Z) - MethaneMapper: Spectral Absorption aware Hyperspectral Transformer for Methane Detection [13.247385727508155]
Methane is the chief contributor to global climate change.
We propose a novel end-to-end spectral absorption wavelength aware transformer network, MethaneMapper, to detect and quantify the emissions.
MethaneMapper achieves 0.63 mAP in detection and reduces the model size (by 5x) compared to the current state of the art.
arXiv Detail & Related papers (2023-04-05T22:15:18Z) - An Adaptive GViT for Gas Mixture Identification and Concentration Estimation [9.331787778137945]
Gas identification accuracy can reach 97.61%, and the R2 of pure gas concentration estimation is above 99.5% on average.
The GViT model can directly take sensor arrays' variable-length real-time signal data as input.
arXiv Detail & Related papers (2023-03-10T03:37:05Z) - Detecting Methane Plumes using PRISMA: Deep Learning Model and Data Augmentation [67.32835203947133]
A new generation of hyperspectral imagers, such as PRISMA, has significantly improved our capability to detect methane (CH4) plumes from space at high spatial resolution (30 m).
We present here a complete framework to identify CH4 plumes using images from the PRISMA satellite mission and a deep learning model able to detect plumes over large areas.
arXiv Detail & Related papers (2022-11-17T17:36:05Z) - METER-ML: A Multi-sensor Earth Observation Benchmark for Automated Methane Source Mapping [2.814379852040968]
Deep learning can identify the locations and characteristics of methane sources.
There is a substantial lack of publicly available data to enable machine learning researchers and practitioners to build automated mapping approaches.
We construct a multi-sensor dataset called METER-ML containing 86,625 georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S.
We find that our best model achieves an area under the precision recall curve of 0.915 for identifying concentrated animal feeding operations and 0.821 for oil refineries and petroleum terminals on an expert-labeled test set.
arXiv Detail & Related papers (2022-07-22T16:12:07Z) - Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features than conventional methods trained with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - Spatial and spectral deep attention fusion for multi-channel speech separation using deep embedding features [60.20150317299749]
Multi-channel deep clustering (MDC) has achieved good performance in speech separation.
We propose a deep attention fusion method to dynamically control the weights of the spectral and spatial features and combine them deeply.
Experimental results show that the proposed method outperforms the MDC baseline and even surpasses the ideal binary mask (IBM).
arXiv Detail & Related papers (2020-02-05T03:49:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.