Gasformer: A Transformer-based Architecture for Segmenting Methane Emissions from Livestock in Optical Gas Imaging
- URL: http://arxiv.org/abs/2404.10841v1
- Date: Tue, 16 Apr 2024 18:38:23 GMT
- Title: Gasformer: A Transformer-based Architecture for Segmenting Methane Emissions from Livestock in Optical Gas Imaging
- Authors: Toqi Tahamid Sarker, Mohamed G Embaby, Khaled R Ahmed, Amer AbuGhazaleh
- Abstract summary: Methane emissions from livestock, particularly cattle, significantly contribute to climate change.
We introduce Gasformer, a novel semantic segmentation architecture for detecting low-flow rate methane emissions from livestock.
We present two unique datasets captured with a FLIR GF77 OGI camera.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methane emissions from livestock, particularly cattle, significantly contribute to climate change. Effective methane emission mitigation strategies are crucial as the global population and demand for livestock products increase. We introduce Gasformer, a novel semantic segmentation architecture for detecting low-flow rate methane emissions from livestock and from controlled release experiments using optical gas imaging. We present two unique datasets captured with a FLIR GF77 OGI camera. Gasformer leverages a Mix Vision Transformer encoder and a Light-Ham decoder to generate multi-scale features and refine segmentation maps. Gasformer outperforms other state-of-the-art models on both datasets, demonstrating its effectiveness in detecting and segmenting methane plumes in controlled and real-world scenarios. On the livestock dataset, Gasformer achieves a mean Intersection over Union (mIoU) of 88.56%, surpassing other state-of-the-art models. Materials are available at: github.com/toqitahamid/Gasformer.
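The abstract describes Gasformer's overall pattern: a hierarchical Mix Vision Transformer encoder emits multi-scale features, and a Light-Ham decoder fuses them into a refined segmentation map. The sketch below is only a minimal illustration of that encoder-decoder fusion pattern; the module names, channel widths, and the plain convolutional fusion head are illustrative assumptions standing in for the actual MiT encoder and Light-Ham (matrix-decomposition) decoder, which are not reproduced here.

```python
# Illustrative PyTorch sketch (not the authors' implementation): a hierarchical
# encoder produces features at 1/4, 1/8, 1/16, and 1/32 of the input resolution,
# and a lightweight head projects, upsamples, and fuses them into per-pixel
# gas/background logits. Channel widths and layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyHierarchicalEncoder(nn.Module):
    """Stand-in for a MiT-style backbone: four stages of decreasing resolution."""

    def __init__(self, in_ch=1, dims=(32, 64, 160, 256)):
        super().__init__()
        chs = (in_ch,) + dims
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chs[i], chs[i + 1], 3, stride=4 if i == 0 else 2, padding=1),
                nn.BatchNorm2d(chs[i + 1]),
                nn.GELU(),
            )
            for i in range(4)
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # four feature maps, fine (1/4) to coarse (1/32)


class LightweightFusionHead(nn.Module):
    """Stand-in for the Light-Ham decoder: a 1x1-conv fusion replaces the
    matrix-decomposition (Hamburger) module for brevity."""

    def __init__(self, dims=(32, 64, 160, 256), embed_dim=128, num_classes=2):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(d, embed_dim, 1) for d in dims])
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * embed_dim, embed_dim, 1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        self.classify = nn.Conv2d(embed_dim, num_classes, 1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # fuse everything at 1/4 resolution
        up = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
              for p, f in zip(self.proj, feats)]
        return self.classify(self.fuse(torch.cat(up, dim=1)))


if __name__ == "__main__":
    encoder, head = ToyHierarchicalEncoder(), LightweightFusionHead()
    frame = torch.randn(1, 1, 256, 256)              # one single-channel OGI frame
    logits = head(encoder(frame))                    # (1, 2, 64, 64) class logits
    mask = F.interpolate(logits, size=frame.shape[-2:],
                         mode="bilinear", align_corners=False).argmax(1)
    print(mask.shape)                                # torch.Size([1, 256, 256])
```

For the released model, the backbone is a pretrained Mix Vision Transformer and the fusion step is the Light-Ham head's matrix-decomposition attention; the repository linked in the abstract is the authoritative reference.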
Related papers
- Machine Learning for Methane Detection and Quantification from Space -- A survey [49.7996292123687]
Methane (CH4) is a potent anthropogenic greenhouse gas, contributing 86 times more to global warming than carbon dioxide (CO2) over 20 years.
This work expands existing information on operational methane point source detection sensors in the Short-Wave Infrared (SWIR) bands.
It reviews the state-of-the-art for traditional as well as Machine Learning (ML) approaches.
arXiv Detail & Related papers (2024-08-27T15:03:20Z)
- GeoViT: A Versatile Vision Transformer Architecture for Geospatial Image Analysis [2.1647301294759624]
We introduce GeoViT, a compact vision transformer model adept in processing satellite imagery for multimodal segmentation.
We attain superior accuracy in estimating power generation rates, fuel type, plume coverage for CO2, and high-resolution NO2 concentration mapping.
arXiv Detail & Related papers (2023-11-24T06:22:38Z)
- Autonomous Detection of Methane Emissions in Multispectral Satellite Data Using Deep Learning [73.01013149014865]
Methane is one of the most potent greenhouse gases.
Current methane emission monitoring techniques rely on approximate emission factors or self-reporting.
Deep learning methods can be leveraged to automate the detection of methane leaks in Sentinel-2 satellite multispectral data.
arXiv Detail & Related papers (2023-08-21T19:36:50Z)
- MethaneMapper: Spectral Absorption aware Hyperspectral Transformer for Methane Detection [13.247385727508155]
Methane is the chief contributor to global climate change.
We propose a novel end-to-end spectral absorption wavelength aware transformer network, MethaneMapper, to detect and quantify the emissions.
MethaneMapper achieves 0.63 mAP in detection and reduces the model size (by 5x) compared to the current state of the art.
arXiv Detail & Related papers (2023-04-05T22:15:18Z)
- Detecting Methane Plumes using PRISMA: Deep Learning Model and Data Augmentation [67.32835203947133]
A new generation of hyperspectral imagers, such as PRISMA, has significantly improved our capability to detect methane (CH4) plumes from space at high spatial resolution (30 m).
We present here a complete framework to identify CH4 plumes using images from the PRISMA satellite mission and a deep learning model able to detect plumes over large areas.
arXiv Detail & Related papers (2022-11-17T17:36:05Z)
- Towards Generating Large Synthetic Phytoplankton Datasets for Efficient Monitoring of Harmful Algal Blooms [77.25251419910205]
Harmful algal blooms (HABs) cause significant fish deaths in aquaculture farms.
Currently, the standard method to enumerate harmful algae and other phytoplankton is to manually observe and count them under a microscope.
We employ Generative Adversarial Networks (GANs) to generate synthetic images.
arXiv Detail & Related papers (2022-08-03T20:15:55Z)
- METER-ML: A Multi-sensor Earth Observation Benchmark for Automated Methane Source Mapping [2.814379852040968]
Deep learning can identify the locations and characteristics of methane sources.
There is a substantial lack of publicly available data to enable machine learning researchers and practitioners to build automated mapping approaches.
We construct a multi-sensor dataset called METER-ML containing 86,625 georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S.
We find that our best model achieves an area under the precision recall curve of 0.915 for identifying concentrated animal feeding operations and 0.821 for oil refineries and petroleum terminals on an expert-labeled test set.
arXiv Detail & Related papers (2022-07-22T16:12:07Z)
- Counting Cows: Tracking Illegal Cattle Ranching From High-Resolution Satellite Imagery [59.32805936205217]
Cattle farming is responsible for 8.8% of greenhouse gas emissions worldwide.
We obtained satellite imagery of the Amazon at 40 cm resolution, and compiled a dataset of 903 images containing a total of 28,498 cattle.
Our experiments show promising results and highlight important directions for the next steps on both counting algorithms and the data collection process for solving such challenges.
arXiv Detail & Related papers (2020-11-14T19:07:39Z)
- Dual In-painting Model for Unsupervised Gaze Correction and Animation in the Wild [82.42401132933462]
We present a solution that works without the need for precise annotations of the gaze angle and the head pose.
Our method consists of three novel modules: the Gaze Correction module (GCM), the Gaze Animation module (GAM), and the Pretrained Autoencoder module (PAM).
arXiv Detail & Related papers (2020-08-09T23:14:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.