Unified smoke and fire detection in an evolutionary framework with
self-supervised progressive data augment
- URL: http://arxiv.org/abs/2202.07954v1
- Date: Wed, 16 Feb 2022 09:48:03 GMT
- Title: Unified smoke and fire detection in an evolutionary framework with
self-supervised progressive data augment
- Authors: Hang Zhang, Su Yang, Hongyong Wang, Zhongyan Lu, Helin Sun
- Abstract summary: In this study, we collect a large image data set and re-label it as a multi-label image classification problem.
We propose a data augmentation method based on random image stitching that deploys resizing, deforming, position variation, and background altering.
Experiments show that the proposed method can effectively improve the generalization performance of the model for concurrent smoke and fire detection.
- Score: 5.8363672020565005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few studies have examined the simultaneous detection of smoke and flame
accompanying fires, owing to their different physical natures, which lead to
uncertain fluid patterns. In this study, we collect a large image data set and
re-label it as a multi-label image classification problem so as to identify
smoke and flame simultaneously. To improve the generalization ability of the
detection model in the face of movable fluid objects with uncertain shapes,
such as fire and smoke, their non-compact natures, and complex backgrounds
with high variations, we propose a data augmentation method based on random
image stitching that deploys resizing, deforming, position variation, and
background altering so as to enlarge the view of the learner. Moreover, we
propose a self-learning data augmentation method that uses the class
activation map to extract highly trustable regions as a new source of positive
examples, further enhancing the augmentation. Through the mutual reinforcement
between the data augmentation and the detection model, performed iteratively,
both modules make progress in an evolutionary manner. Experiments show that
the proposed method effectively improves the generalization performance of the
model for concurrent smoke and fire detection.
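As a rough illustration of the two augmentation ideas in the abstract, here is a minimal NumPy sketch. This is not the authors' released code: the 2x2 stitch layout, the nearest-neighbour resize, the CAM threshold value, and all function names are assumptions made for illustration only.

```python
import numpy as np

def random_stitch(images, labels, out_size=224, rng=None):
    """Stitch four randomly chosen images into a 2x2 mosaic.

    Random split points give each tile a different size and aspect
    ratio, which simultaneously varies object scale, deformation,
    position, and background context (the four effects named in the
    abstract). Multi-label targets are merged with an element-wise max,
    i.e. a logical OR over the smoke/flame labels.
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(images), size=4, replace=False)
    # Random split point: each tile gets a different width/height.
    cx = rng.integers(out_size // 4, 3 * out_size // 4)
    cy = rng.integers(out_size // 4, 3 * out_size // 4)
    cells = [(0, cy, 0, cx), (0, cy, cx, out_size),
             (cy, out_size, 0, cx), (cy, out_size, cx, out_size)]
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    label = np.zeros_like(labels[idx[0]])
    for i, (y0, y1, x0, x1) in zip(idx, cells):
        canvas[y0:y1, x0:x1] = resize_nn(images[i], y1 - y0, x1 - x0)
        label = np.maximum(label, labels[i])  # multi-label OR
    return canvas, label

def resize_nn(img, h, w):
    """Nearest-neighbour resize, to keep the sketch dependency-free."""
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[ys][:, xs]

def cam_positive_crop(cam, image, thresh=0.7):
    """Crop the highly activated region of a class activation map.

    The bounding box of pixels above `thresh * max(cam)` is taken as a
    highly trustable region and returned as a new positive example, in
    the spirit of the self-learning augmentation step.
    """
    ys, xs = np.where(cam >= thresh * cam.max())
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In the iterative scheme the abstract describes, crops produced by `cam_positive_crop` from a trained model would be fed back into the pool that `random_stitch` draws from, so the augmentation and the detector improve each other round by round.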
Related papers
- Detecting Generated Images by Fitting Natural Image Distributions [75.31113784234877]
We propose a novel framework that exploits geometric differences between the data manifolds of natural and generated images.
We employ a pair of functions engineered to yield consistent outputs for natural images but divergent outputs for generated ones.
An image is identified as generated if a transformation along its data manifold induces a significant change in the loss value of a self-supervised model pre-trained on natural images.
arXiv Detail & Related papers (2025-11-03T07:20:38Z) - Hiding Images in Diffusion Models by Editing Learned Score Functions [27.130542925771692]
Current methods exhibit limitations in achieving high extraction accuracy, model fidelity, and hiding efficiency.
We describe a simple yet effective approach that embeds images at specific timesteps in the reverse diffusion process by editing the learned score functions.
We also introduce a parameter-efficient fine-tuning method that combines gradient-based parameter selection with low-rank adaptation to enhance model fidelity and hiding efficiency.
arXiv Detail & Related papers (2025-03-24T09:04:25Z) - One-for-More: Continual Diffusion Model for Anomaly Detection [61.12622458367425]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images.
Our study found that the diffusion model suffers from severe "faithfulness hallucination" and "catastrophic forgetting".
We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
arXiv Detail & Related papers (2025-02-27T07:47:27Z) - Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z) - Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z) - Select-Mosaic: Data Augmentation Method for Dense Small Object Scenes [4.418515380386838]
The Mosaic data augmentation technique stitches multiple images together to increase the diversity and complexity of training data.
This paper proposes the Select-Mosaic data augmentation method, which improves on Mosaic with a fine-grained region selection strategy.
The improved Select-Mosaic method demonstrates superior performance in handling dense small object detection tasks.
arXiv Detail & Related papers (2024-06-08T09:22:08Z) - DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z) - Geometric Data Augmentations to Mitigate Distribution Shifts in Pollen
Classification from Microscopic Images [4.545340728210854]
We leverage the domain knowledge that geometric features are highly important for accurate pollen identification.
We introduce two novel geometric image augmentation techniques to significantly narrow the accuracy gap between the model performance on the train and test datasets.
arXiv Detail & Related papers (2023-11-18T10:35:18Z) - Boosting Human-Object Interaction Detection with Text-to-Image Diffusion
Model [22.31860516617302]
We introduce DiffHOI, a novel HOI detection scheme grounded on a pre-trained text-image diffusion model.
To fill in the gaps of HOI datasets, we propose SynHOI, a class-balanced, large-scale, and high-diversity synthetic dataset.
Experiments demonstrate that DiffHOI significantly outperforms the state-of-the-art in regular detection (i.e., 41.50 mAP) and zero-shot detection.
arXiv Detail & Related papers (2023-05-20T17:59:23Z) - Local Magnification for Data and Feature Augmentation [53.04028225837681]
We propose an easy-to-implement and model-free data augmentation method called Local Magnification (LOMA).
LOMA generates additional training data by randomly magnifying a local area of the image.
Experiments show that our proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve the performance on image classification and object detection.
arXiv Detail & Related papers (2022-11-15T02:51:59Z) - Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z) - Contextual Fusion For Adversarial Robustness [0.0]
Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN.
For gradient based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
arXiv Detail & Related papers (2020-11-18T20:13:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.