RADiff: Controllable Diffusion Models for Radio Astronomical Maps Generation
- URL: http://arxiv.org/abs/2307.02392v1
- Date: Wed, 5 Jul 2023 16:04:44 GMT
- Title: RADiff: Controllable Diffusion Models for Radio Astronomical Maps Generation
- Authors: Renato Sortino, Thomas Cecconello, Andrea DeMarco, Giuseppe Fiameni,
Andrea Pilzer, Andrew M. Hopkins, Daniel Magro, Simone Riggi, Eva Sciacca,
Adriano Ingallinera, Cristobal Bordiu, Filomena Bufano, Concetto Spampinato
- Abstract summary: RADiff is a generative approach based on conditional diffusion models trained over an annotated radio dataset.
We show that it is possible to generate fully-synthetic image-annotation pairs to automatically augment any annotated dataset.
- Score: 6.128112213696457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the Square Kilometre Array (SKA) nears completion, there is an
increasing demand for accurate and reliable automated solutions to extract
valuable information from the vast amount of data it will acquire.
Automated source finding is a particularly important task in this context, as
it enables the detection and classification of astronomical objects.
Deep-learning-based object detection and semantic segmentation models have
proven to be suitable for this purpose. However, training such deep networks
requires a high volume of labeled data, which is not trivial to obtain in the
context of radio astronomy. Since data needs to be manually labeled by experts,
this process is not scalable to large dataset sizes, limiting the possibilities
of leveraging deep networks to address several tasks. In this work, we propose
RADiff, a generative approach based on conditional diffusion models trained
over an annotated radio dataset to generate synthetic images, containing radio
sources of different morphologies, to augment existing datasets and reduce the
problems caused by class imbalances. We also show that it is possible to
generate fully-synthetic image-annotation pairs to automatically augment any
annotated dataset. We evaluate the effectiveness of this approach by training a
semantic segmentation model on a real dataset augmented in two ways: 1) using
synthetic images obtained from real masks, and 2) generating images from
synthetic semantic masks. We show an improvement in performance when applying
augmentation, gaining up to 18% in performance when using real masks and 4%
when augmenting with synthetic masks. Finally, we employ this model to generate
large-scale radio maps with the objective of simulating Data Challenges.
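The augmentation strategies described in the abstract (generating images from real masks, and generating fully synthetic image-mask pairs) can be illustrated with a minimal sketch. Everything below is hypothetical: `synthetic_mask` is a toy stand-in for RADiff's synthetic semantic masks, and `generate_image` is a placeholder for the actual mask-conditioned diffusion sampler, which the paper describes but whose API is not shown here.

```python
import numpy as np

def synthetic_mask(npix=64, n_sources=3, rng=None):
    """Toy stand-in for a synthetic semantic mask: place a few
    elliptical 'radio sources' on an otherwise empty map."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = np.zeros((npix, npix), dtype=np.uint8)
    yy, xx = np.mgrid[0:npix, 0:npix]
    for _ in range(n_sources):
        cy, cx = rng.integers(8, npix - 8, size=2)
        ry, rx = rng.integers(2, 6, size=2)
        mask[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0] = 1
    return mask

def generate_image(mask, rng=None):
    """Placeholder for the mask-conditioned diffusion model:
    here, sources are simply bright regions plus background noise."""
    if rng is None:
        rng = np.random.default_rng(1)
    image = 0.05 * rng.standard_normal(mask.shape)
    image += mask.astype(float)  # bright wherever the mask marks a source
    return image

def augment(real_pairs, n_extra):
    """Augmentation strategy 2 from the abstract: append fully
    synthetic image-annotation pairs to a real annotated dataset."""
    extra = []
    for i in range(n_extra):
        m = synthetic_mask(rng=np.random.default_rng(i))
        extra.append((generate_image(m), m))
    return real_pairs + extra
```

A segmentation model would then be trained on the output of `augment` exactly as on the original dataset, since every synthetic image arrives with a pixel-aligned mask by construction.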
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - Modified CycleGAN for the synthesization of samples for wheat head segmentation [0.09999629695552192]
In the absence of an annotated dataset, synthetic data can be used for model development.
We develop a realistic annotated synthetic dataset for wheat head segmentation.
The resulting model achieved a Dice score of 83.4% on an internal dataset and 83.6% on two external Global Wheat Head Detection datasets.
arXiv Detail & Related papers (2024-02-23T06:42:58Z) - MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation [104.03166324080917]
We present MosaicFusion, a simple yet effective diffusion-based data augmentation approach for large vocabulary instance segmentation.
Our method is training-free and does not rely on any label supervision.
Experimental results on the challenging LVIS long-tailed and open-vocabulary benchmarks demonstrate that MosaicFusion can significantly improve the performance of existing instance segmentation models.
arXiv Detail & Related papers (2023-09-22T17:59:42Z) - DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z) - DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z) - PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks [83.08625720856445]
Deep learning tasks require annotations that are too time-consuming for human operators.
In this paper, we introduce PromptMix, a method for artificially boosting the size of existing datasets.
We show that PromptMix can significantly increase the performance of lightweight networks by up to 26%.
arXiv Detail & Related papers (2023-01-30T14:15:47Z) - Image Classification on Small Datasets via Masked Feature Mixing [22.105356244579745]
A proposed architecture called ChimeraMix learns a data augmentation by generating compositions of instances.
The generative model encodes images in pairs, combines the features guided by a mask, and creates new samples.
For evaluation, all methods are trained from scratch without any additional data.
arXiv Detail & Related papers (2022-02-23T16:51:22Z) - Generating Data Augmentation samples for Semantic Segmentation of Salt Bodies in a Synthetic Seismic Image Dataset [0.0]
This work proposes a Data Augmentation method based on training two generative models to augment the number of samples in a seismic image dataset for the semantic segmentation of salt bodies.
Our method uses deep learning models to generate pairs of seismic image patches and their respective salt masks for the Data Augmentation.
arXiv Detail & Related papers (2021-06-15T16:32:32Z) - DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z) - Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.