Bubble identification from images with machine learning methods
- URL: http://arxiv.org/abs/2202.03107v1
- Date: Mon, 7 Feb 2022 12:38:17 GMT
- Title: Bubble identification from images with machine learning methods
- Authors: Hendrik Hessenkemper, Sebastian Starke, Yazan Atassi, Thomas
Ziegenhein, Dirk Lucas
- Abstract summary: An automated and reliable processing of bubbly flow images is needed.
Recent approaches focus on the use of deep learning algorithms for this task.
In the present work, we tackle these points by testing three different methods based on Convolutional Neural Networks (CNNs).
- Score: 3.4123736336071864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated and reliable processing of bubbly flow images is needed to analyse the large data sets of comprehensive experimental series. A particular difficulty arises from overlapping bubble projections in the recorded images, which greatly complicates the identification of individual bubbles. Recent approaches focus on the use of deep learning algorithms for this task and have already proven the high potential of such techniques. The main difficulties are handling different image conditions and higher gas volume fractions, as well as properly reconstructing the hidden segment of a partly occluded bubble. In the present work, we tackle these points by testing three different methods based on Convolutional Neural Networks (CNNs) for the former two, and two separate approaches that can subsequently be applied to address the latter. To validate our methodology, we created test data sets of synthetic images that demonstrate the capabilities as well as the limitations of our combined approach. The generated data, code and trained models are made accessible to facilitate their use and further developments in the research field of bubble recognition in experimental images.
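The abstract does not spell out how the synthetic validation images are constructed, so the following is a minimal, hypothetical sketch (not the authors' released code) of how overlapping elliptical bubble projections with per-bubble ground-truth masks could be rendered to test a CNN-based bubble identification model. The function name, parameters and rendering choices are illustrative assumptions, with NumPy as the only dependency.

```python
import numpy as np

def synthetic_bubble_image(height=256, width=256, n_bubbles=20, seed=0):
    """Render a grayscale image of possibly overlapping elliptical bubble
    projections plus one binary ground-truth mask per bubble.

    Hypothetical sketch; the actual data generator of the paper may differ.
    """
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:height, 0:width]
    image = np.full((height, width), 0.9)                 # bright background
    masks = []
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, height), rng.uniform(0, width)  # centre
        a, b = rng.uniform(5, 20, size=2)                        # semi-axes
        theta = rng.uniform(0, np.pi)                            # orientation
        # rotated-ellipse equation <= 1 defines the bubble projection
        xr = (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta)
        yr = -(xx - cx) * np.sin(theta) + (yy - cy) * np.cos(theta)
        mask = (xr / a) ** 2 + (yr / b) ** 2 <= 1.0
        rim = mask & ((xr / (0.8 * a)) ** 2 + (yr / (0.8 * b)) ** 2 > 1.0)
        image[mask] = 0.6   # darker interior, as in backlit shadowgraphy
        image[rim] = 0.1    # dark rim typical of recorded bubble contours
        masks.append(mask)
    image += rng.normal(0.0, 0.02, image.shape)           # camera noise
    return np.clip(image, 0.0, 1.0), np.stack(masks)

image, gt_masks = synthetic_bubble_image()
print(image.shape, gt_masks.shape)   # (256, 256) (20, 256, 256)
```

Because every pixel's bubble membership is known exactly even under heavy overlap, such synthetic frames make it possible to quantify both the detection performance and the reconstruction error for the hidden segments of partly occluded bubbles.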
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images [6.75641797020186]
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics.
We propose a novel approach based on three blocks called Base Models.
The generalization features extracted from each block are then processed to discriminate the origin of the input image.
arXiv Detail & Related papers (2024-04-24T07:25:36Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been inadequate exploration dedicated to unsupervised learning on diffusion-generated images.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Two Approaches to Supervised Image Segmentation [55.616364225463066]
The present work develops comparison experiments between deep learning and multiset neurons approaches.
The deep learning approach confirmed its potential for performing image segmentation.
The alternative multiset methodology allowed for enhanced accuracy while requiring little computational resources.
arXiv Detail & Related papers (2023-07-19T16:42:52Z)
- Learning to search for and detect objects in foveal images using deep learning [3.655021726150368]
This study employs a fixation prediction model that emulates human objective-guided attention of searching for a given class in an image.
The foveated pictures at each fixation point are then classified to determine whether the target is present or absent in the scene.
We present a novel dual task model capable of performing fixation prediction and detection simultaneously, allowing knowledge transfer between the two tasks.
arXiv Detail & Related papers (2023-04-12T09:50:25Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on deepfake detection generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Toward an ImageNet Library of Functions for Global Optimization Benchmarking [0.0]
This study proposes to transform the identification problem into an image recognition problem, with a potential to detect conception-free, machine-driven landscape features.
We address it as a supervised multi-class image recognition problem and apply basic artificial neural network models to solve it.
This evident learning success is another step toward automated feature extraction and local structure deduction of BBO problems.
arXiv Detail & Related papers (2022-06-27T21:05:00Z)
- Deep Quantized Representation for Enhanced Reconstruction [33.337794852677035]
We propose a data-driven Deep Quantized Latent Representation (DQLR) methodology for high-quality image reconstruction in the Shoot Apical Meristem (SAM) of Arabidopsis thaliana.
Our proposed framework utilizes multiple consecutive slices in the z-stack to learn a low dimensional latent space, quantize it and subsequently perform reconstruction using the quantized representation to obtain sharper images.
arXiv Detail & Related papers (2021-07-29T23:22:27Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is really feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.