DLSIA: Deep Learning for Scientific Image Analysis
- URL: http://arxiv.org/abs/2308.02559v2
- Date: Sat, 26 Aug 2023 18:03:39 GMT
- Title: DLSIA: Deep Learning for Scientific Image Analysis
- Authors: Eric J. Roberts, Tanny Chavez, Alexander Hexemer, Petrus H. Zwart
- Abstract summary: DLSIA is a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures.
DLSIA features easy-to-use architectures such as autoencoders, tunable U-Nets, and parameter-lean mixed-scale dense networks (MSDNets).
- Score: 45.81637398863868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DLSIA (Deep Learning for Scientific Image Analysis), a
Python-based machine learning library that empowers scientists and researchers
across diverse scientific domains with a range of customizable convolutional
neural network (CNN) architectures for a wide variety of image analysis
tasks, whether used for downstream data processing or for
experiment-in-the-loop computing scenarios. DLSIA features easy-to-use
architectures such as autoencoders, tunable U-Nets, and parameter-lean
mixed-scale dense networks (MSDNets). Additionally, we introduce sparse
mixed-scale networks (SMSNets), generated using random graphs and sparse
connections. As experimental data continues to grow in scale and complexity,
DLSIA provides accessible CNN construction and abstracts CNN complexities,
allowing scientists to tailor their machine learning approaches, accelerate
discoveries, foster interdisciplinary collaboration, and advance research in
scientific image analysis.
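To make the MSDNet idea concrete, below is a minimal from-scratch PyTorch sketch of a mixed-scale dense network, in which every layer applies a dilated convolution to the concatenation of all previous feature maps. This only illustrates the architecture family; it does not use DLSIA's actual API, and the depth, dilation schedule, and all names are illustrative assumptions.

```python
# Minimal sketch of a mixed-scale dense network (MSDNet), the
# parameter-lean architecture family DLSIA provides. From-scratch
# PyTorch illustration, NOT DLSIA's API; depth and dilations are
# illustrative assumptions.
import torch
import torch.nn as nn

class MixedScaleDense(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, depth=10, max_dilation=8):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(depth):
            dilation = (i % max_dilation) + 1  # cycle through dilations
            # each layer sees ALL previous feature maps (dense connectivity)
            self.layers.append(
                nn.Conv2d(channels, 1, kernel_size=3,
                          padding=dilation, dilation=dilation)
            )
            channels += 1  # one new single-channel feature map per layer
        # final 1x1 conv maps the full feature stack to the output
        self.final = nn.Conv2d(channels, out_channels, kernel_size=1)

    def forward(self, x):
        features = x
        for layer in self.layers:
            new = torch.relu(layer(features))
            features = torch.cat([features, new], dim=1)
        return self.final(features)

net = MixedScaleDense()
y = net(torch.randn(1, 1, 64, 64))  # -> torch.Size([1, 1, 64, 64])
```

The dense connectivity and single-channel layers are what keep the parameter count low relative to conventional encoder-decoder CNNs.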
Related papers
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies that encourage neural networks to learn sparser features.
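As a hedged illustration of one common such strategy (the paper's exact method is not given in this summary), an L1 penalty on intermediate activations can be added to the task loss to push features toward sparsity:

```python
# Hypothetical sketch of one common way to encourage sparse features:
# add an L1 penalty on intermediate activations to the task loss.
# The paper's actual strategy is not specified in the summary above.
import torch

def sparse_loss(task_loss, activations, weight=1e-4):
    # the L1 norm drives many activation entries toward exactly zero
    l1 = sum(a.abs().mean() for a in activations)
    return task_loss + weight * l1
```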
arXiv Detail & Related papers (2024-08-10T12:43:55Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
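As a toy, hypothetical illustration of the underlying reduction (not the paper's NeuRLP solver), one can discretize a linear ODE and recover its solution by minimizing the L1 norm of the finite-difference residual with an off-the-shelf LP solver:

```python
# Toy illustration of solving a linear ODE as a linear program:
# discretize y' = -y, y(0) = 1 on a grid and minimize the L1 norm of
# the finite-difference residual. A hypothetical sketch of the general
# idea only, not the paper's NeuRLP solver.
import numpy as np
from scipy.optimize import linprog

N = 50
h = 1.0 / N                   # step size on [0, 1]
ny, nt = N + 1, N             # grid values y_i and residual slacks t_i

c = np.concatenate([np.zeros(ny), np.ones(nt)])   # minimize sum of slacks

A_ub = np.zeros((2 * nt, ny + nt))
for i in range(nt):
    # residual r_i = (y[i+1] - y[i]) / h + y[i] for y' = -y
    A_ub[2 * i, i] = 1.0 - 1.0 / h       #  r_i <= t_i
    A_ub[2 * i, i + 1] = 1.0 / h
    A_ub[2 * i, ny + i] = -1.0
    A_ub[2 * i + 1, i] = 1.0 / h - 1.0   # -r_i <= t_i
    A_ub[2 * i + 1, i + 1] = -1.0 / h
    A_ub[2 * i + 1, ny + i] = -1.0
b_ub = np.zeros(2 * nt)

A_eq = np.zeros((1, ny + nt))
A_eq[0, 0] = 1.0                          # initial condition y(0) = 1
b_eq = np.array([1.0])

bounds = [(None, None)] * ny + [(0, None)] * nt
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
y = res.x[:ny]                            # approximates exp(-t) on the grid
```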
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Scalable algorithms for physics-informed neural and graph networks [0.6882042556551611]
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems.
In PIML, such networks can be trained with additional information obtained by enforcing the governing physical laws at random points in the space-time domain.
We review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation.
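A minimal sketch of the PINN recipe under toy assumptions (the ODE u' = -u with u(0) = 1, and an arbitrary small network): the physics loss is the squared residual of the governing equation, evaluated at random collocation points via automatic differentiation.

```python
# Minimal PINN sketch: fit u(t) satisfying du/dt = -u, u(0) = 1 by
# penalizing the ODE residual at random collocation points. Network
# size, learning rate, and loss weighting are illustrative choices.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)      # random collocation points
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    physics = ((du + u) ** 2).mean()                # residual of du/dt = -u
    initial = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics + initial
    opt.zero_grad()
    loss.backward()
    opt.step()
# net(t) now approximates exp(-t) on [0, 1]
```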
arXiv Detail & Related papers (2022-05-16T15:46:11Z) - Classification of diffraction patterns using a convolutional neural
network in single particle imaging experiments performed at X-ray
free-electron lasers [53.65540150901678]
Single particle imaging (SPI) at X-ray free electron lasers (XFELs) is particularly well suited to determine the 3D structure of particles in their native environment.
For a successful reconstruction, diffraction patterns originating from a single hit must be isolated from a large number of acquired patterns.
We propose to formulate this task as an image classification problem and solve it using convolutional neural network (CNN) architectures.
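A hedged sketch of this classification setup: a small binary CNN over detector images, labeling each diffraction pattern as single-hit or not. The paper's actual architectures, input sizes, and labels are not specified in this summary; everything below is an illustrative assumption.

```python
# Hypothetical sketch of single-hit classification of diffraction
# patterns with a small CNN. Shapes and layer sizes are assumptions,
# not the paper's architectures.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                     # logits: [non-single-hit, single-hit]
)
patterns = torch.randn(8, 1, 128, 128)   # batch of detector images
logits = classifier(patterns)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
```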
arXiv Detail & Related papers (2021-12-16T17:03:14Z) - Dive into Layers: Neural Network Capacity Bounding using Algebraic
Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z) - NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural
Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - The use of Convolutional Neural Networks for signal-background
classification in Particle Physics experiments [0.4301924025274017]
We present an extensive convolutional neural architecture search, achieving high accuracy for signal/background discrimination for a HEP classification use-case.
We demonstrate, among other things, that CNNs with fewer parameters can achieve the same accuracy as complex ResNet architectures.
arXiv Detail & Related papers (2020-02-13T19:54:46Z) - Inferring Convolutional Neural Networks' accuracies from their
architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
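A hypothetical sketch of that workflow: featurize each architecture (e.g., depth, width, parameter count), then fit a classifier on previously trained networks so new candidates can be screened before training. The features, placeholder data, and model choice below are all assumptions for illustration.

```python
# Hypothetical sketch: predict whether a CNN will exceed a threshold
# accuracy from architectural features alone, before training it.
# Features, placeholder data, and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# rows: [depth, avg_width, n_params, n_conv_layers]; one row per architecture
X = np.array([[8, 64, 1.2e5, 6],
              [20, 128, 9.8e5, 16],
              [4, 32, 3.1e4, 3],
              [14, 96, 4.4e5, 10]])
y = np.array([0, 1, 0, 1])   # 1 = reached threshold accuracy after training

model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([[12, 80, 3.0e5, 9]]))  # screen a candidate pre-training
```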
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.