Confluence of Artificial Intelligence and High Performance Computing for
Accelerated, Scalable and Reproducible Gravitational Wave Detection
- URL: http://arxiv.org/abs/2012.08545v1
- Date: Tue, 15 Dec 2020 19:00:29 GMT
- Title: Confluence of Artificial Intelligence and High Performance Computing for
Accelerated, Scalable and Reproducible Gravitational Wave Detection
- Authors: E. A. Huerta, Asad Khan, Xiaobo Huang, Minyang Tian, Maksim Levental,
Ryan Chard, Wei Wei, Maeve Heflin, Daniel S. Katz, Volodymyr Kindratenko,
Dawei Mu, Ben Blaiszik and Ian Foster
- Abstract summary: We demonstrate how connecting DOE and NSF-sponsored cyberinfrastructure allows for new ways to publish machine learning models.
We then use this workflow to search for binary black hole gravitational wave signals in open source advanced LIGO data.
We find that, using this workflow, an ensemble of four openly available deep learning models can be run on HAL to process advanced LIGO data from the entire month of August 2017 in just seven minutes.
- Score: 4.081122815035999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding new ways to use artificial intelligence (AI) to accelerate
the analysis of gravitational wave data, and ensuring that the developed models
are easily reusable, promise to unlock new opportunities in multi-messenger
astrophysics (MMA) and to enable wider use, rigorous validation, and sharing
of developed models by the community. In this work, we demonstrate how
connecting recently deployed DOE and NSF-sponsored cyberinfrastructure allows
for new ways to publish models, and to subsequently deploy these models into
applications using computing platforms ranging from laptops to high performance
computing clusters. We develop a workflow that connects the Data and Learning
Hub for Science (DLHub), a repository for publishing machine learning models,
with the Hardware Accelerated Learning (HAL) deep learning computing cluster,
using funcX as a universal distributed computing service. We then use this
workflow to search for binary black hole gravitational wave signals in open
source advanced LIGO data. We find that, using this workflow, an ensemble of
four openly available deep learning models can be run on HAL to process
advanced LIGO data from the entire month of August 2017 in just seven minutes,
identifying all four binary black hole mergers previously identified in this
dataset, and reporting no misclassifications. This approach, which combines
advances in AI, distributed computing, and scientific data infrastructure, opens
new pathways to conduct reproducible, accelerated, data-driven gravitational
wave detection.
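The workflow described in the abstract boils down to publishing a trained model in DLHub and then invoking it remotely, through funcX, on a computing endpoint such as HAL. The sketch below is a minimal illustration of that pattern only, not the authors' released code: the endpoint UUID, model name, and data-segment paths are hypothetical placeholders, and the client calls are assumed to follow the publicly documented SDK interfaces (FuncXClient.register_function/run/get_result and DLHubClient.run).
```python
# Minimal sketch of the DLHub + funcX pattern described in the abstract.
# NOTE: the endpoint UUID, model name, and data-segment paths are hypothetical
# placeholders; this is not the authors' released workflow code.
import time
from funcx.sdk.client import FuncXClient

HAL_ENDPOINT_ID = "<funcx-endpoint-uuid-for-HAL>"  # placeholder


def classify_segment(model_name, data_segment):
    """Run a published deep learning model on one strain-data segment.

    This function executes remotely on the funcX endpoint (e.g., a HAL GPU node).
    """
    # Imported on the worker; assumed DLHub SDK client interface.
    from dlhub_sdk.client import DLHubClient
    dl = DLHubClient()
    # DLHub runs the published model on the supplied inputs.
    return dl.run(model_name, data_segment)


fxc = FuncXClient()
func_id = fxc.register_function(classify_segment)

# Fan the month of data out as independent tasks (one per segment).
segments = ["O2_Aug2017_segment_000.hdf5", "O2_Aug2017_segment_001.hdf5"]  # placeholders
tasks = [
    fxc.run("username/bbh-detector-1", seg,
            endpoint_id=HAL_ENDPOINT_ID, function_id=func_id)
    for seg in segments
]

# Poll for results; get_result raises while a task is still pending.
results = {}
while len(results) < len(tasks):
    for tid in tasks:
        if tid in results:
            continue
        try:
            results[tid] = fxc.get_result(tid)
        except Exception:
            pass  # task still running
    time.sleep(5)

print(results)
```
Because each data segment is dispatched as an independent task, the model ensemble can be fanned out across the GPUs of the HAL cluster, which is what allows a full month of advanced LIGO data to be processed in minutes.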
Related papers
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- FAIR AI Models in High Energy Physics [16.744801048170732]
We propose a practical definition of FAIR principles for AI models in experimental high energy physics.
We describe a template for the application of these principles.
We report on the robustness of this FAIR AI model, its portability across hardware architectures and software frameworks, and its interpretability.
arXiv Detail & Related papers (2022-12-09T19:00:18Z)
- Towards a Dynamic Composability Approach for using Heterogeneous Systems in Remote Sensing [0.0]
We present a novel approach for using composable systems at the intersection of scientific computing, artificial intelligence (AI), and remote sensing.
We describe the architecture of a first working example of a composable infrastructure that federates Expanse, an NSF-funded supercomputer, with Nautilus, a geo-distributed cluster.
arXiv Detail & Related papers (2022-11-13T14:48:00Z)
- FAIR principles for AI models, with a practical application for accelerated high energy diffraction microscopy [1.9270896986812693]
We showcase how to create and share FAIR data and AI models within a unified computational framework.
We describe how this domain-agnostic computational framework may be harnessed to enable autonomous AI-driven discovery.
arXiv Detail & Related papers (2022-07-01T18:11:12Z)
- The MIT Supercloud Workload Classification Challenge [10.458111248130944]
In this paper, we present a workload classification challenge based on the MIT Supercloud dataset.
The goal of this challenge is to foster algorithmic innovations in the analysis of compute workloads.
arXiv Detail & Related papers (2022-04-12T14:28:04Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T17:35:32Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in URLLC, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)