Intelligent experiments through real-time AI: Fast Data Processing and Autonomous Detector Control for sPHENIX and future EIC detectors
- URL: http://arxiv.org/abs/2501.04845v1
- Date: Wed, 08 Jan 2025 21:13:50 GMT
- Title: Intelligent experiments through real-time AI: Fast Data Processing and Autonomous Detector Control for sPHENIX and future EIC detectors
- Authors: J. Kvapil, G. Borca-Tasciuc, H. Bossi, K. Chen, Y. Chen, Y. Corrales Morales, H. Da Costa, C. Da Silva, C. Dean, J. Durham, S. Fu, C. Hao, P. Harris, O. Hen, H. Jheng, Y. Lee, P. Li, X. Li, Y. Lin, M. X. Liu, V. Loncar, J. P. Mitrevski, A. Olvera, M. L. Purschke, J. S. Renck, G. Roland, J. Schambach, Z. Shi, N. Tran, N. Wuerfel, B. Xu, D. Yu, H. Zhang
- Abstract summary: We develop a demonstrator for real-time processing of high-rate data streams from the sPHENIX experiment's tracking detectors.
The approach efficiently identifies rare low-momentum heavy-flavor events in high-rate p+p collisions.
For the EIC, we develop a DIS-electron tagger using Artificial Intelligence-Machine Learning (AI-ML) algorithms for real-time identification.
- Score: 1.981222453919974
- Abstract: This R&D project, initiated by the DOE Nuclear Physics AI-Machine Learning initiative in 2022, leverages AI to address data-processing challenges in high-energy nuclear experiments (RHIC, LHC, and the future EIC). Our focus is on developing a demonstrator for real-time processing of high-rate data streams from the sPHENIX experiment's tracking detectors. The limitation of the 15 kHz maximum trigger rate imposed by the calorimeters can be negated by intelligent use of streaming technology in the tracking system. The approach efficiently identifies rare low-momentum heavy-flavor events in high-rate p+p collisions (3 MHz), using a Graph Neural Network (GNN) and High Level Synthesis for Machine Learning (hls4ml). Success at sPHENIX promises immediate benefits, minimizing resources and accelerating heavy-flavor measurements. The approach is transferable to other fields. For the EIC, we develop a DIS-electron tagger using Artificial Intelligence-Machine Learning (AI-ML) algorithms for real-time identification, showcasing the transformative potential of AI and FPGA technologies in the real-time data-processing pipelines of high-energy nuclear and particle experiments.
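As a rough illustration of the hls4ml workflow the abstract refers to, the sketch below converts a small Keras network into an FPGA firmware project. The dense "tagger" is a stand-in for the actual GNN, and the feature count, architecture, and FPGA part number are placeholders, not the collaboration's design.

```python
# Sketch: convert a small Keras "tagger" to an HLS project with hls4ml.
# The dense network is a stand-in for the project's GNN; the FPGA part
# number and all layer sizes are placeholders.
import numpy as np
from tensorflow import keras
import hls4ml

model = keras.Sequential([
    keras.Input(shape=(8,)),                      # 8 hypothetical track features
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # signal vs. background score
])

# Derive fixed-point precision and reuse-factor settings from the model.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="tagger_hls",
    part="xcvu13p-flga2577-2-e",  # placeholder Xilinx part
)

# Bit-accurate C emulation of the generated firmware (needs a C++ compiler);
# hls_model.build() would run the full HLS synthesis instead.
hls_model.compile()
print(hls_model.predict(np.random.rand(4, 8).astype(np.float32)))
```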
Related papers
- Analysis of Hardware Synthesis Strategies for Machine Learning in Collider Trigger and Data Acquisition [0.0]
Machine learning can be implemented in detector electronics for intelligent data processing and acquisition.
Implementing ML in real time at colliders requires very low latencies that are unachievable with a software-based approach.
An analysis of neural network inference efficiency is presented, focusing on the application of collider trigger algorithms in field-programmable gate arrays.
arXiv Detail & Related papers (2024-11-18T15:59:30Z)
- FastSpiker: Enabling Fast Training for Spiking Neural Networks on Event-based Data through Learning Rate Enhancements for Autonomous Embedded Systems [5.59354286094951]
FastSpiker is a novel methodology that enables fast SNN training on event-based data through learning rate enhancements.
We show that FastSpiker offers up to 10.5x faster training time and up to 88.39% lower carbon emission to achieve higher or comparable accuracy to the state-of-the-art on the event-based automotive dataset.
arXiv Detail & Related papers (2024-07-07T05:17:17Z)
- HyperSense: Hyperdimensional Intelligent Sensing for Energy-Efficient Sparse Data Processing [5.570372733437123]
HyperSense efficiently controls Analog-to-Digital Converter (ADC) modules' data generation rate based on object presence predictions in sensor data.
Our FPGA-based domain-specific accelerator tailored for HyperSense achieves a 5.6x speedup compared to YOLOv4 on NVIDIA Jetson Orin.
arXiv Detail & Related papers (2024-01-04T01:12:33Z)
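A toy rendering of the control idea summarized in the HyperSense entry above: a cheap presence predictor gates the sampling rate. The predictor, thresholds, and rates are invented for illustration and are not taken from the paper.

```python
# Toy prediction-gated sampling: score each window for object presence and
# raise or lower the ADC sampling rate accordingly. All numbers are invented.
import random

HIGH_RATE_HZ, LOW_RATE_HZ, THRESHOLD = 10_000, 100, 0.5

def presence_score(window):
    """Placeholder for the paper's hyperdimensional presence predictor."""
    return sum(abs(x) for x in window) / len(window)

def next_sampling_rate(window):
    return HIGH_RATE_HZ if presence_score(window) > THRESHOLD else LOW_RATE_HZ

window = [random.gauss(0, 1) for _ in range(64)]
print(next_sampling_rate(window))
```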
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design: a random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design achieves substantial energy-efficiency improvements and training-cost reductions compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in accuracy by 14%; however, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT [82.33080550378068]
The continuously expanding scale of the industrial Internet of Things (IIoT) leads to IIoT equipment generating massive amounts of user data at every moment.
How to manage these time-series data efficiently and safely is still an open issue in the IIoT field.
This paper studies applications of federated learning (FL) technology to manage IIoT equipment data in wireless network environments.
arXiv Detail & Related papers (2022-02-03T07:12:36Z)
- An Automated Data Engineering Pipeline for Anomaly Detection of IoT Sensor Data [0.0]
System-on-Chip technology, the Internet of Things (IoT), cloud computing, and artificial intelligence have brought new possibilities for improving and solving current problems.
Data analytics and the use of machine learning/deep learning make it possible to learn underlying patterns and make decisions based on what is learned from massive data generated by IoT sensors.
The process involves IoT sensors, Raspberry Pis, Amazon Web Services (AWS), and multiple machine learning techniques, with the intent of identifying anomalous cases for the smart home security system.
arXiv Detail & Related papers (2021-09-28T15:57:29Z)
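To make the pipeline idea in the entry above concrete, here is a minimal anomaly detector over simulated sensor readings using scikit-learn's IsolationForest; the paper's actual sensor-to-AWS pipeline and model choices are not reproduced.

```python
# Illustrative anomaly detection for IoT sensor readings with IsolationForest.
# The data and the two injected anomalies are simulated, not from the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=21.0, scale=0.5, size=(500, 1))  # e.g., room temperature
readings = np.vstack([normal, [[35.0]], [[5.0]]])        # two injected anomalies

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)                       # -1 = anomaly, 1 = normal
print(np.where(flags == -1)[0])                          # indices of flagged readings
```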
- Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and a time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emissions compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z)
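A minimal sketch of the edge/remote split described in the entry above: the first k layers run on the edge device and the intermediate activation is shipped to the remote node. The network, split point, and input are illustrative only.

```python
# Toy edge/remote split: run the first k layers locally, send the intermediate
# activation over the channel, and finish inference remotely. In the paper the
# split point varies with channel conditions; here it is fixed for illustration.
import numpy as np
from tensorflow import keras

full = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

def run_layers(layers, x):
    for layer in layers:
        x = layer(x)
    return x

x = np.random.rand(1, 16).astype(np.float32)
k = 1                                            # split point (illustrative)
activation = run_layers(full.layers[:k], x)      # computed on the edge device
y = run_layers(full.layers[k:], activation)      # completed at the remote node
print(y.numpy())
```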
- A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC [0.40690419770123604]
A neural network autoencoder model can be implemented in a radiation-tolerant ASIC to perform lossy data compression.
This is the first radiation-tolerant on-detector ASIC implementation of a neural network designed for particle physics applications.
arXiv Detail & Related papers (2021-05-04T18:06:23Z)
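The compression scheme in the entry above can be sketched as a small autoencoder whose encoder would sit on the detector and whose decoder would run off-detector; the layer sizes below are illustrative and do not reflect the paper's ASIC design.

```python
# Sketch of autoencoder-based lossy compression: the encoder shrinks a sensor
# readout to a small latent code; the decoder reconstructs it downstream.
# All dimensions and training data are illustrative.
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(48,))                                        # one module's channels
code = keras.layers.Dense(8, activation="relu", name="encoder")(inputs)  # on-detector side
recon = keras.layers.Dense(48, name="decoder")(code)                     # off-detector side
autoencoder = keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")

data = np.random.rand(256, 48).astype(np.float32)
autoencoder.fit(data, data, epochs=2, verbose=0)      # train to reconstruct its input
print(autoencoder.evaluate(data, data, verbose=0))    # reconstruction (compression) loss
```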
- Temporal Pulses Driven Spiking Neural Network for Fast Object Recognition in Autonomous Driving [65.36115045035903]
We propose an approach to address the object recognition problem directly with raw temporal pulses, utilizing a spiking neural network (SNN).
Evaluated on various datasets, our proposed method has shown performance comparable to state-of-the-art methods while achieving remarkable time efficiency.
arXiv Detail & Related papers (2020-01-24T22:58:55Z)
- Adaptive Anomaly Detection for IoT Data in Hierarchical Edge Computing [71.86955275376604]
We propose an adaptive anomaly detection approach for hierarchical edge computing (HEC) systems.
We design an adaptive scheme that selects one of several models, based on contextual information extracted from the input data, to perform anomaly detection.
We evaluate our proposed approach using a real IoT dataset and demonstrate that it reduces detection delay by 84% while maintaining almost the same accuracy compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-01-10T05:29:17Z)
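A toy version of the adaptive scheme described in the entry above, selecting between a small edge model and a larger remote model from a contextual cue; the difficulty heuristic and both "models" are invented for illustration.

```python
# Toy context-based model selection in a hierarchical edge setup: an easy
# input stays on the small local model, a hard one escalates to a bigger one.
# The heuristic, thresholds, and both models are invented for illustration.
import statistics

def small_edge_model(x):
    return "normal" if abs(statistics.fmean(x)) < 3 else "anomaly"

def large_remote_model(x):
    return "anomaly" if max(abs(v) for v in x) > 4 else "normal"

def detect(x):
    difficulty = statistics.pstdev(x)  # contextual cue from the input itself
    model = small_edge_model if difficulty < 2 else large_remote_model
    return model(x)

print(detect([0.1, -0.2, 0.3, 0.0]))   # low variance -> handled on the edge
print(detect([5.0, -4.0, 6.0, -7.0]))  # high variance -> offloaded upward
```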
This list is automatically generated from the titles and abstracts of the papers on this site.