Particle Hit Clustering and Identification Using Point Set Transformers in Liquid Argon Time Projection Chambers
- URL: http://arxiv.org/abs/2504.08182v1
- Date: Fri, 11 Apr 2025 00:46:57 GMT
- Title: Particle Hit Clustering and Identification Using Point Set Transformers in Liquid Argon Time Projection Chambers
- Authors: Edgar E. Robles, Alejandro Yankelevich, Wenjie Wu, Jianming Bian, Pierre Baldi
- Abstract summary: Liquid argon time projection chambers are often used in neutrino physics and dark-matter searches because of their high spatial resolution. Images generated by these detectors are extremely sparse, as the energy values detected by most of the detector are equal to 0. Traditional machine learning methods cannot operate over data stored in this way. We propose a machine learning model using a point set neural network that operates over a sparse matrix.
- Score: 44.255911829510694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Liquid argon time projection chambers are often used in neutrino physics and dark-matter searches because of their high spatial resolution. The images generated by these detectors are extremely sparse, as the energy values detected by most of the detector are equal to 0, meaning that despite their high resolution, most of the detector is unused in a particular interaction. Instead of representing all of the empty detections, the interaction is usually stored as a sparse matrix, a list of detection locations paired with their energy values. Traditional machine learning methods that have been applied to particle reconstruction such as convolutional neural networks (CNNs), however, cannot operate over data stored in this way and therefore must have the matrix fully instantiated as a dense matrix. Operating on dense matrices requires a lot of memory and computation time, in contrast to directly operating on the sparse matrix. We propose a machine learning model using a point set neural network that operates over a sparse matrix, greatly improving both processing speed and accuracy over methods that instantiate the dense matrix, as well as over other methods that operate over sparse matrices. Compared to competing state-of-the-art methods, our method improves classification performance by 14%, segmentation performance by more than 22%, while taking 80% less time and using 66% less memory. Compared to state-of-the-art CNN methods, our method improves classification performance by more than 86%, segmentation performance by more than 71%, while reducing runtime by 91% and reducing memory usage by 61%.
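The abstract's core storage argument can be illustrated with a small numpy sketch: storing only the detection locations paired with their energy values, versus instantiating the full dense voxel grid. The grid shape and hit count below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical detector readout: a 1000^3 voxel grid where only a few
# thousand voxels record nonzero energy (illustrative numbers only).
grid_shape = (1000, 1000, 1000)
n_hits = 5000

rng = np.random.default_rng(0)
coords = rng.integers(0, 1000, size=(n_hits, 3))   # (x, y, z) per hit
energies = rng.random(n_hits).astype(np.float32)   # energy per hit

# Sparse (point-set) storage: one coordinate row per hit plus its energy.
sparse_bytes = coords.nbytes + energies.nbytes

# Dense storage would instantiate every voxel, hit or not.
dense_bytes = np.prod(grid_shape) * np.float32().nbytes

print(f"sparse: {sparse_bytes / 1e6:.2f} MB")   # well under a megabyte
print(f"dense:  {dense_bytes / 1e9:.2f} GB")    # several gigabytes
```

The orders-of-magnitude gap between the two layouts is what motivates operating on the point set directly rather than on the instantiated dense matrix.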
Related papers
- NeuMatC: A General Neural Framework for Fast Parametric Matrix Operation [75.91285900600549]
We propose the Neural Matrix Computation Framework (NeuMatC), which elegantly tackles general parametric matrix operation tasks. NeuMatC learns, without supervision, a low-rank and continuous mapping from parameters to their corresponding matrix operation results. Experimental results on both synthetic and real-world datasets demonstrate the promising performance of NeuMatC.
arXiv Detail & Related papers (2025-11-28T07:21:17Z) - SMM-Conv: Scalar Matrix Multiplication with Zero Packing for Accelerated Convolution [4.14360329494344]
We present a novel approach for accelerating convolutions during inference for CPU-based architectures.
Our experiments with commonly used network architectures demonstrate a significant speedup compared to existing indirect methods.
arXiv Detail & Related papers (2024-11-23T21:43:38Z) - An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks [8.779871128906787]
We propose algorithms to improve the inference time and memory efficiency of Deep Neural Networks (DNNs). We focus on matrix multiplication as the bottleneck operation of inference. Our experiments show up to a 5.24x speedup in inference time.
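The reason binary and ternary networks admit faster matrix multiplication can be sketched in a few lines: with weights restricted to {-1, 0, +1}, a matrix-vector product needs no multiplications at all, only gathers, additions, and subtractions. This is a generic illustration of the principle, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.choice([-1, 0, 1], size=(4, 8))   # ternary weight matrix
x = rng.random(8)                          # real-valued activations

def ternary_matvec(W, x):
    """Multiply a {-1,0,1} matrix by a vector using only adds/subtracts."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        pos = x[W[i] == 1].sum()    # gather-and-add for +1 weights
        neg = x[W[i] == -1].sum()   # gather-and-add for -1 weights
        out[i] = pos - neg
    return out

# Matches the ordinary matmul exactly, without any multiplications.
assert np.allclose(ternary_matvec(W, x), W @ x)
```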
arXiv Detail & Related papers (2024-11-10T04:56:14Z) - Fast inference with Kronecker-sparse matrices [4.387337528923525]
We present the first energy and time benchmarks for the multiplication with Kronecker-sparse matrices.
Our benchmark also reveals that specialized implementations spend up to 50% of their total runtime on memory rewriting operations.
We implement a new kernel that achieves a median speed-up of x1.4, while also cutting energy consumption by 15%.
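The kind of structure such kernels exploit can be seen in the classic Kronecker-product identity: a matvec with A ⊗ B never needs the large Kronecker matrix to be formed, because it reduces to two small matrix products. This is a standard linear-algebra trick shown for intuition, not the paper's GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r, s = 3, 4, 5, 6
A = rng.random((p, q))
B = rng.random((r, s))
X = rng.random((q, s))   # row-major X.ravel() plays the role of vec(X)

# Naive: materialize the (p*r x q*s) Kronecker product, then multiply.
slow = np.kron(A, B) @ X.ravel()

# Structured: two small matmuls, O(pqs + prs) work, no big matrix.
fast = (A @ X @ B.T).ravel()

assert np.allclose(slow, fast)
```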
arXiv Detail & Related papers (2024-05-23T19:36:10Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations [56.59168541623729]
Training graph neural networks (GNNs) is time consuming because sparse graph-based operations are hard to accelerate in hardware.
We explore trading off the computational precision to reduce the time complexity via sampling-based approximation.
We propose Randomized Sparse Computation, which for the first time demonstrates the potential of training GNNs with approximated operations.
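Trading precision for speed via sampling can be sketched with the generic sampled-matmul idea: approximate A @ B by summing over a random subset of the inner dimension and rescaling for unbiasedness. This illustrates the general precision/speed trade-off only; RSC's actual sampling scheme for GNN operations differs.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((50, 200))
B = rng.random((200, 30))

k = 100                                     # sample half the inner indices
idx = rng.choice(200, size=k, replace=False)
scale = 200 / k                             # unbiased rescaling of the sum
approx = scale * (A[:, idx] @ B[idx, :])    # roughly half the FLOPs

exact = A @ B
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.3f}")     # small, but nonzero
```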
arXiv Detail & Related papers (2022-10-19T17:25:33Z) - Batch-efficient EigenDecomposition for Small and Medium Matrices [65.67315418971688]
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications.
We propose a QR-based ED method dedicated to the application scenarios of computer vision.
arXiv Detail & Related papers (2022-07-09T09:14:12Z) - A Structured Sparse Neural Network and Its Matrix Calculations Algorithm [0.0]
We introduce a nonsymmetric, tridiagonal matrix with off-diagonal sparse entries and offset sub- and super-diagonals.
For the cases where the matrix inverse does not exist, a least square type pseudoinverse is provided.
Results show a significant improvement in computational cost, especially as the matrix size increases.
arXiv Detail & Related papers (2022-07-02T19:38:48Z) - Memory-Efficient Backpropagation through Large Linear Layers [107.20037639738433]
In modern neural networks like Transformers, linear layers require significant memory to store activations during backward pass.
This study proposes a memory reduction approach to perform backpropagation through linear layers.
arXiv Detail & Related papers (2022-01-31T13:02:41Z) - Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z) - Fast and Accurate Pseudoinverse with Sparse Matrix Reordering and Incremental Approach [4.710916891482697]
A pseudoinverse is a generalization of a matrix inverse, which has been extensively utilized in machine learning.
FastPI is a novel incremental singular value decomposition (SVD) based pseudoinverse method for sparse matrices.
We show that FastPI computes the pseudoinverse faster than other approximate methods without loss of accuracy.
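The SVD-to-pseudoinverse relationship that such methods build on is compact enough to show directly: if M = U S Vᵀ, then M⁺ = V S⁻¹ Uᵀ, inverting only the nonzero singular values. This is the textbook identity in plain numpy; FastPI itself adds sparse reordering and incremental SVD updates on top of it.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.random((6, 4))

# Thin SVD of M.
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Invert only singular values above a numerical-rank tolerance.
tol = max(M.shape) * np.finfo(s.dtype).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)

# Pseudoinverse assembled from the SVD factors.
M_pinv = Vt.T @ np.diag(s_inv) @ U.T

assert np.allclose(M_pinv, np.linalg.pinv(M))
```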
arXiv Detail & Related papers (2020-11-09T07:47:10Z) - Block-wise Dynamic Sparseness [20.801638768447948]
We present a new method for dynamic sparseness, whereby parts of the computation are omitted dynamically, based on the input.
Our method achieves similar language modeling perplexities as the dense baseline, at half the computational cost at inference time.
arXiv Detail & Related papers (2020-01-14T10:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.