On the impact of selected modern deep-learning techniques to the
performance and celerity of classification models in an experimental
high-energy physics use case
- URL: http://arxiv.org/abs/2002.01427v4
- Date: Fri, 8 May 2020 10:29:13 GMT
- Title: On the impact of selected modern deep-learning techniques to the
performance and celerity of classification models in an experimental
high-energy physics use case
- Authors: Giles Chatham Strong
- Abstract summary: Deep learning techniques are tested in the context of a classification problem encountered in the domain of high-energy physics.
The advantages are evaluated in terms of both performance metrics and the time required to train and apply the resulting models.
A new wrapper library for PyTorch called LUMIN is presented, which incorporates all of the techniques studied.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beginning from a basic neural-network architecture, we test the potential
benefits offered by a range of advanced techniques for machine learning, in
particular deep learning, in the context of a typical classification problem
encountered in the domain of high-energy physics, using a well-studied dataset:
the 2014 Higgs ML Kaggle dataset. The advantages are evaluated in terms of both
performance metrics and the time required to train and apply the resulting
models. Techniques examined include domain-specific data-augmentation, learning
rate and momentum scheduling, (advanced) ensembling in both model-space and
weight-space, and alternative architectures and connection methods. Following
the investigation, we arrive at a model which achieves equal performance to the
winning solution of the original Kaggle challenge, whilst being significantly
quicker to train and apply, and being suitable for use with both GPU and CPU
hardware setups. These reductions in timing and hardware requirements
potentially allow the use of more powerful algorithms in HEP analyses, where
models must be retrained frequently, sometimes at short notice, by small groups
of researchers with limited hardware resources. Additionally, a new wrapper
library for PyTorch called LUMIN is presented, which incorporates all of the
techniques studied.
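As an illustration of three of the techniques named above (domain-specific data augmentation, 1cycle-style learning-rate and momentum scheduling, and weight-space ensembling), the following is a minimal PyTorch sketch built from the library's stock OneCycleLR scheduler and stochastic-weight-averaging utilities. It is not LUMIN's API or the paper's exact configuration: the toy network, the random stand-in data, the choice of azimuthal-angle columns, and all hyperparameter values are assumptions made for the sake of a runnable example.

```python
import math
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torch.optim.lr_scheduler import OneCycleLR
from torch.optim.swa_utils import AveragedModel, update_bn

# Random stand-ins for the HiggsML inputs: 30 kinematic features,
# binary signal/background targets. Shapes only; not real physics data.
n_features, n_epochs = 30, 30
x_all = torch.randn(10_000, n_features)
y_all = torch.randint(0, 2, (10_000, 1)).float()
loader = DataLoader(TensorDataset(x_all, y_all), batch_size=256, shuffle=True)

def augment_phi(x, phi_cols):
    """Domain-specific augmentation sketch: rotate every azimuthal-angle
    feature by a common random angle per event, exploiting the detector's
    symmetry about the beam axis. The column indices are hypothetical."""
    x = x.clone()
    delta = torch.rand(x.size(0), 1) * 2 * math.pi
    x[:, phi_cols] = torch.remainder(x[:, phi_cols] + delta + math.pi,
                                     2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x

model = nn.Sequential(nn.Linear(n_features, 100), nn.ReLU(),
                      nn.Linear(100, 100), nn.ReLU(),
                      nn.Linear(100, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# 1cycle schedule: the learning rate ramps up and then anneals, while
# momentum is cycled inversely (cycle_momentum=True is the default).
sched = OneCycleLR(opt, max_lr=1e-2, epochs=n_epochs, steps_per_epoch=len(loader))

# Weight-space ensembling: maintain a running average of the weights
# visited late in training (stochastic weight averaging).
swa_model = AveragedModel(model)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(n_epochs):
    for x, y in loader:
        x = augment_phi(x, phi_cols=[3, 7])  # hypothetical phi columns
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        sched.step()  # 1cycle schedulers step once per batch
    if epoch >= n_epochs - 5:  # average weights only over the final epochs
        swa_model.update_parameters(model)

update_bn(loader, swa_model)  # refresh BN stats (a no-op here: no BN layers)
```

At inference time, predictions would come from `swa_model` rather than `model`; averaging the weights of several late-training checkpoints often approaches the accuracy of a model-space ensemble at a fraction of the prediction cost, in line with the training- and application-time focus of the abstract.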
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demand for real-time visual inference by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Hybrid Quantum Neural Network in High-dimensional Data Classification [1.4801853435122907]
We introduce a novel model architecture that combines classical convolutional layers with a quantum neural network.
The model is evaluated by classifying high-dimensional audio data from the Bird-CLEF 2021 dataset.
arXiv Detail & Related papers (2023-12-02T04:19:23Z) - Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method; a minimal sketch of standard Mixup appears after this list.
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
arXiv Detail & Related papers (2023-11-15T10:43:13Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - A New Clustering-Based Technique for the Acceleration of Deep
Convolutional Networks [2.7393821783237184]
Model Compression and Acceleration (MCA) techniques are used to transform large pre-trained networks into smaller models.
We propose a clustering-based approach that is able to increase the number of employed centroids/representatives.
This is achieved by imposing a special structure to the employed representatives, which is enabled by the particularities of the problem at hand.
arXiv Detail & Related papers (2021-07-19T18:22:07Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Computation on Sparse Neural Networks: an Inspiration for Future
Hardware [20.131626638342706]
We describe the current status of the research on the computation of sparse neural networks.
We discuss the model accuracy influenced by the number of weight parameters and the structure of the model.
We show that for practically complicated problems, it is more beneficial to search large and sparse models in the weight dominated region.
arXiv Detail & Related papers (2020-04-24T19:13:50Z) - Gradient-Based Training and Pruning of Radial Basis Function Networks
with an Application in Materials Physics [0.24792948967354234]
We propose a gradient-based technique for training radial basis function networks with an efficient and scalable open-source implementation.
We derive novel closed-form optimization criteria for pruning the models for continuous as well as binary data.
arXiv Detail & Related papers (2020-04-06T11:32:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.