Neural Embedding: Learning the Embedding of Manifold of Physics Data
- URL: http://arxiv.org/abs/2208.05484v1
- Date: Wed, 10 Aug 2022 18:00:00 GMT
- Title: Neural Embedding: Learning the Embedding of Manifold of Physics Data
- Authors: Sang Eon Park, Philip Harris, Bryan Ostdiek
- Abstract summary: We show that this embedding can be a powerful step in the data analysis pipeline for many applications.
We provide, for the first time, a viable solution to quantifying the true search capability of model-agnostic search algorithms in collider physics.
- Score: 5.516715115797386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a method of embedding physics data manifolds with
metric structure into lower dimensional spaces with simpler metrics, such as
Euclidean and Hyperbolic spaces. We then demonstrate that it can be a powerful
step in the data analysis pipeline for many applications. Using progressively
more realistic simulated collisions at the Large Hadron Collider, we show that
this embedding approach learns the underlying latent structure. With the notion
of volume in Euclidean spaces, we provide for the first time a viable solution
to quantifying the true search capability of model agnostic search algorithms
in collider physics (i.e. anomaly detection). Finally, we discuss how the ideas
presented in this paper can be employed to solve many practical challenges that
require the extraction of physically meaningful representations from
information in complex high dimensional datasets.
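As a minimal illustration of the metric-preserving embedding idea, the sketch below uses classical multidimensional scaling in place of the paper's neural network, and synthetic low-intrinsic-dimension data in place of collider events; it is a toy stand-in, not the authors' method.

```python
import numpy as np

def embed_euclidean(X, k=2):
    """Classical multidimensional scaling: map points to a k-dimensional
    Euclidean space while (approximately) preserving pairwise distances."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared distance matrix
    J = np.eye(n) - np.ones((n, n)) / n              # double-centering matrix
    B = -0.5 * J @ D2 @ J                            # Gram matrix of centred data
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]
    scale = np.sqrt(np.clip(vals[top], 0.0, None))
    return vecs[:, top] * scale

# Synthetic stand-in for "physics data": points with 2-D intrinsic
# structure embedded in a 10-D ambient space.
rng = np.random.default_rng(0)
planar = rng.normal(size=(50, 2))
basis, _ = np.linalg.qr(rng.normal(size=(10, 2)))
X = planar @ basis.T
Y = embed_euclidean(X, k=2)   # pairwise distances in Y match those in X
```

Because the data here is exactly 2-dimensional, the embedding preserves all pairwise distances; the paper's neural approach targets the harder case where the manifold's metric structure is only approximately Euclidean.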
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
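The Betti curves underlying such kernels are easy to state for order zero: Betti-0 at radius r counts the connected components of the graph linking all point pairs within distance r. A small classical sketch (union-find; the quantum construction in the paper is not reproduced here):

```python
import numpy as np

def betti0_curve(points, radii):
    """Betti-0 (connected-component count) of the distance graph at each
    filtration radius, computed with a simple union-find."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    def count_components(r):
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i
        for i in range(n):
            for j in range(i + 1, n):
                if d[i, j] <= r:
                    parent[find(i)] = find(j)  # union the two components
        return len({find(i) for i in range(n)})
    return [count_components(r) for r in radii]

# Two well-separated pairs of points: components merge as the radius grows.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
curve = betti0_curve(pts, [0.05, 0.5, 10.0])
```

Higher-order Betti numbers (loops, voids) require tracking simplices rather than edges, which is where the quantum approach is aimed.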
arXiv Detail & Related papers (2023-07-14T14:48:52Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Data-Efficient Learning via Minimizing Hyperspherical Energy [48.47217827782576]
This paper considers the problem of data-efficient learning from scratch using a small amount of representative data.
We propose a MHE-based active learning (MHEAL) algorithm, and provide comprehensive theoretical guarantees for MHEAL.
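The energy being minimized is simple to state: for vectors projected onto the unit sphere, the Riesz s-energy sums 1/||w_i - w_j||^s over distinct pairs, and lower energy means more uniformly spread directions. A toy computation (not the authors' MHEAL implementation):

```python
import numpy as np

def hyperspherical_energy(W, s=1.0):
    """Riesz s-energy of vectors projected onto the unit sphere:
    sum over distinct pairs of 1 / ||w_i - w_j||^s."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    diff = W[:, None, :] - W[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(W), k=1)     # each pair counted once
    return float(np.sum(1.0 / dist[iu] ** s))

# Evenly spread directions have lower energy than bunched ones.
spread = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
angles = np.deg2rad([0.0, 10.0, 20.0, 30.0])
bunched = np.stack([np.cos(angles), np.sin(angles)], axis=1)
```

Active learning then amounts to preferring samples whose representations keep this energy low, i.e. that cover the sphere evenly.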
arXiv Detail & Related papers (2022-06-30T11:39:12Z)
- Fiberwise dimensionality reduction of topologically complex data with vector bundles [0.0]
We propose to model topologically complex datasets using vector bundles.
The base space accounts for the large scale topology, while the fibers account for the local geometry.
This allows one to reduce the dimensionality of the fibers, while preserving the large scale topology.
arXiv Detail & Related papers (2022-06-13T22:53:46Z)
- Calibrating constitutive models with full-field data via physics informed neural networks [0.0]
We propose a physics-informed deep-learning framework for the discovery of model parameterizations given full-field displacement data.
We work with the weak form of the governing equations rather than the strong form to impose physical constraints upon the neural network predictions.
We demonstrate that informed machine learning is an enabling technology and may shift the paradigm of how full-field experimental data is utilized to calibrate models.
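The weak-form idea can be illustrated without a neural network at all. For a 1-D bar governed by E u'' + b = 0, multiplying by a test function v with v(0) = v(1) = 0 and integrating by parts gives E ∫ u'v' dx = ∫ b v dx, so the stiffness E can be recovered from full-field displacement data by least squares over several test functions. A hedged toy stand-in for the paper's framework:

```python
import numpy as np

def trapz(f, x):
    """Trapezoid-rule integral of samples f over grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Synthetic "full-field" data: exact solution of E u'' + b = 0
# on [0, 1] with u(0) = u(1) = 0, for E = 2 and b = 1.
x = np.linspace(0.0, 1.0, 401)
b = np.ones_like(x)
E_true = 2.0
u = -x**2 / 4.0 + x / 4.0
du = np.gradient(u, x)                   # "measured" strain field

# Weak form: E * ∫ u' v' dx = ∫ b v dx for each test function v.
lhs, rhs = [], []
for k in range(1, 4):                    # sinusoidal test functions
    v = np.sin(k * np.pi * x)
    dv = k * np.pi * np.cos(k * np.pi * x)
    lhs.append(trapz(du * dv, x))        # coefficient of E
    rhs.append(trapz(b * v, x))
E_est = np.dot(lhs, rhs) / np.dot(lhs, lhs)   # one-parameter least squares
```

Only first derivatives of the (noisy, measured) displacement field are needed, which is exactly why the paper prefers the weak form over the strong form.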
arXiv Detail & Related papers (2022-03-30T18:07:44Z)
- Physics-informed deep-learning applications to experimental fluid mechanics [2.992602379681373]
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest in experimental fluid mechanics.
Deep-learning approaches have been shown suitable for such super-resolution tasks.
In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data in time and space.
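A minimal caricature of physics-constrained super-resolution, with a linear least-squares solve standing in for the PINN: reconstruct a fine-grid field from sparse samples by requiring the discrete physics residual, here u'' = 0, to vanish between the data points.

```python
import numpy as np

# Fine grid with N points; "measurements" only at every 10th point.
N = 101
x = np.linspace(0.0, 1.0, N)
u_true = 3.0 * x + 1.0                 # any linear field satisfies u'' = 0
idx = np.arange(0, N, 10)
data = u_true[idx]

# Stack physics rows (second difference = 0 at interior points)
# and weighted data rows, then solve in the least-squares sense.
rows, rhs = [], []
for i in range(1, N - 1):
    r = np.zeros(N)
    r[i - 1], r[i], r[i + 1] = 1.0, -2.0, 1.0
    rows.append(r); rhs.append(0.0)
w = 10.0                               # weight on the measurements
for j, i in enumerate(idx):
    r = np.zeros(N)
    r[i] = w
    rows.append(r); rhs.append(w * data[j])
u_rec, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The physics constraint fills in the field between measurements; a PINN plays the same role for nonlinear governing equations (e.g. Navier-Stokes), where no linear solve suffices.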
arXiv Detail & Related papers (2022-03-29T09:58:30Z)
- Scalable approach to many-body localization via quantum data [69.3939291118954]
Many-body localization is a notoriously difficult phenomenon from quantum many-body physics.
We propose a flexible neural network based learning approach that circumvents any computationally expensive step.
Our approach can be applied to large-scale quantum experiments to provide new insights into quantum many-body physics.
arXiv Detail & Related papers (2022-02-17T19:00:09Z)
- Manifold embedding data-driven mechanics [0.0]
This article introduces a new data-driven approach that leverages a manifold embedding generated by the invertible neural network.
We achieve this by training a deep neural network to globally map data from the manifold onto a lower-dimensional Euclidean vector space.
arXiv Detail & Related papers (2021-12-18T04:38:32Z)
- DeepPhysics: a physics aware deep learning framework for real-time simulation [0.0]
We propose a solution to simulate hyper-elastic materials using a data-driven approach.
A neural network is trained to learn the non-linear relationship between boundary conditions and the resulting displacement field.
The results show that our network architecture trained with a limited amount of data can predict the displacement field in less than a millisecond.
arXiv Detail & Related papers (2021-09-17T12:15:47Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.