Conformal Disentanglement: A Neural Framework for Perspective Synthesis and Differentiation
- URL: http://arxiv.org/abs/2408.15344v1
- Date: Tue, 27 Aug 2024 18:06:45 GMT
- Title: Conformal Disentanglement: A Neural Framework for Perspective Synthesis and Differentiation
- Authors: George A. Kevrekidis, Eleni D. Koronaki, Yannis G. Kevrekidis
- Abstract summary: We make observations of objects from several different perspectives in space, at different points in time.
It is necessary to synthesize a complete picture of what is 'common' across its sources.
We introduce a neural network autoencoder framework capable of both tasks.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: For multiple scientific endeavors it is common to measure a phenomenon of interest in more than one way. We make observations of objects from several different perspectives in space, at different points in time; we may also measure different properties of a mixture using different types of instruments. After collecting this heterogeneous information, it is necessary to be able to synthesize a complete picture of what is 'common' across its sources: the subject we ultimately want to study. However, isolated ('clean') observations of a system are not always possible: observations often contain information about other systems in its environment, or about the measuring instruments themselves. In that sense, each observation may contain information that 'does not matter' to the original object of study; this 'uncommon' information between sensors observing the same object may still be important, and decoupling it from the main signal(s) may be useful. We introduce a neural network autoencoder framework capable of both tasks: it is structured to identify 'common' variables, and, making use of orthogonality constraints to define geometric independence, to also identify disentangled 'uncommon' information originating from the heterogeneous sensors. We demonstrate applications in several computational examples.
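The architecture described in the abstract can be illustrated with a minimal sketch: two sensor-specific autoencoders whose latent codes are split into 'common' and 'uncommon' blocks, trained so that the common blocks agree across sensors while a penalty discourages dependence between common and uncommon coordinates. This is not the authors' implementation; the layer sizes, loss weights, and the covariance-based surrogate for the paper's orthogonality/conformality constraints are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoSensorAE(nn.Module):
    """Two encoders/decoders; each latent code is [common | uncommon]."""
    def __init__(self, d_x, d_y, d_c=2, d_u=2, hidden=64):
        super().__init__()
        def mlp(d_in, d_out):
            return nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(),
                                 nn.Linear(hidden, d_out))
        self.enc_x, self.enc_y = mlp(d_x, d_c + d_u), mlp(d_y, d_c + d_u)
        self.dec_x, self.dec_y = mlp(d_c + d_u, d_x), mlp(d_c + d_u, d_y)
        self.d_c = d_c

    def forward(self, x, y):
        zx, zy = self.enc_x(x), self.enc_y(y)
        return (self.dec_x(zx), self.dec_y(zy),
                zx[:, :self.d_c], zy[:, :self.d_c],   # common blocks
                zx[:, self.d_c:], zy[:, self.d_c:])   # uncommon blocks

def loss_fn(model, x, y, lam_agree=1.0, lam_indep=0.1):
    x_hat, y_hat, cx, cy, ux, uy = model(x, y)
    recon = ((x - x_hat) ** 2).mean() + ((y - y_hat) ** 2).mean()
    agree = ((cx - cy) ** 2).mean()          # common coordinates must match
    # simple stand-in for the paper's orthogonality constraints:
    # penalize the cross-covariance between common and uncommon coordinates
    z = torch.cat([cx, ux, uy], dim=1)
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.T @ z / max(z.shape[0] - 1, 1)
    indep = (cov[:cx.shape[1], cx.shape[1]:] ** 2).sum()
    return recon + lam_agree * agree + lam_indep * indep
```

Training then amounts to minimizing loss_fn over paired observations (x, y) of the same underlying system recorded by the two sensors.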
Related papers
- Unsupervised discovery of the shared and private geometry in multi-view data [1.8816600430294537]
We develop a nonlinear neural network-based method that disentangles low-dimensional shared and private latent variables.
We demonstrate our model's ability to discover interpretable shared and private structure across different noise conditions.
Applying our method to simultaneous Neuropixels recordings of hippocampus and prefrontal cortex while mice run on a linear track, we discover a low-dimensional shared latent space that encodes the animal's position.
arXiv Detail & Related papers (2024-08-22T03:00:21Z)
- Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations [22.567497617912046]
We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework.
The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations.
GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art.
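As an illustration of the general idea (reconstructing unobserved channels by propagating information across related locations and variables), here is a minimal sketch. It is not the GgNet architecture; the learnable dense adjacency, single propagation step, and masking scheme are assumptions for exposition.

```python
import torch
import torch.nn as nn

class GraphVirtualSensor(nn.Module):
    def __init__(self, n_nodes, d_in, d_hidden=32):
        super().__init__()
        self.embed = nn.Linear(d_in, d_hidden)
        self.readout = nn.Linear(d_hidden, d_in)
        # learnable dense relations between locations/variables
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x, mask):
        # x: (batch, n_nodes, d_in); mask: 1 where observed, 0 where missing
        h = self.embed(x * mask)                     # hide unobserved entries
        A = torch.softmax(self.adj_logits, dim=-1)   # row-normalized weights
        h = torch.relu(A @ h)                        # one propagation step
        return self.readout(h)                       # reconstruct every node

# Training would minimize the reconstruction error on entries held out
# (masked) in the forward pass, so the model learns to infer a location's
# signal from the other locations and variables it is related to.
```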
arXiv Detail & Related papers (2024-02-19T23:22:30Z)
- High-dimensional monitoring and the emergence of realism via multiple observers [41.94295877935867]
Correlation is the basic mechanism of every measurement model.
We introduce a model that interpolates between weak and strong non-selective measurements for qudits.
arXiv Detail & Related papers (2023-05-13T13:42:19Z)
- Gacs-Korner Common Information Variational Autoencoder [102.89011295243334]
We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables.
We demonstrate that our formulation allows us to learn semantically meaningful common and unique factors of variation even on high-dimensional data such as images and videos.
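For context, the classical Gacs-Korner common information of two discrete variables can be computed directly from their joint distribution: the 'common part' labels the connected components of the support graph, and the common information is the entropy of that label. The sketch below illustrates only this classical notion; the paper's contribution is a variational autoencoder formulation that scales the idea to high-dimensional data such as images and videos.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def gacs_korner_common_information(p_xy):
    """p_xy: joint probability table of shape (num_x_values, num_y_values)."""
    nx, ny = p_xy.shape
    # bipartite support graph: x-node i links to y-node nx + j iff p(x_i, y_j) > 0
    rows, cols = np.nonzero(p_xy > 0)
    graph = csr_matrix((np.ones_like(rows), (rows, nx + cols)),
                       shape=(nx + ny, nx + ny))
    _, labels = connected_components(graph, directed=False)
    comp_of_x = labels[:nx]
    # distribution of the common part = total mass of each connected component
    mass = np.array([p_xy[comp_of_x == c, :].sum() for c in np.unique(comp_of_x)])
    mass = mass[mass > 0]
    return float(-(mass * np.log2(mass)).sum())   # entropy in bits

# Example: X = (C, U1), Y = (C, U2) share one fair bit C -> ~1 bit in common
p = np.zeros((4, 4))
for c in (0, 1):
    for u1 in (0, 1):
        for u2 in (0, 1):
            p[2 * c + u1, 2 * c + u2] = 1 / 8
print(gacs_korner_common_information(p))  # ~1.0
```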
arXiv Detail & Related papers (2022-05-24T17:47:26Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Combining Observational and Randomized Data for Estimating Heterogeneous Treatment Effects [82.20189909620899]
Estimating heterogeneous treatment effects is an important problem across many domains.
Currently, most existing works rely exclusively on observational data.
We propose to estimate heterogeneous treatment effects by combining large amounts of observational data and small amounts of randomized data.
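One simple way to make this idea concrete is a two-stage recipe: fit a (possibly confounded) CATE estimate on the large observational sample, then learn a low-complexity correction from the small randomized sample, where unbiased pseudo-outcomes are available. This is a hedged sketch of the general strategy, not necessarily the estimator proposed in the paper; the T-learner, the known treatment probability p_treat, and the ridge correction are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

def fit_cate(X, t, y):
    """T-learner: separate outcome models for treated and control units."""
    m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    return lambda Xq: m1.predict(Xq) - m0.predict(Xq)

def combined_cate(X_obs, t_obs, y_obs, X_rct, t_rct, y_rct, p_treat=0.5):
    tau_obs = fit_cate(X_obs, t_obs, y_obs)      # may be biased by confounding
    # Horvitz-Thompson pseudo-outcomes: noisy but unbiased CATE signal on the
    # randomized sample, since the treatment probability is known by design
    pseudo = y_rct * (t_rct / p_treat - (1 - t_rct) / (1 - p_treat))
    # learn a simple additive correction of the observational estimate
    corr = Ridge().fit(X_rct, pseudo - tau_obs(X_rct))
    return lambda Xq: tau_obs(Xq) + corr.predict(Xq)
```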
arXiv Detail & Related papers (2022-02-25T18:59:54Z)
- Ranking the information content of distance measures [61.754016309475745]
We introduce a statistical test that can assess the relative information retained when using two different distance measures.
This in turn allows finding the most informative distance measure out of a pool of candidates.
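A concrete instance of such a test is the 'information imbalance' idea: take each point's nearest neighbor under distance measure A and check how highly that neighbor is ranked under distance measure B; small values mean A retains the neighborhood information carried by B. The sketch below is a simplified rendition (no subsampling, naive tie handling), not the paper's exact statistical procedure.

```python
import numpy as np
from scipy.spatial.distance import cdist

def information_imbalance(X_a, X_b):
    """Delta(A -> B): how well distances in space A predict neighbors in B."""
    D_a, D_b = cdist(X_a, X_a), cdist(X_b, X_b)
    np.fill_diagonal(D_a, np.inf)
    np.fill_diagonal(D_b, np.inf)
    nn_a = D_a.argmin(axis=1)                        # nearest neighbor under A
    ranks_b = D_b.argsort(axis=1).argsort(axis=1)    # rank 0 = nearest under B
    n = len(X_a)
    return 2.0 / n * (ranks_b[np.arange(n), nn_a] + 1).mean()

# Values near 0: A is at least as informative as B about local neighborhoods;
# values near 1: A carries essentially no information about B's neighborhoods.
```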
arXiv Detail & Related papers (2021-04-30T15:57:57Z)
- Latent Feature Representation via Unsupervised Learning for Pattern Discovery in Massive Electron Microscopy Image Volumes [4.278591555984395]
In particular, we give an unsupervised deep learning approach to learning a latent representation that captures semantic similarity in the data set.
We demonstrate the utility of our method applied to nano-scale electron microscopy data, where even relatively small portions of animal brains can require terabytes of image data.
arXiv Detail & Related papers (2020-12-22T17:14:19Z)
- The role of feature space in atomistic learning [62.997667081978825]
Physically-inspired descriptors play a key role in the application of machine-learning techniques to atomistic simulations.
We introduce a framework to compare different sets of descriptors, and different ways of transforming them by means of metrics and kernels.
We compare representations built in terms of n-body correlations of the atom density, quantitatively assessing the information loss associated with the use of low-order features.
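One common kernel-based comparison that fits this setting is linear centered kernel alignment (CKA), which scores the overlap between two descriptor matrices computed for the same set of structures. The sketch below shows that single measure as an illustration; the paper's framework also includes other measures (e.g. feature-space reconstruction errors) not reproduced here.

```python
import numpy as np

def linear_cka(F1, F2):
    """F1: (n_structures, d1), F2: (n_structures, d2) descriptor matrices."""
    F1 = F1 - F1.mean(axis=0)
    F2 = F2 - F2.mean(axis=0)
    cross = np.linalg.norm(F1.T @ F2, 'fro') ** 2
    norm1 = np.linalg.norm(F1.T @ F1, 'fro')
    norm2 = np.linalg.norm(F2.T @ F2, 'fro')
    return cross / (norm1 * norm2)   # 1 = identical spans, 0 = no overlap
```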
arXiv Detail & Related papers (2020-09-06T14:12:09Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.