Learning Similarity Metrics for Numerical Simulations
- URL: http://arxiv.org/abs/2002.07863v2
- Date: Tue, 23 Jun 2020 18:14:55 GMT
- Title: Learning Similarity Metrics for Numerical Simulations
- Authors: Georg Kohl, Kiwon Um, Nils Thuerey
- Abstract summary: We propose a neural network-based approach that computes a stable and generalizing metric (LSiM) to compare data from a variety of numerical simulation sources.
Our method employs a Siamese network architecture that is motivated by the mathematical properties of a metric.
- Score: 29.39625644221578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a neural network-based approach that computes a stable and
generalizing metric (LSiM) to compare data from a variety of numerical
simulation sources. We focus on scalar time-dependent 2D data that commonly
arises from motion and transport-based partial differential equations (PDEs).
Our method employs a Siamese network architecture that is motivated by the
mathematical properties of a metric. We leverage a controllable data generation
setup with PDE solvers to create increasingly different outputs from a
reference simulation in a controlled environment. A central component of our
learned metric is a specialized loss function that introduces knowledge about
the correlation between single data samples into the training process. To
demonstrate that the proposed approach outperforms existing metrics for vector
spaces and other learned, image-based metrics, we evaluate the different
methods on a large range of test data. Additionally, we analyze the generalization
benefits of an adjustable training data difficulty and demonstrate the
robustness of LSiM via an evaluation on three real-world data sets.
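The core construction can be sketched compactly. Below is a minimal, illustrative Python/PyTorch sketch of a Siamese metric with a correlation-based loss; the layer sizes, the FeatureExtractor module, and the exact loss form are assumptions for illustration, not the authors' implementation:

    # Illustrative sketch only: layer sizes, the FeatureExtractor module, and the
    # correlation loss below are assumptions, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureExtractor(nn.Module):
        """Small CNN whose intermediate activations serve as feature maps."""
        def __init__(self):
            super().__init__()
            self.blocks = nn.ModuleList([
                nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
                nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
            ])

        def forward(self, x):
            feats = []
            for block in self.blocks:
                x = block(x)
                feats.append(x)
            return feats

    class SiameseMetric(nn.Module):
        """Shared weights make d(a, b) symmetric; norms of feature
        differences make it non-negative, with d(a, a) = 0."""
        def __init__(self):
            super().__init__()
            self.net = FeatureExtractor()

        def forward(self, a, b):
            dist = 0.0
            for fa, fb in zip(self.net(a), self.net(b)):
                fa = F.normalize(fa, dim=1)  # unit-normalize channel vectors
                fb = F.normalize(fb, dim=1)
                dist = dist + (fa - fb).pow(2).mean(dim=(1, 2, 3))
            return dist  # one scalar distance per batch element

    def correlation_loss(pred_dist, true_dist):
        """Pearson-correlation surrogate: push predicted distances to grow
        with the controlled perturbation strength of the data generation
        setup (the paper's exact loss may differ)."""
        p = pred_dist - pred_dist.mean()
        t = true_dist - true_dist.mean()
        return 1.0 - (p * t).sum() / (p.norm() * t.norm() + 1e-8)

Because both inputs share weights and the distance is built from norms of feature differences, symmetry, non-negativity, and d(a, a) = 0 hold by construction, which is one sense in which the architecture is "motivated by the mathematical properties of a metric".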
Related papers
- You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of "deformation" in the vector space representation of data.
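The summary does not spell out how "deformation" is quantified; a minimal stand-in is to measure how far new embeddings sit from the reference embedding cloud (the function below is hypothetical, not the paper's method):

    # Minimal stand-in for embedding-space drift detection; the paper's actual
    # "deformation" measure is not specified in this summary.
    import numpy as np

    def drift_score(ref_emb: np.ndarray, new_emb: np.ndarray) -> float:
        """Average displacement from each new embedding to its nearest
        reference embedding; large values suggest the new data stretches
        or twists the learned representation geometry."""
        # Pairwise Euclidean distances, shape (n_new, n_ref).
        d = np.linalg.norm(new_emb[:, None, :] - ref_emb[None, :, :], axis=-1)
        return float(d.min(axis=1).mean())

    # Usage: flag drift when the score exceeds a threshold calibrated on held-out data.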
arXiv Detail & Related papers (2024-11-04T10:17:43Z)
- A Universal Metric of Dataset Similarity for Cross-silo Federated Learning [0.0]
Federated learning is increasingly used in domains such as healthcare to facilitate model training without data-sharing.
In this paper, we propose a novel metric for assessing dataset similarity.
We show that our metric shows a robust and interpretable relationship with model performance and can be calculated in a privacy-preserving manner.
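The metric itself is not described in this summary; as a generic stand-in for dataset-level similarity, one could use a kernel maximum mean discrepancy (MMD) estimate (illustrative only, and not privacy-preserving as written):

    # Generic dataset-similarity illustration using maximum mean discrepancy;
    # this is a stand-in, not the metric proposed in the paper.
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** 2
        return np.exp(-d / (2 * sigma ** 2))

    def mmd2(x, y, sigma=1.0):
        """Biased estimator of squared MMD between sample sets x and y;
        values near zero indicate similar underlying distributions."""
        return (gaussian_kernel(x, x, sigma).mean()
                + gaussian_kernel(y, y, sigma).mean()
                - 2 * gaussian_kernel(x, y, sigma).mean())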
arXiv Detail & Related papers (2024-04-29T15:08:24Z)
- Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets [53.367212596352324]
We propose an unsupervised approach leveraging EEG signal physics.
We map EEG channels to fixed positions using field interpolation, enabling source-free domain adaptation.
Our method demonstrates robust performance in brain-computer interface (BCI) tasks and potential biomarker applications.
arXiv Detail & Related papers (2024-03-07T16:17:33Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL).
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
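A minimal sketch of the joint-embedding idea, assuming an InfoNCE-style contrastive objective and Lie-symmetry transforms as augmentations (both are assumptions about details this summary omits):

    # Sketch of a joint-embedding SSL step for PDE snapshots; the encoder,
    # the symmetry augmentations, and the InfoNCE loss here are assumptions.
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """Contrastive loss: matching views of the same PDE trajectory are
        positives; all other pairs in the batch are negatives."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature                    # (batch, batch)
        targets = torch.arange(z1.shape[0], device=z1.device)  # diagonal positives
        return F.cross_entropy(logits, targets)

    # Training step sketch: views could be Lie-symmetry transforms of a
    # solution u (e.g. space/time shifts), applied by a hypothetical
    # `augment` function:
    #   z1, z2 = encoder(augment(u)), encoder(augment(u))
    #   loss = info_nce(z1, z2)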
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
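The metric's definition is not given here; one plausible toy version measures how much a single application of the mixing matrix W shrinks the spread of per-node gradients around their mean (a hypothetical illustration, not the paper's formula):

    # Hypothetical gradient-mixing metric for a doubly stochastic mixing
    # matrix W; the paper's actual definition is not reproduced here.
    import numpy as np

    def mixing_quality(W: np.ndarray, grads: np.ndarray) -> float:
        """grads: (n_nodes, dim) local gradients. Returns the residual
        spread after one mixing step relative to the initial spread;
        smaller values mean W mixes the current gradients better."""
        mean = grads.mean(axis=0, keepdims=True)
        before = np.linalg.norm(grads - mean)
        after = np.linalg.norm(W @ grads - mean)
        return float(after / (before + 1e-12))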
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs [25.253880881581956]
We propose a similarity model based on entropy, which allows for the creation of physically meaningful ground truth distances.
We create collections of fields from numerical PDE solvers and existing simulation data repositories.
A multiscale CNN architecture that computes a volumetric similarity metric (VolSiM) is proposed.
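As a fixed, non-learned stand-in for a multiscale volumetric distance, one can compare two fields across a resolution pyramid; the actual VolSiM model is a trained multiscale CNN, so the following is only a toy illustration:

    # Toy multiscale field comparison in the spirit of VolSiM; the real model
    # is a learned multiscale CNN, and this fixed pyramid is only a stand-in.
    import numpy as np

    def downsample(field: np.ndarray) -> np.ndarray:
        """Average-pool a 3D field by a factor of 2 along each axis."""
        x, y, z = (s // 2 * 2 for s in field.shape)  # crop odd dimensions
        f = field[:x, :y, :z]
        return f.reshape(x // 2, 2, y // 2, 2, z // 2, 2).mean(axis=(1, 3, 5))

    def multiscale_distance(a: np.ndarray, b: np.ndarray, levels: int = 3) -> float:
        """Sum of mean squared differences over a pyramid of resolutions."""
        total = 0.0
        for _ in range(levels):
            total += float(((a - b) ** 2).mean())
            a, b = downsample(a), downsample(b)
        return total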
arXiv Detail & Related papers (2022-02-08T19:19:08Z)
- An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z)
- Estimating informativeness of samples with Smooth Unique Information [108.25192785062367]
We measure how much a sample informs the final weights and how much it informs the function computed by the weights.
We give efficient approximations of these quantities using a linearized network.
We apply these measures to several problems, such as dataset summarization.
arXiv Detail & Related papers (2021-01-17T10:29:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.