Hierarchical model reduction driven by machine learning for parametric
advection-diffusion-reaction problems in the presence of noisy data
- URL: http://arxiv.org/abs/2204.00538v1
- Date: Fri, 1 Apr 2022 16:02:05 GMT
- Title: Hierarchical model reduction driven by machine learning for parametric
advection-diffusion-reaction problems in the presence of noisy data
- Authors: Massimiliano Lupo Pasini, Simona Perotto
- Abstract summary: We propose a new approach to generate a reliable reduced model for a parametric elliptic problem in the presence of noisy data.
We show that directional HiPOD loses accuracy when problem data are affected by noise.
We replace interpolation with Machine Learning fitting models, which better discriminate relevant physical features in the data from irrelevant noise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach to generate a reliable reduced model for a
parametric elliptic problem, in the presence of noisy data. The reference model
reduction procedure is the directional HiPOD method, which combines
Hierarchical Model reduction with a standard Proper Orthogonal Decomposition,
according to an offline/online paradigm. In this paper we show that directional
HiPOD loses accuracy when problem data are affected by noise. This
is due to the interpolation driving the online phase, since it replicates, by
definition, the noise trend. To overcome this limitation, we replace interpolation
with Machine Learning fitting models, which better discriminate relevant
physical features in the data from irrelevant unstructured noise. The numerical
assessment, although preliminary, confirms the potential of the new
approach.
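As a rough illustration of the offline/online split described in the abstract, the Python sketch below builds a plain POD basis from noisy snapshots of a toy parametric problem, then compares an interpolation-based online phase against an ML fitting model (a random forest here, an illustrative choice rather than the authors' setup):

```python
# Sketch of the offline/online idea with an ML regressor in the online phase.
# NOT the authors' code: the toy problem, basis size, and the random-forest
# regressor are illustrative assumptions.
import numpy as np
from numpy.linalg import svd
from scipy.interpolate import interp1d
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# --- Offline phase: collect noisy snapshots of a parametric solution u(x; mu).
x = np.linspace(0.0, 1.0, 200)            # spatial grid
mus = np.linspace(0.5, 2.0, 40)           # training parameter samples
snapshots = np.stack([np.exp(-mu * x) + 0.05 * rng.standard_normal(x.size)
                      for mu in mus], axis=1)     # (n_x, n_mu), noisy data

# POD: truncated SVD of the snapshot matrix gives the reduced basis.
U, s, _ = svd(snapshots, full_matrices=False)
r = 5                                     # number of retained POD modes
basis = U[:, :r]                          # (n_x, r)
coeffs = basis.T @ snapshots              # reduced coefficients per parameter

# --- Online phase, variant 1: interpolation (replicates the noise trend).
interp = interp1d(mus, coeffs, axis=1, kind="cubic")

# --- Online phase, variant 2: ML fitting model (regression smooths the noise).
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(mus.reshape(-1, 1), coeffs.T)

mu_new = np.array([[1.23]])
u_interp = basis @ interp(mu_new.ravel()).ravel()
u_ml = basis @ reg.predict(mu_new).ravel()
u_exact = np.exp(-mu_new.item() * x)
print("interpolation error:", np.linalg.norm(u_interp - u_exact))
print("ML regression error:", np.linalg.norm(u_ml - u_exact))
```

The interpolant passes through the noisy training coefficients by construction, while the regressor averages over them; this is the mechanism the abstract refers to.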
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
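A heavily simplified numpy sketch of the idea as summarized here, under the assumption that each expert is an SVD truncation of a fine-tuned model's weight update and that routing scores come from the input's alignment with each expert's subspace (both illustrative guesses, not the paper's exact construction):

```python
# Toy sketch: compress each fine-tuned model's weight update into a low-rank
# expert and route inputs among experts at inference, with no retraining.
import numpy as np

rng = np.random.default_rng(1)
d, k = 64, 4                     # layer width, expert rank (assumed)
W_base = rng.standard_normal((d, d)) / np.sqrt(d)

def low_rank_expert(W_finetuned, rank):
    """SVD-truncate the update of one source model into a rank-`rank` expert."""
    U, s, Vt = np.linalg.svd(W_finetuned - W_base)
    return U[:, :rank] * s[:rank], Vt[:rank]          # A (d,k), B (k,d)

# Two hypothetical fine-tuned source models.
experts = [low_rank_expert(W_base + 0.1 * rng.standard_normal((d, d)), k)
           for _ in range(2)]

def moe_forward(x):
    """Base output plus the expert whose input subspace best matches x."""
    scores = [np.linalg.norm(B @ x) for _, B in experts]  # routing scores
    A, B = experts[int(np.argmax(scores))]
    return W_base @ x + A @ (B @ x)

print(moe_forward(rng.standard_normal(d)).shape)   # (64,)
```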
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- DiffusionPCR: Diffusion Models for Robust Multi-Step Point Cloud Registration [73.37538551605712]
Point Cloud Registration (PCR) estimates the relative rigid transformation between two point clouds.
We propose formulating PCR as a denoising diffusion probabilistic process, mapping noisy transformations to the ground truth.
Our experiments showcase the effectiveness of our DiffusionPCR, yielding state-of-the-art registration recall rates (95.3%/81.6%) on 3DMatch and 3DLoMatch.
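The diffusion model itself is not reproduced here, but the quantity it denoises, the rigid transformation between two clouds, has a classical closed-form estimator given correspondences (Kabsch/Procrustes), sketched below as a minimal point of reference:

```python
# Classical rigid-registration baseline with known correspondences --
# NOT the DiffusionPCR model, just the transform estimation it refines.
import numpy as np

def kabsch(P, Q):
    """Closed-form best-fit rotation R and translation t with R @ p + t ~ q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
P = rng.standard_normal((100, 3))                # source point cloud
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))         # make it a proper rotation
t_true = np.array([0.3, -0.1, 0.5])
Q = P @ R_true.T + t_true + 0.01 * rng.standard_normal(P.shape)

R_est, t_est = kabsch(P, Q)
print("rotation error:   ", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```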
arXiv Detail & Related papers (2023-12-05T18:59:41Z)
- Calibrating dimension reduction hyperparameters in the presence of noise [0.4143603294943439]
Assessments of t-SNE and UMAP performance fail to acknowledge data as a combination of signal and noise.
We show that previously recommended values for perplexity and n_neighbors are too small and overfit the noise.
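A quick way to see the effect on synthetic signal-plus-noise data, with illustrative cluster structure and perplexity values (scikit-learn's t-SNE):

```python
# Embed the same signal+noise data with a small vs. large perplexity.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
# Two well-separated clusters (signal) embedded in high-dimensional noise.
signal = np.repeat(np.eye(2) * 10.0, 100, axis=0)          # (200, 2)
X = np.hstack([signal, np.zeros((200, 48))])               # lift to 50-D
X += rng.standard_normal(X.shape)                          # isotropic noise

for perplexity in (5, 50):        # small values tend to overfit the noise
    emb = TSNE(perplexity=perplexity, random_state=0).fit_transform(X)
    print(perplexity, emb.shape)
```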
arXiv Detail & Related papers (2023-12-05T18:16:17Z)
- Seismic Data Interpolation via Denoising Diffusion Implicit Models with Coherence-corrected Resampling [7.755439545030289]
Deep learning models such as U-Net often underperform when the training and test missing patterns do not match.
We propose a novel framework built upon multi-modal diffusion models.
In the inference phase, we introduce the denoising diffusion implicit model to reduce the number of sampling steps.
To enhance the coherence and continuity between the revealed traces and the missing traces, we propose two strategies.
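A structural sketch of deterministic DDIM sampling with a strided timestep schedule, which is the mechanism that reduces the step count; the noise schedule and the placeholder eps_model are illustrative stand-ins for the paper's trained network:

```python
# DDIM (eta = 0) sampling loop over a strided subset of the 1000 timesteps.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def eps_model(x, t):
    """Placeholder for the trained network predicting the added noise."""
    return np.zeros_like(x)

def ddim_sample(shape, n_steps=50, rng=np.random.default_rng(4)):
    ts = np.linspace(T - 1, 0, n_steps).astype(int)      # strided timesteps
    x = rng.standard_normal(shape)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        eps = eps_model(x, t)
        # Predict the clean sample, then step deterministically to t_prev.
        x0 = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t_prev]) * x0 + np.sqrt(1 - alpha_bar[t_prev]) * eps
    return x

print(ddim_sample((64, 64)).shape)   # e.g. one seismic section
```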
arXiv Detail & Related papers (2023-07-09T16:37:47Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
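A shape-level PyTorch sketch of the three named components; layer choices (a GRU encoder, one cross-attention block) are illustrative assumptions rather than the paper's architecture:

```python
# Sequence encoder + cross-attentive denoising decoder + step embedding.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    def __init__(self, n_items=1000, d=64):
        super().__init__()
        self.emb = nn.Embedding(n_items, d)
        self.encoder = nn.GRU(d, d, batch_first=True)       # sequence encoder
        self.cross_attn = nn.MultiheadAttention(d, 4, batch_first=True)
        self.t_emb = nn.Embedding(1000, d)                   # diffusion step
        self.out = nn.Linear(d, d)

    def forward(self, seq, x_noisy, t):
        ctx, _ = self.encoder(self.emb(seq))                 # (B, L, d)
        q = (x_noisy + self.t_emb(t)).unsqueeze(1)           # (B, 1, d)
        h, _ = self.cross_attn(q, ctx, ctx)                  # attend to history
        return self.out(h.squeeze(1))                        # denoised target

model = CondDenoiser()
seq = torch.randint(0, 1000, (8, 20))         # batch of interaction histories
x = torch.randn(8, 64)                        # noisy next-item embedding
print(model(seq, x, torch.randint(0, 1000, (8,))).shape)  # torch.Size([8, 64])
```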
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- A DeepONet multi-fidelity approach for residual learning in reduced order modeling [0.0]
We introduce a novel approach to enhance the precision of reduced order models by exploiting a multi-fidelity perspective and DeepONets.
We propose to couple model reduction with machine-learning-based residual learning, such that the reduced-order model error can be learned by a neural network and inferred for new predictions.
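A toy numpy/scikit-learn version of the residual-learning idea, with a plain MLP standing in for the DeepONet and made-up low/high-fidelity functions:

```python
# Cheap low-fidelity model plus a network that learns the fidelity gap.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0, 1, 500).reshape(-1, 1)
hi = np.sin(4 * np.pi * x).ravel() + 0.1 * x.ravel() ** 2   # "high fidelity"
lo = np.sin(4 * np.pi * x).ravel()                          # cheap ROM proxy

# Learn the residual hi - lo from a few high-fidelity samples.
idx = np.arange(0, 500, 25)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x[idx], (hi - lo)[idx])

corrected = lo + net.predict(x)                  # ROM + learned residual
print("ROM error:      ", np.abs(lo - hi).max())
print("corrected error:", np.abs(corrected - hi).max())
```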
arXiv Detail & Related papers (2023-02-24T15:15:07Z)
- Denoising Deep Generative Models [23.19427801594478]
Likelihood-based deep generative models have been shown to exhibit pathological behaviour under the manifold hypothesis.
We propose two methodologies aimed at addressing this problem.
arXiv Detail & Related papers (2022-11-30T19:00:00Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
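A minimal comparison in the same spirit, with a Savitzky-Golay filter as the local smoother and Fourier low-pass filtering as the global one (signal and settings are illustrative):

```python
# Local vs. global a priori smoothing of a noisy state measurement.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 1000)
clean = np.sin(t) * np.exp(-0.1 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)

# Local method: Savitzky-Golay uses only a sliding window of neighbors.
local = savgol_filter(noisy, window_length=31, polyorder=3)

# Global method: low-pass filtering in Fourier space uses the full record.
spec = np.fft.rfft(noisy)
spec[20:] = 0.0                              # keep the lowest 20 modes
global_ = np.fft.irfft(spec, n=t.size)

print("local  RMSE:", np.sqrt(np.mean((local - clean) ** 2)))
print("global RMSE:", np.sqrt(np.mean((global_ - clean) ** 2)))
```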
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
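A shape-level PyTorch sketch of the Galerkin-free pipeline, an autoencoder for compression plus an LSTM for latent dynamics; dimensions and layers are illustrative assumptions:

```python
# Autoencoder compresses snapshots; an LSTM advances the latent state.
import torch
import torch.nn as nn

n_x, r = 256, 8                              # full and latent dimensions
encoder = nn.Sequential(nn.Linear(n_x, 64), nn.Tanh(), nn.Linear(64, r))
decoder = nn.Sequential(nn.Linear(r, 64), nn.Tanh(), nn.Linear(64, n_x))
dynamics = nn.LSTM(r, r, batch_first=True)   # latent time-stepper

snapshots = torch.randn(1, 50, n_x)          # one trajectory of 50 snapshots
z = encoder(snapshots)                       # (1, 50, r) latent trajectory
z_next, _ = dynamics(z)                      # predict the evolved sequence
recon = decoder(z_next)                      # back to full-order space
print(recon.shape)                           # torch.Size([1, 50, 256])
# Training (not shown) would minimize reconstruction + latent prediction loss.
```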
arXiv Detail & Related papers (2021-10-15T18:05:34Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
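The classifier trick behind amortized likelihood-to-evidence ratio estimation can be shown in a few lines: train a classifier to separate dependent (theta, x) pairs from shuffled ones, and read the log-ratio off its logit. The 1-D Gaussian simulator below is an illustrative assumption:

```python
# Amortized likelihood-to-evidence ratio via binary classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 20000
theta = rng.uniform(-3, 3, n)                 # draws from the prior
x = theta + rng.standard_normal(n)            # simulator: x ~ N(theta, 1)

def feats(th, xx):
    """Quadratic features so a linear classifier can express the log-ratio."""
    return np.column_stack([th, xx, th * xx, th**2, xx**2])

X = np.vstack([feats(theta, x),                       # dependent pairs
               feats(theta, x[rng.permutation(n)])])  # shuffled pairs
y = np.r_[np.ones(n), np.zeros(n)]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# At the optimum, the logit approximates log p(x|theta) - log p(x).
print(clf.decision_function(feats(np.array([1.0]), np.array([1.2]))))
```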
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept developed within the machine learning field.
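A toy identification run in the mixture-of-experts spirit: assign samples to regions by a known gate, then fit one linear ARX expert per region by least squares (the true system and the gating rule are illustrative assumptions):

```python
# Piecewise ARX identification with one linear expert per regime.
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(1, 500):                       # piecewise dynamics, 2 regimes
    a, b = (0.8, 0.5) if y[k - 1] >= 0 else (0.3, -0.4)
    y[k] = a * y[k - 1] + b * u[k - 1] + 0.01 * rng.standard_normal()

# Regressor vector phi_k = [y_{k-1}, u_{k-1}]; gate on the sign of y_{k-1}.
phi = np.column_stack([y[:-1], u[:-1]])
target = y[1:]
for name, mask in [("regime y>=0", phi[:, 0] >= 0),
                   ("regime y<0 ", phi[:, 0] < 0)]:
    coef, *_ = np.linalg.lstsq(phi[mask], target[mask], rcond=None)
    print(name, "estimated (a, b):", coef)
```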
arXiv Detail & Related papers (2020-09-29T12:50:33Z)