RID-Noise: Towards Robust Inverse Design under Noisy Environments
- URL: http://arxiv.org/abs/2112.03912v1
- Date: Tue, 7 Dec 2021 06:32:27 GMT
- Title: RID-Noise: Towards Robust Inverse Design under Noisy Environments
- Authors: Jia-Qi Yang, Ke-Bin Fan, Hao Ma, De-Chuan Zhan
- Abstract summary: We propose Robust Inverse Design under Noise (RID-Noise), which uses existing noisy data to train a conditional invertible neural network (cINN).
We estimate the robustness of a design parameter by its predictability, measured by the prediction error of a forward neural network.
Visual results from experiments illustrate how RID-Noise works by learning the distribution and robustness from data.
- Score: 30.58112077143225
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: From an engineering perspective, a design should not only perform well under
ideal conditions, but should also resist noise. Such a design methodology,
namely robust design, has been widely implemented in industry for product
quality control. However, classic robust design requires many evaluations
for a single design target, and the results of these evaluations cannot be
reused for a new target. To achieve data-efficient robust design, we propose
Robust Inverse Design under Noise (RID-Noise), which can utilize existing noisy
data to train a conditional invertible neural network (cINN). Specifically, we
estimate the robustness of a design parameter by its predictability, measured
by the prediction error of a forward neural network. We also define a
sample-wise weight, which can be used in the maximum weighted likelihood
estimation of an inverse model based on a cINN. Visual results from
experiments illustrate how RID-Noise works by learning the
distribution and robustness from data. Further experiments on several
real-world benchmark tasks with noise confirm that our method is more
effective than other state-of-the-art inverse design methods. Code and
supplementary material are publicly available at
https://github.com/ThyrixYang/rid-noise-aaai22
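
The abstract describes a concrete pipeline: fit a forward network, convert its per-sample prediction error into a robustness weight w_i, and train the inverse cINN by maximum weighted likelihood, i.e., minimizing -sum_i w_i log p(x_i | y_i). The sketch below illustrates that pipeline under stated assumptions: the MLP forward model, the softmax error-to-weight mapping, and the `inverse_model.log_prob(x, y)` interface (a stand-in for a real cINN) are hypothetical simplifications, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of the RID-Noise idea, NOT the authors' code.
import torch
import torch.nn as nn

def fit_forward_model(x, y, epochs=200, lr=1e-3):
    """Fit a forward surrogate f: design x -> response y on the noisy data."""
    f = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU(),
                      nn.Linear(64, y.shape[1]))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        ((f(x) - y) ** 2).mean().backward()
        opt.step()
    return f

def robustness_weights(f, x, y, temperature=1.0):
    """Per-sample weights from forward prediction error: designs whose
    responses are hard to predict are treated as non-robust and
    down-weighted. (The paper's exact error-to-weight mapping may differ;
    this softmax form is an illustrative assumption.)"""
    with torch.no_grad():
        err = ((f(x) - y) ** 2).sum(dim=1)
    # Normalize so the weights average to 1 over the dataset.
    return torch.softmax(-err / temperature, dim=0) * len(x)

def weighted_mle_step(inverse_model, opt, x, y, w):
    """One maximum weighted likelihood step on the inverse model p(x | y).
    For a cINN, log_prob would come from the change-of-variables formula;
    here `inverse_model.log_prob(x, y)` is a hypothetical interface."""
    opt.zero_grad()
    loss = -(w * inverse_model.log_prob(x, y)).mean()
    loss.backward()
    opt.step()
    return loss.item()
```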
Related papers
- Robust Classification under Noisy Labels: A Geometry-Aware Reliability Framework for Foundation Models [22.68107594048035]
We present a two-stage framework to ensure robust classification in the presence of label noise without model retraining.
Recent work has shown that simple k-nearest-neighbor approaches using an embedding derived from an FM can achieve good performance even in the presence of severe label noise.
In this paper, following a similar two-stage procedure of reliability estimation followed by reliability-weighted inference, we show that improved performance can be achieved by introducing geometry information.
arXiv Detail & Related papers (2025-07-31T23:01:32Z)
- Accelerated Test-Time Scaling with Model-Free Speculative Sampling [58.69141724095398]
We introduce STAND (STochastic Adaptive N-gram Drafting), a novel model-free speculative decoding approach.
We show that STAND reduces inference latency by 60-65% compared to standard autoregressive decoding.
As a model-free approach, STAND can be applied to any existing language model without additional training.
arXiv Detail & Related papers (2025-06-05T07:31:18Z)
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise (a generic two-point zero-order gradient sketch appears after this list).
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise [54.24688963649581]
We scientifically investigate the connection between contrastive learning and $\pi$-noise.
Inspired by the idea of Positive-incentive Noise (Pi-Noise or $\pi$-Noise), which aims to learn reliable noise that is beneficial to tasks, we develop a $\pi$-noise generator.
arXiv Detail & Related papers (2024-08-19T12:07:42Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first to comprehensively analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Diffusion Generative Inverse Design [28.04683283070957]
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
Recent developments in learned graph neural networks (GNNs) can be used for accurate, efficient, differentiable estimation of simulator dynamics.
We show how denoising diffusion models can be used to solve inverse design problems efficiently and propose a particle sampling algorithm that further improves their efficiency.
arXiv Detail & Related papers (2023-09-05T08:32:07Z)
- Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples seen in previous work.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Confidence-based Reliable Learning under Dual Noises [46.45663546457154]
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks.
Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models.
Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images.
This work provides a first, unified framework for reliable learning under the joint (image, label)-noise.
arXiv Detail & Related papers (2023-02-10T07:50:34Z)
- Theta-Resonance: A Single-Step Reinforcement Learning Method for Design Space Exploration [10.184056098238766]
We use Theta-Resonance to train an intelligent agent that produces progressively better samples.
We specialize existing policy gradient algorithms in deep reinforcement learning (D-RL) to update our policy network.
Although we present only categorical design spaces, we also outline how to use Theta-Resonance to explore continuous and mixed continuous-discrete design spaces.
arXiv Detail & Related papers (2022-11-03T16:08:40Z)
- Targeted Adaptive Design [0.0]
Modern manufacturing and advanced materials design often require searches of relatively high-dimensional process control parameter spaces.
We describe targeted adaptive design (TAD), a new algorithm that performs this sampling task efficiently.
TAD embodies the exploration-exploitation tension in a manner that recalls, but is essentially different from, Bayesian optimization and optimal experimental design.
arXiv Detail & Related papers (2022-05-27T19:29:24Z)
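
On the zero-order SGD entry above: what follows is a minimal sketch of the generic two-point gradient estimator that underlies such gradient-free methods, assuming a noisy black-box objective. It illustrates only the class of methods that paper accelerates; the paper's actual algorithm, its use of high-order smoothness, and its step-size schedule are not reproduced here.

```python
# Generic two-point zero-order SGD sketch (illustrative, not the paper's method).
import numpy as np

def two_point_grad(f, x, tau=1e-3, rng=None):
    """Estimate grad f(x) from two (possibly noisy) evaluations along a
    random unit direction e: g = d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.shape[0]
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)  # uniform direction on the unit sphere
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

def zero_order_sgd(f, x0, lr=0.1, steps=1000):
    """Plain gradient-free SGD using only function evaluations."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * two_point_grad(f, x)
    return x

# Toy usage: minimize a noisy quadratic.
f = lambda x: np.sum(x ** 2) + 0.01 * np.random.randn()
x_star = zero_order_sgd(f, np.ones(5))
```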
This list is automatically generated from the titles and abstracts of the papers on this site.