One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from
Electromagnetic Solvers
- URL: http://arxiv.org/abs/2306.04001v1
- Date: Tue, 6 Jun 2023 20:28:37 GMT
- Title: One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from
Electromagnetic Solvers
- Authors: Sriram Ravula, Varun Gorti, Bo Deng, Swagato Chakraborty, James
Pingenot, Bhyrav Mutnury, Doug Wallace, Doug Winterberg, Adam Klivans,
Alexandros G. Dimakis
- Abstract summary: Deep Image Prior (DIP) is a technique that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements.
Relative to publicly available implementations of Vector Fitting (VF), our method shows superior performance on nearly all test examples.
- Score: 57.441926088870325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key problem when modeling signal integrity for passive filters and
interconnects in IC packages is the need for multiple S-parameter measurements
within a desired frequency band to obtain adequate resolution. These samples
are often computationally expensive to obtain using electromagnetic (EM) field
solvers. Therefore, a common approach is to select a small subset of the
necessary samples and use an appropriate fitting mechanism to recreate a
densely-sampled broadband representation. We present the first deep generative
model-based approach to fit S-parameters from EM solvers using one-dimensional
Deep Image Prior (DIP). DIP is a technique that optimizes the weights of a
randomly-initialized convolutional neural network to fit a signal from noisy or
under-determined measurements. We design a custom architecture and propose a
novel regularization inspired by smoothing splines that penalizes discontinuous
jumps. We experimentally compare DIP to publicly available and proprietary
industrial implementations of Vector Fitting (VF), the industry-standard tool
for fitting S-parameters. Relative to publicly available implementations of VF,
our method shows superior performance on nearly all test examples using only
5-15% of the frequency samples. Our method is also competitive to proprietary
VF tools and often outperforms them for challenging input instances.
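Below is a minimal sketch of the approach the abstract describes: a fixed random code is pushed through a small 1-D convolutional network whose weights are optimized to match subsampled measurements, with a second-difference penalty standing in for the spline-inspired smoothness regularizer. The network width, the 10% sampling mask, the penalty weight, and the stand-in data are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# 1-D DIP sketch: the S-parameter curve is assumed stored as two real
# channels (real and imaginary parts) over 1024 frequency points.
net = nn.Sequential(
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(64, 2, kernel_size=3, padding=1),
)
z = torch.randn(1, 32, 1024)                 # fixed random input code
y = torch.randn(1, 2, 1024)                  # stand-in for EM solver samples
mask = torch.zeros(1024, dtype=torch.bool)
mask[::10] = True                            # observe ~10% of frequencies

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    out = net(z)
    # Data term on the observed frequencies only.
    data_term = ((out[..., mask] - y[..., mask]) ** 2).mean()
    # Spline-inspired smoothness: penalizing second differences
    # discourages discontinuous jumps between adjacent frequencies.
    d2 = out[..., 2:] - 2 * out[..., 1:-1] + out[..., :-2]
    loss = data_term + 1e-2 * (d2 ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```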
Related papers
- FreSh: Frequency Shifting for Accelerated Neural Representation Learning [11.175745750843484]
Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs).
However, MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately.
We propose frequency shifting (or FreSh) to align the frequency spectrum of the initial output with that of the target signal.
arXiv Detail & Related papers (2024-10-07T14:05:57Z)
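A hedged sketch of the frequency-shifting idea from the entry above: estimate where the target signal's spectral energy peaks, then scale a Fourier-feature embedding so the model's initial output lives in the same spectral range. The helper names and the geometric band layout are illustrative, not FreSh's actual API.

```python
import numpy as np

def dominant_freq(signal, sample_rate):
    # Locate the strongest non-DC bin of the target's spectrum.
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[spec[1:].argmax() + 1]      # skip the DC bin

def fourier_features(x, base_freq, n_bands=8):
    # Embedding frequencies are geometric multiples of the target's
    # dominant frequency, aligning the initial output spectrum with it.
    bands = base_freq * 2.0 ** np.arange(n_bands)
    angles = 2.0 * np.pi * np.outer(x, bands)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
target = np.sin(2 * np.pi * 60.0 * t)        # toy target signal
f0 = dominant_freq(target, sample_rate=1024) # ~60 Hz for this target
emb = fourier_features(t, f0)                # (1024, 16) MLP input
```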
- Geometry of Sensitivity: Twice Sampling and Hybrid Clipping in Differential Privacy with Optimal Gaussian Noise and Application to Deep Learning [18.92302645198466]
We study the problem of constructing optimal randomization mechanisms in Differential Privacy.
Finding the minimal perturbation for properly-selected sensitivity sets is a central problem in DP research.
arXiv Detail & Related papers (2023-09-06T02:45:08Z)
- Bayesian Kernelized Tensor Factorization as Surrogate for Bayesian Optimization [13.896697187967545]
Bayesian optimization (BO) primarily uses Gaussian processes (GPs) as the key surrogate model.
In this paper, we propose to use Bayesian Kernelized Tensor Factorization (BKTF) as a new surrogate model for BO in a $D$-dimensional product space.
BKTF offers a flexible and highly effective approach for characterizing complex functions with uncertainty quantification.
arXiv Detail & Related papers (2023-02-28T12:00:21Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures from a limited number of samples and generalizes well to a wider audience.
We appeal to a set of more elementary methods, such as the use of random bounds on a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
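A rough sketch of the augmentation idea from the entry above: enlarge a small gesture dataset with Gaussian-noise copies whose variance is drawn at random per copy, then fit a decision forest. All shapes, ranges, and the synthetic data here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))          # 120 EMG windows, 64 features each
y = rng.integers(0, 6, size=120)        # six hand-gesture labels

X_aug, y_aug = [X], [y]
for _ in range(5):                      # five noisy copies of the data
    sigma = rng.uniform(0.01, 0.2)      # random noise scale per copy
    X_aug.append(X + rng.normal(scale=sigma, size=X.shape))
    y_aug.append(y)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(np.vstack(X_aug), np.concatenate(y_aug))
```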
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds hyperspectral image (HSI) sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selecting. Then the selected patches are fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- Fast Variational AutoEncoder with Inverted Multi-Index for Collaborative Filtering [59.349057602266]
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
We propose to decompose the inner-product-based softmax probability based on the inverted multi-index.
FastVAE can outperform the state-of-the-art baselines in terms of both sampling quality and efficiency.
arXiv Detail & Related papers (2021-09-13T08:31:59Z)
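A hedged sketch of the decomposition named in the entry above: split each item embedding into two halves, quantize each half with a small codebook (an inverted multi-index), and approximate the softmax score as a sum of codeword scores, so sampling an item reduces to sampling a pair of codewords instead of scoring every item. Sizes and data are illustrative, not FastVAE's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_items, d, K = 5000, 32, 64
items = rng.normal(size=(n_items, d)).astype(np.float32)
user = rng.normal(size=d).astype(np.float32)

# Inverted multi-index: one small codebook per embedding half.
half = d // 2
cb_left = KMeans(n_clusters=K, n_init=4, random_state=0).fit(items[:, :half])
cb_right = KMeans(n_clusters=K, n_init=4, random_state=0).fit(items[:, half:])

# <user, item> ~ <user_left, codeword_left> + <user_right, codeword_right>,
# so the n_items-way softmax factorizes into two K-way distributions.
score_left = cb_left.cluster_centers_ @ user[:half]     # (K,)
score_right = cb_right.cluster_centers_ @ user[half:]   # (K,)
approx_scores = score_left[cb_left.labels_] + score_right[cb_right.labels_]
```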
- Neural Calibration for Scalable Beamforming in FDD Massive MIMO with Implicit Channel Estimation [10.775558382613077]
Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems.
We propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots.
A neural calibration method is proposed to improve the scalability of the end-to-end design.
arXiv Detail & Related papers (2021-08-03T14:26:14Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- A data-driven choice of misfit function for FWI using reinforcement learning [0.0]
We use a deep-Q network (DQN) to learn an optimal policy to determine the proper timing to switch between different misfit functions.
Specifically, we train the state-action value function (Q) to predict when to use the conventional L2-norm misfit function or the more advanced optimal-transport matching-filter (OTMF) misfit.
arXiv Detail & Related papers (2020-02-08T12:31:33Z)
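A minimal sketch of the policy described in the entry above: a small Q-network scores two actions (keep the L2-norm misfit vs. switch to the OTMF misfit) from features summarizing the current FWI state. The feature choice and network size are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class MisfitDQN(nn.Module):
    def __init__(self, n_features=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, state):
        return self.net(state)              # one Q-value per misfit choice

q = MisfitDQN()
# Made-up state: e.g. normalized iteration, residual norm, misfit trend.
state = torch.tensor([[0.1, 0.5, 0.3, 0.0]])
action = q(state).argmax(dim=1).item()      # 0 -> keep L2, 1 -> use OTMF
```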