Improved Point Estimation for the Rayleigh Regression Model
- URL: http://arxiv.org/abs/2208.03611v1
- Date: Sun, 7 Aug 2022 01:28:39 GMT
- Title: Improved Point Estimation for the Rayleigh Regression Model
- Authors: B. G. Palm, F. M. Bayer, R. J. Cintra
- Abstract summary: The Rayleigh regression model was recently proposed for modeling amplitude values of synthetic aperture radar (SAR) image pixels.
We introduce bias-adjusted estimators tailored for the Rayleigh regression model based on: (i) the Cox and Snell method; (ii) Firth's scheme; and (iii) the parametric bootstrap method.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Rayleigh regression model was recently proposed for modeling amplitude values of synthetic aperture radar (SAR) image pixels. However, inferences from such a model are based on the maximum likelihood estimators, which can be biased for small signal lengths. The Rayleigh regression model for SAR images is often fitted over small pixel windows, which may lead to inaccurate results. In this letter, we introduce bias-adjusted estimators tailored for the Rayleigh regression model based on: (i) the Cox and Snell method; (ii) Firth's scheme; and (iii) the parametric bootstrap method. We present numerical experiments on synthetic and actual SAR data sets. The bias-adjusted estimators yield nearly unbiased estimates and accurate modeling results.
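As a rough illustration of approach (iii), the sketch below fits a toy Rayleigh regression by maximum likelihood and then applies a parametric bootstrap bias correction. The log link on the mean, the simulated covariate, and all constants are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: Rayleigh-distributed responses whose mean depends on one covariate
# (assumed setup; the paper's exact parameterization may differ).
n = 50                                     # small sample, where MLE bias matters
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
beta_true = np.array([1.0, 0.5])
mu = np.exp(X @ beta_true)                 # log link on the mean (assumption)
sigma = mu / np.sqrt(np.pi / 2)            # Rayleigh scale giving that mean
y = rng.rayleigh(scale=sigma)

def negloglik(beta, X, y):
    """Negative Rayleigh log-likelihood when the mean is exp(X @ beta)."""
    s2 = (np.exp(X @ beta) / np.sqrt(np.pi / 2)) ** 2   # scale squared
    return -np.sum(np.log(y) - np.log(s2) - y**2 / (2 * s2))

def fit_mle(X, y, start):
    return minimize(negloglik, start, args=(X, y), method="BFGS").x

beta_hat = fit_mle(X, y, start=np.zeros(X.shape[1]))

# Parametric bootstrap bias correction: beta_bc = 2*beta_hat - mean(bootstrap fits).
B = 500
boot = np.empty((B, beta_hat.size))
sigma_hat = np.exp(X @ beta_hat) / np.sqrt(np.pi / 2)
for b in range(B):
    y_b = rng.rayleigh(scale=sigma_hat)    # resample from the fitted model
    boot[b] = fit_mle(X, y_b, start=beta_hat)

beta_bc = 2 * beta_hat - boot.mean(axis=0)
print("MLE:            ", beta_hat)
print("Bias-corrected: ", beta_bc)
```

The correction uses the standard bootstrap identity (corrected estimate = 2 x MLE - bootstrap mean); the Cox and Snell and Firth alternatives would instead subtract an analytic O(1/n) bias term or penalize the score function, respectively.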
Related papers
- Forest Parameter Prediction by Multiobjective Deep Learning of Regression Models Trained with Pseudo-Target Imputation [6.853936752111048]
In the prediction of forest parameters from remote sensing (RS) data, regression models have traditionally been trained on a small sample of ground reference data.
This paper proposes to impute this sample of true prediction targets with data from an existing RS-based prediction map, treated as pseudo-targets.
We use prediction maps constructed from airborne laser scanning (ALS) data to provide accurate pseudo-targets and freely available Sentinel-1 C-band synthetic aperture radar (SAR) data as regressors.
arXiv Detail & Related papers (2023-06-19T18:10:47Z)
- Augmented balancing weights as linear regression [3.877356414450364]
We provide a novel characterization of augmented balancing weights, also known as automatic debiased machine learning (AutoDML).
We show that the augmented estimator is equivalent to a single linear model whose coefficients combine the coefficients of the original outcome model with those of an unpenalized ordinary least squares (OLS) fit on the same data.
Our framework opens the black box on this increasingly popular class of estimators.
arXiv Detail & Related papers (2023-04-27T21:53:54Z)
- Robust Rayleigh Regression Method for SAR Image Processing in Presence of Outliers [0.0]
The presence of outliers (anomalous values) in synthetic aperture radar (SAR) data may result in inaccurate inferences.
This paper aims at obtaining Rayleigh regression model parameter estimators robust to the presence of outliers.
arXiv Detail & Related papers (2022-07-29T23:03:45Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
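The deep equilibrium idea above, solving directly for the fixed point of an implicit layer instead of unrolling a finite number of recurrent updates, can be illustrated with a minimal sketch. The layer, sizes, and tolerances below are hypothetical and unrelated to the actual optical-flow architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical implicit layer: z = tanh(W @ z + U @ x).
# Rescaling W to spectral norm < 1 keeps the map contractive, so a unique
# fixed point exists and plain iteration converges.
d_hidden, d_in = 8, 4
W = rng.normal(size=(d_hidden, d_hidden))
W *= 0.9 / np.linalg.norm(W, 2)
U = rng.normal(size=(d_hidden, d_in))
x = rng.normal(size=d_in)

def layer(z, x):
    return np.tanh(W @ z + U @ x)

def solve_fixed_point(x, tol=1e-8, max_iter=500):
    """Solve z* = layer(z*, x), i.e. the 'infinite-depth' output."""
    z = np.zeros(d_hidden)
    for _ in range(max_iter):
        z_next = layer(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z_star = solve_fixed_point(x)

# A finite-step recurrent network only approximates this equilibrium:
z_unrolled = np.zeros(d_hidden)
for _ in range(5):
    z_unrolled = layer(z_unrolled, x)

print("residual at equilibrium:   ", np.linalg.norm(layer(z_star, x) - z_star))
print("gap after 5 unrolled steps:", np.linalg.norm(z_unrolled - z_star))
```

In the actual DEQ formulation, gradients are obtained by implicit differentiation through the fixed point rather than by backpropagating through the iterations.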
- Hierarchical Gaussian Process Models for Regression Discontinuity/Kink under Sharp and Fuzzy Designs [0.0]
We propose nonparametric Bayesian estimators for causal inference exploiting Regression Discontinuity/Kink (RD/RK) designs.
These estimators are extended to hierarchical GP models with an intermediate Bayesian neural network layer.
Monte Carlo simulations show that our estimators perform similarly to, and often better than, competing estimators in terms of precision, coverage, and interval length.
arXiv Detail & Related papers (2021-10-03T04:23:56Z)
- Model Adaptation for Image Reconstruction using Generalized Stein's Unbiased Risk Estimator [34.08815401541628]
We introduce a Generalized Stein's Unbiased Risk Estimate (GSURE) loss metric to adapt the network to the measured k-space data.
Unlike current methods that rely on the mean square error in k-space, the proposed metric accounts for noise in the measurements.
arXiv Detail & Related papers (2021-01-29T20:16:45Z)
- Score-Based Generative Modeling through Stochastic Differential Equations [114.39209003111723]
We present a stochastic differential equation (SDE) that transforms a complex data distribution to a known prior distribution by injecting noise.
A corresponding reverse-time SDE transforms the prior distribution back into the data distribution by slowly removing the noise.
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks.
We demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
arXiv Detail & Related papers (2020-11-26T19:39:10Z)
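To make the forward/reverse-SDE idea above concrete, the following minimal 1-D sketch uses a variance-preserving SDE and a Gaussian toy data distribution, for which the time-dependent score is available in closed form; in the actual method a neural network is trained to estimate this score. All constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data distribution: x0 ~ N(mu0, sigma0^2)  (hypothetical choice).
mu0, sigma0 = 2.0, 0.5
beta = 2.0          # noise rate of the forward SDE dx = -0.5*beta*x dt + sqrt(beta) dW
T, n_steps = 5.0, 1000
dt = T / n_steps

def score(x, t):
    """Exact score of the noised marginal p_t(x) for Gaussian data.
    This closed form stands in for the neural network score estimate."""
    m_t = mu0 * np.exp(-0.5 * beta * t)
    v_t = sigma0**2 * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)
    return -(x - m_t) / v_t

# Reverse-time SDE, dx = [-0.5*beta*x - beta*score(x, t)] dt + sqrt(beta) dW,
# integrated backwards from t = T with Euler-Maruyama, starting from the prior N(0, 1).
n_samples = 20000
x = rng.normal(0.0, 1.0, size=n_samples)
for k in range(n_steps, 0, -1):
    t = k * dt
    drift = -0.5 * beta * x - beta * score(x, t)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.normal(size=n_samples)

# The reverse process approximately recovers the data distribution.
print("sample mean: %.2f (target %.2f)" % (x.mean(), mu0))
print("sample std:  %.2f (target %.2f)" % (x.std(), sigma0))
```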
- Probing Model Signal-Awareness via Prediction-Preserving Input Minimization [67.62847721118142]
We evaluate models' ability to capture the correct vulnerability signals to produce their predictions.
We measure the signal awareness of models using a new metric we propose: Signal-aware Recall (SAR).
The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric.
arXiv Detail & Related papers (2020-11-25T20:05:23Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate more stable and better-performing training of deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.