B-BACN: Bayesian Boundary-Aware Convolutional Network for Crack
Characterization
- URL: http://arxiv.org/abs/2302.06827v3
- Date: Thu, 15 Jun 2023 04:37:25 GMT
- Title: B-BACN: Bayesian Boundary-Aware Convolutional Network for Crack
Characterization
- Authors: Rahul Rathnakumar, Yutian Pang, Yongming Liu
- Abstract summary: Uncertainty quantification of crack detection is challenging due to various stochastic factors, such as measurement noise, signal processing, and model simplifications.
A machine learning-based approach is proposed to quantify both epistemic and aleatoric uncertainties concurrently.
We introduce a Bayesian Boundary-Aware Convolutional Network (B-BACN) that emphasizes uncertainty-aware boundary refinement to generate precise and reliable crack boundary detections.
- Score: 4.447467536572625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately detecting crack boundaries is crucial for reliability assessment
and risk management of structures and materials, such as structural health
monitoring, diagnostics, prognostics, and maintenance scheduling. Uncertainty
quantification of crack detection is challenging due to various stochastic
factors, such as measurement noise, signal processing, and model
simplifications. A machine learning-based approach is proposed to quantify both
epistemic and aleatoric uncertainties concurrently. We introduce a Bayesian
Boundary-Aware Convolutional Network (B-BACN) that emphasizes uncertainty-aware
boundary refinement to generate precise and reliable crack boundary detections.
The proposed method employs a multi-task learning approach, where we use Monte
Carlo Dropout to learn the epistemic uncertainty and a Gaussian sampling
function to predict each sample's aleatoric uncertainty. Moreover, we add a
boundary refinement loss to B-BACN to enhance the determination of defect
boundaries. The proposed method is demonstrated with benchmark experimental
results and compared with several existing methods. The experimental results
illustrate the effectiveness of our proposed approach in uncertainty-aware
crack boundary detection, minimizing misclassification rate, and improving
model calibration capabilities.
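The abstract's combination of Monte Carlo Dropout (for epistemic uncertainty) and a Gaussian sampling head (for per-sample aleatoric uncertainty) can be illustrated with a minimal numpy sketch. This is not the authors' B-BACN implementation: the toy two-layer network, the layer sizes, the weight scales, and the dropout rate are all assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network whose head predicts a mean and a log-variance;
# the log-variance models per-sample (aleatoric) noise.
W1 = 0.3 * rng.normal(size=(8, 16))
W2 = 0.1 * rng.normal(size=(16, 2))  # output columns: [mean, log_variance]

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)                 # ReLU
    mask = rng.random(h.shape) > drop_rate      # MC Dropout mask
    h = h * mask / (1.0 - drop_rate)            # inverted-dropout scaling
    out = h @ W2
    return out[..., 0], out[..., 1]             # predicted mean, log-variance

def mc_dropout_predict(x, T=100):
    """T stochastic passes -> predictive mean, epistemic and aleatoric variance."""
    means, log_vars = zip(*(forward(x) for _ in range(T)))
    means, log_vars = np.stack(means), np.stack(log_vars)
    epistemic = means.var(axis=0)               # spread across dropout masks
    aleatoric = np.exp(log_vars).mean(axis=0)   # average predicted noise variance
    return means.mean(axis=0), epistemic, aleatoric

x = rng.normal(size=(4, 8))
mu, epi, ale = mc_dropout_predict(x)
print(mu.shape, epi.shape, ale.shape)  # (4,) (4,) (4,)
```

The two variance terms are deliberately kept separate: the epistemic term shrinks as the model sees more data, while the aleatoric term is an irreducible property of each input.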
Related papers
- Conformal Segmentation in Industrial Surface Defect Detection with Statistical Guarantees [2.0257616108612373]
In industrial settings, surface defects on steel can significantly compromise its service life and elevate potential safety risks.
Traditional defect detection methods predominantly rely on manual inspection, which suffers from low efficiency and high costs.
We develop a statistically rigorous threshold based on a user-defined risk level to identify high-probability defective pixels in test images.
We demonstrate robust and efficient control over the expected test set error rate across varying calibration-to-test ratios.
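The "statistically rigorous threshold based on a user-defined risk level" is in the spirit of split conformal prediction: calibrate a score quantile on held-out data so that a fresh score exceeds it with probability at most the chosen risk level. The sketch below is a generic version of that idea, not the paper's exact procedure; the beta-distributed defect scores are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def conformal_threshold(cal_scores, alpha):
    """Split-conformal quantile: with n calibration scores, take the
    ceil((n + 1) * (1 - alpha)) / n empirical quantile so that a fresh
    score exceeds it with probability at most alpha."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

# Toy calibration set: predicted defect probability on pixels known to be
# defect-free (a high score means the detector is suspicious).
cal_scores = rng.beta(2, 8, size=500)
tau = conformal_threshold(cal_scores, alpha=0.1)

# Flag test pixels whose score exceeds the calibrated threshold.
test_scores = rng.beta(2, 8, size=1000)
flagged = test_scores > tau
print(tau, flagged.mean())
```

The guarantee is distribution-free: it only requires the calibration and test scores to be exchangeable, which is why the expected error rate stays controlled across calibration-to-test splits.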
arXiv Detail & Related papers (2025-04-24T16:33:56Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools for accelerating the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining a post hoc calibration method with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z)
- Predictive Uncertainty Quantification for Bird's Eye View Segmentation: A Benchmark and Novel Loss Function [10.193504550494486]
This paper introduces a benchmark for predictive uncertainty quantification in Bird's Eye View (BEV) segmentation.
Our study focuses on the effectiveness of quantified uncertainty in detecting misclassified and out-of-distribution pixels.
We propose a novel loss function, Uncertainty-Focal-Cross-Entropy (UFCE), specifically designed for highly imbalanced data.
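The summary does not spell out the UFCE loss itself, but the focal cross-entropy it builds on is standard: easy, confidently classified pixels are down-weighted so that the rare, hard pixels of an imbalanced segmentation task dominate the gradient. A minimal numpy version follows; the gamma value is the usual default, and UFCE's uncertainty weighting is omitted since it is not described here.

```python
import numpy as np

def focal_cross_entropy(p_true, gamma=2.0, eps=1e-7):
    """Focal loss per pixel: -(1 - p_t)**gamma * log(p_t), where p_true is
    the predicted probability of the correct class. Confident (easy)
    pixels contribute almost nothing to the total loss."""
    p = np.clip(p_true, eps, 1.0 - eps)
    return -((1.0 - p) ** gamma) * np.log(p)

# Easy pixel (p = 0.9) vs. hard pixel (p = 0.1): the hard one dominates.
easy, hard = focal_cross_entropy(np.array([0.9, 0.1]))
print(easy, hard)
```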
arXiv Detail & Related papers (2024-05-31T16:32:46Z)
- A unified uncertainty-aware exploration: Combining epistemic and aleatory uncertainty [21.139502047972684]
We propose an algorithm that quantifies the combined effect of aleatory and epistemic uncertainty for risk-sensitive exploration.
Our method builds on a novel extension of distributional RL that estimates a parameterized return distribution.
Experimental results on tasks with exploration and risk challenges show that our method outperforms alternative approaches.
arXiv Detail & Related papers (2024-01-05T17:39:00Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Model-Assisted Probabilistic Safe Adaptive Control With Meta-Bayesian
Learning [33.75998206184497]
We develop a novel adaptive safe control framework that integrates meta learning, Bayesian models, and control barrier function (CBF) method.
Specifically, with the help of the CBF method, we learn the inherent and external uncertainties via a unified adaptive Bayesian linear regression model.
For a new control task, we refine the meta-learned models using a few samples, and introduce pessimistic confidence bounds into CBF constraints to ensure safe control.
arXiv Detail & Related papers (2023-07-03T08:16:01Z) - Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
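Schematically, an uncertainty Bellman equation propagates local value uncertainty the way the ordinary Bellman equation propagates reward. A common form is sketched below; the notation is assumed for illustration and is not taken from this paper.

```latex
% Local uncertainty w(s,a) plus discounted expected downstream uncertainty;
% the fixed point U^\pi upper-bounds (or, in refined versions such as the
% one summarized above, equals) the posterior variance of the value.
U^{\pi}(s,a) \;=\; w(s,a)
  \;+\; \gamma^{2} \sum_{s',\,a'} \hat{P}(s' \mid s,a)\, \pi(a' \mid s')\, U^{\pi}(s',a')
```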
arXiv Detail & Related papers (2023-02-24T09:18:27Z) - Convergence of uncertainty estimates in Ensemble and Bayesian sparse
model discovery [4.446017969073817]
We show empirical success, in terms of accuracy and robustness to noise, with a bootstrapping-based sequential thresholding least-squares estimator.
We show that this bootstrapping-based ensembling technique can perform a provably correct variable selection procedure, with the error rate converging at an exponential rate.
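Bootstrapped sequential thresholded least squares can be sketched in a few lines of numpy: refit, zero out small coefficients, repeat, and record how often each candidate term survives across bootstrap resamples. The toy sparse-regression problem, the threshold value, and the resample count below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def stlsq(Theta, y, threshold=0.1, iters=10):
    """Sequential thresholded least squares: refit, then zero out
    coefficients whose magnitude falls below the threshold."""
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

# Toy sparse regression: y depends only on features 0 and 2.
n, d = 200, 5
Theta = rng.normal(size=(n, d))
y = 1.5 * Theta[:, 0] - 2.0 * Theta[:, 2] + 0.05 * rng.normal(size=n)

# Bootstrap ensemble: inclusion frequency of each candidate term.
B = 50
counts = np.zeros(d)
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    counts += stlsq(Theta[idx], y[idx]) != 0
inclusion = counts / B
print(inclusion)  # features 0 and 2 should be selected in nearly every resample
```

The inclusion frequencies are the uncertainty estimate: terms selected in nearly all resamples are trusted, while terms that appear sporadically are attributed to noise.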
arXiv Detail & Related papers (2023-01-30T04:07:59Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Adversarial Attack for Uncertainty Estimation: Identifying Critical
Regions in Neural Networks [0.0]
We propose a novel method to capture data points near the decision boundary of a neural network, which are often associated with a specific type of uncertainty.
Uncertainty estimates are derived from the input perturbations, unlike previous studies that provide perturbations on the model's parameters.
We show that the proposed method significantly outperforms other methods and captures model uncertainty with less risk.
arXiv Detail & Related papers (2021-07-15T21:30:26Z) - Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, cast in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.