Learnable Conformal Prediction with Context-Aware Nonconformity Functions for Robotic Planning and Perception
- URL: http://arxiv.org/abs/2509.21955v1
- Date: Fri, 26 Sep 2025 06:44:58 GMT
- Title: Learnable Conformal Prediction with Context-Aware Nonconformity Functions for Robotic Planning and Perception
- Authors: Divake Kumar, Sina Tayebati, Francesco Migliarba, Ranganath Krishnan, Amit Ranjan Trivedi
- Abstract summary: Learnable Conformal Prediction replaces fixed scores with a lightweight neural function to produce context-aware uncertainty sets. It maintains CP's theoretical guarantees while reducing prediction set sizes by 18% in classification, tightening detection intervals by 52%, and improving path planning safety from 72% to 91% success with minimal overhead. Hardware evaluation shows LCP adds less than 1% memory and 15.9% inference overhead, yet sustains 39 FPS on detection tasks while being 7.4 times more energy-efficient than ensembles.
- Score: 4.694504497452662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models in robotics often output point estimates with poorly calibrated confidences, offering no native mechanism to quantify predictive reliability under novel, noisy, or out-of-distribution inputs. Conformal prediction (CP) addresses this gap by providing distribution-free coverage guarantees, yet its reliance on fixed nonconformity scores ignores context and can yield intervals that are overly conservative or unsafe. We address this with Learnable Conformal Prediction (LCP), which replaces fixed scores with a lightweight neural function that leverages geometric, semantic, and task-specific features to produce context-aware uncertainty sets. LCP maintains CP's theoretical guarantees while reducing prediction set sizes by 18% in classification, tightening detection intervals by 52%, and improving path planning safety from 72% to 91% success with minimal overhead. Across three robotic tasks on seven benchmarks, LCP consistently outperforms Standard CP and ensemble baselines. In classification on CIFAR-100 and ImageNet, it achieves smaller set sizes (4.7-9.9% reduction) at target coverage. For object detection on COCO, BDD100K, and Cityscapes, it produces 46-54% tighter bounding boxes. In path planning through cluttered environments, it improves success to 91.5% with only 4.5% path inflation, compared to 12.2% for Standard CP. The method is lightweight (approximately 4.8% runtime overhead, 42 KB memory) and supports online adaptation, making it well suited to resource-constrained autonomous systems. Hardware evaluation shows LCP adds less than 1% memory and 15.9% inference overhead, yet sustains 39 FPS on detection tasks while being 7.4 times more energy-efficient than ensembles.
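To make the baseline concrete, here is a minimal sketch of standard split conformal prediction with the fixed nonconformity score (one minus the softmax probability of the true class) that LCP replaces with a learned, context-aware function. The data, function names, and the 90% coverage target are illustrative, not taken from the paper:

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Fixed nonconformity score: 1 - softmax probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    qhat = conformal_quantile(cal_scores, alpha)
    # Include every class whose score falls below the calibrated threshold;
    # marginal coverage >= 1 - alpha holds under exchangeability.
    return test_probs >= 1.0 - qhat

# Toy example: 3-class problem with synthetic softmax outputs.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)
cal_labels = rng.integers(0, 3, size=500)
test_probs = rng.dirichlet(np.ones(3), size=4)
sets = prediction_sets(cal_probs, cal_labels, test_probs)
print(sets.shape)  # prints (4, 3): boolean mask of included classes
```

LCP's change, as described in the abstract, is to replace the fixed `1 - p_true` score with a small neural network over geometric, semantic, and task-specific features, while keeping the same calibration step and its coverage guarantee.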
Related papers
- Distributional Reinforcement Learning with Information Bottleneck for Uncertainty-Aware DRAM Equalization [8.695939803795499]
We propose a distributional risk-sensitive reinforcement learning framework integrating Information Bottleneck latent representations with Conditional Value-at-Risk optimization. We introduce rate-distortion optimal signal compression achieving 51 times speedup over eye diagrams. We show that the proposed framework provides a practical solution for production-scale equalizer optimization with certified worst-case guarantees.
arXiv Detail & Related papers (2026-03-05T03:34:25Z) - Reliable Hierarchical Operating System Fingerprinting via Conformal Prediction [62.40452053128524]
Conformal Prediction (CP) can be wrapped around existing methods to obtain prediction sets with guaranteed coverage. This work addresses these limitations by introducing and evaluating two distinct structured CP strategies. While both methods satisfy validity guarantees, they expose a fundamental trade-off between level-wise efficiency and structural consistency.
arXiv Detail & Related papers (2026-02-13T11:20:48Z) - Uncertainty-Aware Deep Learning Framework for Remaining Useful Life Prediction in Turbofan Engines with Learned Aleatoric Uncertainty [0.0]
This research introduces a novel uncertainty-aware deep learning framework that learns aleatoric uncertainty directly through probabilistic modeling. Our framework achieves breakthrough critical zone performance (RUL = 30 cycles) with RMSE of 5.14, 6.89, 5.27, and 7.16. The learned uncertainty provides well-calibrated 95 percent confidence intervals with coverage ranging from 93.5 percent to 95.2 percent, enabling risk-aware maintenance scheduling.
arXiv Detail & Related papers (2025-11-24T13:53:31Z) - Uncertainty Makes It Stable: Curiosity-Driven Quantized Mixture-of-Experts [6.221156050218661]
We present a curiosity-driven quantized Mixture-of-Experts framework for deep neural networks on resource-constrained devices. Our 4-bit quantization maintains 99.9 percent of 16-bit accuracy (0.858 vs 0.859 F1) with 4x compression and 41 percent energy savings. Our information-theoretic routing demonstrates that adaptive quantization yields accurate (0.858 F1, 1.2M params), energy-efficient (3.87 F1/mJ), and predictable edge models.
arXiv Detail & Related papers (2025-11-13T15:32:41Z) - Conformal Prediction for Privacy-Preserving Machine Learning [83.88591755871734]
Using AES-encrypted variants of the MNIST dataset, we demonstrate that Conformal Prediction methods remain effective even when applied directly in the encrypted domain. Our work sets a foundation for principled uncertainty quantification in secure, privacy-aware learning systems.
arXiv Detail & Related papers (2025-07-13T15:29:14Z) - QA-HFL: Quality-Aware Hierarchical Federated Learning for Resource-Constrained Mobile Devices with Heterogeneous Image Quality [1.5612040984769855]
This paper introduces QA-HFL, a quality-aware hierarchical federated learning framework that efficiently handles heterogeneous image quality across resource-constrained mobile devices. Our approach trains specialized local models for different image quality levels and aggregates their features using a quality-weighted fusion mechanism. Experiments on MNIST demonstrate that QA-HFL achieves 92.31% accuracy after just three federation rounds.
arXiv Detail & Related papers (2025-06-04T16:29:35Z) - Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization [74.78433600288776]
HKVE (Hierarchical Key-Value Equalization) is an innovative jailbreaking framework that selectively accepts gradient optimization results. We show that HKVE substantially outperforms existing methods, by margins of 20.43%, 21.01%, and 26.43% respectively.
arXiv Detail & Related papers (2025-03-14T17:57:42Z) - Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models [3.958317527488534]
Large Language and Vision-Language Models (LLMs/VLMs) are increasingly used in safety-critical applications. Uncertainty quantification helps assess prediction confidence and enables abstention when uncertainty is high. We propose learnable abstention, integrating reinforcement learning (RL) with Conformal Prediction (CP) to optimize abstention thresholds.
arXiv Detail & Related papers (2025-02-08T21:30:41Z) - $α$-OCC: Uncertainty-Aware Camera-based 3D Semantic Occupancy Prediction [32.78977564877008]
Camera-based 3D Semantic Occupancy Prediction (OCC) aims to infer scene geometry and semantics from limited observations. We first introduce Depth-UP, an uncertainty propagation framework that improves geometry completion by up to 11.58%. For uncertainty quantification (UQ), we propose the hierarchical conformal prediction (HCP) method, effectively handling the high-level class imbalance in OCC datasets.
arXiv Detail & Related papers (2024-06-16T17:27:45Z) - Patch-Level Contrasting without Patch Correspondence for Accurate and Dense Contrastive Representation Learning [79.43940012723539]
ADCLR is a self-supervised learning framework for learning accurate and dense vision representation.
Our approach achieves new state-of-the-art performance for contrastive methods.
arXiv Detail & Related papers (2023-06-23T07:38:09Z) - Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
arXiv Detail & Related papers (2023-01-31T02:46:57Z) - Non-Parametric Adaptive Network Pruning [125.4414216272874]
We introduce non-parametric modeling to simplify the algorithm design.
Inspired by the face recognition community, we use a message passing algorithm to obtain an adaptive number of exemplars.
EPruner breaks the dependency on the training data in determining the "important" filters.
arXiv Detail & Related papers (2021-01-20T06:18:38Z) - Uncertainty Sets for Image Classifiers using Conformal Prediction [112.54626392838163]
We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%.
The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset.
Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling.
arXiv Detail & Related papers (2020-09-29T17:58:04Z)
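The regularization idea in the entry above (penalizing the small scores of unlikely classes so prediction sets stay stable and compact) can be sketched as follows. The penalty weight `lam`, cutoff rank `k_reg`, toy probabilities, and hand-picked `qhat` are illustrative; in practice `qhat` is calibrated on held-out data and the published method also includes a randomization step omitted here:

```python
import numpy as np

def raps_set(probs, qhat, lam=0.1, k_reg=2):
    """Build a regularized adaptive prediction set for one softmax vector."""
    order = np.argsort(probs)[::-1]        # classes, most to least likely
    cum = np.cumsum(probs[order])          # cumulative probability mass
    ranks = np.arange(1, len(probs) + 1)
    # Regularization adds a growing penalty past rank k_reg, shrinking
    # the long tail of unlikely classes that would otherwise be included.
    scores = cum + lam * np.maximum(ranks - k_reg, 0)
    k = int(np.searchsorted(scores, qhat, side="right"))
    return order[: max(k, 1)]              # always keep at least the top class

probs = np.array([0.55, 0.25, 0.12, 0.05, 0.03])
print(raps_set(probs, qhat=0.9))  # prints [0 1]
```

Without the penalty, the set for this example would keep accumulating low-probability classes until the cumulative mass exceeded `qhat`; the penalty cuts that tail off after `k_reg` classes.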
This list is automatically generated from the titles and abstracts of the papers in this site.