The SET Perceptual Factors Framework: Towards Assured Perception for Autonomous Systems
- URL: http://arxiv.org/abs/2508.10798v1
- Date: Thu, 14 Aug 2025 16:22:01 GMT
- Title: The SET Perceptual Factors Framework: Towards Assured Perception for Autonomous Systems
- Authors: Troi Williams
- Abstract summary: A key concern is assuring the reliability of robot perception, as perception seeds safe decision-making. We introduce the SET (Self, Environment, and Target) Perceptual Factors Framework to systematically analyze how factors negatively impact perception. Our framework aims to promote rigorous safety assurances and cultivate greater public understanding and trust in autonomous systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Future autonomous systems promise significant societal benefits, yet their deployment raises concerns about safety and trustworthiness. A key concern is assuring the reliability of robot perception, as perception seeds safe decision-making. Failures in perception are often due to complex yet common environmental factors and can lead to accidents that erode public trust. To address this concern, we introduce the SET (Self, Environment, and Target) Perceptual Factors Framework. We designed the framework to systematically analyze how factors such as weather, occlusion, or sensor limitations negatively impact perception. To achieve this, the framework employs SET State Trees to categorize where such factors originate and SET Factor Trees to model how these sources and factors impact perceptual tasks like object detection or pose estimation. Next, we develop Perceptual Factor Models using both trees to quantify the uncertainty for a given task. Our framework aims to promote rigorous safety assurances and cultivate greater public understanding and trust in autonomous systems by offering a transparent and standardized method for identifying, modeling, and communicating perceptual risks.
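The tree structures described in the abstract lend themselves to a simple hierarchical representation. The sketch below is a hypothetical illustration only: the class name, the factor labels, the probability values, and the independence-based aggregation rule are our assumptions, not the paper's specification of SET Factor Trees or Perceptual Factor Models.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FactorNode:
    """One node in a SET-style factor tree: either a source category
    (Self, Environment, Target) or a concrete perceptual factor."""
    name: str
    # Probability that this factor alone degrades the perceptual task;
    # 0.0 for purely structural (category) nodes. Values are illustrative.
    degradation_prob: float = 0.0
    children: List["FactorNode"] = field(default_factory=list)

    def task_risk(self) -> float:
        """Aggregate risk: the task is degraded if this factor or any
        descendant factor fires (factors assumed independent for simplicity)."""
        ok = 1.0 - self.degradation_prob
        for child in self.children:
            ok *= 1.0 - child.task_risk()
        return 1.0 - ok

# Hypothetical tree for an object-detection task.
detection = FactorNode("object detection", children=[
    FactorNode("Self", children=[FactorNode("sensor noise", 0.02)]),
    FactorNode("Environment", children=[FactorNode("rain", 0.10),
                                        FactorNode("glare", 0.05)]),
    FactorNode("Target", children=[FactorNode("occlusion", 0.15)]),
])

print(f"{detection.task_risk():.4f}")
```

Under these made-up numbers, the aggregated task risk is roughly 0.29; the point of the sketch is only that per-factor uncertainties at the leaves roll up through the Self/Environment/Target branches into a single task-level figure.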
Related papers
- Enhancing Uncertainty Quantification for Runtime Safety Assurance Using Causal Risk Analysis and Operational Design Domain
We propose an enhancement of traditional uncertainty quantification by explicitly incorporating environmental conditions. We leverage Hazard Analysis and Risk Assessment (HARA) and fault tree modeling to identify critical operational conditions affecting system functionality. At runtime, the resulting Bayesian network (BN) is instantiated using real-time environmental observations to infer a probabilistic distribution over the safety estimation.
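The runtime step in this summary can be sketched, very roughly, as follows. This is a degenerate, single-table stand-in for a Bayesian network; the variable names (rain, fog), the binary safety estimate, and every probability value are our own illustrative assumptions, not the paper's model.

```python
# P(safe = True | rain, fog): one conditional probability table standing
# in for a full Bayesian network. All values are made up for illustration.
P_SAFE = {
    (False, False): 0.99,
    (False, True):  0.90,
    (True,  False): 0.92,
    (True,  True):  0.75,
}

def safety_distribution(rain: bool, fog: bool) -> dict:
    """Instantiate the table with runtime environmental observations
    and return a distribution over the binary safety estimate."""
    p = P_SAFE[(rain, fog)]
    return {"safe": p, "unsafe": 1.0 - p}

dist = safety_distribution(rain=True, fog=False)
print(dist["safe"])  # 0.92
```

A real implementation would condition a multi-node network on many observed operational-design-domain variables, but the shape of the runtime query — observations in, safety distribution out — is the same.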
arXiv Detail & Related papers (2025-07-04T12:12:32Z)
- Probabilistic modelling and safety assurance of an agriculture robot providing light-treatment
Continued adoption of agricultural robots rests on the farmer's trust in the reliability, robustness, and safety of the new technology. This paper considers a probabilistic modelling and risk analysis framework for use in the early development phases.
arXiv Detail & Related papers (2025-06-24T13:39:32Z)
- Towards Responsible AI: Advances in Safety, Fairness, and Accountability of Autonomous Systems
This thesis advances knowledge in the safety, fairness, transparency, and accountability of AI systems. We extend classical deterministic shielding techniques to become resilient against delayed observations. We introduce fairness shields, a novel post-processing approach to enforce group fairness in sequential decision-making settings.
arXiv Detail & Related papers (2025-06-11T21:30:02Z)
- LLM Agents Should Employ Security Principles
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale. We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models
Multimodal generative models have sparked critical discussions on their reliability, fairness, and potential for misuse. We propose an evaluation framework to assess model reliability by analyzing responses to global and local perturbations in the embedding space. Our method lays the groundwork for detecting unreliable, bias-injected models and tracing the provenance of embedded biases.
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Trustworthiness for an Ultra-Wideband Localization Service
This paper proposes a holistic trustworthiness assessment framework for ultra-wideband self-localization.
Our goal is to provide guidance for evaluating a system's trustworthiness based on objective evidence.
Our approach guarantees that the resulting trustworthiness indicators correspond to chosen real-world threats.
arXiv Detail & Related papers (2024-08-10T11:57:10Z)
- Grasping Causality for the Explanation of Criticality for Automated Driving
This work introduces a formalization of causal queries whose answers facilitate a causal understanding of safety-relevant influencing factors for automated driving.
Based on Judea Pearl's causal theory, we define a causal relation as a causal structure together with a context.
As availability and quality of data are imperative for validly estimating answers to the causal queries, we also discuss requirements on real-world and synthetic data acquisition.
arXiv Detail & Related papers (2022-10-27T12:37:00Z)
- Risk-Driven Design of Perception Systems
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective
It is of primary importance that the decisions an autonomous vehicle derives from its sensing are robust to perturbations. Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements. A careful evaluation of the vulnerabilities of the sensing system(s) is necessary to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.