Towards a Principled Framework for Disclosure Avoidance
- URL: http://arxiv.org/abs/2502.07105v1
- Date: Mon, 10 Feb 2025 22:58:06 GMT
- Title: Towards a Principled Framework for Disclosure Avoidance
- Authors: Michael B Hawes, Evan M Brassell, Anthony Caruso, Ryan Cumings-Menon, Jason Devine, Cassandra Dorius, David Evans, Kenneth Haase, Michele C Hedrick, Alexandra Krause, Philip Leclerc, James Livsey, Rolando A Rodriguez, Luke T Rogers, Matthew Spence, Victoria Velkoff, Michael Walsh, James Whitehorne, Sallie Ann Keller
- Abstract summary: As data users' needs change, agencies must redesign the disclosure avoidance system(s) they use. A system's ability to calibrate the strength of protection to suit the underlying disclosure risk of the data is a worthwhile feature. This paper proposes a framework for distinguishing these inherent features from the implementation decisions that need to be made independent of the system selected.
- Score: 36.57924530885649
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responsible disclosure limitation is an iterative exercise in risk assessment and mitigation. From time to time, as disclosure risks grow and evolve and as data users' needs change, agencies must consider redesigning the disclosure avoidance system(s) they use. Discussions about candidate systems often conflate inherent features of those systems with implementation decisions independent of those systems. For example, a system's ability to calibrate the strength of protection to suit the underlying disclosure risk of the data (e.g., by varying suppression thresholds) is a worthwhile feature regardless of the independent decision about how much protection is actually necessary. Having a principled discussion of candidate disclosure avoidance systems requires a framework for distinguishing these inherent features of the systems from the implementation decisions that need to be made independent of the system selected. For statistical agencies, this framework must also reflect the applied nature of these systems, acknowledging that candidate systems need to be adaptable to requirements stemming from the legal, scientific, resource, and stakeholder environments within which they would be operating. This paper proposes such a framework. No approach will be perfectly adaptable to every potential system requirement. Because the selection of some methodologies over others may constrain the resulting systems' efficiency and flexibility to adapt to particular statistical product specifications, data user needs, or disclosure risks, agencies may approach these choices in an iterative fashion, adapting system requirements, product specifications, and implementation parameters as necessary to ensure the resulting quality of the statistical product.
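The abstract's calibration example (varying suppression thresholds) can be illustrated with a minimal sketch of threshold-based cell suppression. The table, cell names, and threshold value below are hypothetical illustration choices, not parameters taken from the paper:

```python
def suppress_small_cells(table, threshold=5):
    """Replace counts below `threshold` with None (primary suppression).

    Raising or lowering `threshold` calibrates the strength of
    protection independently of which system performs the suppression.
    The default of 5 is an arbitrary illustrative choice.
    """
    return {cell: (count if count >= threshold else None)
            for cell, count in table.items()}

# Hypothetical published table of small-area counts.
counts = {"tract_A": 12, "tract_B": 3, "tract_C": 7}
protected = suppress_small_cells(counts)
# tract_B's count falls below the threshold, so it is withheld.
```

The point of the paper's distinction is visible here: the *capability* to vary `threshold` is a feature of the mechanism, while the *value* chosen for it is an implementation decision made independently.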
Related papers
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping Multi-Agent Study [0.4166512373146748]
We investigate the Free Energy Principle as a foundation for measuring risk in agentic and multi-agent systems. We introduce a Cumulative Risk Exposure metric that is flexible to differing contexts and needs. We show that the introduction of gatekeepers in an AV fleet, even at low penetration, can generate significant positive externalities in terms of increased system safety.
arXiv Detail & Related papers (2025-02-06T17:38:45Z) - Stream-Based Monitoring of Algorithmic Fairness [4.811789437743092]
Stream-based monitoring is proposed as a solution for verifying the algorithmic fairness of decision and prediction systems at runtime. We present a principled way to formalize algorithmic fairness over temporal data streams in the specification language RTLola.
arXiv Detail & Related papers (2025-01-30T13:18:59Z) - Towards Formal Fault Injection for Safety Assessment of Automated Systems [0.0]
This paper introduces formal fault injection, a fusion of these two techniques throughout the development lifecycle.
We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection.
arXiv Detail & Related papers (2023-11-16T11:34:18Z) - Incorporating Recklessness to Collaborative Filtering based Recommender Systems [42.956580283193176]
Recklessness takes into account the variance of the output probability distribution of the predicted ratings.
Experimental results demonstrate that recklessness not only allows for risk regulation but also improves the quantity and quality of predictions.
arXiv Detail & Related papers (2023-08-03T21:34:00Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect-and-avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z) - Tailored Uncertainty Estimation for Deep Learning Systems [10.288326973530614]
We propose a framework that guides the selection of a suitable uncertainty estimation method.
Our framework provides strategies to validate this choice and to uncover structural weaknesses.
It anticipates prospective machine learning regulations that require evidence for the technical appropriateness of machine learning systems.
arXiv Detail & Related papers (2022-04-29T09:23:07Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - Optimal by Design: Model-Driven Synthesis of Adaptation Strategies for Autonomous Systems [9.099295007630484]
We present Optimal by Design (ObD), a framework for model-based requirements-driven synthesis of optimal adaptation strategies for autonomous systems.
ObD proposes a model for the high-level description of the basic elements of self-adaptive systems, namely the system, capabilities, requirements and environment.
Based on those elements, a Markov Decision Process (MDP) is constructed to compute the optimal strategy or the most rewarding system behaviour.
arXiv Detail & Related papers (2020-01-16T12:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.