Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
- URL: http://arxiv.org/abs/2509.08835v1
- Date: Sat, 30 Aug 2025 11:15:59 GMT
- Title: Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
- Authors: Vincent C. Müller
- Abstract summary: Big data analytics and AI pose a threat to privacy. Some of this is due to some kind of "black box problem" in AI.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of "black box problem" in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does ("shallow opacity"), 2) the analysts do not know what the system does ("standard black box opacity"), or 3) the analysts cannot possibly know what the system might do ("deep opacity"). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide justifications for judgments that are necessary to protect privacy, e.g., they cannot give "informed consent", or guarantee "anonymity". It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So I conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.
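The taxonomy lends itself to a compact illustration. The sketch below is my own illustrative gloss, not from the paper (the decision rule and all names are assumptions); it encodes the three levels and shows why, at the deeper levels, no agent is positioned to give the explanation that informed consent presupposes:

```python
from enum import Enum, auto

class Opacity(Enum):
    SHALLOW = auto()     # data subjects do not know what the system does
    BLACK_BOX = auto()   # analysts do not know what the system does
    DEEP = auto()        # analysts cannot possibly know what the system might do

def consent_can_be_informed(opacity: Opacity, subject_was_briefed: bool) -> bool:
    """Toy decision rule: informed consent presupposes that some agent can,
    at least in principle, explain the processing to the data subject."""
    if opacity is Opacity.DEEP:
        return False  # nobody can know what the system might do, so nobody can explain it
    if opacity is Opacity.BLACK_BOX:
        return False  # the analysts themselves lack the knowledge needed to brief the subject
    return subject_was_briefed  # shallow opacity can be lifted by an adequate explanation

print(consent_can_be_informed(Opacity.SHALLOW, subject_was_briefed=True))  # True
print(consent_can_be_informed(Opacity.DEEP, subject_was_briefed=True))     # False
```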
Related papers
- "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z) - Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review [1.2744523252873352]
We conduct a scoping review of existing literature to elicit details on the conflict between privacy and explainability. We extracted 57 articles from 1,943 studies published from January 2019 to December 2024. We categorize the privacy risks and preservation methods in XAI and propose the characteristics of privacy-preserving explanations.
arXiv Detail & Related papers (2025-05-05T17:53:28Z) - AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents [75.85554113398626]
We introduce AgentDAM, a new benchmark that measures whether AI web-navigation agents follow the privacy principle of "data minimization". Our benchmark simulates realistic web interaction scenarios end-to-end and is adaptable to all existing web navigation agents.
arXiv Detail & Related papers (2025-03-12T19:30:31Z) - Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy Analysis Layers [0.7918886297003017]
Analyzing privacy threats in software products is an essential part of software development to ensure systems are privacy-respecting.
We propose to use four conceptual layers (feature, ecosystem, business context, and environment) to capture this privacy complexity.
These layers can be used as a frame to structure and specify the privacy analysis support in a more tangible and actionable way.
arXiv Detail & Related papers (2024-08-07T06:30:20Z) - Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI [0.6138671548064355]
This research explores whether trust and privacy concern are barriers to the adoption of AI in health insurance.
Findings show that trust is significantly lower in the second scenario, where the use of AI is visible.
Privacy concerns are higher with AI, but the difference is not statistically significant within the model.
arXiv Detail & Related papers (2024-01-20T15:02:56Z) - TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Survey of Trustworthy AI: A Meta Decision of AI [0.41292255339309647]
Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI).
To underpin these domains, we create ten dimensions to measure trust, including explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reliability, and sustainability.
arXiv Detail & Related papers (2023-06-01T06:25:01Z) - Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics that we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z) - Human intuition as a defense against attribute inference [4.916067949075847]
Attribute inference has become a major threat to privacy.
One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference.
We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose.
arXiv Detail & Related papers (2023-04-24T06:54:17Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are among the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
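For readers unfamiliar with the mechanism underlying these applications, the following is a minimal sketch of the standard Laplace mechanism, the basic building block of differential privacy; the query, sensitivity, and epsilon values are illustrative assumptions, not taken from the survey:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise scaled to sensitivity/epsilon,
    satisfying epsilon-differential privacy for a query whose output changes
    by at most `sensitivity` when one record is added or removed."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative use: a counting query (L1 sensitivity 1) over a toy dataset.
ages = [34, 29, 41, 55, 38]
true_count = sum(1 for a in ages if a > 30)  # = 4
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true: {true_count}, DP release: {noisy_count:.2f}")
```

Calibrating the noise to sensitivity/epsilon is what yields the formal guarantee; the survey's point is that this same calibrated randomness can also be repurposed for security, learning stability, and fairness.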
This list is automatically generated from the titles and abstracts of the papers in this site.