Materiality and Risk in the Age of Pervasive AI Sensors
- URL: http://arxiv.org/abs/2402.11183v2
- Date: Mon, 24 Mar 2025 12:48:08 GMT
- Title: Materiality and Risk in the Age of Pervasive AI Sensors
- Authors: Mona Sloane, Emanuel Moss, Susan Kennedy, Matthew Stewart, Pete Warden, Brian Plancher, Vijay Janapa Reddi,
- Abstract summary: We highlight the dimensions of risk associated with AI systems that arise from the material affordances of sensors and their underlying calculative models. We propose a sensor-sensitive framework for diagnosing these risks, complementing existing approaches such as the US National Institute of Standards and Technology AI Risk Management Framework and the European Union AI Act.
- Score: 9.180189171335744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) systems connected to sensor-laden devices are becoming pervasive, which has notable implications for a range of AI risks, including to privacy, the environment, autonomy and more. There is therefore a growing need for increased accountability around the responsible development and deployment of these technologies. Here we highlight the dimensions of risk associated with AI systems that arise from the material affordances of sensors and their underlying calculative models. We propose a sensor-sensitive framework for diagnosing these risks, complementing existing approaches such as the US National Institute of Standards and Technology AI Risk Management Framework and the European Union AI Act, and discuss its implementation. We conclude by advocating for increased attention to the materiality of algorithmic systems, and of on-device AI sensors in particular, and highlight the need for development of a sensor design paradigm that empowers users and communities and leads to a future of increased fairness, accountability and transparency.
Related papers
- Catastrophic Liability: Managing Systemic Risks in Frontier AI Development [0.0]
Frontier AI development poses potential systemic risks that could affect society at a massive scale.
Current practices at many AI labs lack sufficient transparency around safety measures, testing procedures, and governance structures.
We propose a comprehensive approach to safety documentation and accountability in frontier AI development.
arXiv Detail & Related papers (2025-05-01T15:47:14Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central for the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in AD.
arXiv Detail & Related papers (2024-08-30T12:01:06Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academic and industrial fields.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection/acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - Security Risks Concerns of Generative AI in the IoT [9.35121449708677]
In an era where the Internet of Things (IoT) intersects increasingly with generative Artificial Intelligence (AI), this article scrutinizes the emergent security risks inherent in this integration.
We explore how generative AI drives innovation in IoT and we analyze the potential for data breaches when using generative AI and the misuse of generative AI technologies in IoT ecosystems.
arXiv Detail & Related papers (2024-03-29T20:28:30Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Progress in artificial intelligence applications based on the combination of self-driven sensors and deep learning [6.117706409613191]
Wang Zhonglin and his team invented the triboelectric nanogenerator (TENG), which uses Maxwell displacement current as a driving force to directly convert mechanical stimuli into electrical signals.
This paper presents a TENG-based intelligent sound monitoring and recognition system with good sound recognition capability.
arXiv Detail & Related papers (2024-01-30T08:53:54Z) - Sense, Predict, Adapt, Repeat: A Blueprint for Design of New Adaptive AI-Centric Sensing Systems [2.465689259704613]
Current global trends reveal that the volume of generated data already exceeds human consumption capacity, making AI algorithms the primary consumers of data worldwide.
This paper provides an overview of efficient sensing and perception methods in both AI and sensing domains, emphasizing the necessity of co-designing AI algorithms and sensing systems for dynamic perception.
arXiv Detail & Related papers (2023-12-11T15:14:49Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks [63.246437631458356]
Next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native.
This article introduces a novel framework for building AI-native wireless networks; grounded in the emerging field of causal reasoning.
We highlight several wireless networking challenges that can be addressed by causal discovery and representation.
arXiv Detail & Related papers (2023-09-23T00:05:39Z) - Joint Sensing, Communication, and AI: A Trifecta for Resilient THz User Experiences [118.91584633024907]
A novel joint sensing, communication, and artificial intelligence (AI) framework is proposed so as to optimize extended reality (XR) experiences over terahertz (THz) wireless systems.
arXiv Detail & Related papers (2023-04-29T00:39:50Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Smart Anomaly Detection in Sensor Systems: A Multi-Perspective Review [0.0]
Anomaly detection is concerned with identifying data patterns that deviate markedly from the expected behaviour.
This is an important research problem, due to its broad set of application domains, from data analysis to e-health, cybersecurity, predictive maintenance, fault prevention, and industrial automation.
We review state-of-the-art methods that may be employed to detect anomalies in the specific area of sensor systems.
arXiv Detail & Related papers (2020-10-27T09:56:16Z) - Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems to improve data quality and technical robustness and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.