Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- URL: http://arxiv.org/abs/2009.12853v1
- Date: Sun, 27 Sep 2020 14:24:29 GMT
- Title: Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- Authors: Nicolas E. Díaz Ferreyra, Esma Aïmeur, Hicham Hage, Maritta Heisel
and Catherine García van Hoogstraten
- Abstract summary: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely related to people's self-disclosure decisions.
Online privacy decisions are often based on spurious risk judgements that make people liable to reveal sensitive data to untrusted recipients.
This paper elaborates on the ethical challenges that nudging mechanisms can introduce to the development of AI-based countermeasures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely
related to people's self-disclosure decisions and their ability to foresee the
consequences of sharing personal information with large and diverse audiences.
Nonetheless, online privacy decisions are often based on spurious risk
judgements that make people liable to reveal sensitive data to untrusted
recipients and become victims of social engineering attacks. Artificial
Intelligence (AI) in combination with persuasive mechanisms like nudging is a
promising approach for promoting preventative privacy behaviour among the users
of SNSs. Nevertheless, combining behavioural interventions with high levels of
personalization can threaten people's agency and autonomy even
when applied to the design of social engineering countermeasures. This paper
elaborates on the ethical challenges that nudging mechanisms can introduce to
the development of AI-based countermeasures, particularly to those addressing
unsafe self-disclosure practices in SNSs. Overall, it endorses the elaboration
of personalized risk awareness solutions as i) an ethical approach to
counteract social engineering, and ii) an effective means for promoting
reflective privacy decisions.
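To make the endorsed approach concrete, below is a minimal sketch of how a personalized risk-awareness nudge could be assembled. It is an illustration under our own assumptions: the pattern-based detectors, category weights, audience scaling, and threshold are hypothetical stand-ins, not the authors' design; a deployed system would use trained classifiers in place of the regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical detectors for sensitive-data categories. Simple patterns
# stand in here for the learned classifiers a real system would use.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "location": re.compile(r"\b(?:at|in|near)\s+[A-Z][a-z]+(?:\s[A-Z][a-z]+)*"),
}

@dataclass
class UserProfile:
    weights: dict       # per-category sensitivity weights (the "personalized" part)
    audience_size: int  # estimated reach of the user's posts

def risk_score(post: str, user: UserProfile) -> float:
    """Weighted count of detected disclosures, scaled by audience reach."""
    hits = sum(user.weights.get(cat, 1.0)
               for cat, rx in PATTERNS.items() if rx.search(post))
    # Larger, more diverse audiences amplify the consequences of disclosure.
    return hits * min(1.0, user.audience_size / 1000)

def nudge(post: str, user: UserProfile, threshold: float = 0.5) -> str:
    """Prompt reflection instead of blocking: the sharing decision stays
    with the user, preserving agency and autonomy."""
    score = risk_score(post, user)
    if score >= threshold:
        return (f"This post may reveal sensitive details (risk {score:.2f}). "
                "Review who can see it before sharing?")
    return "OK to share."

if __name__ == "__main__":
    # A user who is especially sensitive about location disclosures.
    alice = UserProfile(weights={"location": 2.0}, audience_size=800)
    print(nudge("Meet me at Central Park, call 555-123-4567", alice))
```

The key design choice mirrors the abstract's argument: the intervention surfaces risk and invites reflection rather than blocking the post, so the final disclosure decision remains with the user.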
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs.
Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder.
Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z)
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) [0.0]
Authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases.
They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing.
arXiv Detail & Related papers (2025-01-20T19:38:21Z)
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure [42.96087647326612]
We conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios.
We then propose a novel AI delegate system that enables privacy-conscious self-disclosure.
Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.
arXiv Detail & Related papers (2024-09-26T08:45:15Z)
- The Illusion of Anonymity: Uncovering the Impact of User Actions on Privacy in Web3 Social Ecosystems [11.501563549824466]
We investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns.
We scrutinize the widespread phenomenon of fabricated activities, which encompasses the establishment of bogus accounts aimed at mimicking popularity.
We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges.
arXiv Detail & Related papers (2024-05-22T06:26:15Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)