Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- URL: http://arxiv.org/abs/2009.12853v1
- Date: Sun, 27 Sep 2020 14:24:29 GMT
- Title: Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- Authors: Nicolas E. Díaz Ferreyra, Esma Aïmeur, Hicham Hage, Maritta Heisel
  and Catherine García van Hoogstraten
- Abstract summary: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely related to people's self-disclosure decisions.
Online privacy decisions are often based on spurious risk judgements that make people liable to reveal sensitive data to untrusted recipients.
This paper elaborates on the ethical challenges that nudging mechanisms can introduce to the development of AI-based countermeasures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely
related to people's self-disclosure decisions and their ability to foresee the
consequences of sharing personal information with large and diverse audiences.
Nonetheless, online privacy decisions are often based on spurious risk
judgements that make people liable to reveal sensitive data to untrusted
recipients and become victims of social engineering attacks. Artificial
Intelligence (AI) in combination with persuasive mechanisms like nudging is a
promising approach for promoting preventative privacy behaviour among the users
of SNSs. Nevertheless, combining behavioural interventions with high levels of
personalization can be a potential threat to people's agency and autonomy even
when applied to the design of social engineering countermeasures. This paper
elaborates on the ethical challenges that nudging mechanisms can introduce to
the development of AI-based countermeasures, particularly to those addressing
unsafe self-disclosure practices in SNSs. Overall, it endorses the elaboration
of personalized risk awareness solutions as i) an ethical approach to
counteract social engineering, and ii) an effective means of promoting
reflective privacy decisions.
Related papers
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- The Illusion of Anonymity: Uncovering the Impact of User Actions on Privacy in Web3 Social Ecosystems [11.501563549824466]
We investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns.
We scrutinize the widespread phenomenon of fabricated activities, which encompasses the establishment of bogus accounts aimed at mimicking popularity.
We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges.
arXiv Detail & Related papers (2024-05-22T06:26:15Z)
- Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach [45.74830585715129]
We suggest extending the Social Transparency (ST) framework to address the risks of social misattributions in Large Language Models (LLMs)
LLMs may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors.
We propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users.
arXiv Detail & Related papers (2024-03-26T17:02:42Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity [18.324118502535775]
We argue that human-AI teaming is worthwhile in cybersecurity.
We emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise.
arXiv Detail & Related papers (2023-09-28T01:20:44Z)
- A Critical Take on Privacy in a Datafied Society [0.0]
I analyze several facets of the lack of online privacy and idiosyncrasies exhibited by privacy advocates.
I discuss possible effects of datafication on human behavior, the prevalent market-oriented assumption underlying online privacy, and some emerging adaptation strategies.
A glimpse on the likely problematic future is provided with a discussion on privacy related aspects of EU, UK, and China's proposed generative AI policies.
arXiv Detail & Related papers (2023-08-03T11:45:18Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
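The hybrid differential privacy notion is not detailed in this listing. As a minimal sketch of the underlying idea only, the classic Laplace mechanism perturbs a numeric user attribute before it is shared across networks; the function names and parameters below are hypothetical illustrations, not the actual DP-CroSUE API.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))


def dp_release(value: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric attribute with epsilon-differential privacy.

    Noise scale is sensitivity / epsilon: smaller epsilon (stronger
    privacy) means more noise added to the true value.
    """
    return value + laplace_noise(sensitivity / epsilon)
```

A "hybrid" notion, as described in the abstract, would additionally vary epsilon (or the mechanism itself) per data type, reflecting that users hold different privacy expectations for, say, age versus free-text posts.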
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Artificial intelligence across company borders [17.27331855560747]
Cross-company AI can be effective without data disclosure.
In this Viewpoint, we discuss the use, value, and implications of this approach in a cross-company setting.
arXiv Detail & Related papers (2021-06-21T11:56:41Z)
- Voluntary safety commitments provide an escape from over-regulation in AI development [8.131948859165432]
This work reveals for the first time how voluntary commitments, with sanctions imposed either by peers or by an institution, lead to socially beneficial outcomes.
Results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
arXiv Detail & Related papers (2021-04-08T12:54:56Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.