Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- URL: http://arxiv.org/abs/2009.12853v1
- Date: Sun, 27 Sep 2020 14:24:29 GMT
- Title: Persuasion Meets AI: Ethical Considerations for the Design of Social
Engineering Countermeasures
- Authors: Nicolas E. Díaz Ferreyra, Esma Aïmeur, Hicham Hage, Maritta Heisel, and Catherine García van Hoogstraten
- Abstract summary: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely related to people's self-disclosure decisions.
Online privacy decisions are often based on spurious risk judgements that make people liable to reveal sensitive data to untrusted recipients.
This paper elaborates on the ethical challenges that nudging mechanisms can introduce to the development of AI-based countermeasures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely
related to people's self-disclosure decisions and their ability to foresee the
consequences of sharing personal information with large and diverse audiences.
Nonetheless, online privacy decisions are often based on spurious risk
judgements that make people liable to reveal sensitive data to untrusted
recipients and become victims of social engineering attacks. Artificial
Intelligence (AI) in combination with persuasive mechanisms like nudging is a
promising approach for promoting preventative privacy behaviour among the users
of SNSs. Nevertheless, combining behavioural interventions with high levels of
personalization can be a potential threat to people's agency and autonomy even
when applied to the design of social engineering countermeasures. This paper
elaborates on the ethical challenges that nudging mechanisms can introduce to
the development of AI-based countermeasures, particularly to those addressing
unsafe self-disclosure practices in SNSs. Overall, it endorses the elaboration
of personalized risk awareness solutions as i) an ethical approach to
counteracting social engineering, and ii) an effective means of promoting
reflective privacy decisions.
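To make the endorsed notion of "personalized risk awareness" concrete, below is a minimal, hypothetical Python sketch of such a countermeasure: it scans a draft post for common self-disclosure signals, weights the estimated risk by audience reach, and responds with a reflective prompt rather than a hard block. All identifiers, patterns, weights, and thresholds here (DISCLOSURE_PATTERNS, risk_score, nudge, the 0.5 cut-off) are invented for illustration and do not come from the paper.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of a personalized risk-awareness nudge; the
# categories, weights, and thresholds are illustrative assumptions,
# not the paper's method.

# Regex patterns for a few common self-disclosure categories.
DISCLOSURE_PATTERNS = {
    "phone number": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "street address": re.compile(
        r"\b\d{1,5}\s+\w+\s+(street|st|avenue|ave|road|rd)\b", re.I),
}

@dataclass
class AudienceContext:
    """Who can see the post; larger or looser audiences raise the risk."""
    size: int                 # estimated number of recipients
    contains_strangers: bool  # True if the audience includes non-contacts

def risk_score(post: str, audience: AudienceContext) -> float:
    """Estimate a 0..1 disclosure risk for a draft post."""
    hits = [n for n, p in DISCLOSURE_PATTERNS.items() if p.search(post)]
    if not hits:
        return 0.0
    base = min(1.0, 0.4 * len(hits))              # each category adds risk
    exposure = 0.3 if audience.contains_strangers else 0.1
    reach = min(0.3, audience.size / 5000)        # saturates for huge audiences
    return min(1.0, base + exposure + reach)

def nudge(post: str, audience: AudienceContext) -> str | None:
    """Return a reflective warning, or None if the post looks low-risk.
    The user keeps the final say: the nudge informs, it does not block."""
    if risk_score(post, audience) < 0.5:
        return None
    hits = [n for n, p in DISCLOSURE_PATTERNS.items() if p.search(post)]
    return (f"This post may reveal your {', '.join(hits)}. "
            f"Around {audience.size} people could see it. Share anyway?")

if __name__ == "__main__":
    ctx = AudienceContext(size=1200, contains_strangers=True)
    print(nudge("Call me at +1 555 123 4567 tonight!", ctx))
```

Ending the intervention with a question rather than a refusal mirrors the abstract's concern for agency and autonomy: the system surfaces the risk so that the decision becomes reflective, but leaves the decision with the user.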
Related papers
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI
This paper takes a critical approach to examining the intricacies of GDPR compliance, privacy, and ethics in AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts"
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to users' demographics and beliefs.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure
We conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios.
We then propose a novel AI delegate system that enables privacy-conscious self-disclosure.
Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering the use of such delegates in diverse and dynamic social interactions.
arXiv Detail & Related papers (2024-09-26T08:45:15Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- The Illusion of Anonymity: Uncovering the Impact of User Actions on Privacy in Web3 Social Ecosystems
We investigate the nuanced dynamics between user engagement on Web3 social platforms and the consequent privacy concerns.
We scrutinize the widespread phenomenon of fabricated activity, including bogus accounts created to mimic popularity.
We highlight the urgent need for more stringent privacy measures and ethical protocols to navigate the complex web of social exchanges.
arXiv Detail & Related papers (2024-05-22T06:26:15Z)
- A Safe Harbor for AI Evaluation and Red Teaming
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity
We argue that human-AI teaming is worthwhile in cybersecurity.
We emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise.
arXiv Detail & Related papers (2023-09-28T01:20:44Z)
- A Critical Take on Privacy in a Datafied Society
I analyze several facets of the lack of online privacy and idiosyncrasies exhibited by privacy advocates.
I discuss possible effects of datafication on human behavior, the prevalent market-oriented assumption underlying online privacy, and some emerging adaptation strategies.
A glimpse of the likely problematic future is provided through a discussion of privacy-related aspects of the EU's, the UK's, and China's proposed generative AI policies.
arXiv Detail & Related papers (2023-08-03T11:45:18Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)