A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening
Civil Liberties with Non-Invasive AI Lie Detection
- URL: http://arxiv.org/abs/2102.08004v1
- Date: Tue, 16 Feb 2021 08:09:38 GMT
- Title: A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening
Civil Liberties with Non-Invasive AI Lie Detection
- Authors: Taylan Sen, Kurtis Haut, Denis Lomakin and Ehsan Hoque
- Abstract summary: We argue why artificial intelligence-based, non-invasive lie detection technologies are likely to experience a rapid advancement in the coming years.
Legal and popular perspectives are reviewed to evaluate the potential for these technologies to cause societal harm.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Imagine an app on your phone or computer that can tell if you are being
dishonest, just by processing affective features of your facial expressions,
body movements, and voice. People could ask about your political preferences,
your sexual orientation, and immediately determine which of your responses are
honest and which are not. In this paper we argue why artificial
intelligence-based, non-invasive lie detection technologies are likely to
experience a rapid advancement in the coming years, and that it would be
irresponsible to wait any longer before discussing their implications. Legal and
popular perspectives are reviewed to evaluate the potential for these
technologies to cause societal harm. To understand the perspective of a
reasonable person, we conducted a survey of 129 individuals, and identified
consent and accuracy as the major factors in their decision-making process
regarding the use of these technologies. In our analysis, we distinguish two
types of lie detection technology, accurate truth metering and accurate thought
exposing. We generally find that truth metering is already largely within the
scope of existing US federal and state laws, albeit with some notable
exceptions. In contrast, we find that current regulation of thought exposing
technologies is ambiguous and inadequate to safeguard civil liberties. In order
to rectify these shortcomings, we introduce the legal concept of mental
trespass and use this concept as the basis for proposed regulation.
Related papers
- The Technology of Outrage: Bias in Artificial Intelligence [1.2289361708127877]
Artificial intelligence and machine learning are increasingly used to offload decision making from people.
In the past, one of the rationales for this replacement was that machines, unlike people, can be fair and unbiased.
We identify three forms of outrage (intellectual, moral, and political) that are at play when people react emotionally to algorithmic bias.
arXiv Detail & Related papers (2024-09-25T20:23:25Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans [0.0]
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications on the inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Truthful AI: Developing and governing AI that does not lie [0.26385121748044166]
Lying -- the use of verbal falsehoods to deceive -- is harmful.
While lying has traditionally been a human affair, AI systems are becoming increasingly prevalent.
This raises the question of how we should limit the harm caused by AI "lies".
arXiv Detail & Related papers (2021-10-13T12:18:09Z)
- Collecting the Public Perception of AI and Robot Rights [10.791267046450077]
The European Parliament has proposed that advanced robots could be granted "electronic personalities".
This paper collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future.
arXiv Detail & Related papers (2020-08-04T05:35:29Z)
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)
- The Conflict Between People's Urge to Punish AI and Legal Systems [12.935691101666453]
We present two studies to obtain people's views of electronic legal personhood vis-a-vis existing liability models.
Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state.
We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
arXiv Detail & Related papers (2020-03-13T23:19:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.