YASM (Yet Another Surveillance Mechanism)
- URL: http://arxiv.org/abs/2205.14601v1
- Date: Sun, 29 May 2022 08:42:59 GMT
- Title: YASM (Yet Another Surveillance Mechanism)
- Authors: Kaspar Rosager Ludvigsen, Shishir Nagaraja, Angela Daly
- Abstract summary: Apple proposed to scan its systems for child sexual abuse imagery. CSAMD has since been pushed back, but the European Union decided to propose forced CSS.
We argue why CSS should be limited or not used, and discuss issues with the way pictures are handled cryptographically.
In the second part, we analyse the possible human rights violations which CSS in general can cause under the regime of the European Convention on Human Rights.
- Score: 1.332091725929965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Client-Side Scanning (CSS), as seen in Child Sexual Abuse Material
Detection (CSAMD), represents ubiquitous mass scanning. Apple proposed to scan
its systems for such imagery. CSAMD has since been pushed back, but the
European Union decided to propose forced CSS to combat and prevent child sexual
abuse, weakening encryption in the process. CSS is mass surveillance of
personal property, pictures and text, without consideration of privacy,
cybersecurity or the law. We first argue why CSS should be limited or not used,
and discuss issues with the way pictures are handled cryptographically and
whether CSAMD preserves privacy. In the second part, we analyse the possible
human rights violations which CSS in general can cause under the regime of the
European Convention on Human Rights. The focus is the harm which the system may
cause to individuals, and we also comment on the proposed Child Abuse
Regulation. We find that CSS is problematic because such systems can rarely
fulfil their purposes, as seen with antivirus software. The costs of attempting
to solve issues such as CSAM outweigh the benefits, and this is not likely to
change. The CSAMD as proposed is not likely to preserve privacy or security in
the way described in its source materials. We also find that CSS in general
would likely violate the Right to a Fair Trial, the Right to Privacy and
Freedom of Expression. Pictures could be obtained in a way that would make any
trial against a legitimate perpetrator inadmissible or violate their right to a
fair trial; the lack of any safeguards to protect privacy at the national legal
level would violate the Right to Privacy; and it is unclear whether this kind
of scanning could pass the legal test which Freedom of Expression requires.
Finally, we find significant issues with the proposed Regulation, as it relies
on techno-solutionist arguments and disregards established cybersecurity
knowledge.
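The cryptographic handling of pictures that the abstract criticises typically relies on perceptual hashing: an image is reduced to a short fingerprint, which is then compared against a database of fingerprints of known material. The following is a minimal, purely illustrative sketch of that general technique, using a toy "average hash" rather than the far more sophisticated NeuralHash that Apple's CSAMD actually uses; the database and threshold are hypothetical.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale image (0-255 values).

    Each bit records whether the corresponding pixel is brighter than the
    image's mean brightness. Toy illustration only; not NeuralHash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches_database(image_hash, known_hashes, threshold=5):
    """Flag an image if its hash is within `threshold` bits of any known hash.

    `known_hashes` stands in for the (hypothetical) database of fingerprints
    of known material that a CSS system would ship or query.
    """
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)
```

Because matching is approximate (a Hamming-distance threshold rather than exact equality), small perturbations of an image can either evade detection or produce false positives, which is at the core of the paper's technical critique of CSS reliability.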
Related papers
- A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA Compliance [0.0]
This research presents a novel application capable of implementing legal and ethical reasoning into the content moderation process.
Two use cases fundamental to online communication are presented and implemented using technologies such as GPT-3.5, Solid Pods, and the rule language Prova.
The work proposes a novel approach to reason within different legal and ethical definitions of hate speech and plan the fitting counter hate speech.
arXiv Detail & Related papers (2024-10-10T08:28:38Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content as their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Chat Control or Child Protection? [3.408452800179907]
Debate on terrorism similarly needs to be grounded in the context in which young people are radicalised.
The idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy.
arXiv Detail & Related papers (2022-10-11T15:55:51Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain [77.8858706250075]
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs very well with several classical face recognition test sets.
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technological standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem under differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- Bugs in our Pockets: The Risks of Client-Side Scanning [8.963278092315946]
We argue that client-side scanning (CSS) neither guarantees efficacious crime prevention nor prevents surveillance.
CSS by its nature creates serious security and privacy risks for all society.
There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.
arXiv Detail & Related papers (2021-10-14T15:18:49Z)
- A Mental Trespass? Unveiling Truth, Exposing Thoughts and Threatening Civil Liberties with Non-Invasive AI Lie Detection [0.0]
We argue why artificial intelligence-based, non-invasive lie detection technologies are likely to experience a rapid advancement in the coming years.
Legal and popular perspectives are reviewed to evaluate the potential for these technologies to cause societal harm.
arXiv Detail & Related papers (2021-02-16T08:09:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.