Chat Control or Child Protection?
- URL: http://arxiv.org/abs/2210.08958v1
- Date: Tue, 11 Oct 2022 15:55:51 GMT
- Title: Chat Control or Child Protection?
- Authors: Ross Anderson
- Abstract summary: Debate on terrorism similarly needs to be grounded in the context in which young people are radicalised.
The idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy.
- Score: 3.408452800179907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ian Levy and Crispin Robinson's position paper "Thoughts on child safety on
commodity platforms" is to be welcomed for extending the scope of the debate
about the extent to which child safety concerns justify legal limits to online
privacy. Their paper's context is the laws proposed in both the UK and the EU
to give the authorities the power to undermine end-to-end cryptography in
online communications services, with the justification of preventing and
detecting child abuse and terrorist recruitment. Both jurisdictions plan to
make it easier to get service firms to take down a range of illegal material
from their servers; but they also propose to mandate client-side scanning - not
just for known illegal images, but for text messages indicative of sexual
grooming or terrorist recruitment. In this initial response, I raise technical
issues about the capabilities of the technologies the authorities propose to
mandate, and a deeper strategic issue: that we should view the child safety
debate from the perspective of children at risk of violence, rather than from
that of the security and intelligence agencies and the firms that sell
surveillance software. The debate on terrorism similarly needs to be grounded
in the context in which young people are radicalised. Both political violence
and violence against children tend to be politicised and as a result are often
poorly policed. Effective policing, particularly of crimes embedded in wicked
social problems, must be locally led and involve multiple stakeholders; the
idea of using 'artificial intelligence' to replace police officers, social
workers and teachers is just the sort of magical thinking that leads to bad
policy. The debate must also be conducted within the boundary conditions set by
human rights and privacy law, and to be pragmatic must also consider reasonable
police priorities.
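To make concrete what scanning "for known illegal images" on the client would involve, here is a minimal illustrative sketch, not taken from the paper: outgoing attachments are matched against a provider-supplied list of image digests before end-to-end encryption is applied. The digest set, function names, and reporting behaviour below are hypothetical, and real systems would use perceptual rather than cryptographic hashes.

```python
# Minimal, hypothetical sketch of client-side scanning for *known* images.
# Real deployments use perceptual hashes (PhotoDNA-style) so that re-encoded
# or slightly altered copies still match; SHA-256 is used here only to keep
# the example self-contained.
import hashlib
from collections.abc import Iterable

# Digest list supplied to the client by the provider or authority (hypothetical).
KNOWN_IMAGE_DIGESTS: set[str] = set()


def image_digest(image_bytes: bytes) -> str:
    """Exact-match digest of an attachment; a stand-in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def scan_before_encryption(attachments: Iterable[bytes]) -> bool:
    """Return True if any outgoing attachment matches the known-image list.

    In a mandated client-side scanning design, this check runs on the user's
    device before end-to-end encryption is applied, and a match would
    typically trigger a report rather than simply a blocked send.
    """
    return any(image_digest(a) in KNOWN_IMAGE_DIGESTS for a in attachments)
```

The much harder case the proposals also cover, classifying text for grooming or radicalisation, has no such exact-matching analogue, which is where the response's technical concerns about the capabilities of the mandated technologies chiefly apply.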
Related papers
- Moving beyond privacy and airspace safety: Guidelines for just drones in policing [0.0]
Police forces should consider the perception of bystanders and broader society to maximize drones' potential.
This article examines the concerns expressed by members of the public during a field trial involving 52 test participants.
We propose a catalogue of guidelines for just operation of drones to supplement the existing policy.
arXiv Detail & Related papers (2024-08-08T09:04:01Z)
- Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI [28.82435149220576]
This paper presents the state of the privacy legislation at the U.S. Congress.
It outlines how voice data is considered as part of the legislation definition.
It also reviews additional privacy protection for children.
arXiv Detail & Related papers (2024-07-29T03:43:16Z)
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, which scores abusive speech on four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z)
- Security for Children in the Digital Society -- A Rights-based and Research Ethics Approach [0.0]
The project is situated in a German context with a focus on European frameworks for the development of Artificial Intelligence and the protection of children from security risks arising in the course of algorithm-mediated online communication.
The project develops a children's rights approach to questions of security for children online while also developing a research ethics approach for conducting research with children on online harms such as cybergrooming and sexual violence against children.
arXiv Detail & Related papers (2023-08-24T08:13:02Z)
- A Secure Open-Source Intelligence Framework For Cyberbullying Investigation [0.0]
This paper proposes an open-source intelligence pipeline using data from Twitter to track keywords relevant to cyberbullying in social media.
An OSINT dashboard with real-time monitoring empowers law enforcement to swiftly take action, protect victims, and make significant strides toward creating a safer online environment.
arXiv Detail & Related papers (2023-07-27T23:03:57Z)
- Mitigating Covertly Unsafe Text within Natural Language Systems [55.26364166702625]
Uncontrolled systems may generate recommendations that lead to injury or life-threatening consequences.
In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text.
arXiv Detail & Related papers (2022-10-17T17:59:49Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent.
arXiv Detail & Related papers (2022-06-21T07:32:18Z)
- Bugs in our Pockets: The Risks of Client-Side Scanning [8.963278092315946]
We argue that client-side scanning (CSS) neither guarantees efficacious crime prevention nor prevents surveillance.
CSS by its nature creates serious security and privacy risks for all society.
There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.
arXiv Detail & Related papers (2021-10-14T15:18:49Z)
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
- Is 40 the new 60? How popular media portrays the employability of older software developers [78.42660996736939]
We analyzed popular online articles and related discussions on Hacker News through the lens of employability issues and potential mitigation strategies.
We highlight the importance of keeping up-to-date, specializing in certain tasks or technologies, and present role transitions as a way forward for veteran developers.
arXiv Detail & Related papers (2020-04-13T10:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.