EU law and emotion data
- URL: http://arxiv.org/abs/2309.10776v1
- Date: Tue, 19 Sep 2023 17:25:02 GMT
- Title: EU law and emotion data
- Authors: Andreas Hauselmann, Alan M. Sears, Lex Zard and Eduard Fosch-Villaronga
- Abstract summary: The article sheds light on the legal implications and challenges surrounding emotion data processing within the EU's legal framework.
We discuss the nuances of different approaches to affective computing and their relevance to the processing of special data.
We highlight some of the consequences, including harm, that processing of emotion data may have for individuals concerned.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article sheds light on legal implications and challenges surrounding
emotion data processing within the EU's legal framework. Despite the sensitive
nature of emotion data, the GDPR does not categorize it as special data,
resulting in a lack of comprehensive protection. The article also discusses the
nuances of different approaches to affective computing and their relevance to
the processing of special data under the GDPR. Moreover, it points to potential
tensions with data protection principles, such as fairness and accuracy. Our
article also highlights some of the consequences, including harm, that
processing of emotion data may have for individuals concerned. Additionally, we
discuss how the AI Act proposal intends to regulate affective computing.
Finally, the article outlines the new obligations and transparency requirements
introduced by the DSA for online platforms utilizing emotion data. Our article
aims to raise awareness among the affective computing community about the
applicable legal requirements when developing AC systems intended for the EU
market, or when working with study participants located in the EU. We also
stress the importance of protecting the fundamental rights of individuals even
when the law struggles to keep up with technological developments that capture
sensitive emotion data.
Related papers
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI [28.82435149220576]
This paper presents the state of the privacy legislation at the U.S. Congress.
It outlines how voice data is treated within the legislative definitions.
It also reviews additional privacy protection for children.
arXiv Detail & Related papers (2024-07-29T03:43:16Z) - Federated Learning Priorities Under the European Union Artificial Intelligence Act [68.44894319552114]
We perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning.
We explore data governance issues and the concern for privacy.
Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
arXiv Detail & Related papers (2024-02-05T19:52:19Z) - SoK: The Gap Between Data Rights Ideals and Reality [46.14715472341707]
Do rights-based privacy laws effectively empower individuals over their data?
This paper scrutinizes these approaches by reviewing empirical studies, news articles, and blog posts.
arXiv Detail & Related papers (2023-12-03T21:52:51Z) - A Critical Take on Privacy in a Datafied Society [0.0]
I analyze several facets of the lack of online privacy and idiosyncrasies exhibited by privacy advocates.
I discuss possible effects of datafication on human behavior, the prevalent market-oriented assumption at the base of online privacy, and some emerging adaptation strategies.
A glimpse of the likely problematic future is provided through a discussion of privacy-related aspects of the EU, UK, and China's proposed generative AI policies.
arXiv Detail & Related papers (2023-08-03T11:45:18Z) - The Design and Implementation of a National AI Platform for Public Healthcare in Italy: Implications for Semantics and Interoperability [62.997667081978825]
The Italian National Health Service is adopting Artificial Intelligence through its technical agencies.
Such a vast programme requires special care in formalising the knowledge domain.
Questions have been raised about the impact that AI could have on patients, practitioners, and health systems.
arXiv Detail & Related papers (2023-04-24T08:00:02Z) - Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and beyond [2.608935407927351]
We analyze the impact of the DMA and related EU acts on AI models and their underlying data across four key areas.
We show how, based on CJEU jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found.
arXiv Detail & Related papers (2022-12-09T17:29:19Z) - Data Protection Impact Assessment for the Corona App [0.0]
SARS-CoV-2 started spreading in Europe in early 2020, prompting a strong call for technical solutions to combat or contain the pandemic, with contact tracing apps at the heart of the debates.
The EU's General Data Protection Regulation (GDPR) requires controllers to carry out a data protection impact assessment (DPIA).
We present a scientific DPIA which thoroughly examines three published contact tracing app designs that are considered to be the most "privacy-friendly".
arXiv Detail & Related papers (2021-01-18T19:23:30Z) - Second layer data governance for permissioned blockchains: the privacy management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to avoid massive infection and decrease the number of deaths.
In this sense, permissioned blockchain technology emerges to empower users to exercise their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database ruled by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.