Operationalizing Digital Self Determination
- URL: http://arxiv.org/abs/2211.08539v1
- Date: Tue, 15 Nov 2022 22:28:51 GMT
- Title: Operationalizing Digital Self Determination
- Authors: Stefaan G. Verhulst
- Abstract summary: We live in an era of datafication, in which life is increasingly quantified and transformed into intelligence for private or public benefit.
Existing methods to limit asymmetries (e.g., consent) fall short of adequately addressing the challenges at hand.
A new principle and practice of digital self-determination (DSD) is therefore required.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We live in an era of datafication, one in which life is increasingly
quantified and transformed into intelligence for private or public benefit.
When used responsibly, this offers new opportunities for public good. However,
three key forms of asymmetry currently limit this potential, especially for
already vulnerable and marginalized groups: data asymmetries, information
asymmetries, and agency asymmetries. These asymmetries limit human potential,
both in a practical and psychological sense, leading to feelings of
disempowerment and eroding public trust in technology. Existing methods to
limit asymmetries (e.g., consent) as well as some alternatives under
consideration (data ownership, collective ownership, personal information
management systems) fall short of adequately addressing the challenges at
hand. A new principle and practice of digital self-determination (DSD) is
therefore required.
DSD is based on existing concepts of self-determination, as articulated in
sources as varied as Kantian philosophy and the 1966 International Covenant on
Economic, Social and Cultural Rights. Updated for the digital age, DSD contains
several key characteristics, including the fact that it has both an individual
and collective dimension; is designed to especially benefit vulnerable and
marginalized groups; and is context-specific (yet also enforceable).
Operationalizing DSD in this and other contexts so as to maximize the
potential of data while limiting its harms requires a number of steps. In
particular, a responsible operationalization of DSD would consider four key
prongs or categories of action: processes, people and organizations, policies,
and products and technologies.
Related papers
- MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective [10.009178591853058]
We propose a formal information-theoretic definition of the utility-preserving privacy protection problem.
We design a data-driven learnable data transformation framework that is capable of suppressing sensitive attributes from target datasets.
Results demonstrate the effectiveness and generalizability of our method under various configurations.
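The summary does not give the paper's formal definition; as a hedged illustration only, utility-preserving suppression problems of this kind are often posed in information-theoretic terms (all notation below is assumed, not taken from the paper):

```latex
\min_{T} \; I\big(T(X);\, S\big)
\quad \text{s.t.} \quad
I\big(T(X);\, U_k\big) \ge \tau_k \quad \forall k
```

Here $X$ is the raw data, $T$ the learned transformation, $S$ the sensitive attributes to suppress, $U_k$ the target (utility) attributes to preserve, and $\tau_k$ their required retention levels.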
arXiv Detail & Related papers (2024-05-23T18:35:46Z)
- One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications [65.66700972754118]
Existing concept-erasing methods are all based on full-parameter or specification-based fine-tuning.
Such model-specific erasure impedes the flexible combination of concepts and training-free transfer to other models.
We ground our erasing framework on one-dimensional adapters to erase multiple concepts from most DMs at once across versatile erasing applications.
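The entry does not describe the adapter's architecture; purely as a hedged sketch, a "one-dimensional" (rank-1 bottleneck) adapter attached to a frozen layer might look like the following, with all names hypothetical:

```python
import torch.nn as nn

class OneDimAdapter(nn.Module):
    """Hypothetical rank-1 (one-dimensional bottleneck) adapter.

    Only the adapter is trained; the host layer stays frozen, so the
    adapter can be attached to or removed from a model without
    touching its weights -- the transferability the summary highlights.
    """

    def __init__(self, base_layer: nn.Linear):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the host layer
        self.down = nn.Linear(base_layer.in_features, 1, bias=False)
        self.up = nn.Linear(1, base_layer.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # start as an exact no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))
```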
arXiv Detail & Related papers (2023-12-26T18:08:48Z)
- MaSS: Multi-attribute Selective Suppression [8.337285030303285]
We propose Multi-attribute Selective Suppression, or MaSS, a framework for performing precisely targeted data surgery.
MaSS learns a data modifier through adversarial games between two sets of networks, where one is aimed at suppressing selected attributes.
We carried out an extensive evaluation of our proposed method using multiple datasets from different domains.
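The exact game is not specified in this summary; a minimal hedged sketch of one adversarial round, with all modules and dimensions hypothetical, could look like:

```python
import torch
import torch.nn as nn

modifier = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
adversary = nn.Linear(64, 2)    # tries to recover the suppressed attribute
utility = nn.Linear(64, 10)     # checks the preserved attribute survives

opt_mod = torch.optim.Adam(
    list(modifier.parameters()) + list(utility.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, s, u):
    # 1) Adversary learns to predict the sensitive attribute s.
    opt_adv.zero_grad()
    ce(adversary(modifier(x).detach()), s).backward()
    opt_adv.step()

    # 2) Modifier learns to fool the adversary while keeping u predictable.
    opt_mod.zero_grad()
    z = modifier(x)
    (ce(utility(z), u) - ce(adversary(z), s)).backward()
    opt_mod.step()
```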
arXiv Detail & Related papers (2022-10-18T14:44:08Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
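SF-PATE builds on the PATE family; the sketch below shows only the classic noisy-argmax teacher aggregation that PATE methods share (the paper's specific fair aggregation rule is not given in this summary):

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, scale):
    """PATE-style noisy-argmax aggregation of teacher labels.

    teacher_votes: per-teacher predicted labels for one query.
    scale: Laplace noise scale (larger = stronger privacy).
    """
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(0.0, scale, size=num_classes)
    return int(np.argmax(counts))

# Example: 9 of 10 teachers vote for class 1.
votes = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
print(noisy_aggregate(votes, num_classes=2, scale=1.0))
```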
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Exosoul: ethical profiling in the digital world [3.6245424131171813]
The project Exosoul aims at developing a personalized software exoskeleton which mediates actions in the digital world according to the moral preferences of the user.
The approach is hybrid: profiles are first identified in a top-down manner and then refined through a personalized, data-driven approach.
We consider the correlations between ethics positions (idealism and relativism), personality traits (honesty/humility, conscientiousness, Machiavellianism, and narcissism), and worldview (normativism).
arXiv Detail & Related papers (2022-03-30T10:54:00Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
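For concreteness, the core of FL is that only model parameters leave each client; a minimal federated-averaging (FedAvg-style) sketch, with hypothetical names:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors.

    Raw data never leaves the clients -- but, as the summary above
    cautions, the shared updates themselves can still leak
    information, so this is not a formal privacy guarantee.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = np.stack(client_weights)
    return (weights * (sizes / sizes.sum())[:, None]).sum(axis=0)

# One round with three clients holding 10, 30, and 60 examples.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(federated_average(w, [10, 30, 60]))   # -> [4. 5.]
```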
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Digital Twins: State of the Art Theory and Practice, Challenges, and Open Research Questions [62.67593386796497]
This work explores the various digital twin (DT) features and current approaches, as well as the shortcomings and reasons behind the delay in the implementation and adoption of digital twins.
The major reasons for this delay are the lack of a universal reference framework, domain dependence, security concerns of shared data, reliance of digital twin on other technologies, and lack of quantitative metrics.
arXiv Detail & Related papers (2020-11-02T19:08:49Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
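In rough form (the exact formulation is not given in this summary), the Lagrangian dual approach turns constrained training into a min-max problem; all symbols below are assumed notation:

```latex
\min_{\theta}\; \max_{\lambda \ge 0}\;
\mathcal{L}(\theta) \;+\; \lambda \big( F(\theta) - \alpha \big)
```

where $\mathcal{L}(\theta)$ is the (differentially private) prediction loss, $F(\theta)$ a fairness-violation measure, $\alpha$ the tolerated violation, and $\lambda$ a dual multiplier updated by gradient ascent alongside the network weights.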
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Public Goods From Private Data -- An Efficacy and Justification Paradox for Digital Contact Tracing [0.0]
Privacy-centric analysis treats data as private property, frames the relationship between individuals and governments as adversarial, and entrenches technology platforms as gatekeepers.
To overcome the barriers to ethical and effective DCT, and develop infrastructure and policy that supports the realization of potential public benefits of digital technology, a public resource conception of aggregate data should be developed.
arXiv Detail & Related papers (2020-07-14T13:08:29Z)
- The Visual Social Distancing Problem [99.69094590087408]
We introduce the Visual Social Distancing problem, defined as the automatic estimation of the inter-personal distance from an image.
We discuss how VSD relates to previous literature in Social Signal Processing and indicate which existing Computer Vision methods can be used to address this problem.
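The abstract does not fix a method; one common ingredient is mapping image points to the ground plane before measuring distances. A hedged sketch, assuming detected feet keypoints and a known image-to-ground homography:

```python
import itertools
import numpy as np

def ground_distances(feet_px, H):
    """Pairwise inter-personal distances from a single image.

    feet_px: (N, 2) pixel coordinates of each person's feet.
    H: 3x3 image-to-ground-plane homography (assumed known from
       camera calibration).
    Returns ground-plane distances, e.g. in metres.
    """
    pts = np.hstack([feet_px, np.ones((len(feet_px), 1))])   # homogeneous
    g = (H @ pts.T).T
    g = g[:, :2] / g[:, 2:3]                                 # dehomogenize
    return {(i, j): float(np.linalg.norm(g[i] - g[j]))
            for i, j in itertools.combinations(range(len(g)), 2)}
```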
arXiv Detail & Related papers (2020-05-11T00:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.