Covert Embodied Choice: Decision-Making and the Limits of Privacy Under
Biometric Surveillance
- URL: http://arxiv.org/abs/2101.00771v1
- Date: Mon, 4 Jan 2021 04:45:22 GMT
- Title: Covert Embodied Choice: Decision-Making and the Limits of Privacy Under
Biometric Surveillance
- Authors: Jeremy Gordon, Max Curran, John Chuang, Coye Cheshire
- Abstract summary: We present results from a virtual reality task in which gaze, movement, and other physiological signals are tracked.
We find that while participants use a variety of strategies, data collected remains highly predictive of choice (80% accuracy).
A significant portion of participants became more predictable despite efforts to obfuscate, possibly indicating mistaken priors about the dynamics of algorithmic prediction.
- Score: 6.92628425870087
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Algorithms engineered to leverage rich behavioral and biometric data to
predict individual attributes and actions continue to permeate public and
private life. A fundamental risk may emerge from misconceptions about the
sensitivity of such data, as well as the agency of individuals to protect their
privacy when fine-grained (and possibly involuntary) behavior is tracked. In
this work, we examine how individuals adjust their behavior when incentivized
to avoid the algorithmic prediction of their intent. We present results from a
virtual reality task in which gaze, movement, and other physiological signals
are tracked. Participants are asked to decide which card to select without an
algorithmic adversary anticipating their choice. We find that while
participants use a variety of strategies, data collected remains highly
predictive of choice (80% accuracy). Additionally, a significant portion of
participants became more predictable despite efforts to obfuscate, possibly
indicating mistaken priors about the dynamics of algorithmic prediction.
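The adversary's task described above amounts to a binary classifier over aggregated behavioral features. The sketch below is purely illustrative: the feature names, the synthetic data, and the plain logistic-regression model are assumptions for exposition, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-trial biometric features, e.g. dwell time on
# the eventually chosen card, hand proximity, and pupil response.
n_trials = 1000
X = rng.normal(size=(n_trials, 3))

# Make the choice label weakly recoverable from the features, mimicking
# the "leakage" of intent through gaze and movement.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n_trials)
y = (logits > 0).astype(int)  # 1 = left card, 0 = right card

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"adversary accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

Even this minimal model recovers the label far above chance when intent leaks into the features, which is the dynamic the paper's participants struggled to defeat through obfuscation.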
Related papers
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Training Private Models That Know What They Don't Know [40.19666295972155]
We find that several popular selective prediction approaches are ineffective in a differentially private setting.
We propose a novel evaluation mechanism which isolates selective prediction performance across model utility levels.
arXiv Detail & Related papers (2023-05-28T12:20:07Z)
- Behavioral Intention Prediction in Driving Scenes: A Survey [70.53285924851767]
Behavioral Intention Prediction (BIP) simulates a human consideration process and enables the early prediction of specific behaviors.
This work provides a comprehensive review of BIP from the available datasets, key factors and challenges, pedestrian-centric and vehicle-centric BIP approaches, and BIP-aware applications.
arXiv Detail & Related papers (2022-11-01T11:07:37Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining which of the infinitely many predictions the agent could possibly make would best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Towards a Data Privacy-Predictive Performance Trade-off [2.580765958706854]
We evaluate the existence of a trade-off between data privacy and predictive performance in classification tasks.
Unlike previous literature, we confirm that the higher the level of privacy, the higher the impact on predictive performance.
arXiv Detail & Related papers (2022-01-13T21:48:51Z)
- Anonymization for Skeleton Action Recognition [6.772319578308409]
We propose two variants of anonymization algorithms to protect the potential privacy leakage from the skeleton dataset.
Experimental results show that the anonymized dataset can reduce the risk of privacy leakage while having marginal effects on the action recognition performance.
arXiv Detail & Related papers (2021-11-30T05:13:20Z)
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z)
- Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study [34.550824104906255]
Mental health conditions remain under-diagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from high-risk adolescent populations.
arXiv Detail & Related papers (2020-12-04T01:44:22Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Differentially Private Algorithms for Statistical Verification of Cyber-Physical Systems [5.987774571079633]
We show that revealing the number of samples drawn can violate privacy.
We propose a new notion of differential privacy which we call expected differential privacy.
arXiv Detail & Related papers (2020-04-01T08:14:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.