ENAGRAM: An App to Evaluate Preventative Nudges for Instagram
- URL: http://arxiv.org/abs/2208.04649v2
- Date: Thu, 18 Aug 2022 16:05:12 GMT
- Title: ENAGRAM: An App to Evaluate Preventative Nudges for Instagram
- Authors: Nicolás E. Díaz Ferreyra, Sina Ostendorf, Esma Aïmeur, Maritta Heisel and Matthias Brand
- Abstract summary: This work presents ENAGRAM, an app for evaluating preventative nudges.
We used ENAGRAM as a vehicle to test a risk-based strategy for nudging the self-disclosure decisions of Instagram users.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online self-disclosure is perhaps one of the last decade's most studied
communication processes, thanks to the introduction of Online Social Networks
(OSNs) like Facebook. Self-disclosure research has contributed significantly to
the design of preventative nudges seeking to support and guide users when
revealing private information in OSNs. Still, assessing the effectiveness of
these solutions is often challenging since modifying the choice architecture
of OSN platforms is practically infeasible. In turn, the
effectiveness of numerous nudging designs is supported primarily by
self-reported data instead of actual behavioral information. This work presents
ENAGRAM, an app for evaluating preventative nudges, and reports the first
results of an empirical study conducted with it. Such a study aims to showcase
how the app (and the data collected with it) can be leveraged to assess the
effectiveness of a particular nudging approach. We used ENAGRAM as a vehicle to
test a risk-based strategy for nudging the self-disclosure decisions of
Instagram users. For this, we created two variations of the same nudge and
tested them in a between-subjects experimental setting. Study participants (N=22)
were recruited via Prolific and asked to use the app regularly for 7 days. An
online survey was distributed at the end of the experiment to measure some
privacy-related constructs. From the data collected with ENAGRAM, we observed
lower (though non-significant) self-disclosure levels when applying risk-based
interventions. The constructs measured with the survey were not significant
either, except for participants' External Information Privacy Concerns. Our
results suggest that (i) ENAGRAM is a suitable alternative for conducting
longitudinal experiments in a privacy-friendly way, and (ii) it provides a
flexible framework for the evaluation of a broad spectrum of nudging solutions.
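To make the between-subjects setting described in the abstract concrete (two nudge variants, N=22, one condition per participant), the sketch below shows random condition assignment and a simple group comparison. This is a hypothetical illustration only: the function names and the use of Welch's t-statistic are assumptions, not the authors' actual assignment procedure or analysis code.

```python
import random
import statistics

def assign_between_subjects(participant_ids, seed=0):
    """Randomly assign each participant to exactly one nudge variant
    (between-subjects design: nobody sees both variants)."""
    rng = random.Random(seed)  # seeded for a reproducible split
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"variant_A": ids[:half], "variant_B": ids[half:]}

def welch_t(sample_a, sample_b):
    """Welch's t-statistic comparing mean self-disclosure scores of the
    two independent groups (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    return (mean_a - mean_b) / (var_a / n_a + var_b / n_b) ** 0.5

# 22 participants, matching the study's sample size
groups = assign_between_subjects(range(1, 23))
```

In a between-subjects design each participant contributes to exactly one condition, so group-level self-disclosure scores can be compared directly with an independent-samples test such as the one sketched above.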
Related papers
- Large Language Models for Next Point-of-Interest Recommendation [53.93503291553005]
Location-Based Social Network (LBSN) data is often used for the next Point of Interest (POI) recommendation task.
One frequently disregarded challenge is how to effectively use the abundant contextual information present in LBSN data.
We propose a framework that uses pretrained Large Language Models (LLMs) to tackle this challenge.
arXiv Detail & Related papers (2024-04-19T13:28:36Z) - Federated Experiment Design under Distributed Differential Privacy [31.06808163362162]
We focus on rigorously protecting users' privacy while minimizing the trust placed in service providers.
Although a vital component in modern A/B testing, private distributed experimentation has not previously been studied.
We show how these mechanisms can be scaled up to handle the very large number of participants commonly found in practice.
arXiv Detail & Related papers (2023-11-07T22:38:56Z) - Interactive Graph Convolutional Filtering [79.34979767405979]
Interactive Recommender Systems (IRS) have been increasingly used in various domains, including personalized article recommendation, social media, and online advertising.
These problems are exacerbated by the cold start problem and data sparsity problem.
Existing Multi-Armed Bandit methods, despite their carefully designed exploration strategies, often struggle to provide satisfactory results in the early stages.
Our proposed method extends interactive collaborative filtering into the graph model to enhance the performance of collaborative filtering between users and items.
arXiv Detail & Related papers (2023-09-04T09:02:31Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Turning Privacy-preserving Mechanisms against Federated Learning [22.88443008209519]
We design an attack capable of deceiving state-of-the-art defenses for federated learning.
The proposed attack includes two operating modes, the first one focusing on convergence inhibition (Adversarial Mode) and the second one aiming at building a deceptive rating injection on the global federated model (Backdoor Mode).
The experimental results show the effectiveness of our attack in both its modes, returning on average 60% performance detriment in all the tests on Adversarial Mode and fully effective backdoors in 93% of cases for the tests performed on Backdoor Mode.
arXiv Detail & Related papers (2023-05-09T11:43:31Z) - No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z) - Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
arXiv Detail & Related papers (2022-08-04T05:32:20Z) - Privacy Information Classification: A Hybrid Approach [9.642559585173517]
This study proposes and develops a hybrid privacy classification approach to detect and classify privacy information from OSNs.
The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction.
arXiv Detail & Related papers (2021-01-27T18:03:18Z) - Towards Mass Adoption of Contact Tracing Apps -- Learning from Users' Preferences to Improve App Design [3.187723878624947]
We explore user preferences for contact tracing apps using market research techniques and conjoint analysis.
Our results confirm the privacy-preserving design of most European contact tracing apps.
We conclude that adding goal-congruent features will play an important role in fostering mass adoption.
arXiv Detail & Related papers (2020-11-24T19:08:09Z) - Emerging App Issue Identification via Online Joint Sentiment-Topic Tracing [66.57888248681303]
We propose a novel emerging issue detection approach named MERIT.
Based on the AOBST model, we infer the topics negatively reflected in user reviews for one app version.
Experiments on popular apps from Google Play and Apple's App Store demonstrate the effectiveness of MERIT.
arXiv Detail & Related papers (2020-08-23T06:34:05Z) - Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models [6.902994369582068]
We present a formal definition of the privacy protection problem in the edge-cloud system running models.
We analyze state-of-the-art methods and point out their drawbacks.
We propose two new metrics that are more accurate to measure the effectiveness of privacy protection methods.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.