Privacy, Informed Consent and the Demand for Anonymisation of Smart Meter Data
- URL: http://arxiv.org/abs/2509.00101v1
- Date: Wed, 27 Aug 2025 20:05:09 GMT
- Title: Privacy, Informed Consent and the Demand for Anonymisation of Smart Meter Data
- Authors: Saurab Chhachhi, Fei Teng
- Abstract summary: We use a mixed-methods approach to estimate non-monetary (willingness-to-share and smart metering demand) and monetary (willingness-to-pay/accept) preferences for anonymisation. On average, consumers are willing to pay for anonymisation, are more willing to share data when anonymised, and are less willing to share non-anonymised data once anonymisation is presented as an option.
- Score: 2.111461702802409
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Access to smart meter data offers system-wide benefits but raises significant privacy concerns due to the personal information it contains. Privacy-preserving techniques could facilitate wider access, though they introduce privacy-utility trade-offs. Understanding consumer valuations for anonymisation can help identify appropriate trade-offs. However, existing studies do not focus on anonymisation specifically or account for information asymmetries regarding privacy risks, raising questions about the validity of informed consent under current regulations. We use a mixed-methods approach to estimate non-monetary (willingness-to-share and smart metering demand) and monetary (willingness-to-pay/accept) preferences for anonymisation, based on a representative sample of 965 GB bill payers. An embedded randomised control trial examines the effect of providing information about privacy implications. On average, consumers are willing to pay for anonymisation, are more willing to share data when anonymised and less willing to share non-anonymised data once anonymisation is presented as an option. However, a significant minority remains unwilling to adopt smart meters, despite anonymisation. We find strong evidence of information asymmetries that suppress demand for anonymisation and identify substantial variation across demographic and electricity supply characteristics. Qualitative responses corroborate the quantitative findings, underscoring the need for stronger privacy defaults, user-centric design, and consent mechanisms that enable truly informed decisions.
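As a purely illustrative sketch of the privacy-utility trade-off the abstract describes (not a technique from the paper), the snippet below pseudonymises meter IDs with a salted hash and coarsens fine-grained readings into totals; the `SALT` value, function names, and data layout are all assumptions for the example.

```python
import hashlib
from collections import defaultdict

SALT = b"site-secret"  # hypothetical secret held by the data curator

def pseudonymise(meter_id: str) -> str:
    """Replace a meter ID with a salted, truncated SHA-256 digest."""
    return hashlib.sha256(SALT + meter_id.encode()).hexdigest()[:12]

def anonymise(readings):
    """Aggregate (meter_id, hour, kwh) tuples into per-pseudonym totals.

    Coarser aggregation is less useful for analysis but also less
    identifying -- the trade-off consumers are asked to value.
    """
    totals = defaultdict(float)
    for meter_id, _hour, kwh in readings:
        totals[pseudonymise(meter_id)] += kwh
    return dict(totals)

readings = [("meter-42", 0, 0.3), ("meter-42", 1, 0.5), ("meter-7", 0, 1.1)]
print(anonymise(readings))
```

Note that salted hashing alone is only pseudonymisation; consumption patterns themselves can still be identifying, which is why the paper's information-asymmetry findings matter.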
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms and propose a refined iDP privacy contract, based on divergence bounds, that provides users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z)
- Exposing Privacy Risks in Anonymizing Clinical Data: Combinatorial Refinement Attacks on k-Anonymity Without Auxiliary Information [3.3423762257383216]
We introduce a new class of privacy attacks targeting k-anonymized datasets produced using local recoding. Our results on real-world clinical microdata reveal that, even in the absence of external information, established anonymization frameworks do not deliver the promised level of privacy.
arXiv Detail & Related papers (2025-09-03T14:36:06Z)
- Information-theoretic Estimation of the Risk of Privacy Leaks [0.0]
Dependencies between items in a dataset can lead to privacy leaks. We measure the correlation between the original data and their noisy responses from a randomizer as an indicator of potential privacy breaches.
arXiv Detail & Related papers (2025-06-14T03:39:11Z)
- Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness [1.1999555634662633]
Anonymization techniques can make it more difficult to accurately identify individuals. Group fairness metrics can be degraded by up to four orders of magnitude, while individual fairness metrics tend to improve under stronger anonymization. This study provides critical insights into the trade-offs between privacy, fairness, and utility.
arXiv Detail & Related papers (2025-05-12T18:32:28Z)
- A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage [77.83757117924995]
We propose a new framework that evaluates re-identification attacks to quantify individual privacy risks upon data release. Our approach shows that seemingly innocuous auxiliary information can be used to infer sensitive attributes like age or substance use history from sanitized data.
arXiv Detail & Related papers (2025-04-28T01:16:27Z)
- Defining 'Good': Evaluation Framework for Synthetic Smart Meter Data [14.779917834583577]
We show that standard privacy attack methods are inadequate for assessing privacy risks of smart meter datasets.
We propose an improved method by injecting training data with implausible outliers, then launching privacy attacks directly on these outliers.
arXiv Detail & Related papers (2024-07-16T14:41:27Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Releasing survey microdata with exact cluster locations and additional privacy safeguards [77.34726150561087]
We propose an alternative microdata dissemination strategy that leverages the utility of the original microdata with additional privacy safeguards.
Our strategy reduces the respondents' re-identification risk for any number of disclosed attributes by 60-80% even under re-identification attempts.
arXiv Detail & Related papers (2022-05-24T19:37:11Z)
- Statistical anonymity: Quantifying reidentification risks without reidentifying users [4.103598036312231]
Data anonymization is an approach to privacy-preserving data release aimed at preventing participants' reidentification.
Existing algorithms for enforcing $k$-anonymity in the released data assume that the curator performing the anonymization has complete access to the original data.
This paper explores ideas for reducing the trust that must be placed in the curator, while still maintaining a statistical notion of $k$-anonymity.
arXiv Detail & Related papers (2022-01-28T18:12:44Z)
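Several entries above revolve around $k$-anonymity. As a minimal sketch of the underlying definition (the basic property only, not the trust-reduced or attack-resistant variants those papers study; column names and data are hypothetical), a table is $k$-anonymous over a set of quasi-identifiers when every combination of quasi-identifier values appears at least $k$ times:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """Check k-anonymity: every quasi-identifier combination occurs >= k times.

    rows: list of dicts (one per record)
    quasi_ids: column names treated as quasi-identifiers
    """
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

rows = [
    {"age_band": "30-39", "postcode": "SW7", "usage": 4.2},
    {"age_band": "30-39", "postcode": "SW7", "usage": 3.9},
    {"age_band": "40-49", "postcode": "N1",  "usage": 5.1},
]
print(is_k_anonymous(rows, ["age_band", "postcode"], k=2))  # -> False (one group of size 1)
```

The attack papers listed above show why passing such a check is necessary but not sufficient: local-recoding structure and attribute dependencies can still leak information even when every group has size at least $k$.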
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.