A Simulated Reconstruction and Reidentification Attack on the 2010 U.S. Census
- URL: http://arxiv.org/abs/2312.11283v3
- Date: Mon, 28 Jul 2025 11:10:54 GMT
- Title: A Simulated Reconstruction and Reidentification Attack on the 2010 U.S. Census
- Authors: John M. Abowd, Tamara Adams, Robert Ashmead, David Darais, Sourya Dey, Simson L. Garfinkel, Nathan Goldschlag, Michael B. Hawes, Daniel Kifer, Philip Leclerc, Ethan Lew, Scott Moore, Rolando A. Rodríguez, Ramy N. Tadros, Lars Vilhuber
- Abstract summary: We show that individual, confidential microdata records from the 2010 U.S. Census can be accurately reconstructed. Ninety-seven million person records (every resident in 70% of all census blocks) are exactly reconstructed with provable certainty using only public information. We show that the methods used for the 2020 Census, based on a differential privacy framework, provide better protection against this type of attack, with better published data accuracy, than feasible alternatives.
- Score: 6.851716728384597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that individual, confidential microdata records from the 2010 U.S. Census of Population and Housing can be accurately reconstructed from the published tabular summaries. Ninety-seven million person records (every resident in 70% of all census blocks) are exactly reconstructed with provable certainty using only public information. We further show that a hypothetical attacker using our methods can reidentify with 95% accuracy population unique individuals who are perfectly reconstructed and not in the modal race and ethnicity category in their census block (3.4 million persons)--a result that is only possible because their confidential records were used in the published tabulations. Finally, we show that the methods used for the 2020 Census, based on a differential privacy framework, provide better protection against this type of attack, with better published data accuracy, than feasible alternatives.
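The attack described in the abstract treats each published table as a constraint on the unknown block-level microdata and solves for person records consistent with every table; when exactly one solution exists, the block is reconstructed with provable certainty. The sketch below illustrates that idea by brute-force enumeration on a single hypothetical block, with invented attribute domains and counts; it is only a toy stand-in for the paper's large-scale solver over the actual 2010 tabulations.

```python
from collections import Counter
from itertools import combinations_with_replacement

# Hypothetical attribute domain for one small block: (sex, age_group).
SEXES = ("F", "M")
AGE_GROUPS = ("0-17", "18-64", "65+")
DOMAIN = [(s, a) for s in SEXES for a in AGE_GROUPS]

# Hypothetical published summaries acting as constraints on the block.
TOTAL = 4
BY_SEX = {"F": 1, "M": 3}
BY_AGE = {"0-17": 2, "18-64": 1, "65+": 1}
PUBLISHED_CELLS = {("M", "0-17"): 2, ("F", "18-64"): 1}  # sex-by-age cells

def consistent(records):
    """True if a candidate multiset of person records reproduces every table."""
    sex = Counter(s for s, _ in records)
    age = Counter(a for _, a in records)
    cells = Counter(records)
    return (len(records) == TOTAL
            and all(sex[k] == v for k, v in BY_SEX.items())
            and all(age[k] == v for k, v in BY_AGE.items())
            and all(cells[k] == v for k, v in PUBLISHED_CELLS.items()))

# Enumerate every multiset of TOTAL records over the domain (each multiset
# appears once because combinations_with_replacement yields sorted tuples).
solutions = [r for r in combinations_with_replacement(DOMAIN, TOTAL)
             if consistent(r)]

print(f"feasible reconstructions: {len(solutions)}")
if len(solutions) == 1:
    # A unique solution means the block is reconstructed with provable certainty.
    print("exact reconstruction:", solutions[0])
```

With the hypothetical counts above, only one microdata multiset satisfies every published table, which is exactly the situation the abstract reports for 70% of all census blocks.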
Related papers
- Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
arXiv Detail & Related papers (2025-07-30T03:20:15Z) - Generate-then-Verify: Reconstructing Data from Limited Published Statistics [22.649631494395653]
We focus on the regime where many possible datasets match the published statistics, making it impossible to reconstruct the entire private dataset perfectly.
We introduce a novel integer programming approach that first $\textbf{generates}$ a set of claims and then $\textbf{verifies}$ whether each claim holds for all possible datasets (a minimal brute-force sketch of this generate-then-verify pattern appears after this list).
We evaluate our approach on the housing-level microdata from the U.S. Decennial Census release, demonstrating that privacy violations can still persist even when information published about such data is relatively sparse.
arXiv Detail & Related papers (2025-04-29T22:06:04Z) - Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - The 2020 United States Decennial Census Is More Private Than You (Might) Think [25.32778927275117]
We show that between 8.50% and 13.76% of the privacy budget for the 2020 U.S. Census remains unused for each of the eight geographical levels.
We reduce noise variances by 15.08% to 24.82% while maintaining the same privacy budget for each geographical level.
arXiv Detail & Related papers (2024-10-11T23:06:15Z) - Understanding and Mitigating the Impacts of Differentially Private Census Data on State Level Redistricting [4.589972411795548]
Data users were shaken by the adoption of differential privacy in the 2020 DAS.
We consider two redistricting settings in which a data user might be concerned about the impacts of privacy preserving noise.
We observe that an analyst may come to incorrect conclusions if they do not account for noise.
arXiv Detail & Related papers (2024-09-10T18:11:54Z) - Noisy Measurements Are Important, the Design of Census Products Is Much More Important [1.52292571922932]
McCartan et al. (2023) call for "making differential privacy work for census data users"
This commentary explains why the 2020 Census Noisy Measurement Files (NMFs) are not the best focus for that plea.
arXiv Detail & Related papers (2023-12-20T15:43:04Z) - An Examination of the Alleged Privacy Threats of Confidence-Ranked Reconstruction of Census Microdata [3.2156268397508314]
We show that the proposed reconstruction is neither effective as a reconstruction method nor conducive to attribute disclosure, as claimed by its authors.
We report empirical results showing the proposed ranking cannot guide reidentification or attribute disclosure attacks.
arXiv Detail & Related papers (2023-11-06T15:04:03Z) - Evaluating Bias and Noise Induced by the U.S. Census Bureau's Privacy Protection Methods [0.0]
The U.S. Census Bureau faces a difficult trade-off between the accuracy of Census statistics and the protection of individual information.
We conduct the first independent evaluation of bias and noise induced by the Bureau's two main disclosure avoidance systems.
TopDown's post-processing dramatically reduces the NMF noise and produces data whose accuracy is similar to that of swapping.
arXiv Detail & Related papers (2023-06-13T03:30:19Z) - Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z) - Confidence-Ranked Reconstruction of Census Microdata from Published Statistics [45.39928315344449]
A reconstruction attack on a private dataset takes as input some publicly accessible information about the dataset.
We show that our attacks can not only reconstruct full rows of $D$ from the aggregate query statistics $Q(D) \in \mathbb{R}^m$, but can do so in a way that reliably ranks reconstructed rows by their odds.
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled.
arXiv Detail & Related papers (2022-11-06T14:08:43Z) - Comment: The Essential Role of Policy Evaluation for the 2020 Census Disclosure Avoidance System [0.0]
This is a comment on boyd and Sarathy, "Differential Perspectives: Epistemic Disconnects Surrounding the US Census Bureau's Use of Differential Privacy".
We argue that empirical evaluations of the Census Disclosure Avoidance System failed to recognize how the benchmark data is never a ground truth of population counts.
We argue that policy makers must confront a key trade-off between data utility and privacy protection.
arXiv Detail & Related papers (2022-10-15T21:41:54Z) - No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z) - Releasing survey microdata with exact cluster locations and additional privacy safeguards [77.34726150561087]
We propose an alternative microdata dissemination strategy that leverages the utility of the original microdata with additional privacy safeguards.
Our strategy reduces the respondents' re-identification risk for any number of disclosed attributes by 60-80% even under re-identification attempts.
arXiv Detail & Related papers (2022-05-24T19:37:11Z) - The Impact of the U.S. Census Disclosure Avoidance System on Redistricting and Voting Rights Analysis [0.0]
The US Census Bureau plans to protect the privacy of 2020 Census respondents through its Disclosure Avoidance System (DAS)
We find that the protected data are not of sufficient quality for redistricting purposes.
Our analysis finds that the DAS-protected data are biased against certain areas, depending on voter turnout and partisan and racial composition.
arXiv Detail & Related papers (2021-05-29T03:32:36Z) - Magnify Your Population: Statistical Downscaling to Augment the Spatial Resolution of Socioeconomic Census Data [48.7576911714538]
We present a new statistical downscaling approach to derive fine-scale estimates of key socioeconomic attributes.
For each selected socioeconomic variable, a Random Forest model is trained on the source Census units and then used to generate fine-scale gridded predictions.
As a case study, we apply this method to Census data in the United States, downscaling the selected socioeconomic variables available at the block group level to a grid of 300 spatial resolution (a toy sketch of this downscaling step appears after this list).
arXiv Detail & Related papers (2020-06-23T16:52:18Z)
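To illustrate the generate-then-verify pattern referenced in the Generate-then-Verify entry above, the sketch below enumerates every dataset consistent with a few hypothetical published statistics and checks whether a generated claim holds in all of them. The counts, attributes, and claim are invented, and the brute-force search stands in for that paper's integer-programming verifier.

```python
from collections import Counter
from itertools import combinations_with_replacement

DOMAIN = [(s, a) for s in ("F", "M") for a in ("0-17", "18-64", "65+")]

# Hypothetical published statistics for one small geography.
TOTAL = 3
BY_SEX = {"F": 2, "M": 1}
BY_AGE = {"0-17": 1, "18-64": 1, "65+": 1}

def feasible(records):
    """True if a candidate dataset reproduces every published statistic."""
    sex = Counter(s for s, _ in records)
    age = Counter(a for _, a in records)
    return (len(records) == TOTAL
            and all(sex[k] == v for k, v in BY_SEX.items())
            and all(age[k] == v for k, v in BY_AGE.items()))

# Several datasets match the published statistics, so exact reconstruction fails...
datasets = [d for d in combinations_with_replacement(DOMAIN, TOTAL) if feasible(d)]

# ...yet a generated claim is verified if it holds in every feasible dataset.
def claim(dataset):
    # Hypothetical claim: "at least one adult (18+) woman lives in this geography."
    return any(s == "F" and a != "0-17" for s, a in dataset)

print(f"feasible datasets: {len(datasets)}")
print("claim verified for all of them:", all(claim(d) for d in datasets))
```

Here three distinct datasets are feasible, so no single reconstruction is certain, yet the claim is true in every one of them, which is the kind of residual privacy violation that paper reports even for sparse releases.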
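The statistical-downscaling entry trains a Random Forest on coarse source Census units and then predicts the socioeconomic variable on fine grid cells. The sketch below shows that pattern on synthetic data with hypothetical covariates; it is not the authors' feature set or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Coarse source units (e.g., block groups): two hypothetical covariates that are
# also observable at the fine scale (say, nightlight intensity, building density).
n_units, n_cells = 200, 5_000
X_units = rng.random((n_units, 2))
y_units = 3.0 * X_units[:, 0] + 1.5 * X_units[:, 1] + rng.normal(0.0, 0.1, n_units)

# Fine grid cells described by the same covariates.
X_cells = rng.random((n_cells, 2))

# Train on the coarse units, then emit fine-scale gridded predictions.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_units, y_units)
y_cells = model.predict(X_cells)

print("first five gridded predictions:", np.round(y_cells[:5], 3))
```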