Preserving Privacy Without Compromising Accuracy: Machine Unlearning for Handwritten Text Recognition
- URL: http://arxiv.org/abs/2504.08616v1
- Date: Fri, 11 Apr 2025 15:21:12 GMT
- Title: Preserving Privacy Without Compromising Accuracy: Machine Unlearning for Handwritten Text Recognition
- Authors: Lei Kang, Xuanshuo Fu, Lluis Gomez, Alicia Fornés, Ernest Valveny, Dimosthenis Karatzas
- Abstract summary: Handwritten Text Recognition (HTR) is essential for document analysis and digitization. Legislation like the "right to be forgotten" underscores the necessity for methods that can expunge sensitive information from trained models. We introduce a novel two-stage unlearning strategy for a multi-head transformer-based HTR model, integrating pruning and random labeling.
- Score: 12.228611784356412
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Handwritten Text Recognition (HTR) is essential for document analysis and digitization. However, handwritten data often contains user-identifiable information, such as unique handwriting styles and personal lexicon choices, which can compromise privacy and erode trust in AI services. Legislation like the "right to be forgotten" underscores the necessity for methods that can expunge sensitive information from trained models. Machine unlearning addresses this by selectively removing specific data from models without necessitating complete retraining. Yet, it frequently encounters a privacy-accuracy tradeoff, where safeguarding privacy leads to diminished model performance. In this paper, we introduce a novel two-stage unlearning strategy for a multi-head transformer-based HTR model, integrating pruning and random labeling. Our proposed method utilizes a writer classification head both as an indicator and a trigger for unlearning, while maintaining the efficacy of the recognition head. To our knowledge, this represents the first comprehensive exploration of machine unlearning within HTR tasks. We further employ Membership Inference Attacks (MIA) to evaluate the effectiveness of unlearning user-identifiable information. Extensive experiments demonstrate that our approach effectively preserves privacy while maintaining model accuracy, paving the way for new research directions in the document analysis community. Our code will be publicly available upon acceptance.
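The abstract describes the method only at a high level. As a rough illustration of what a pruning-plus-random-labeling unlearning procedure could look like for a model with a shared encoder, a recognition head, and a writer-classification head, consider the PyTorch-style sketch below; the module names, the magnitude-based pruning criterion, the CTC recognition loss, and the loss weighting are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def prune_writer_head(writer_head: nn.Linear, prune_ratio: float = 0.5) -> None:
    """Stage 1 (assumed): zero out the smallest-magnitude weights of the
    writer-classification head, weakening the writer-identity signal."""
    with torch.no_grad():
        flat = writer_head.weight.abs().flatten()
        k = int(prune_ratio * flat.numel())
        if k > 0:
            threshold = torch.kthvalue(flat, k).values
            mask = (writer_head.weight.abs() > threshold).float()
            writer_head.weight.mul_(mask)


def unlearning_step(encoder, recog_head, writer_head, batch, num_writers,
                    optimizer, lam: float = 1.0) -> float:
    """Stage 2 (assumed): fine-tune on the forget set with randomly re-assigned
    writer labels while keeping the recognition objective, so transcription
    accuracy is preserved while writer identity is unlearned."""
    images, text_targets, target_lengths, _true_writers = batch
    features = encoder(images)                               # (T, B, D), time-major

    # Recognition branch: keep optimizing the original objective (CTC is an assumption).
    log_probs = recog_head(features).log_softmax(-1)         # (T, B, num_chars)
    input_lengths = torch.full((images.size(0),), log_probs.size(0), dtype=torch.long)
    recog_loss = F.ctc_loss(log_probs, text_targets, input_lengths, target_lengths)

    # Writer branch: push predictions toward random writer labels to break the
    # association between handwriting style and identity.
    random_labels = torch.randint(0, num_writers, (images.size(0),))
    writer_logits = writer_head(features.mean(dim=0))        # pool over time steps
    forget_loss = F.cross_entropy(writer_logits, random_labels)

    loss = recog_loss + lam * forget_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the paper uses the writer-classification head as an indicator for unlearning, a natural stopping criterion in this sketch would be to monitor that head's accuracy on the forget set and stop once it drops to roughly chance level.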
Related papers
- Privacy-Preserving Biometric Verification with Handwritten Random Digit String [49.77172854374479]
Handwriting verification has stood as a steadfast identity authentication method for decades. However, this technique risks potential privacy breaches due to the inclusion of personal information in handwritten biometrics such as signatures. We propose using the Random Digit String (RDS) for privacy-preserving handwriting verification.
arXiv Detail & Related papers (2025-03-17T03:47:25Z) - Technical Report for the Forgotten-by-Design Project: Targeted Obfuscation for Machine Learning [0.03749861135832072]
This paper explores the concept of the Right to be Forgotten (RTBF) within AI systems, contrasting it with traditional data erasure methods. We introduce Forgotten by Design, a proactive approach to privacy preservation that integrates instance-specific obfuscation techniques. Our experiments on the CIFAR-10 dataset demonstrate that our techniques reduce privacy risks by at least an order of magnitude while maintaining model accuracy.
arXiv Detail & Related papers (2025-01-20T15:07:59Z) - RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints. Many machine unlearning algorithms have been proposed that aim to "erase" these datapoints. We propose the RESTOR framework for machine unlearning, which evaluates the ability of unlearning algorithms to perform targeted data erasure.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - MUSE: Machine Unlearning Six-Way Evaluation for Language Models [109.76505405962783]
Language models (LMs) are trained on vast amounts of text data, which may include private and copyrighted content.
We propose MUSE, a comprehensive machine unlearning evaluation benchmark.
We benchmark how effectively eight popular unlearning algorithms can unlearn Harry Potter books and news articles.
arXiv Detail & Related papers (2024-07-08T23:47:29Z) - IDT: Dual-Task Adversarial Attacks for Privacy Protection [8.312362092693377]
Methods to protect privacy can involve using representations inside models that do not detect sensitive attributes.
We propose IDT, a method that analyses predictions made by auxiliary and interpretable models to identify which tokens are important to change.
We evaluate on different NLP datasets suitable for different tasks.
arXiv Detail & Related papers (2024-06-28T04:14:35Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Machine Unlearning for Document Classification [14.71726430657162]
A novel approach, known as machine unlearning, has emerged to make AI models forget about a particular class of data.
This work represents a pioneering step towards the development of machine unlearning methods aimed at addressing privacy concerns in document analysis applications.
arXiv Detail & Related papers (2024-04-29T18:16:13Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z) - CSSL-RHA: Contrastive Self-Supervised Learning for Robust Handwriting Authentication [23.565017967901618]
We propose a novel Contrastive Self-Supervised Learning framework for Robust Handwriting Authentication.
It can dynamically learn complex yet important features and accurately predict writer identities.
Our proposed model can still effectively achieve authentication even under abnormal circumstances, such as data falsification and corruption.
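As a generic illustration of the contrastive self-supervised objective this entry refers to (not the CSSL-RHA loss itself), a standard NT-Xent formulation over two augmented views of each handwriting sample could look as follows:

```python
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Generic contrastive loss: embeddings of two views of the same sample are
    pulled together; every other sample in the batch acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2B, D)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                  # a sample is never its own negative
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b),       # positive of view-1 sample i is view-2 sample i
                         torch.arange(0, b)])
    return F.cross_entropy(sim, targets)
```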
arXiv Detail & Related papers (2023-07-18T02:20:46Z) - Unintended Memorization and Timing Attacks in Named Entity Recognition Models [5.404816271595691]
We study the setting where NER models are available as a black-box service for identifying sensitive information in user documents.
With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models.
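Membership inference, which the main paper above also uses to evaluate unlearning, can be approximated in its simplest form by a confidence-threshold attack: if a model is systematically more confident on samples it was trained on, membership leaks. The sketch below is a generic illustration of that idea, not the specific black-box attacks studied in this work; it assumes a classifier returning per-sample logits.

```python
import torch
from sklearn.metrics import roc_auc_score


@torch.no_grad()
def confidence_mia_auc(model, member_loader, non_member_loader) -> float:
    """Score each sample by the model's maximum softmax confidence and measure how
    well that score separates training members from held-out non-members
    (an AUC near 0.5 indicates little membership leakage)."""
    scores, labels = [], []
    for loader, is_member in [(member_loader, 1), (non_member_loader, 0)]:
        for inputs, _ in loader:
            conf = model(inputs).softmax(dim=-1).max(dim=-1).values   # (batch,)
            scores.extend(conf.tolist())
            labels.extend([is_member] * conf.numel())
    return roc_auc_score(labels, scores)
```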
arXiv Detail & Related papers (2022-11-04T03:32:16Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
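For the Lagrangian dual idea in this last entry, a minimal primal-dual training step might look like the sketch below: a fairness-violation term is added to the task loss with a multiplier that is itself updated by gradient ascent. The demographic-parity constraint, the hyperparameters, and the omission of the differential-privacy mechanism are all simplifications for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def lagrangian_fairness_step(model, batch, lam: torch.Tensor, optimizer,
                             dual_lr: float = 0.01, eps: float = 0.05):
    """One primal-dual step: minimize task loss + lam * (constraint violation),
    then update the multiplier lam by gradient ascent on the violation."""
    inputs, targets, group = batch                      # `group` is the protected attribute (0/1)
    logits = model(inputs)
    task_loss = F.cross_entropy(logits, targets)

    # Demographic-parity gap: difference in positive-prediction rates between groups.
    pos_prob = logits.softmax(dim=-1)[:, 1]
    gap = (pos_prob[group == 0].mean() - pos_prob[group == 1].mean()).abs()
    violation = torch.clamp(gap - eps, min=0.0)

    # Primal step on the model parameters.
    loss = task_loss + lam.detach() * violation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Dual ascent: raise the multiplier while the constraint is violated.
    with torch.no_grad():
        lam += dual_lr * violation
        lam.clamp_(min=0.0)
    return task_loss.item(), gap.item()
```

Here `lam` would be initialized once (e.g. `torch.tensor(0.0)`) and carried across steps, and each batch is assumed to contain samples from both protected groups.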