Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking
- URL: http://arxiv.org/abs/2303.09858v3
- Date: Thu, 14 Sep 2023 03:37:24 GMT
- Title: Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking
- Authors: Xingxing Wei, Bangzheng Pu, Shiji Zhao, Chen Chi and Huazhu Fu
- Abstract summary: We present a pioneering solution named Medical Image Adversarial watermarking (MIAD-MARK).
Our approach introduces watermarks that strategically mislead unauthorized AI diagnostic models, inducing erroneous predictions without compromising the integrity of the visual content.
Our solution effectively mitigates unauthorized exploitation of medical images even in the presence of sophisticated watermark removal networks.
- Score: 43.17275405041853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancement of deep learning has facilitated the integration of
Artificial Intelligence (AI) into clinical practices, particularly in
computer-aided diagnosis. Given the pivotal role of medical images in various
diagnostic procedures, it becomes imperative to ensure the responsible and
secure utilization of AI techniques. However, the unauthorized utilization of
AI for image analysis raises significant concerns regarding patient privacy and
potential infringement on the proprietary rights of data custodians.
Consequently, the development of pragmatic and cost-effective strategies that
safeguard patient privacy and uphold medical image copyrights emerges as a
critical necessity. In direct response to this pressing demand, we present a
pioneering solution named Medical Image Adversarial watermarking (MIAD-MARK).
Our approach introduces watermarks that strategically mislead unauthorized AI
diagnostic models, inducing erroneous predictions without compromising the
integrity of the visual content. Importantly, our method integrates an
authorization protocol tailored for legitimate users, enabling the removal of
the MIAD-MARK through encryption-generated keys. Through extensive experiments,
we validate the efficacy of MIAD-MARK across three prominent medical image
datasets. The empirical outcomes demonstrate the substantial impact of our
approach, notably reducing the accuracy of standard AI diagnostic models to a
mere 8.57% under white box conditions and 45.83% in the more challenging black
box scenario. Additionally, our solution effectively mitigates unauthorized
exploitation of medical images even in the presence of sophisticated watermark
removal networks. Notably, those AI diagnosis networks exhibit a meager average
accuracy of 38.59% when applied to images protected by MIAD-MARK, underscoring
the robustness of our safeguarding mechanism.
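The core mechanism, a perturbation that steers a diagnostic classifier toward an erroneous prediction, can be pictured with a toy linear model. This is a minimal sketch, not the paper's method: MIAD-MARK learns watermarks against deep diagnostic networks and pairs them with a key-based removal protocol, and every name and value below is illustrative.

```python
# Toy linear "diagnostic model": score = w . x + b, class 1 if score > 0.
w = [0.8, -0.5, 0.3]
b = 0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial_watermark(x, eps=0.05, steps=20):
    """Iteratively nudge each pixel by eps against the sign of the model's
    gradient (for a linear score the gradient is simply w), stopping as
    soon as the prediction flips -- an FGSM/PGD-style perturbation."""
    original = predict(x)
    marked = list(x)
    for _ in range(steps):
        if predict(marked) != original:
            break
        direction = -1 if original == 1 else 1
        marked = [pi + direction * eps * (1 if wi > 0 else -1)
                  for pi, wi in zip(marked, w)]
    return marked

image = [0.9, 0.2, 0.4]                   # "clean" image, classified as 1
marked = adversarial_watermark(image)
print(predict(image), predict(marked))    # prints: 1 0
```

Real attacks of this kind bound the per-pixel change so the watermarked image stays visually indistinguishable from the original, which is the property the abstract claims for MIAD-MARK.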
Related papers
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- Medical Image Data Provenance for Medical Cyber-Physical System [8.554664822046966]
This study proposes using watermarking techniques to embed a device fingerprint (DFP) into captured images.
The DFP, representing the unique attributes of the capturing device and raw image, is embedded into raw images before storage.
A robust remote validation method is introduced to authenticate images, enhancing the integrity of medical image data in interconnected healthcare systems.
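One simple way to picture fingerprint embedding and validation, assuming nothing about the paper's actual scheme, is least-significant-bit watermarking of a fingerprint digest; the `dfp` payload and pixel layout below are purely hypothetical.

```python
import hashlib

def embed_fingerprint(pixels, dfp):
    """Hide SHA-256(dfp) in the least significant bit of the first
    256 pixels (a generic LSB sketch, not the paper's protocol)."""
    bits = [(byte >> i) & 1 for byte in hashlib.sha256(dfp).digest()
            for i in range(8)]
    assert len(pixels) >= len(bits), "image too small for the digest"
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def validate(pixels, dfp):
    """Recompute the expected digest bits and compare with the embedded LSBs."""
    bits = [(byte >> i) & 1 for byte in hashlib.sha256(dfp).digest()
            for i in range(8)]
    return [p & 1 for p in pixels[:len(bits)]] == bits

raw = [128] * 300                            # toy raw image: 300 8-bit pixels
dfp = b"device-serial|sensor-calibration"    # hypothetical fingerprint payload
marked = embed_fingerprint(raw, dfp)
print(validate(marked, dfp))                 # True: image matches its device
print(validate(marked, b"another-device"))   # False: fingerprint mismatch
```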
arXiv Detail & Related papers (2024-03-22T13:24:44Z)
- OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery while preserving the disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z)
- Adversarial-Robust Transfer Learning for Medical Imaging via Domain Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z)
- COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical Network to Monitor and Detect COVID-19 Infection from Point-of-Care Ultrasound Images [66.63200823918429]
COVID-Net USPro monitors and detects COVID-19 positive cases with high precision and recall from minimal ultrasound images.
The network achieves 99.65% overall accuracy, 99.7% recall and 99.67% precision for COVID-19 positive cases when trained with only 5 shots.
arXiv Detail & Related papers (2023-01-04T16:05:51Z)
- FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging [6.099257839022179]
The FUTURE-AI framework comprises guiding principles for increasing trust, safety, and adoption of AI in healthcare.
We transform the general FUTURE-AI healthcare principles to a concise and specific AI implementation guide tailored to the needs of the medical imaging community.
arXiv Detail & Related papers (2021-09-20T16:22:49Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to aid learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)
- Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19 [71.41929762209328]
The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world.
Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19.
The recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists.
arXiv Detail & Related papers (2020-04-06T15:21:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.