Membership Inference Attack for Beluga Whales Discrimination
- URL: http://arxiv.org/abs/2302.14769v1
- Date: Tue, 28 Feb 2023 17:10:32 GMT
- Title: Membership Inference Attack for Beluga Whales Discrimination
- Authors: Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert
Michaud, Léo Schneider, Hadrien Lautraite
- Abstract summary: We are interested in the discrimination within digital photos of beluga whales.
We propose a novel approach based on the use of Membership Inference Attacks (MIAs)
We show that the problem of discriminating between known and unknown individuals can be solved efficiently using state-of-the-art approaches for MIAs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To efficiently monitor the growth and evolution of a particular wildlife
population, one of the main fundamental challenges to address in animal ecology
is the re-identification of individuals that have been previously encountered
but also the discrimination between known and unknown individuals (the
so-called "open-set problem"), which is the first step to realize before
re-identification. In particular, in this work, we are interested in the
discrimination within digital photos of beluga whales, which are known to be
among the most challenging marine species to discriminate due to their lack of
distinctive features. To tackle this problem, we propose a novel approach based
on the use of Membership Inference Attacks (MIAs), which are normally used to
assess the privacy risks associated with releasing a particular machine
learning model. More precisely, we demonstrate that the problem of
discriminating between known and unknown individuals can be solved efficiently
using state-of-the-art approaches for MIAs. Extensive experiments on three
benchmark datasets related to whales, two different neural network
architectures, and three MIAs clearly demonstrate the performance of the
approach. In addition, we have also designed a novel MIA strategy that we
coined as ensemble MIA, which combines the outputs of different MIAs to
increase the attack accuracy while diminishing the false positive rate.
Overall, one of our main objectives is also to show that the research on
privacy attacks can also be leveraged "for good" by helping to address
practical challenges encountered in animal ecology.
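The ensemble MIA described above combines the decisions of several individual attacks so that a photo is declared a "known" individual only when the attacks agree, which raises accuracy while lowering the false positive rate. A minimal sketch of this idea follows; the two base attacks (a confidence threshold and a prediction-entropy threshold) and their threshold values are illustrative assumptions, not the specific attacks evaluated in the paper.

```python
import math

def confidence_mia(probs, threshold=0.9):
    # Confidence-based attack (hypothetical): flag the photo as a known
    # individual ("member") when the model's top softmax probability
    # exceeds a calibrated threshold.
    return max(probs) >= threshold

def entropy_mia(probs, threshold=0.5):
    # Entropy-based attack (hypothetical): low prediction entropy means
    # the model is unusually certain, a hint that it has memorized
    # this individual during training.
    entropy = -sum(p * math.log(p + 1e-12) for p in probs)
    return entropy <= threshold

def ensemble_mia(probs, attacks):
    # Ensemble MIA: require every base attack to agree before declaring
    # a known individual; unanimity trades some recall for a lower
    # false positive rate.
    return all(attack(probs) for attack in attacks)

attacks = [confidence_mia, entropy_mia]
confident = [0.97, 0.02, 0.01]   # peaked softmax -> likely known individual
uncertain = [0.40, 0.35, 0.25]   # flat softmax -> likely unknown individual
print(ensemble_mia(confident, attacks))  # True
print(ensemble_mia(uncertain, attacks))  # False
```

Other combination rules (e.g. majority vote, or weighting each attack by its validation accuracy) are straightforward variations on the same `ensemble_mia` aggregation step.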
Related papers
- OpenAnimals: Revisiting Person Re-Identification for Animals Towards Better Generalization [10.176567936487364]
We conduct a study by revisiting several state-of-the-art person re-identification methods, including BoT, AGW, SBS, and MGN.
We evaluate their effectiveness on animal re-identification benchmarks such as HyenaID, LeopardID, SeaTurtleID, and WhaleSharkID.
Our findings reveal that while some techniques transfer well, many do not generalize, underscoring the significant differences between the two tasks.
We propose ARBase, a strong Base model tailored for Animal Re-identification.
arXiv Detail & Related papers (2024-09-30T20:07:14Z) - Unveiling the Unseen: Exploring Whitebox Membership Inference through the Lens of Explainability [10.632831321114502]
We propose an attack-driven explainable framework to identify the most influential features of raw data that lead to successful membership inference attacks.
Our proposed MIA shows an improvement of up to 26% over state-of-the-art MIAs.
arXiv Detail & Related papers (2024-07-01T14:07:46Z) - Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce a novel problem of PRofiling Adversarial aTtacks (PRAT)
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use AID to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Disguise without Disruption: Utility-Preserving Face De-Identification [40.484745636190034]
We introduce Disguise, a novel algorithm that seamlessly de-identifies facial images while ensuring the usability of the modified data.
Our method involves extracting and substituting depicted identities with synthetic ones, generated using variational mechanisms to maximize obfuscation and non-invertibility.
We extensively evaluate our method using multiple datasets, demonstrating a higher de-identification rate and superior consistency compared to prior approaches in various downstream tasks.
arXiv Detail & Related papers (2023-03-23T13:50:46Z) - On the Privacy Effect of Data Enhancement via the Lens of Memorization [20.63044895680223]
We propose to investigate privacy from a new perspective called memorization.
Through the lens of memorization, we find that previously deployed MIAs produce misleading results as they are less likely to identify samples with higher privacy risks.
We demonstrate that the generalization gap and privacy leakage are less correlated than those of the previous results.
arXiv Detail & Related papers (2022-08-17T13:02:17Z) - Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.