Achieving Counterfactual Fairness for Anomaly Detection
- URL: http://arxiv.org/abs/2303.02318v1
- Date: Sat, 4 Mar 2023 04:45:12 GMT
- Title: Achieving Counterfactual Fairness for Anomaly Detection
- Authors: Xiao Han, Lu Zhang, Yongkai Wu, Shuhan Yuan
- Abstract summary: We propose a counterfactually fair anomaly detection (CFAD) framework which consists of two phases, counterfactual data generation and fair anomaly detection.
Experimental results on a synthetic dataset and two real datasets show that CFAD can effectively detect anomalies as well as ensure counterfactual fairness.
- Score: 20.586768167592112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring fairness in anomaly detection models has received much attention
recently as many anomaly detection applications involve human beings. However,
existing fair anomaly detection approaches mainly focus on association-based
fairness notions. In this work, we target counterfactual fairness, which is a
prevalent causation-based fairness notion. The goal of counterfactually fair
anomaly detection is to ensure that the detection outcome of an individual in
the factual world is the same as that in the counterfactual world where the
individual had belonged to a different group. To this end, we propose a
counterfactually fair anomaly detection (CFAD) framework which consists of two
phases, counterfactual data generation and fair anomaly detection. Experimental
results on a synthetic dataset and two real datasets show that CFAD can
effectively detect anomalies as well as ensure counterfactual fairness.
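As a rough, self-contained illustration of the fairness criterion (not of CFAD itself), the sketch below measures how often a detector's decision flips when each individual's factual record is swapped for its counterfactual version. The detector, the data, and the counterfactual inputs are all hypothetical placeholders; the abstract does not specify the counterfactual data generation phase.

```python
import numpy as np

def counterfactual_flip_rate(detect, X_factual, X_counterfactual):
    """Fraction of individuals whose anomaly decision changes between the
    factual and counterfactual worlds (0.0 = counterfactually fair)."""
    return np.mean(detect(X_factual) != detect(X_counterfactual))

# Hypothetical stand-ins: a distance-to-mean detector and near-identical
# counterfactuals; CFAD's generation phase would produce the latter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X_cf = X + rng.normal(scale=0.05, size=X.shape)

mu, threshold = X.mean(axis=0), 2.5
detect = lambda Z: np.linalg.norm(Z - mu, axis=1) > threshold
print(counterfactual_flip_rate(detect, X, X_cf))
```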
Related papers
- Fair Deepfake Detectors Can Generalize [51.21167546843708]
We show that controlling for confounders (data distribution and model capacity) enables improved generalization via fairness interventions.
Motivated by this insight, we propose Demographic Attribute-insensitive Intervention Detection (DAID), a plug-and-play framework composed of: i) Demographic-aware data rebalancing, which employs inverse-propensity weighting and subgroup-wise feature normalization to neutralize distributional biases; and ii) Demographic-agnostic feature aggregation, which uses a novel alignment loss to suppress sensitive-attribute signals.
DAID consistently achieves superior performance in both fairness and generalization compared to several state-of-the-art methods.
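The two data-side components named above are standard techniques; a minimal sketch of how they might look, assuming groups are discrete labels (the paper's exact formulation may differ):

```python
import numpy as np

def inverse_propensity_weights(groups):
    """Weight each sample by the inverse of its demographic group's
    frequency, so all subgroups contribute equally to training."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / freq[g] for g in groups])

def subgroup_normalize(features, groups):
    """Standardize features within each subgroup, removing
    subgroup-specific shifts in mean and scale."""
    out = np.array(features, dtype=float)
    for g in np.unique(groups):
        m = groups == g
        out[m] = (out[m] - out[m].mean(axis=0)) / (out[m].std(axis=0) + 1e-8)
    return out
```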
arXiv Detail & Related papers (2025-07-03T14:10:02Z) - CLIP Meets Diffusion: A Synergistic Approach to Anomaly Detection [54.85000884785013]
Anomaly detection is a complex problem due to the ambiguity in defining anomalies, the diversity of anomaly types, and the scarcity of training data.
We propose CLIPfusion, a method that leverages both discriminative and generative foundation models.
We believe that our method underscores the effectiveness of multi-modal and multi-model fusion in tackling the multifaceted challenges of anomaly detection.
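The abstract does not give the fusion rule, so the following is only a generic pattern for combining a discriminative score with a generative one; the z-normalization and equal weighting are assumptions, not CLIPfusion's scheme:

```python
import numpy as np

def fuse_scores(disc_score, gen_score, w=0.5):
    """Fuse a discriminative anomaly score (e.g., CLIP-style dissimilarity)
    with a generative one (e.g., diffusion reconstruction error) by
    z-normalizing each and taking a weighted average."""
    z = lambda s: (np.asarray(s, float) - np.mean(s)) / (np.std(s) + 1e-8)
    return w * z(disc_score) + (1 - w) * z(gen_score)
```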
arXiv Detail & Related papers (2025-06-13T13:30:15Z) - Fairness-aware Anomaly Detection via Fair Projection [24.68178499460169]
Unsupervised anomaly detection is critical in high-social-impact applications such as finance, healthcare, social media, and cybersecurity.
In these scenarios, possible bias from anomaly detection systems can lead to unfair treatment of different groups and even exacerbate social bias.
We propose a novel fairness-aware anomaly detection method, FairAD.
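"Fair projection" suggests mapping features into a subspace less correlated with the sensitive attribute. A linear version of that idea (not necessarily FairAD's actual projection) looks like:

```python
import numpy as np

def project_out_sensitive(X, s):
    """Remove the linear direction most correlated with the sensitive
    attribute s by projecting onto its orthogonal complement."""
    s = (s - s.mean()) / (s.std() + 1e-8)
    w = X.T @ s / len(s)              # per-feature correlation with s
    w /= np.linalg.norm(w) + 1e-8     # unit "bias" direction
    return X - np.outer(X @ w, w)     # orthogonal projection
```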
arXiv Detail & Related papers (2025-05-16T11:26:00Z) - Fair Anomaly Detection For Imbalanced Groups [33.578902826744255]
We propose FairAD, a fairness-aware anomaly detection method targeting the imbalanced scenario.
It consists of a fairness-aware contrastive learning module and a rebalancing autoencoder module to ensure fairness and handle the imbalanced data issue.
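As a stand-in for the rebalancing idea (the actual module is an autoencoder trained end to end, not shown here), a group-wise oversampler is the simplest version:

```python
import numpy as np

def rebalance_by_group(X, groups, seed=0):
    """Oversample every demographic group to the size of the largest one,
    a crude stand-in for the paper's rebalancing module."""
    rng = np.random.default_rng(seed)
    target = max(np.sum(groups == g) for g in np.unique(groups))
    idx = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=target, replace=True)
        for g in np.unique(groups)
    ])
    return X[idx], groups[idx]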
arXiv Detail & Related papers (2024-09-17T07:38:45Z) - Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
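One reading of "bias pruning" is to mask the feature units whose activations differ most across demographic groups, which indeed needs no retraining or weight updates; the selection criterion below is an assumption, not BPFA's:

```python
import numpy as np

def low_bias_unit_mask(acts, groups, keep=0.9):
    """Keep the fraction `keep` of units with the smallest gap in mean
    activation across groups; mask (prune) the rest."""
    means = np.stack([acts[groups == g].mean(axis=0)
                      for g in np.unique(groups)])
    gap = means.max(axis=0) - means.min(axis=0)  # per-unit group gap
    k = int(keep * acts.shape[1])
    mask = np.zeros(acts.shape[1], dtype=bool)
    mask[np.argsort(gap)[:k]] = True
    return mask
```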
arXiv Detail & Related papers (2024-07-19T14:53:18Z) - Towards a Unified Framework of Clustering-based Anomaly Detection [18.30208347233284]
Unsupervised Anomaly Detection (UAD) plays a crucial role in identifying abnormal patterns within data without labeled examples.
We propose a novel probabilistic mixture model for anomaly detection to establish a theoretical connection among representation learning, clustering, and anomaly detection.
We have devised an improved anomaly score that more effectively harnesses the combined power of representation learning and clustering.
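The generic clustering-based score this entry builds on can be reproduced with an off-the-shelf Gaussian mixture; the paper's improved score is not shown:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))                     # mostly normal data
X_test = np.vstack([rng.normal(size=(5, 2)),            # inliers
                    rng.normal(loc=6.0, size=(5, 2))])  # outliers

gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)
anomaly_score = -gmm.score_samples(X_test)  # negative log-likelihood
print(anomaly_score.round(2))               # outliers score higher
```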
arXiv Detail & Related papers (2024-06-01T14:30:12Z) - Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
However, the anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
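The core of this framework is the deviation loss: labeled anomalies are pushed at least a margin of standard deviations above a Gaussian prior over normal scores. A NumPy rendering of that loss, assuming the usual N(0,1) prior (the paper trains it end to end with a neural scorer):

```python
import numpy as np

def deviation_loss(scores, labels, margin=5.0, n_ref=5000, seed=0):
    """Deviation loss: unlabeled samples (assumed normal, label 0) are pulled
    toward the mean of N(0,1) reference scores; labeled anomalies (label 1)
    are pushed at least `margin` standard deviations above it."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, float)
    ref = np.random.default_rng(seed).normal(size=n_ref)
    dev = (scores - ref.mean()) / ref.std()
    loss = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return loss.mean()
```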
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z) - Anomaly detection using principles of human perception [0.0]
An unsupervised anomaly detection algorithm is developed that is simple, real-time, and parameter-free.
The idea is to treat anomalies as observations that are unexpected with respect to groupings formed by the majority of the data.
arXiv Detail & Related papers (2021-03-23T05:46:27Z) - Towards Fair Deep Anomaly Detection [24.237000220172906]
We propose a new architecture for fair anomaly detection (Deep Fair SVDD).
We show that our proposed approach can remove the unfairness largely with minimal loss on the anomaly detection performance.
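Deep SVDD scores a point by its distance to a fixed center in a learned embedding space; a score gap between groups is one simple proxy for the unfairness this approach targets. A sketch under those assumptions (the encoder and the adversarial de-biasing component are omitted):

```python
import numpy as np

def svdd_scores(Z, center):
    """Deep SVDD-style score: squared distance of embeddings Z to a fixed
    center (the deep encoder producing Z is not shown)."""
    return np.sum((Z - center) ** 2, axis=1)

def group_score_gap(scores, groups):
    """Difference in mean anomaly score between the most- and least-flagged
    groups -- a rough proxy for detection unfairness."""
    means = [scores[groups == g].mean() for g in np.unique(groups)]
    return max(means) - min(means)
```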
arXiv Detail & Related papers (2020-12-29T22:34:45Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
The Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)