Achieving Fair Skin Lesion Detection through Skin Tone Normalization and Channel Pruning
- URL: http://arxiv.org/abs/2509.22712v1
- Date: Wed, 24 Sep 2025 04:06:31 GMT
- Title: Achieving Fair Skin Lesion Detection through Skin Tone Normalization and Channel Pruning
- Authors: Zihan Wei, Tapabrata Chakraborti
- Abstract summary: We propose a new Individual Typology Angle (ITA) Loss-based skin tone normalization and data augmentation method. In skin tone normalization, ITA is used to estimate skin tone type and automatically adjust it toward target tones for dataset balancing. Experiments conducted on the ISIC 2019 dataset validate the effectiveness of our method.
- Score: 2.8269946628069262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works have shown that deep learning-based skin lesion image classification models trained on unbalanced datasets can exhibit bias toward protected demographic attributes such as race, age, and gender. Current bias mitigation methods usually either achieve a high level of fairness at the cost of accuracy, or improve model fairness on only a single attribute. Additionally, most bias mitigation strategies are applied either pre hoc, through data processing, or post hoc, through fairness evaluation, rather than being integrated into model learning itself. To address these drawbacks, we propose a new Individual Typology Angle (ITA) Loss-based skin tone normalization and data augmentation method that feeds directly into an adaptable meta-learning-based joint channel pruning framework. In skin tone normalization, ITA is used to estimate skin tone type and automatically adjust it toward target tones for dataset balancing. In the joint channel pruning framework, two nested optimization loops are used to find critical channels. The inner optimization loop finds and prunes the locally critical channels using a weighted soft nearest neighbor loss, and the outer optimization loop updates the weight of each attribute using a group-wise variance loss on a meta-set. Experiments conducted on the ISIC 2019 dataset validate the effectiveness of our method in simultaneously improving the fairness of the model on multiple sensitive attributes without significant degradation of accuracy. Finally, although the pruning mechanism adds some computational cost during the training phase, training is usually done offline.
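The ITA estimate behind the normalization step is standard in the dermatology literature: ITA = arctan((L* − 50) / b*) × 180/π over CIELAB coordinates. A minimal sketch follows; the tone-category thresholds are the commonly used Del Bino bands, and `shift_to_target_ita` is a hypothetical illustration of moving a pixel's lightness toward a target tone, not the authors' exact adjustment procedure.

```python
import math

def ita_degrees(L_star: float, b_star: float) -> float:
    """Individual Typology Angle (degrees) from CIELAB L* and b*:
    ITA = arctan((L* - 50) / b*) * 180 / pi."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def skin_tone_type(ita: float) -> str:
    """Map an ITA value to one of six tone categories
    (thresholds per the common Del Bino classification)."""
    if ita > 55:  return "very light"
    if ita > 41:  return "light"
    if ita > 28:  return "intermediate"
    if ita > 10:  return "tan"
    if ita > -30: return "brown"
    return "dark"

def shift_to_target_ita(b_star: float, target_ita_deg: float) -> float:
    """Hypothetical adjustment: return the L* that yields the target
    ITA while holding b* fixed (one way to move toward a target tone)."""
    return 50.0 + b_star * math.tan(math.radians(target_ita_deg))
```

For example, a pixel with L* = 70 and b* = 10 has ITA ≈ 63°, i.e. "very light"; recomputing its L* for a target of 30° moves it into the "intermediate" band.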
Related papers
- SWiFT: Soft-Mask Weight Fine-tuning for Bias Mitigation [12.770721233121984]
Recent studies have shown that Machine Learning (ML) models can exhibit bias in real-world scenarios. We propose Soft-Mask Weight Fine-Tuning (SWiFT), a debiasing framework that efficiently improves fairness while preserving discriminative performance.
arXiv Detail & Related papers (2025-08-26T09:03:18Z)
- BOOST: Out-of-Distribution-Informed Adaptive Sampling for Bias Mitigation in Stylistic Convolutional Neural Networks [8.960561031294727]
Bias in AI presents a significant challenge to painting classification, and is becoming more serious as these systems are increasingly integrated into tasks like art curation and restoration. We propose a novel OOD-informed model bias adaptive sampling method called BOOST. We evaluate our proposed approach on the KaoKore and PACS datasets, focusing on the model's ability to reduce class-wise bias.
arXiv Detail & Related papers (2025-07-08T22:18:36Z)
- Gradient Extrapolation for Debiased Representation Learning [7.183424522250937]
Gradient Extrapolation for Debiased Representation Learning (GERNE) is designed to learn debiased representations in both known and unknown attribute training cases. Our analysis shows that when the extrapolated gradient points toward the batch gradient with fewer spurious correlations, it effectively guides training toward learning a debiased model.
arXiv Detail & Related papers (2025-03-17T14:48:57Z)
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Data augmentation and explainability for bias discovery and mitigation in deep learning [0.0]
This dissertation explores the impact of bias in deep neural networks and presents methods for reducing its influence on model performance.
The first part begins by categorizing and describing potential sources of bias and errors in data and models, with a particular focus on bias in machine learning pipelines.
The next chapter outlines a taxonomy and methods of Explainable AI as a way to justify predictions and control and improve the model.
arXiv Detail & Related papers (2023-08-18T11:02:27Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework, ReScore, that boosts causal discovery performance by dynamically learning adaptive weights for a reweighted score function.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in a large-margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
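The idea in this last entry is simple to sketch: instead of normalizing a test batch with running statistics accumulated during training, normalize it with the test batch's own mean and variance. Below is a minimal NumPy illustration, not the paper's implementation; `gamma` and `beta` stand in for the learned affine parameters of a batch-norm layer.

```python
import numpy as np

def prediction_time_batchnorm(x, gamma, beta, eps=1e-5):
    """Normalize a test batch with its own per-feature statistics,
    rather than running estimates stored at training time."""
    mu = x.mean(axis=0)    # per-feature mean of the *test* batch
    var = x.var(axis=0)    # per-feature variance of the test batch
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Under a simple covariate shift (a constant offset of +10), the output
# is unchanged, because the batch's own statistics absorb the shift.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
out_plain = prediction_time_batchnorm(x, gamma=1.0, beta=0.0)
out_shifted = prediction_time_batchnorm(x + 10.0, gamma=1.0, beta=0.0)
```

Running statistics would mis-normalize the shifted batch, which is exactly the failure mode the prediction-time variant addresses; the paper above reports that this helps less under more natural kinds of dataset shift.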
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.