On The Fairness Impacts of Hardware Selection in Machine Learning
- URL: http://arxiv.org/abs/2312.03886v2
- Date: Fri, 30 Aug 2024 23:40:05 GMT
- Title: On The Fairness Impacts of Hardware Selection in Machine Learning
- Authors: Sree Harsha Nelaturu, Nishaanth Kanna Ravichandran, Cuong Tran, Sara Hooker, Ferdinando Fioretto
- Abstract summary: This paper investigates the influence of hardware on the delicate balance between model performance and fairness.
We demonstrate that hardware choices can exacerbate existing disparities, attributing these discrepancies to variations in gradient flows and loss surfaces across different demographic groups.
- Score: 47.64314140984432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the machine learning ecosystem, hardware selection is often regarded as a mere utility, overshadowed by the spotlight on algorithms and data. This oversight is particularly problematic in contexts like ML-as-a-service platforms, where users often lack control over the hardware used for model deployment. How does the choice of hardware impact generalization properties? This paper investigates the influence of hardware on the delicate balance between model performance and fairness. We demonstrate that hardware choices can exacerbate existing disparities, attributing these discrepancies to variations in gradient flows and loss surfaces across different demographic groups. Through both theoretical and empirical analysis, the paper not only identifies the underlying factors but also proposes an effective strategy for mitigating hardware-induced performance imbalances.
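The disparity the abstract describes can be made concrete with a small sketch: compare the worst-case accuracy gap across demographic groups for the "same" model trained or deployed on two hardware platforms. This is an illustrative metric only, not the paper's actual methodology; the platform names and data below are hypothetical.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Worst-case accuracy gap across demographic groups.

    A gap that changes between hardware platforms (e.g. two GPU
    types) would indicate a hardware-induced fairness disparity.
    """
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return max(accs) - min(accs)

# Toy data: predictions from the "same" model on two hypothetical
# hardware platforms (names and values are illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
pred_hw_a = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # its one error hits group 1
pred_hw_b = np.array([1, 0, 0, 1, 0, 0, 1, 0])  # its one error hits group 0

print(group_accuracy_gap(y_true, pred_hw_a, groups))  # 0.25
print(group_accuracy_gap(y_true, pred_hw_b, groups))  # 0.25
```

Note that both platforms have the same overall error rate here, yet the error falls on different groups; aggregate accuracy alone would hide the hardware-dependent disparity.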
Related papers
- A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy.
We extract around 55 distinct features from industrial images, which are then analyzed using statistical methods.
By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z) - Explainable fault and severity classification for rolling element bearings using Kolmogorov-Arnold networks [4.46753539114796]
Bearing faults are a leading cause of machinery failures, and classifying them accurately and explainably remains challenging.
This study utilizes Kolmogorov-Arnold Networks to address these challenges.
It produces lightweight models that deliver explainable results.
arXiv Detail & Related papers (2024-12-02T09:40:03Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of real-time visual inference by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Enhancing Hardware Fault Tolerance in Machines with Reinforcement Learning Policy Gradient Algorithms [2.473948454680334]
Reinforcement learning-based robotic control offers a new perspective on achieving hardware fault tolerance.
This paper investigates the potential of two state-of-the-art reinforcement learning algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), for achieving hardware fault tolerance.
We show PPO exhibits the fastest adaptation when retaining the knowledge within its models, while SAC performs best when discarding all acquired knowledge.
arXiv Detail & Related papers (2024-07-21T22:24:16Z) - Fair Mixed Effects Support Vector Machine [0.0]
Fairness in machine learning aims to mitigate biases present in the training data and model imperfections.
This is achieved by preventing the model from making decisions based on sensitive characteristics like ethnicity or sexual orientation.
We present a fair mixed effects support vector machine algorithm that can handle both problems simultaneously.
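One standard way to quantify whether decisions correlate with a sensitive attribute, as described above, is the demographic parity difference: the gap in positive-prediction rates between groups. The sketch below illustrates that metric only; it is not the paper's mixed-effects SVM, and the data are toy values.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the
    two values of a binary sensitive attribute (0 = perfect parity)."""
    rate0 = np.mean(y_pred[sensitive == 0])
    rate1 = np.mean(y_pred[sensitive == 1])
    return abs(rate0 - rate1)

# Toy predictions and a hypothetical binary sensitive attribute.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, sensitive))  # 0.5
```

A fairness-constrained model would aim to drive this difference toward zero while retaining predictive accuracy.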
arXiv Detail & Related papers (2024-05-10T12:25:06Z) - The Grand Illusion: The Myth of Software Portability and Implications for ML Progress [4.855502010124377]
We conduct a large-scale study of the portability of mainstream ML frameworks across different hardware types.
We find that frameworks can lose more than 40% of their key functions when ported to other hardware.
Our results suggest that specialization of hardware impedes innovation in machine learning research.
arXiv Detail & Related papers (2023-09-12T22:11:55Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z) - Ising machines as hardware solvers of combinatorial optimization problems [1.8732539895890135]
Ising machines are hardware solvers which aim to find the absolute or approximate ground states of the Ising model.
A scalable Ising machine that outperforms existing standard digital computers could have a huge impact for practical applications.
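The ground state referred to above minimizes the standard Ising Hamiltonian, H(s) = -Σ_{i&lt;j} J_ij·s_i·s_j - Σ_i h_i·s_i with spins s_i ∈ {-1, +1}. A tiny brute-force sketch (illustrative only; the couplings below are arbitrary, and real Ising machines target instances far beyond brute-force reach):

```python
from itertools import product

import numpy as np

def ising_energy(spins, J, h):
    """Energy of a spin configuration under the Ising Hamiltonian
    H(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    # Use the strict upper triangle of J so each pair is counted once.
    coupling = -np.sum(np.triu(J, k=1) * np.outer(spins, spins))
    field = -np.dot(h, spins)
    return coupling + field

# Arbitrary 3-spin instance for illustration.
J = np.array([[0.0, 1.0, -0.5],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
h = np.array([0.1, -0.2, 0.0])

# Exhaustive search over all 2^3 spin configurations.
best = min(product([-1, 1], repeat=3),
           key=lambda s: ising_energy(np.array(s), J, h))
print(best, ising_energy(np.array(best), J, h))  # (-1, -1, -1) -1.4
```

The search space doubles with each added spin, which is why dedicated hardware that finds (approximate) ground states at scale would matter for practical combinatorial optimization.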
arXiv Detail & Related papers (2022-04-01T08:24:06Z) - HardVis: Visual Analytics to Handle Instance Hardness Using Undersampling and Oversampling Techniques [48.82319198853359]
HardVis is a visual analytics system designed to handle instance hardness mainly in imbalanced classification scenarios.
Users can explore subsets of the data from different perspectives to choose suitable undersampling and oversampling parameters.
The efficacy and effectiveness of HardVis are demonstrated with a hypothetical usage scenario and a use case.
arXiv Detail & Related papers (2022-03-29T17:04:16Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Tiny, always-on and fragile: Bias propagation through design choices in on-device machine learning workflows [8.690490406134339]
We study the propagation of bias through design choices in on-device machine learning development.
We identify complex and interacting technical design choices that can lead to disparate performance across user groups.
We leverage our insights to suggest strategies for developers to develop fairer on-device ML.
arXiv Detail & Related papers (2022-01-19T15:59:41Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.