Unraveling the Interconnected Axes of Heterogeneity in Machine Learning
for Democratic and Inclusive Advancements
- URL: http://arxiv.org/abs/2306.10043v1
- Date: Sun, 11 Jun 2023 20:47:58 GMT
- Title: Unraveling the Interconnected Axes of Heterogeneity in Machine Learning
for Democratic and Inclusive Advancements
- Authors: Maryam Molamohammadi, Afaf Taik, Nicolas Le Roux, Golnoosh Farnadi
- Abstract summary: We identify and analyze three axes of heterogeneity that significantly influence the trajectory of machine learning products.
We demonstrate how these axes are interdependent and mutually influence one another, emphasizing the need to consider and address them jointly.
We discuss how this fragmented study of the three axes poses a significant challenge, leading to an impractical solution space that fails to reflect real-world scenarios.
- Score: 16.514990457235932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing utilization of machine learning (ML) in decision-making processes
raises questions about its benefits to society. In this study, we identify and
analyze three axes of heterogeneity that significantly influence the trajectory
of ML products. These axes are i) values, culture and regulations, ii) data
composition, and iii) resource and infrastructure capacity. We demonstrate how
these axes are interdependent and mutually influence one another, emphasizing
the need to consider and address them jointly. Unfortunately, the current
research landscape falls short in this regard, often failing to adopt a
holistic approach. We examine the prevalent practices and methodologies that
skew these axes in favor of a select few, resulting in power concentration,
homogenized control, and increased dependency. We discuss how this fragmented
study of the three axes poses a significant challenge, leading to an
impractical solution space that fails to reflect real-world scenarios.
Addressing these issues is crucial to ensure a more comprehensive understanding
of the interconnected nature of society and to foster the democratic and
inclusive development of ML systems that are better aligned with real-world
complexities and their diverse requirements.
Related papers
- MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset [50.36095192314595]
Whether Large Language Models (LLMs) can function as agents with generalizable reasoning capabilities remains underexplored, owing to the complexity of modeling the infinite possible changes in an event.
We introduce MARS, the first benchmark of its kind, comprising three tasks, one for each step of the underlying reasoning process.
arXiv Detail & Related papers (2024-06-04T08:35:04Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces a latent class analysis-based approach to quantify cross-sectoral intersecting discrepancies; a minimal sketch of the underlying technique follows below.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
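As a rough illustration of the underlying technique (not the paper's model or data), the sketch below fits a binary latent class model with EM; the recovered class memberships could then be compared across groups. All names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_latent_classes(X, n_classes=3, n_iter=200, seed=0):
    """EM for a latent class model over binary indicators.

    X: (n_samples, n_items) array of 0/1 responses.
    Returns class priors pi (K,) and item probabilities theta (K, n_items).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class priors
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] ∝ pi_k * prod_j theta^x * (1 - theta)^(1 - x)
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_r = np.log(pi) + log_lik
        log_r -= log_r.max(axis=1, keepdims=True)          # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update priors and per-class item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta

# Example: cluster synthetic binary survey responses into latent classes;
# class membership rates could then be compared across demographic groups.
X = (np.random.default_rng(1).random((500, 8)) < 0.5).astype(float)
pi, theta = fit_latent_classes(X)
```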
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- Advances in Robust Federated Learning: Heterogeneity Considerations [25.261572089655264]
The key challenge is to efficiently train models across multiple clients with different data distributions, model structures, task objectives, computational capabilities, and communication resources.
In this paper, we first outline the basic concepts of heterogeneous federated learning.
We then summarize the research challenges in federated learning along five aspects: data, model, task, device, and communication (a minimal baseline sketch follows).
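As context for these challenges, the sketch below shows the sample-weighted FedAvg aggregation that heterogeneity-aware methods typically extend. It is a minimal illustration, not an algorithm from the survey, and all names are ours.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-weighted average of client model parameters (FedAvg).

    client_weights: list of dicts mapping layer name -> np.ndarray.
    client_sizes:   number of local training examples per client, used as
                    aggregation weights (one source of data heterogeneity).
    """
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Example: three clients with unbalanced local datasets.
clients = [{"linear.w": np.ones((2, 2)) * i} for i in (1.0, 2.0, 3.0)]
sizes = [10, 30, 60]  # skewed local dataset sizes
print(fedavg(clients, sizes)["linear.w"])  # weighted toward the larger clients
```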
arXiv Detail & Related papers (2024-05-16T06:35:42Z)
- SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning [0.0]
Machine learning techniques are increasingly used for high-stakes decision-making.
It is crucial to ensure that the models learnt can be audited or understood by human users.
Interpretability, fairness and privacy are key requirements for the development of responsible machine learning.
arXiv Detail & Related papers (2023-12-22T08:11:33Z)
- Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms (the observed disparity in question is sketched below).
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
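The observed disparity that such a framework decomposes can be written as the total variation below; this is a sketch in common notation, which may differ from the paper's exact symbols.

```latex
% Observed (associational) disparity between groups X = x_0 and X = x_1:
\mathrm{TV}_{x_0, x_1}(y) \;=\; P(Y = y \mid X = x_1) \;-\; P(Y = y \mid X = x_0)
% Causal fairness analysis attributes this single observed number to
% underlying mechanisms (e.g., direct, indirect, and confounded paths
% from X to Y), rather than treating TV = 0 (demographic parity) as
% the end of the analysis.
```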
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Which Mutual-Information Representation Learning Objectives are Sufficient for Control? [80.2534918595143]
Mutual information provides an appealing formalism for learning representations of data.
This paper formalizes the sufficiency of a state representation for learning and representing the optimal policy.
Surprisingly, we find that two commonly used mutual-information objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP (the underlying quantity is defined below).
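For reference, the quantity such objectives maximize is the standard mutual information below; this is a generic definition, not the paper's specific objectives or notation.

```latex
% Mutual information between inputs X and a learned representation
% Z = \phi(X); representation objectives maximize (bounds on) I(X; Z):
I(X; Z) \;=\; \mathbb{E}_{p(x, z)}\!\left[ \log \frac{p(x, z)}{p(x)\, p(z)} \right]
% Sufficiency for control asks more: a policy computed from Z should
% achieve the same optimal return as one computed from the full state,
% and a high I(X; Z) alone does not guarantee this.
```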
arXiv Detail & Related papers (2021-06-14T10:12:34Z)
- An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings [13.037143215464132]
We investigate the performance of several methods that operate at different points in the machine learning pipeline across four real-world public policy and social good problems.
We find wide variability and inconsistency in how well many of these methods improve model fairness, but post-processing with group-specific score thresholds consistently removes disparities (a minimal sketch follows).
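The referenced post-processing idea is easy to illustrate: pick a separate score threshold per group so that selection rates match. Below is a minimal sketch assuming equalized selection rates as the target; all names are illustrative, not the paper's code, and other targets (e.g., equal true-positive rates) would swap in a different per-group statistic.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate=0.2):
    """Per-group score thresholds that equalize selection rates.

    scores: model scores in [0, 1]; groups: group label per example.
    Each group's threshold is the (1 - target_rate) quantile of its
    own scores, so roughly target_rate of each group is selected.
    """
    return {
        g: np.quantile(scores[groups == g], 1.0 - target_rate)
        for g in np.unique(groups)
    }

def apply_thresholds(scores, groups, thresholds):
    """Binary decisions using each example's group-specific threshold."""
    cut = np.array([thresholds[g] for g in groups])
    return (scores >= cut).astype(int)

# Example: group B's scores are shifted lower, so one global threshold
# would select it less often; per-group thresholds equalize the rates.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(4, 2, 500), rng.beta(2, 4, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
decisions = apply_thresholds(scores, groups, group_thresholds(scores, groups))
for g in ("A", "B"):
    print(g, decisions[groups == g].mean())  # ~0.2 for both groups
```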
arXiv Detail & Related papers (2021-05-13T17:33:28Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Heterogeneous Representation Learning: A Review [66.12816399765296]
Heterogeneous Representation Learning (HRL) brings some unique challenges.
We present a unified learning framework which is able to model most existing learning settings with the heterogeneous inputs.
We highlight challenges that remain underexplored in HRL and present future research directions.
arXiv Detail & Related papers (2020-04-28T05:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.