Intersectional Inquiry, on the Ground and in the Algorithm
- URL: http://arxiv.org/abs/2308.15668v1
- Date: Tue, 29 Aug 2023 23:43:58 GMT
- Title: Intersectional Inquiry, on the Ground and in the Algorithm
- Authors: Shanthi Robertson, Liam Magee, and Karen Soldatić
- Abstract summary: We argue that methods in this field must account for intersections of social difference, such as race, class, ethnicity, culture, and disability.
We consider the complexities of bringing together computational and qualitative methods in an intersectional methodological approach.
- Score: 1.0923877073891446
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This article makes two key contributions to methodological debates in
automation research. First, we argue for and demonstrate how methods in this
field must account for intersections of social difference, such as race, class,
ethnicity, culture, and disability, in more nuanced ways. Second, we consider
the complexities of bringing together computational and qualitative methods in
an intersectional methodological approach, while also arguing that, in their
respective subjects (machines and human subjects) and conceptual scope, they
enable a specific dialogue on intersectionality and automation to be
articulated. We draw on field reflections from a project that combines an
analysis of intersectional bias in language models with findings from a
community workshop on the frustrations and aspirations produced through
engagement with everyday AI-driven technologies in the context of care.
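As a loose illustration of the article's computational strand, the sketch below shows one way intersectional associations might be probed in a masked language model using the Hugging Face transformers library. The model, template, and identity descriptors are illustrative assumptions, not the project's actual instruments, and a probe like this captures only a narrow, surface-level signal of the biases the article discusses.

```python
# A hypothetical probe of intersectional associations in a masked language
# model; the model, template, and descriptors below are illustrative choices,
# not the instruments used in the article.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Placeholder identity descriptors at an intersection of migration status
# and disability.
descriptors = [
    "young",
    "young disabled",
    "migrant",
    "migrant disabled",
]

template = "The {} woman asking about the care service seemed very [MASK]."

for d in descriptors:
    # Top predicted completions for each descriptor; systematic differences
    # across descriptors hint at associations the model has learned.
    predictions = fill(template.format(d), top_k=5)
    tokens = [p["token_str"] for p in predictions]
    print(f"{d:>20}: {tokens}")
```

Differences in predicted completions across descriptors give only a rough starting point; the article combines its own analysis of intersectional bias in language models with qualitative workshop findings, which a probe of this kind cannot substitute for.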
Related papers
- Argumentation and Machine Learning [4.064849471241967]
This chapter provides an overview of research works that present approaches with some degree of cross-fertilisation between Computational Argumentation and Machine Learning.
Two broad themes representing the purpose of the interaction between these two areas were identified.
We evaluate the spectrum of works across various dimensions, including the type of learning and the form of argumentation framework used.
arXiv Detail & Related papers (2024-10-31T08:19:58Z) - Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We present a systematic review of over 400 papers published between 2019 and January 2024, spanning domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics [0.31378963995109616]
The integration of Large Language Models (LLMs) in social robotics presents a unique set of ethical challenges and social impacts.
This research sets out to identify ethical considerations that arise in the design and development of these two technologies in combination.
arXiv Detail & Related papers (2024-06-10T15:53:50Z) - Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Theme and Topic: How Qualitative Research and Topic Modeling Can Be Brought Together [5.862480696321741]
Probabilistic topic modelling is a machine learning approach that, like qualitative thematic analysis, is based on the analysis of text.
We use this analogy as the basis for our Theme and Topic system.
This is an example of a more general approach to the design of interactive machine learning systems.
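For readers unfamiliar with the technique, the sketch below is a minimal illustration of probabilistic topic modelling (here, latent Dirichlet allocation via scikit-learn) on a toy set of free-text responses; the corpus, parameters, and two-topic choice are placeholders and do not reproduce the Theme and Topic system itself.

```python
# A minimal sketch of probabilistic topic modelling (LDA) on a toy corpus of
# free-text responses; documents and settings are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the care worker helped me book appointments with the voice assistant",
    "the chatbot could not understand my accent or my questions",
    "automated forms made it harder to access disability support services",
    "my family uses translation apps to talk with support staff",
]

# Bag-of-words counts, then a two-topic LDA fit on the toy corpus.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the highest-weight words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top_words}")
```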
arXiv Detail & Related papers (2022-10-03T04:21:08Z) - Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z) - Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is an approach that explores the interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z) - AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks [2.765897573789737]
We track the emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML) and Human-in-the-Loop (HIL) Autonomy.
We show that for each subfield, perceptions of Public Interest Technology (PIT) stem from the particular dangers faced by past integration of technical systems within a normative social order.
We present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.
arXiv Detail & Related papers (2021-02-04T18:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.