The Method of Critical AI Studies, A Propaedeutic
- URL: http://arxiv.org/abs/2411.18833v2
- Date: Tue, 10 Dec 2024 19:11:52 GMT
- Title: The Method of Critical AI Studies, A Propaedeutic
- Authors: Fabian Offert, Ranjodh Singh Dhaliwal
- Score: 1.9567015559455132
- Abstract: We outline some common methodological issues in the field of critical AI studies, including a tendency to overestimate the explanatory power of individual samples (the benchmark casuistry), a dependency on theoretical frameworks derived from earlier conceptualizations of computation (the black box casuistry), and a preoccupation with a cause-and-effect model of algorithmic harm (the stack casuistry). In the face of these issues, we call for, and point towards, a future set of methodologies that might take into account existing strengths in the humanistic close analysis of cultural objects.
Related papers
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework for systematically addressing these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z)
- Socio-Economic Consequences of Generative AI: A Review of Methodological Approaches [0.0]
We identify the primary methodologies that may be used to help predict the economic and social impacts of generative AI adoption.
Through a comprehensive literature review, we uncover a range of methodologies poised to assess the multifaceted impacts of this technological revolution.
arXiv Detail & Related papers (2024-11-14T09:40:25Z)
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138]
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
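As a concrete illustration of the single-forward-pass idea, below is a minimal sketch of the subjective-logic computation at the heart of EDL (network evidence mapped to Dirichlet parameters); the `logits` values are hypothetical stand-ins, not outputs of any model discussed in the survey.

```python
import numpy as np

def edl_uncertainty(logits: np.ndarray):
    """Map raw logits to expected class probabilities plus an uncertainty mass."""
    evidence = np.maximum(logits, 0.0)  # non-negative evidence, e.g. via ReLU
    alpha = evidence + 1.0              # Dirichlet concentration parameters
    strength = alpha.sum()              # total Dirichlet strength S
    k = alpha.shape[0]                  # number of classes
    probs = alpha / strength            # expected class probabilities
    uncertainty = k / strength          # vacuity: shrinks as total evidence grows
    return probs, uncertainty

# Hypothetical logits for a 3-class problem: strong evidence for class 0.
probs, u = edl_uncertainty(np.array([10.0, 1.0, 0.0]))
print(probs, u)  # -> [0.786 0.143 0.071], uncertainty mass ~0.21
```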
arXiv Detail & Related papers (2024-09-07T05:55:06Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA)
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence centers on model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- Reconciling Heterogeneous Effects in Causal Inference [44.99833362998488]
We apply the Reconcile algorithm for model multiplicity in machine learning to reconcile heterogeneous effects in causal inference.
Our results have tangible implications for ensuring fair outcomes in high-stakes domains such as healthcare, insurance, and housing.
arXiv Detail & Related papers (2024-06-05T18:43:46Z)
- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution [0.8057006406834466]
Neural networks have demonstrated a remarkable ability to discern intricate patterns and relationships in raw data.
Understanding the inner workings of these black-box models remains challenging, yet crucial for high-stakes decisions.
Our work addresses this disagreement by investigating the fundamental and distributional behavior of the explanations.
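To make the disagreement problem concrete, here is a minimal sketch comparing two hypothetical attribution vectors for the same input and model (stand-ins for, say, a gradient-based and a perturbation-based explainer); the numbers are illustrative assumptions, not results from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

attr_a = np.array([0.9, 0.1, 0.5, 0.05])  # method A's feature importances
attr_b = np.array([0.2, 0.8, 0.4, 0.6])   # method B's, same input and model

rho, _ = spearmanr(attr_a, attr_b)         # rank agreement between explanations
top_a = set(np.argsort(-attr_a)[:2])       # top-2 features according to A
top_b = set(np.argsort(-attr_b)[:2])       # top-2 features according to B
print(f"rank correlation: {rho:.2f}, top-2 overlap: {len(top_a & top_b)}/2")
# -> rank correlation: -0.80, top-2 overlap: 0/2 -- the methods disagree sharply
```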
arXiv Detail & Related papers (2024-04-17T12:45:59Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough and encode the structural constraints necessary for counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
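For intuition, a minimal sketch of counterfactual evaluation via Pearl's abduction-action-prediction steps on a hand-written linear SCM; the toy mechanism below is an assumption for illustration, whereas NCMs replace such hand-written equations with learned neural ones.

```python
# Toy SCM: X := U_x,  Y := 2*X + U_y
def counterfactual_y(x_obs: float, y_obs: float, x_cf: float) -> float:
    u_y = y_obs - 2.0 * x_obs  # abduction: recover the exogenous noise from the observation
    # action: intervene do(X = x_cf); prediction: recompute Y under the same noise
    return 2.0 * x_cf + u_y

# "Having observed X=1, Y=2.5, what would Y have been had X been 0?"
print(counterfactual_y(x_obs=1.0, y_obs=2.5, x_cf=0.0))  # -> 0.5
```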
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Do Abstractions Have Politics? Toward a More Critical Algorithm Analysis [19.08810272234958]
We argue for affordance analysis, a more critical algorithm analysis based on an affordance account of value embedding.
We present five case studies illustrating how affordance analysis refutes the social determination of technology.
arXiv Detail & Related papers (2021-01-04T05:59:26Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Thresholds of descending algorithms in inference problems [4.594159253008448]
We review recent works on analyzing the dynamics of gradient-based algorithms in a statistical inference problem.
Using methods and insights from the physics of glassy systems, these works show how to characterize, both quantitatively and qualitatively, the performance of gradient-based algorithms.
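For a flavor of this setting, here is a minimal sketch, assuming a standard spiked-matrix toy problem, of a gradient-style ascent on the sphere whose success in recovering the hidden signal depends on the signal-to-noise ratio; the sizes and the `snr` value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr, lr, steps = 200, 3.0, 0.01, 500
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                            # hidden unit-norm spike
noise = rng.standard_normal((n, n))
noise = (noise + noise.T) / np.sqrt(2 * n)        # symmetric GOE-like noise
Y = snr * np.outer(v, v) + noise                  # observed matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                            # random initialization
for _ in range(steps):
    x += lr * (Y @ x)                             # ascend the objective x^T Y x
    x /= np.linalg.norm(x)                        # project back onto the sphere

print(f"overlap with hidden signal: {abs(v @ x):.2f}")  # large above the snr threshold
```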
arXiv Detail & Related papers (2020-01-02T15:08:40Z)