Measurement as governance in and for responsible AI
- URL: http://arxiv.org/abs/2109.05658v1
- Date: Mon, 13 Sep 2021 01:04:22 GMT
- Title: Measurement as governance in and for responsible AI
- Authors: Abigail Z. Jacobs
- Abstract summary: Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems.
We use the language of measurement to uncover hidden governance decisions.
We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Measurement of social phenomena is everywhere, unavoidably, in sociotechnical
systems. This is not (only) an academic point: Fairness-related harms emerge
when there is a mismatch in the measurement process between the thing we
purport to be measuring and the thing we actually measure. However, the
measurement process -- where social, cultural, and political values are
implicitly encoded in sociotechnical systems -- is almost always obscured.
Furthermore, this obscured process is where important governance decisions are
encoded: governance about which systems are fair, which individuals belong in
which categories, and so on. We can then use the language of measurement, and
the tools of construct validity and reliability, to uncover hidden governance
decisions. In particular, we highlight two types of construct validity, content
validity and consequential validity, that are useful to elicit and characterize
the feedback loops between the measurement, social construction, and
enforcement of social categories. We then explore the constructs of fairness,
robustness, and responsibility in the context of governance in and for
responsible AI. Together, these perspectives help us unpack how measurement
acts as a hidden governance process in sociotechnical systems. Understanding
measurement as governance supports a richer understanding of the governance
processes already happening in AI -- responsible or otherwise -- revealing
paths to more effective interventions.
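To make the abstract's central claim concrete, here is a minimal, hypothetical sketch (not from the paper) of how a mismatch between the construct we purport to measure and the proxy we actually measure can produce fairness-related harms. The group labels, the measurement offset, and the decision threshold below are all illustrative assumptions.
```python
# A toy simulation, assuming a single-threshold decision rule:
# the construct is identically distributed across groups, but the
# measurement process under-reports it for one group. The identical
# rule then yields unequal error rates.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved construct: the thing we purport to measure (e.g., "qualification").
group = rng.integers(0, 2, size=n)           # two equally sized groups, 0 and 1
construct = rng.normal(0.0, 1.0, size=n)     # identically distributed in both groups

# Observed proxy: the thing we actually measure. We assume the measurement
# process under-reports the construct for group 1 by a constant offset.
noise = rng.normal(0.0, 0.5, size=n)
proxy = construct - 0.5 * (group == 1) + noise

# A threshold on the proxy encodes a hidden governance decision:
# who counts as "above the bar".
decision = proxy > 0.0
deserving = construct > 0.0                  # ground truth under the construct

for g in (0, 1):
    mask = (group == g) & deserving
    fnr = 1.0 - decision[mask].mean()        # qualified people wrongly rejected
    print(f"group {g}: false negative rate = {fnr:.2%}")
```
Running this prints a markedly higher false negative rate for group 1 even though the construct is distributed identically in both groups: the harm lives in the measurement process, not in the decision rule itself.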
Related papers
- Evaluating Generative AI Systems is a Social Science Measurement Challenge [78.35388859345056]
We present a framework for measuring concepts related to the capabilities, impacts, opportunities, and risks of GenAI systems.
The framework distinguishes between four levels: the background concept, the systematized concept, the measurement instrument(s), and the instance-level measurements themselves.
arXiv Detail & Related papers (2024-11-17T02:35:30Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, how they act, and the effects they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
- Multiscale Governance [0.0]
Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
arXiv Detail & Related papers (2021-04-06T19:23:44Z)
- Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems [1.0152838128195467]
Traceability requires establishing not only how a system worked but how it was created and for what purpose.
Traceability connects records of how the system was constructed and what the system did mechanically to the broader goals of governance.
This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals.
arXiv Detail & Related papers (2021-01-23T00:13:20Z)
- Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems [32.79201607581628]
Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains.
We argue that since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems.
arXiv Detail & Related papers (2020-07-04T23:00:52Z)
- Steps Towards Value-Aligned Systems [0.0]
Algorithmic (including AI/ML) decision-making artifacts are an established and growing part of our decision-making ecosystem.
Current literature is full of examples of how individual artifacts violate societal norms and expectations.
This discussion argues for a more structured systems-level approach for assessing value-alignment in sociotechnical systems.
arXiv Detail & Related papers (2020-02-10T22:47:30Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences arising from its use.