From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap
- URL: http://arxiv.org/abs/2404.13131v1
- Date: Fri, 19 Apr 2024 18:36:14 GMT
- Title: From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap
- Authors: Tianqi Kou
- Abstract summary: Two goals - improving the replicability and the accountability of Machine Learning research - have attracted much attention.
This paper posits that reconceptualizing replicability can help bridge the Responsibility Gap.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two goals - improving the replicability and the accountability of Machine Learning research - have attracted much attention from the AI ethics and Machine Learning communities. Despite sharing the measure of improving transparency, the two goals are discussed in different registers - replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - holding Machine Learning scientists accountable for Machine Learning harms when they are far from the sites of application - this paper posits that reconceptualizing replicability can help bridge the gap. Through a shift from model performance replicability to claim replicability, Machine Learning scientists can be held accountable for producing non-replicable claims that are prone to eliciting harm through misuse and misinterpretation. This paper makes the following contributions. First, I define and distinguish two forms of replicability for ML research that can aid constructive conversations around replicability. Second, I argue for claim replicability's advantage over model performance replicability in justifying the assignment of accountability to Machine Learning scientists for producing non-replicable claims, and show how it enacts a sense of responsibility that is actionable. Third, I characterize the implementation of claim replicability as more of a social project than a technical one by discussing its competing epistemological principles and its practical implications for Circulating Reference, Interpretative Labor, and research communication.
Related papers
- Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning [54.69189620971405]
We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning.
IEM provides new insights that let us relax the necessary conditions for causal structure identification in exchangeable non-i.i.d. data.
We also demonstrate the existence of a duality condition in identifiable representation learning, leading to new identifiability results.
arXiv Detail & Related papers (2024-06-20T13:30:25Z)
- A Theoretical Understanding of Self-Correction through In-context Alignment [51.622068973630796]
Large language models (LLMs) are capable of improving their abilities purely by self-correction.
We show that when LLMs give relatively accurate self-examinations as rewards, they are capable of refining responses in an in-context way.
Inspired by these findings, we also illustrate applications of self-correction, such as defending against LLM jailbreaks.
arXiv Detail & Related papers (2024-05-28T22:33:02Z)
- What is Reproducibility in Artificial Intelligence and Machine Learning Research? [0.7373617024876725]
We introduce a validation framework that clarifies the roles and definitions of key validation efforts.
This structured framework aims to provide AI/ML researchers with the necessary clarity on these essential concepts.
arXiv Detail & Related papers (2024-04-29T18:51:20Z)
- Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain-of-thought finetuning (cot-finetuning) aims to endow small language models (SLMs) with reasoning ability to improve their performance on specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables the SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in the rationale.
We propose a robust post-thinking mechanism to generate answers before rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z)
- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection [74.51523859064802]
We introduce a new framework called Self-Reflective Retrieval-Augmented Generation (Self-RAG)
Self-RAG enhances an LM's quality and factuality through retrieval and self-reflection.
It significantly outperforms state-of-the-art LLMs and retrieval-augmented models on a diverse set of tasks.
arXiv Detail & Related papers (2023-10-17T18:18:32Z)
- Replicable Reinforcement Learning [15.857503103543308]
We provide a provably replicable algorithm for parallel value iteration, and a provably replicable version of R-max in the episodic setting.
These are the first formal replicability results for control problems, which present different challenges for replication than batch learning settings.
arXiv Detail & Related papers (2023-05-24T16:05:15Z)
- Defining Replicability of Prediction Rules [2.4366811507669124]
I propose an approach for defining replicability for prediction rules.
I focus specifically on the meaning of "consistent results" in typical utilization contexts.
arXiv Detail & Related papers (2023-04-30T13:27:55Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Abduction and Argumentation for Explainable Machine Learning: A Position Survey [2.28438857884398]
This paper presents Abduction and Argumentation as two principled forms for reasoning.
It fleshes out the fundamental role that they can play within Machine Learning.
arXiv Detail & Related papers (2020-10-24T13:23:44Z)
- Social Commonsense Reasoning with Multi-Head Knowledge Attention [24.70946979449572]
Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell.
arXiv Detail & Related papers (2020-10-12T10:24:40Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is understanding how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.