Accountable and Explainable Methods for Complex Reasoning over Text
- URL: http://arxiv.org/abs/2211.04946v1
- Date: Wed, 9 Nov 2022 15:14:52 GMT
- Title: Accountable and Explainable Methods for Complex Reasoning over Text
- Authors: Pepa Atanasova
- Abstract summary: Accountability and transparency of Machine Learning models have been posed as critical desiderata by works in policy and law, philosophy, and computer science.
This thesis expands our collective knowledge in the areas of accountability and transparency of ML models developed for complex reasoning tasks over text.
- Score: 5.571369922847262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A major concern of Machine Learning (ML) models is their opacity. They are
deployed in an increasing number of applications where they often operate as
black boxes that do not provide explanations for their predictions. Among
others, the potential harms associated with the lack of understanding of the
models' rationales include privacy violations, adversarial manipulations, and
unfair discrimination. As a result, the accountability and transparency of ML
models have been posed as critical desiderata by works in policy and law,
philosophy, and computer science.
In computer science, the decision-making process of ML models has been
studied by developing accountability and transparency methods. Accountability
methods, such as adversarial attacks and diagnostic datasets, expose
vulnerabilities of ML models that could lead to malicious manipulations or
systematic faults in their predictions. Transparency methods explain the
rationales behind models' predictions, gaining the trust of relevant
stakeholders and potentially uncovering mistakes and unfairness in models'
decisions. To this end, transparency methods have to meet accountability
requirements as well, e.g., being robust and faithful to the underlying
rationales of a model.
This thesis presents my research that expands our collective knowledge in the
areas of accountability and transparency of ML models developed for complex
reasoning tasks over text.
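A common way to make the faithfulness requirement above concrete is an erasure-based check: delete the tokens an explanation marks as most important and measure how much the model's predicted probability drops. The sketch below is a generic, model-agnostic version of that idea; the `predict_proba` interface and all names are illustrative assumptions, not methods from the thesis.

```python
# Minimal sketch of an erasure-based faithfulness ("comprehensiveness") check.
# Assumes a generic text classifier exposing predict_proba(list_of_texts);
# everything here is a placeholder, not the thesis's models or metrics.
from typing import Callable, List, Sequence


def comprehensiveness(
    predict_proba: Callable[[List[str]], List[List[float]]],
    tokens: Sequence[str],
    saliency: Sequence[float],   # one importance score per token
    target_class: int,
    top_k: int = 3,
) -> float:
    """Probability drop after erasing the top-k salient tokens.

    Larger drops suggest the explanation highlights tokens the model
    actually relies on, i.e. the explanation is more faithful.
    """
    p_full = predict_proba([" ".join(tokens)])[0][target_class]

    # Indices of the k tokens the explanation deems most important.
    top = set(sorted(range(len(tokens)), key=lambda i: saliency[i], reverse=True)[:top_k])
    reduced = " ".join(t for i, t in enumerate(tokens) if i not in top)
    p_reduced = predict_proba([reduced])[0][target_class]

    return p_full - p_reduced
```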
Related papers
- Privacy Implications of Explainable AI in Data-Driven Systems [0.0]
Machine learning (ML) models suffer from a lack of interpretability.
The absence of transparency, often referred to as the black box nature of ML models, undermines trust.
XAI techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes.
arXiv Detail & Related papers (2024-06-22T08:51:58Z)
- Machine Learning Robustness: A Primer [12.426425119438846]
The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions.
The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines.
The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation.
arXiv Detail & Related papers (2024-04-01T03:49:42Z)
- Multimodal Large Language Models to Support Real-World Fact-Checking [80.41047725487645]
Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information.
While MLLMs are already being used as fact-checking tools, their abilities and limitations in this regard are understudied.
We propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking.
arXiv Detail & Related papers (2024-03-06T11:32:41Z)
- Large Language Model-Based Interpretable Machine Learning Control in Building Energy Systems [3.0309252269809264]
This paper investigates Interpretable Machine Learning (IML), a branch of Machine Learning (ML) that enhances transparency and understanding of models and their inferences.
We develop an innovative framework that combines the principles of Shapley values and the in-context learning feature of Large Language Models (LLMs).
The paper presents a case study to demonstrate the feasibility of the developed IML framework for model predictive control-based precooling under demand response events in a virtual testbed.
arXiv Detail & Related papers (2024-02-14T21:19:33Z)
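One plausible reading of the Shapley-plus-in-context-learning combination described above is: estimate per-feature attributions for a prediction, then hand them to an LLM to verbalize for an operator. The sketch below follows that reading with a simple Monte Carlo permutation estimator and a placeholder `llm` callable; it is an assumption-laden illustration, not the paper's actual framework.

```python
# Sketch: pair Shapley-style attributions with an LLM prompt.
# The permutation estimator and the `llm` callable are illustrative
# placeholders; they do not reproduce the cited paper's framework.
import random
from typing import Callable, Dict


def shapley_estimate(
    value_fn: Callable[[Dict[str, float]], float],   # e.g. the model's prediction
    features: Dict[str, float],                      # instance to explain
    baseline: Dict[str, float],                      # reference input
    n_samples: int = 200,
) -> Dict[str, float]:
    """Monte Carlo permutation estimate of per-feature Shapley values."""
    names = list(features)
    phi = {name: 0.0 for name in names}
    for _ in range(n_samples):
        random.shuffle(names)
        current = dict(baseline)
        prev = value_fn(current)
        for name in names:
            current[name] = features[name]           # reveal one feature at a time
            now = value_fn(current)
            phi[name] += (now - prev) / n_samples    # average marginal contribution
            prev = now
    return phi


def explain_with_llm(llm: Callable[[str], str], phi: Dict[str, float], prediction: float) -> str:
    """Turn attributions into an in-context prompt for a plain-language explanation."""
    ranked = sorted(phi.items(), key=lambda kv: -abs(kv[1]))
    lines = "\n".join(f"- {name}: {value:+.3f}" for name, value in ranked)
    prompt = (
        f"A model predicted {prediction:.2f}. Feature contributions (Shapley values):\n"
        f"{lines}\nExplain this prediction to a building operator in two sentences."
    )
    return llm(prompt)
```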
- Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations [0.1398098625978622]
One of the major barriers to widespread acceptance of machine learning (ML) is trustworthiness.
Most ML models operate as black boxes, their inner workings opaque and mysterious, and it can be difficult to trust their conclusions without understanding how those conclusions are reached.
We propose SMILE, a new method that builds on previous approaches by making use of statistical distance measures to improve explainability.
arXiv Detail & Related papers (2023-11-13T12:28:00Z)
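SMILE is described above as a local-explanation method that leans on statistical distance measures. The LIME-style sketch below makes the distance used for sample weighting pluggable; the Gaussian perturbations, kernel form, and default Euclidean distance are simple placeholders, not the paper's actual formulation.

```python
# LIME-style local surrogate sketch with a pluggable sample-weighting distance.
# SMILE's actual formulation is more involved; the perturbation scheme, kernel,
# and the Euclidean default below are placeholders for illustration only.
from typing import Callable

import numpy as np


def local_surrogate(
    predict: Callable[[np.ndarray], np.ndarray],   # black box: (n, d) -> (n,) scores
    x: np.ndarray,                                 # instance to explain, shape (d,)
    distance: Callable[[np.ndarray, np.ndarray], float],
    n_samples: int = 500,
    scale: float = 0.1,
    kernel_width: float = 0.75,
) -> np.ndarray:
    """Fit a distance-weighted linear model around x; returns per-feature weights."""
    rng = np.random.default_rng(0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))   # local perturbations
    y = predict(Z)
    d = np.array([distance(x, z) for z in Z])
    w = np.exp(-(d ** 2) / kernel_width ** 2)                  # nearer samples weigh more

    # Weighted least squares with a bias column.
    Z1 = np.hstack([Z, np.ones((n_samples, 1))])
    WZ = Z1 * w[:, None]
    beta = np.linalg.lstsq(WZ.T @ Z1, WZ.T @ y, rcond=None)[0]
    return beta[:-1]                                           # local feature importances


# Placeholder distance; a statistical distance (e.g. Wasserstein) would be swapped in.
def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))
```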
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
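The aspects named in the entry above (plausibility, change intensity, adversarial power) can be read as competing objectives over candidate counterfactuals. The toy sketch below scores candidates on three stand-in objectives and keeps the non-dominated (Pareto) set; it does not reproduce the paper's multi-objective optimizer, and the `target_proba` and `density` callables are assumptions.

```python
# Toy sketch: score counterfactual candidates on three stand-in objectives and
# keep the Pareto front. The cited framework runs a full multi-objective
# optimizer; this only illustrates the trade-off structure.
from typing import Callable, List

import numpy as np


def objectives(
    x: np.ndarray,
    cf: np.ndarray,
    target_proba: Callable[[np.ndarray], float],   # model's probability of the flipped class
    density: Callable[[np.ndarray], float],        # data-density estimate (plausibility)
) -> np.ndarray:
    """All three objectives are oriented so that larger is better."""
    return np.array([
        target_proba(cf),              # adversarial power
        density(cf),                   # plausibility
        -float(np.abs(x - cf).sum()),  # negative change intensity (L1)
    ])


def pareto_front(
    x: np.ndarray,
    candidates: List[np.ndarray],
    target_proba: Callable[[np.ndarray], float],
    density: Callable[[np.ndarray], float],
) -> List[np.ndarray]:
    """Keep candidates that are not dominated on all three objectives."""
    scores = [objectives(x, cf, target_proba, density) for cf in candidates]
    front = []
    for i, si in enumerate(scores):
        dominated = any(
            np.all(sj >= si) and np.any(sj > si)
            for j, sj in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front
```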
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
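As a minimal, generic illustration of estimating and acting on predictive uncertainty (one option among many, not something prescribed by the review above): average an ensemble's class probabilities, compute the predictive entropy, and abstain when it is high.

```python
# Sketch: ensemble-based predictive uncertainty with an abstention rule.
# The ensemble members and the threshold are placeholders for illustration.
from typing import Callable, List, Optional, Tuple

import numpy as np


def predict_with_uncertainty(
    ensemble: List[Callable[[np.ndarray], np.ndarray]],  # each returns class probabilities
    x: np.ndarray,
    entropy_threshold: float = 0.5,
) -> Tuple[Optional[int], float]:
    probs = np.mean([model(x) for model in ensemble], axis=0)   # averaged probabilities
    entropy = float(-np.sum(probs * np.log(probs + 1e-12)))     # predictive entropy
    decision = int(np.argmax(probs)) if entropy < entropy_threshold else None
    return decision, entropy   # None signals "abstain and escalate to a human"
```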
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model for new tasks.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
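The query-only estimation idea behind BAR (the last entry above) can be sketched as tuning a universal input perturbation with randomized finite differences over the black-box model's outputs. The update below is a simplification under assumed interfaces; it omits BAR's multi-label mapping and does not claim to match the paper's exact estimator.

```python
# Sketch of a zeroth-order (gradient-free) update for a reprogramming perturbation.
# `loss` is assumed to query the black-box model internally and return a scalar;
# hyperparameters and the estimator are illustrative, not BAR's exact recipe.
from typing import Callable

import numpy as np


def zeroth_order_step(
    loss: Callable[[np.ndarray], float],
    program: np.ndarray,          # current perturbation added to every input
    n_directions: int = 20,
    smoothing: float = 0.01,
    lr: float = 0.1,
) -> np.ndarray:
    """One update of the program using randomized forward differences."""
    grad_est = np.zeros_like(program)
    base = loss(program)
    for _ in range(n_directions):
        u = np.random.randn(*program.shape)
        u /= np.linalg.norm(u) + 1e-12
        # Forward difference along a random direction approximates the
        # directional derivative; averaging over directions estimates the gradient.
        grad_est += (loss(program + smoothing * u) - base) / smoothing * u
    grad_est /= n_directions
    return program - lr * grad_est   # descend the estimated gradient
```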
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.