LioNets: A Neural-Specific Local Interpretation Technique Exploiting
Penultimate Layer Information
- URL: http://arxiv.org/abs/2104.06057v1
- Date: Tue, 13 Apr 2021 09:39:33 GMT
- Title: LioNets: A Neural-Specific Local Interpretation Technique Exploiting
Penultimate Layer Information
- Authors: Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas
- Abstract summary: Interpretable machine learning (IML) is an urgent topic of research.
This paper presents a local, neural-specific interpretation process applied to textual and time-series data.
- Score: 6.570220157893279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) has had a tremendous impact on the rapid
growth of technology in almost every domain. AI-powered systems now monitor and
decide on sensitive economic and societal issues. The future points towards
automation, and this shift should not be blocked. For many people, however, this
is a contentious prospect, rooted in the fear of uncontrollable AI systems. Such
concern is reasonable when it stems from social issues, such as gender-biased or
opaque decision-making systems. Explainable AI (XAI) is increasingly regarded as
a major step towards reliable systems, strengthening people's trust in AI.
Interpretable machine learning (IML), a subfield of XAI, is likewise an urgent
research topic. This paper presents a small but significant contribution to the
IML community: a local, neural-specific interpretation process applied to
textual and time-series data. The proposed methodology introduces new approaches
to presenting feature-importance-based interpretations, as well as to producing
counterfactual words on textual datasets. Finally, an improved evaluation metric
for assessing interpretation techniques is introduced, supporting an extensive
set of qualitative and quantitative experiments.
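The abstract and title suggest a local-surrogate pipeline built on the network's penultimate layer. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it assumes three user-supplied callables (`encode`, `decode`, `predict`, none of which come from the paper's code) and follows the generic recipe of perturbing the penultimate-layer representation, decoding the neighbours, and fitting a transparent linear surrogate to obtain local feature importances.

```python
# Hypothetical LioNets-style local explanation sketch (assumed interfaces,
# not the paper's released code).
import numpy as np
from sklearn.linear_model import Ridge

def lionets_style_explanation(x, encode, decode, predict,
                              n_neighbours=500, noise_scale=0.1, seed=0):
    """Return per-feature importance weights for a single instance `x`.

    Assumptions (not from the paper):
      encode(x)  -> 1-D penultimate-layer vector
      decode(z)  -> 1-D input-space vector reconstructed from a latent vector
      predict(x) -> scalar class probability from the original network
    """
    rng = np.random.default_rng(seed)

    # 1. Map the instance to its penultimate-layer representation.
    z = encode(x)

    # 2. Perturb the latent vector to build a neighbourhood around the instance.
    noise = rng.normal(0.0, noise_scale, size=(n_neighbours, z.shape[0]))
    z_neighbours = z + noise

    # 3. Decode the latent neighbours back to the input space.
    x_neighbours = np.array([decode(zn) for zn in z_neighbours])

    # 4. Query the original network for the neighbours' predictions.
    y_neighbours = np.array([predict(xn) for xn in x_neighbours])

    # 5. Fit a transparent surrogate on (decoded neighbours, predictions);
    #    its coefficients act as local feature-importance scores.
    surrogate = Ridge(alpha=1.0).fit(x_neighbours, y_neighbours)
    return surrogate.coef_
```

On textual data, the tokens with the strongest negative weights in such a surrogate are natural candidates for the counterfactual-word suggestions the abstract mentions, although the paper's exact procedure may differ.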
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems [0.0]
We are witnessing the emergence of an AI economy and society where AI technologies are increasingly impacting health care, business, transportation and many aspects of everyday life.
However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency resulting in reduction in trust and challenges in their adoption.
These shortcomings and concerns have been documented in the scientific literature as well as the general press, with examples such as accidents involving self-driving cars, biases in healthcare, hiring, and face-recognition systems affecting people of color, and seemingly correct medical decisions later found to have been made for the wrong reasons.
arXiv Detail & Related papers (2022-12-16T23:33:10Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Interpretability and accessibility of machine learning in selected food processing, agriculture and health applications [0.0]
Lack of interpretability of ML based systems is a major hindrance to widespread adoption of these powerful algorithms.
New techniques are emerging to improve ML accessibility through automated model design.
This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems.
arXiv Detail & Related papers (2022-11-30T02:44:13Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability [3.04585143845864]
We present a new model-agnostic metric to measure the Degree of eXplainability of correct information in an objective way.
We designed a few experiments and a user-study on two realistic AI-based systems for healthcare and finance.
arXiv Detail & Related papers (2021-09-11T17:44:13Z)
- Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
It is crucial to understand how AI-led decisions are made and which determining factors were included.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z)
- Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)