Visual Analytics for Explainable and Trustworthy Artificial Intelligence
- URL: http://arxiv.org/abs/2507.10240v1
- Date: Mon, 14 Jul 2025 13:03:17 GMT
- Title: Visual Analytics for Explainable and Trustworthy Artificial Intelligence
- Authors: Angelos Chatzimparmpas
- Abstract summary: A key obstacle to AI adoption lies in the lack of transparency. Many automated systems function as "black boxes," providing predictions without revealing the underlying processes. Visual analytics (VA) provides a compelling solution by combining AI models with interactive visualizations.
- Score: 2.1212179660694104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our society increasingly depends on intelligent systems to solve complex problems, ranging from recommender systems suggesting the next movie to watch to AI models assisting in medical diagnoses for hospitalized patients. With the iterative improvement of diagnostic accuracy and efficiency, AI holds significant potential to mitigate medical misdiagnoses, preventing numerous deaths and reducing an annual economic burden of approximately EUR 450 billion. However, a key obstacle to AI adoption lies in the lack of transparency: many automated systems function as "black boxes," providing predictions without revealing the underlying processes. This opacity can hinder experts' ability to trust and rely on AI systems. Visual analytics (VA) provides a compelling solution by combining AI models with interactive visualizations. These specialized charts and graphs empower users to incorporate their domain expertise to refine and improve the models, bridging the gap between AI and human understanding. In this work, we define, categorize, and explore how VA solutions can foster trust across the stages of a typical AI pipeline. We propose a design space for innovative visualizations and present an overview of our previously developed VA dashboards, which support critical tasks within the various pipeline stages, including data processing, feature engineering, hyperparameter tuning, and understanding, debugging, refining, and comparing models.
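The pipeline stages named at the end of the abstract can be sketched as a simple ordered mapping, e.g. when relating VA dashboards to the stage each one supports. The stage names come from the abstract; the example VA task attached to each stage is an illustrative assumption, not a claim about the authors' dashboards.

```python
# Stages of a typical AI pipeline where VA can foster trust (per the
# abstract). The per-stage task descriptions are illustrative only.
AI_PIPELINE_STAGES = {
    "data processing": "inspect distributions and outliers interactively",
    "feature engineering": "compare candidate features visually",
    "hyperparameter tuning": "explore the search space with linked views",
    "model understanding": "surface what the model has learned",
    "model debugging": "trace misclassified instances to their causes",
    "model refinement": "inject domain expertise to correct the model",
    "model comparison": "contrast competing models side by side",
}

def stages():
    """Return the pipeline stages in their typical order."""
    return list(AI_PIPELINE_STAGES)
```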
Related papers
- Explainable AI for Collaborative Assessment of 2D/3D Registration Quality [50.65650507103078]
We propose the first artificial intelligence framework trained specifically for 2D/3D registration quality verification. Our explainable AI (XAI) approach aims to enhance informed decision-making for human operators.
arXiv Detail & Related papers (2025-07-23T15:28:57Z) - AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detection via Multimodal Large Language Models [78.08374249341514]
The rapid development of AI-generated content (AIGC) has led to the misuse of AI-generated images (AIGI) in spreading misinformation. We introduce a large-scale and comprehensive dataset, Holmes-Set, which includes an instruction-tuning dataset with explanations on whether images are AI-generated. Our work introduces an efficient data annotation method called the Multi-Expert Jury, enhancing data generation through structured MLLM explanations and quality control. In addition, we propose Holmes Pipeline, a meticulously designed three-stage training framework comprising visual expert pre-training, supervised fine-tuning, and direct preference optimization.
arXiv Detail & Related papers (2025-07-03T14:26:31Z) - Beyond Black-Box AI: Interpretable Hybrid Systems for Dementia Care [2.4339626079536925]
The recent boom of large language models (LLMs) has re-ignited the hope that artificial intelligence (AI) systems could aid medical diagnosis. Despite dazzling benchmark scores, LLM assistants have yet to deliver measurable improvements at the bedside. This scoping review aims to highlight the areas where AI is limited in making practical contributions in the clinical setting.
arXiv Detail & Related papers (2025-07-02T01:43:06Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - AI-in-the-loop: The future of biomedical visual analytics applications in the era of AI [3.0942901747200975]
How will massive developments of AI in data analytics shape future data visualizations and visual analytics? What are the opportunities, open challenges, and threats in the context of an increasingly powerful AI? We highlight the potential of AI to transform biomedical visualization as a research field.
arXiv Detail & Related papers (2024-12-20T13:27:24Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
As AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
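The saliency maps discussed in the paper above attribute a model's prediction to its input features via gradient magnitudes. A minimal sketch of the idea, using finite differences on a toy scoring function rather than a trained diagnostic network (where the inconsistencies the paper analyzes actually arise):

```python
# Gradient-style saliency via central finite differences.
# The "model" is a toy function standing in for a trained network.

def model(x):
    # Toy model: weighted sum with one quadratic term.
    return 2.0 * x[0] + 0.5 * x[1] ** 2 - x[2]

def saliency(f, x, eps=1e-6):
    """Approximate |df/dx_i| for each input feature."""
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append(abs(f(hi) - f(lo)) / (2 * eps))
    return grads

s = saliency(model, [1.0, 2.0, 3.0])
# s holds per-feature gradient magnitudes, roughly [2.0, 2.0, 1.0] here.
```

Real XAI methods (gradient x input, Grad-CAM, integrated gradients) refine this basic recipe; the paper's point is that different refinements can disagree on the same input.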
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Applying Bayesian Ridge Regression AI Modeling in Virus Severity Prediction [0.0]
We review the strengths and weaknesses of Bayesian Ridge Regression, an AI model that can be used to bring cutting-edge virus analysis to healthcare professionals.
The model's accuracy assessment revealed promising results, with room for improvement.
In addition, the severity index serves as a valuable tool to gain a broad overview of patient care needs.
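The core of ridge regression (the frequentist counterpart of the Bayesian model reviewed above) is a penalized least-squares fit. A minimal one-feature sketch in pure Python; note that Bayesian ridge regression additionally infers the regularization strength and predictive uncertainty from the data (as in scikit-learn's `BayesianRidge`), whereas here `lam` is fixed to keep the illustration self-contained:

```python
# Closed-form ridge weight for a no-intercept 1D model y ~ w*x:
#   w = sum(x*y) / (sum(x^2) + lam)
# lam > 0 shrinks w toward zero; lam = 0 recovers ordinary least squares.

def ridge_1d(xs, ys, lam=1.0):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]            # illustrative data, roughly y = 2x
w_ridge = ridge_1d(xs, ys)           # penalized estimate
w_ols = ridge_1d(xs, ys, lam=0.0)    # unpenalized estimate
assert w_ridge < w_ols               # the penalty shrinks the weight
```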
arXiv Detail & Related papers (2023-10-14T04:17:00Z) - Representation Engineering: A Top-Down Approach to AI Transparency [130.33981757928166]
We identify and characterize the emerging area of representation engineering (RepE). RepE places population-level representations, rather than neurons or circuits, at the center of analysis. We showcase how these methods can provide traction on a wide range of safety-relevant problems.
arXiv Detail & Related papers (2023-10-02T17:59:07Z) - A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice [5.005928809654619]
AI-powered tools are increasingly matching or exceeding specialist-level performance across multiple domains. These systems promise to reduce disparities in care delivery across demographic, racial, and socioeconomic boundaries. The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care.
arXiv Detail & Related papers (2023-04-23T04:14:18Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.