VerifAI: Verified Generative AI
- URL: http://arxiv.org/abs/2307.02796v2
- Date: Wed, 11 Oct 2023 03:16:23 GMT
- Title: VerifAI: Verified Generative AI
- Authors: Nan Tang, Chenyu Yang, Ju Fan, Lei Cao, Yuyu Luo, and Alon Halevy
- Abstract summary: Generative AI has made significant strides, yet concerns about its accuracy and reliability continue to grow.
We propose that verifying the outputs of generative AI from a data management perspective is an emerging research challenge.
Our vision is to promote the development of verifiable generative AI and contribute to a more trustworthy and responsible use of AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI has made significant strides, yet concerns about the accuracy
and reliability of its outputs continue to grow. Such inaccuracies can have
serious consequences, including flawed decision-making, the spread of false
information, privacy violations, legal liabilities, and more. Although efforts
to address these risks are underway, including explainable AI and responsible
AI practices such as transparency, privacy protection, bias mitigation, and
social and environmental responsibility, misinformation caused by generative AI
will remain a significant challenge. We propose that verifying the outputs of
generative AI from a data management perspective is an emerging research
challenge. This involves analyzing the underlying data from multi-modal
data lakes, including text files, tables, and knowledge graphs, and assessing
its quality and consistency. By doing so, we can establish a stronger
foundation for evaluating the outputs of generative AI models. Such an approach
can ensure the correctness of generative AI, promote transparency, and enable
decision-making with greater confidence. Our vision is to promote the
development of verifiable generative AI and contribute to a more trustworthy
and responsible use of AI.
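To make this vision concrete, the sketch below shows what checking a single generated claim against a small multi-modal data lake could look like. It is a minimal illustration under stated assumptions: the DataLake class, support_score heuristic, and verify_claim function are hypothetical names, not the paper's method, and the lexical-overlap scoring stands in for the retrieval, entailment, and quality assessment a real verifier would need.

```python
# Illustrative sketch of verifying a generated claim against a
# multi-modal data lake (text plus tables). All names here (DataLake,
# support_score, verify_claim) are hypothetical, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class DataLake:
    """Toy stand-in for a multi-modal data lake: text snippets and table rows."""
    texts: list = field(default_factory=list)    # free-text documents
    tables: list = field(default_factory=list)   # each row: dict of column -> value

    def evidence(self):
        # Flatten table rows into "column: value" strings so every piece of
        # evidence can be compared against the claim as plain text.
        rows = [", ".join(f"{k}: {v}" for k, v in row.items()) for row in self.tables]
        return self.texts + rows


def support_score(claim, evidence):
    """Crude lexical overlap in [0, 1]; a real verifier would use retrieval
    plus an entailment model instead of token overlap."""
    c = set(claim.lower().split())
    e = set(evidence.lower().split())
    return len(c & e) / len(c) if c else 0.0


def verify_claim(claim, lake, threshold=0.5):
    """Return (verified, best_score, best_evidence) for the claim."""
    scored = [(support_score(claim, ev), ev) for ev in lake.evidence()]
    best_score, best_ev = max(scored, default=(0.0, ""))
    return best_score >= threshold, best_score, best_ev


if __name__ == "__main__":
    lake = DataLake(
        texts=["The Eiffel Tower is located in Paris, France."],
        tables=[{"landmark": "Eiffel Tower", "height_m": "330"}],
    )
    ok, score, ev = verify_claim("The Eiffel Tower is in Paris", lake)
    print(f"verified={ok}, score={score:.2f}, evidence={ev!r}")
```

A production verifier would replace the token-overlap heuristic with retrieval and entailment over text files, tables, and knowledge graphs, and, as the abstract suggests, would also assess the quality and consistency of the underlying evidence before trusting it.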
Related papers
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence
Responsible AI emphasizes the need to develop trustworthy AI systems.
XAI has been broadly considered a building block for responsible AI (RAI).
Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
arXiv Detail & Related papers (2023-12-04T00:54:04Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- AI Maintenance: A Robustness Perspective
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance through analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems
We are witnessing the emergence of an AI economy and society where AI technologies are increasingly impacting health care, business, transportation and many aspects of everyday life.
However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption.
These shortcomings and concerns have been documented in both the scientific and the general press: accidents with self-driving cars; biases in healthcare, hiring, and face recognition systems affecting people of color; and seemingly correct medical decisions later found to have been made for the wrong reasons.
arXiv Detail & Related papers (2022-12-16T23:33:10Z)
- Never trust, always verify: a roadmap for Trustworthy AI?
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z)
- Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management
Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate feminist, non-exploitative design principles.
arXiv Detail & Related papers (2021-09-21T21:22:58Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Trustworthy AI
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)