Advances in Automatically Rating the Trustworthiness of Text Processing
Services
- URL: http://arxiv.org/abs/2302.09079v1
- Date: Sat, 4 Feb 2023 14:27:46 GMT
- Title: Advances in Automatically Rating the Trustworthiness of Text Processing
Services
- Authors: Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco
Valtorta
- Abstract summary: AI services are known to have unstable behavior when subjected to changes in data, models or users.
The current approach of assessing AI services in a black box setting, where the consumer does not have access to the AI's source code or training data, is limited.
Our approach is inspired by the success of nutritional labeling in the food industry to promote health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder.
- Score: 9.696492590163016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI services are known to have unstable behavior when subjected to changes in
data, models or users. Such behaviors, whether triggered by omission or
commission, lead to trust issues when AI works with humans. The current
approach of assessing AI services in a black box setting, where the consumer
does not have access to the AI's source code or training data, is limited. The
consumer has to rely on the AI developer's documentation and trust that the
system has been built as stated. Further, if the AI consumer reuses the service
to build other services which they sell to their customers, the consumer is
exposed to risks introduced by the service providers (both data and model
providers). Our approach, in this context, is inspired by the success of
nutritional labeling in the food industry to promote health, and seeks to
assess and rate AI services for
trust from the perspective of an independent stakeholder. The ratings become a
means to communicate the behavior of AI systems so that the consumer is
informed about the risks and can make an informed decision. In this paper, we
will first describe recent progress in developing rating methods for
text-based machine translation AI services, which have been found promising in
user studies. Then, we will outline challenges and a vision for principled,
multi-modal, causality-based rating methodologies and their implications for
decision support in real-world scenarios like health and food recommendation.
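As a concrete illustration of the black-box setting described above, a rater with no access to source code or training data can still probe a service with paired inputs and score how unstable its behavior is. The sketch below is hypothetical and not the authors' published method: the function names, probe pairs, and rating thresholds are all assumptions chosen for illustration.

```python
# Illustrative sketch (hypothetical, not the paper's actual method):
# rate a black-box text service by probing it with input pairs that
# differ only in a sensitive attribute and scoring output divergence.

def divergence_score(service, paired_inputs):
    """Fraction of probe pairs for which the service's outputs differ."""
    diverging = sum(1 for a, b in paired_inputs if service(a) != service(b))
    return diverging / len(paired_inputs)

def trust_rating(service, paired_inputs, thresholds=(0.1, 0.4)):
    """Map divergence to a coarse 3-level rating, like a nutrition label.

    3 = low divergence (more consistent), 1 = high divergence.
    The thresholds here are arbitrary, for illustration only.
    """
    score = divergence_score(service, paired_inputs)
    low, high = thresholds
    if score <= low:
        return 3
    return 2 if score <= high else 1

# Toy black-box service that maps an occupation word differently
# depending on a gendered pronoun in the input -- an instability a
# rater would want to surface.
def toy_service(text):
    return "doctora" if "she" in text else "doctor"

pairs = [("she is a doctor", "he is a doctor"),
         ("she met a doctor", "he met a doctor"),
         ("the doctor left", "the doctor left")]
print(trust_rating(toy_service, pairs))  # high divergence -> low rating
```

An independent stakeholder could publish such ratings per service, letting consumers compare alternatives without ever seeing the providers' code or data.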
Related papers
- Ethical AI in Retail: Consumer Privacy and Fairness [0.0]
The adoption of artificial intelligence (AI) in retail has significantly transformed the industry, enabling more personalized services and efficient operations.
However, the rapid implementation of AI technologies raises ethical concerns, particularly regarding consumer privacy and fairness.
This study aims to analyze the ethical challenges of AI applications in retail, explore ways retailers can implement AI technologies ethically while remaining competitive, and provide recommendations on ethical AI practices.
arXiv Detail & Related papers (2024-10-20T12:00:14Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily life explains the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and could significantly control the level of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Evaluating and Improving Value Judgments in AI: A Scenario-Based Study on Large Language Models' Depiction of Social Conventions [5.457150493905063]
We evaluate how contemporary AI services competitively meet user needs, then examine how society is depicted as mirrored by Large Language Models.
We suggest a model of decision-making in value-conflicting scenarios which could be adopted for future machine value judgments.
This paper advocates for a practical approach to using AI as a tool for investigating other remote worlds.
arXiv Detail & Related papers (2023-10-04T08:42:02Z)
- Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study [0.0]
This study empirically investigated certification labels as a promising solution.
We demonstrate that labels can significantly increase end-users' trust and willingness to use AI.
However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stake scenarios.
arXiv Detail & Related papers (2023-05-15T09:51:10Z)
- Out of Context: Investigating the Bias and Fairness Concerns of "Artificial Intelligence as a Service" [6.824692201913679]
"AI as a Service" (AIaaS) is a rapidly growing market, offering various plug-and-play AI services and tools.
Yet, it is known that AI systems can encapsulate biases and inequalities that can have societal impact.
This paper argues that the context-sensitive nature of fairness is often incompatible with AIaaS's 'one-size-fits-all' approach.
arXiv Detail & Related papers (2023-02-02T22:32:10Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of AI's most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.