Trustworthy Representation Learning Across Domains
- URL: http://arxiv.org/abs/2308.12315v2
- Date: Tue, 29 Aug 2023 23:19:53 GMT
- Title: Trustworthy Representation Learning Across Domains
- Authors: Ronghang Zhu and Dongliang Guo and Daiqing Qi and Zhixuan Chu and
Xiang Yu and Sheng Li
- Abstract summary: We introduce the details of the proposed trustworthy framework for representation learning across domains.
We provide basic notions and comprehensively summarize existing methods under each of the framework's four concepts.
- Score: 22.54626834599221
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI systems have achieved performance strong enough to be deployed widely in
daily life and human society, people both enjoy the benefits these technologies
bring and suffer from the social issues they induce. To make AI systems good
enough and trustworthy, a great deal of research has been done to build
guidelines for trustworthy AI systems. Machine learning is one of the most
important components of AI systems, and representation learning is a
fundamental technology within machine learning. Making representation learning
trustworthy in real-world applications, e.g., cross-domain scenarios, is
therefore valuable and necessary for both the machine learning and AI system
fields. Inspired by the concepts in trustworthy AI, we propose the first
framework for trustworthy representation learning across domains, which
comprises four concepts, i.e., robustness, privacy, fairness, and
explainability, and we use it to give a comprehensive literature review of this
research direction. Specifically, we first introduce the details of the
proposed trustworthy framework for representation learning across domains.
Second, we provide basic notions and comprehensively summarize existing methods
under each of the framework's four concepts. Finally, we conclude this survey
with insights and discussions on future research directions.
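
To make "representation learning across domains" concrete, the sketch below shows one representative method that surveys in this area typically cover: DANN-style domain-adversarial training (Ganin & Lempitsky, 2015), in which a gradient-reversal layer pushes a shared representation to be predictive of the source task yet indistinguishable across domains. This is a minimal illustration under assumed layer sizes and names, not code from the paper itself.

```python
# Minimal sketch of DANN-style domain-adversarial representation learning.
# Layer sizes, module names, and lambd are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared representation
label_classifier = nn.Linear(32, 10)                             # source-task head
domain_classifier = nn.Linear(32, 2)                             # source vs. target

def dann_loss(x_src, y_src, x_tgt, lambd=0.1):
    """Task loss on labeled source data + adversarial domain-confusion loss."""
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
    task = ce(label_classifier(f_src), y_src)
    feats = torch.cat([f_src, f_tgt])
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    dom = ce(domain_classifier(GradReverse.apply(feats, lambd)), dom_labels)
    return task + dom  # features become predictive yet domain-invariant
```

Under the framework above, methods like this would be catalogued against the four concepts, e.g., by asking how the learned representation behaves under perturbations (robustness), across demographic groups (fairness), or when its decisions must be explained.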
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z) - Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
It is crucial to understand how AI-led decisions are made and which factors determined them.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z) - Know Your Model (KYM): Increasing Trust in AI and Machine Learning [4.93786553432578]
We analyze each element of trustworthiness and provide a set of 20 guidelines that can be leveraged to ensure optimal AI functionality.
The guidelines help ensure that trustworthiness is provable and can be demonstrated, they are implementation agnostic, and they can be applied to any AI system in any sector.
arXiv Detail & Related papers (2021-05-31T14:08:22Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Technologies for Trustworthy Machine Learning: A Survey in a
Socio-Technical Context [4.866589122417036]
We argue that four categories of system properties are instrumental in achieving the policy objectives, namely fairness, explainability, auditability, and safety & security (FEAS).
We discuss how these properties need to be considered across all stages of the machine learning life cycle, from data collection through run-time model inference.
We conclude with an identification of open research problems, with a particular focus on the connection between trustworthy machine learning technologies and their implications for individuals and society.
arXiv Detail & Related papers (2020-07-17T11:39:20Z) - Distributed and Democratized Learning: Philosophy and Research
Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)