1st ICLR International Workshop on Privacy, Accountability,
Interpretability, Robustness, Reasoning on Structured Data (PAIR^2Struct)
- URL: http://arxiv.org/abs/2210.03612v1
- Date: Fri, 7 Oct 2022 15:12:03 GMT
- Title: 1st ICLR International Workshop on Privacy, Accountability,
Interpretability, Robustness, Reasoning on Structured Data (PAIR^2Struct)
- Authors: Hao Wang, Wanyu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang
- Abstract summary: Data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been recognized as fundamental principles of using machine learning (ML) technologies on decision-critical and/or privacy-sensitive applications.
By exploiting the inherently structured knowledge, one can design plausible approaches to identify and use more relevant variables to make reliable decisions.
- Score: 28.549151517783287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen principles and guidance for the accountable
and ethical use of artificial intelligence (AI) spring up around the globe.
In particular, Data Privacy, Accountability, Interpretability, Robustness, and
Reasoning have been broadly recognized as fundamental principles for applying
machine learning (ML) technologies to decision-critical and/or
privacy-sensitive applications. At the same time, in many real-world
applications, data can be naturally represented in structured formalisms, such
as graph-structured data (e.g., networks), grid-structured data (e.g., images),
and sequential data (e.g., text). By exploiting this inherent structure, one
can design principled approaches that identify and use the most relevant
variables to make reliable decisions, thereby facilitating real-world
deployments.
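As a minimal illustrative sketch (not from the workshop itself), the three structured formalisms named in the abstract can be represented as plain Python data, and a structure-aware selection of "relevant variables" for graph data is simply a node's neighborhood; all names here are hypothetical.

```python
# Graph-structured data (e.g., a social network): adjacency list.
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}

# Grid-structured data (e.g., a 2x2 grayscale image): nested lists.
grid = [[0, 255], [128, 64]]

# Sequential data (e.g., tokenized text): an ordered list.
sequence = ["privacy", "accountability", "interpretability"]

def neighbors(g, node):
    """Exploit graph structure: the most relevant variables for a node
    are often its immediate neighbors, not all other nodes."""
    return g.get(node, [])
```

Restricting a model's inputs to such structurally related variables is one simple way structure can guide reliable decision-making.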
Related papers
- Data Science Principles for Interpretable and Explainable AI [0.7581664835990121]
Interpretable and interactive machine learning aims to make complex models more transparent and controllable.
This review synthesizes key principles from the growing literature in this field.
arXiv Detail & Related papers (2024-05-17T05:32:27Z)
- Best Practices and Lessons Learned on Synthetic Data for Language Models [83.63271573197026]
The success of AI models relies on the availability of large, diverse, and high-quality datasets.
Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns.
arXiv Detail & Related papers (2024-04-11T06:34:17Z)
- Predictive, scalable and interpretable knowledge tracing on structured domains [6.860460230412773]
PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics.
PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories.
arXiv Detail & Related papers (2024-03-19T22:19:29Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning in AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Semantic Modelling of Organizational Knowledge as a Basis for Enterprise Data Governance 4.0 -- Application to a Unified Clinical Data Model [6.302916372143144]
We establish a simple, cost-efficient framework that enables metadata-driven, agile and (semi-automated) data governance.
We explain how we implement and use this framework to integrate 25 years of clinical study data at an enterprise scale in a fully productive environment.
arXiv Detail & Related papers (2023-10-20T19:36:03Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and train models locally without sharing their raw data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
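As a hypothetical sketch (not from the paper), federated averaging captures the governance property described above: clients share only model updates, never raw data, though, as the paper argues, this alone is not a formal privacy guarantee. All names and the toy 1-D model below are illustrative assumptions.

```python
def local_update(w, data, lr=0.1):
    """One gradient step on a client's private data for a 1-D
    least-squares model y = w * x; only the updated weight leaves
    the client, not `data` itself."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(w, client_datasets):
    """Server round: average the clients' locally updated weights."""
    updates = [local_update(w, d) for d in client_datasets]
    return sum(updates) / len(updates)
```

Note that the shared updates still depend on the private data, which is precisely why "privacy-preserving" is too strong a label without formal guarantees.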
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- CateCom: a practical data-centric approach to categorization of computational models [77.34726150561087]
We present an effort aimed at organizing the landscape of physics-based and data-driven computational models.
We apply object-oriented design concepts and outline the foundations of an open-source collaborative framework.
arXiv Detail & Related papers (2021-09-28T02:59:40Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
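As a hypothetical illustration of the general idea (a naive brute-force search, not the CEILS method), a counterfactual explanation finds a small feature change that flips a classifier's prediction; the linear model and all names below are assumptions for the sketch.

```python
def predict(x, w=(1.0, -2.0), b=0.5):
    """Toy linear classifier over two features."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def counterfactual(x, step=0.1, max_steps=100):
    """Search for the smallest single-feature perturbation that flips
    the prediction; returns the counterfactual input or None."""
    base = predict(x)
    for k in range(1, max_steps + 1):
        for i in (0, 1):
            for sign in (1, -1):
                cand = list(x)
                cand[i] += sign * step * k
                if predict(cand) != base:
                    return cand
    return None
```

CEILS differs by operating in a latent space that encodes causal relations among features, so the suggested changes correspond to feasible interventions rather than arbitrary perturbations.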
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure [9.825840279544465]
Datasets that power machine learning are often used, shared, and re-used with little visibility into the processes of deliberation that led to their creation.
This paper introduces a rigorous framework for dataset development transparency which supports decision-making and accountability.
arXiv Detail & Related papers (2020-10-23T01:57:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.