Decision Rule Elicitation for Domain Adaptation
- URL: http://arxiv.org/abs/2102.11539v1
- Date: Tue, 23 Feb 2021 08:07:22 GMT
- Title: Decision Rule Elicitation for Domain Adaptation
- Authors: Alexander Nikitin and Samuel Kaski
- Abstract summary: Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the expert's knowledge to the AI model.
- Score: 93.02675868486932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-in-the-loop machine learning is widely used in artificial intelligence
(AI) to elicit labels for data points from experts or to provide feedback on
how close the predicted results are to the target. This abstracts away the
details of the expert's decision-making process. In this work, we allow
the experts to additionally produce decision rules describing their
decision-making; the rules are expected to be imperfect but to give additional
information. In particular, the rules can extend to new distributions, and
hence enable significantly improving performance for cases where the training
and testing distributions differ, such as in domain adaptation. We apply the
proposed method to lifelong learning and domain adaptation problems and discuss
applications in other branches of AI, such as knowledge acquisition problems in
expert systems. In simulated and real-user studies, we show that decision rule
elicitation improves domain adaptation of the algorithm and helps to propagate
the expert's knowledge to the AI model.
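As a concrete illustration of the idea, here is a minimal sketch (a toy under stated assumptions, not the authors' implementation) of one way elicited rules can help under distribution shift: each rule is a predicate paired with a label, rules that fire on unlabeled target-domain points produce pseudo-labels, and retraining on them reduces reliance on a source-only shortcut feature. All names and the data-generating setup are illustrative.

```python
# Hypothetical sketch (not the paper's implementation): elicited expert
# rules generate pseudo-labels on unlabeled target-domain data, reducing
# reliance on a spurious feature that only works in the source domain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, spurious_strength):
    """Feature 0 carries the stable signal; feature 1 is spurious."""
    y = rng.integers(0, 2, n)
    x0 = (2 * y - 1) + rng.normal(0.0, 1.0, n)                    # stable
    x1 = spurious_strength * (2 * y - 1) + rng.normal(0.0, 0.5, n)
    return np.column_stack([x0, x1]), y

X_src, y_src = make_domain(300, spurious_strength=1.0)  # shortcut available
X_tgt, y_tgt = make_domain(300, spurious_strength=0.0)  # shortcut breaks

# Elicited rules: imperfect (they abstain near the boundary) but, unlike
# the source data, they rely only on the stable feature.
rules = [(lambda x: x[0] > 0.5, 1),
         (lambda x: x[0] < -0.5, 0)]

def apply_rules(X):
    """Return indices and pseudo-labels for points where a rule fires."""
    idx, labels = [], []
    for i, x in enumerate(X):
        for fires, label in rules:
            if fires(x):
                idx.append(i)
                labels.append(label)
                break  # first matching rule wins
    return np.array(idx, dtype=int), np.array(labels, dtype=int)

idx, y_rule = apply_rules(X_tgt)            # target labels never used here
X_aug = np.vstack([X_src, X_tgt[idx]])
y_aug = np.concatenate([y_src, y_rule])

baseline = LogisticRegression().fit(X_src, y_src)
adapted = LogisticRegression().fit(X_aug, y_aug)
print("baseline on target:", baseline.score(X_tgt, y_tgt))
print("rules + retraining on target:", adapted.score(X_tgt, y_tgt))
```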
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
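The following toy sketch illustrates the counterfactual-explanation idea on a hypothetical inventory decision; the policy, feature names, and search procedure are assumptions for illustration, not the paper's methodology.

```python
# Hypothetical sketch of a counterfactual explanation for a data-driven
# decision, using a toy inventory setting. Names are illustrative.
def prescribe_order(context, threshold=50.0):
    """Toy decision policy: stock extra units when forecast demand is high."""
    forecast = 30.0 + 0.8 * context["recent_sales"] + 5.0 * context["is_holiday"]
    return "large order" if forecast > threshold else "small order"

def counterfactual(context, feature, step=1.0, max_steps=100):
    """Smallest change to one feature that flips the prescribed decision."""
    base = prescribe_order(context)
    for direction in (+1, -1):
        cf = dict(context)
        for _ in range(max_steps):
            cf[feature] += direction * step
            if prescribe_order(cf) != base:
                return cf, prescribe_order(cf)
    return None, base

ctx = {"recent_sales": 20.0, "is_holiday": 0}
print("decision:", prescribe_order(ctx))
cf, flipped = counterfactual(ctx, "recent_sales")
if cf is not None:
    print(f"if recent_sales were {cf['recent_sales']:.0f}, "
          f"the decision would become: {flipped}")
```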
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Predicting and Understanding Human Action Decisions during Skillful Joint-Action via Machine Learning and Explainable-AI [1.3381749415517021]
This study uses supervised machine learning and explainable artificial intelligence to model, predict and understand human decision-making.
Long short-term memory networks were trained to predict the target selection decisions of expert and novice actors completing a dyadic herding task.
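A minimal sketch of this kind of sequence model, assuming PyTorch and hypothetical feature and target counts (the study's actual architecture and data differ):

```python
# Hypothetical sketch: an LSTM that predicts a discrete target-selection
# decision from a short trajectory of movement features. Dimensions and
# names are illustrative assumptions, not the study's setup.
import torch
import torch.nn as nn

class DecisionLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32, n_targets=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the final time step

model = DecisionLSTM()
x = torch.randn(8, 50, 6)                 # 8 trials, 50 time steps, 6 features
y = torch.randint(0, 4, (8,))             # which of 4 targets was selected
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print("logits shape:", model(x).shape, "loss:", loss.item())
```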
arXiv Detail & Related papers (2022-06-06T16:54:43Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Towards Fairness Certification in Artificial Intelligence [31.920661197618195]
We propose a first joint effort to define the operational steps needed for AI fairness certification.
We overview the criteria that an AI system should meet before coming into official service, and the conformity assessment procedures useful for monitoring whether its decisions remain fair.
arXiv Detail & Related papers (2021-06-04T14:12:12Z)
- Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning [0.0]
We introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases.
We show that our method allows us to analyze the rationale of black-box machine learning systems.
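A toy sketch of evaluating a black-box model's rationale with dedicated test cases; the case format and the example model below are assumptions, not the paper's benchmark:

```python
# Hypothetical sketch of rationale evaluation with dedicated test cases:
# each case states how a black-box model's output should behave if it
# follows the intended rationale. Setup and names are assumptions.
def rationale_score(predict, test_cases):
    """Fraction of rationale test cases a black-box `predict` satisfies."""
    passed = 0
    for case in test_cases:
        if case["expect"] == "same":
            passed += predict(case["a"]) == predict(case["b"])
        elif case["expect"] == "differ":
            passed += predict(case["a"]) != predict(case["b"])
    return passed / len(test_cases)

# Toy black box: approves a loan when income exceeds a threshold.
predict = lambda x: int(x["income"] > 50)

test_cases = [
    # Changing a rationale-irrelevant attribute must not change the output.
    {"a": {"income": 60, "zip": 1}, "b": {"income": 60, "zip": 2}, "expect": "same"},
    # Crossing the decisive attribute's boundary should change the output.
    {"a": {"income": 40, "zip": 1}, "b": {"income": 60, "zip": 1}, "expect": "differ"},
]
print("rationale consistency:", rationale_score(predict, test_cases))  # 1.0
```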
arXiv Detail & Related papers (2021-05-14T10:37:03Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Specialization in Hierarchical Learning Systems [0.0]
We investigate to what extent information constraints in hierarchies of experts not only provide a principled method for regularization but also enforce specialization.
We devise an information-theoretically motivated on-line learning rule that allows partitioning of the problem space into multiple sub-problems that can be solved by the individual experts.
We show the broad applicability of our approach on a range of problems including classification, regression, density estimation, and reinforcement learning problems.
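An illustrative sketch of gated expert specialization, with a simple entropy penalty standing in for the paper's information-theoretic constraint; all architecture choices here are assumptions:

```python
# Hypothetical sketch: a gating network softly partitions the input space
# so each expert handles a sub-problem. The entropy penalty (a stand-in
# for the paper's information constraint) pushes the gate toward hard,
# specialized assignments.
import torch
import torch.nn as nn

n_experts = 2
experts = nn.ModuleList([nn.Linear(1, 1) for _ in range(n_experts)])
gate = nn.Linear(1, n_experts)
params = list(experts.parameters()) + list(gate.parameters())
opt = torch.optim.Adam(params, lr=0.05)

for step in range(500):
    x = torch.rand(64, 1) * 4 - 2            # inputs in [-2, 2]
    y = torch.where(x < 0, -x, 2 * x)        # piecewise target: two regimes
    w = torch.softmax(gate(x), dim=-1)       # gating weights, (64, n_experts)
    preds = torch.cat([e(x) for e in experts], dim=-1)
    mse = ((w * preds).sum(-1, keepdim=True) - y).pow(2).mean()
    entropy = -(w * torch.log(w + 1e-8)).sum(-1).mean()
    loss = mse + 0.01 * entropy              # penalize indecisive gating
    opt.zero_grad(); loss.backward(); opt.step()

# After training, each expert should dominate one region of the input.
probe = torch.tensor([[-1.5], [1.5]])
print("gate weights at x=-1.5 and x=1.5:\n", torch.softmax(gate(probe), dim=-1))
```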
arXiv Detail & Related papers (2020-11-03T17:00:31Z)
- Automatic Discovery of Interpretable Planning Strategies [9.410583483182657]
We introduce AI-Interpret, a method for transforming idiosyncratic policies into simple and interpretable descriptions.
We show that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions.
arXiv Detail & Related papers (2020-05-24T12:24:52Z)
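In the same spirit, the following sketch distills a black-box policy into IF-THEN rules via a shallow decision tree; this generic surrogate approach is a stand-in for illustration, not the AI-Interpret algorithm itself.

```python
# Hypothetical sketch in the spirit of turning a policy into readable
# decision rules: distill a black-box policy into a shallow decision
# tree and print it as IF-THEN rules. Names are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def policy(state):
    """Black-box policy over states (fuel, distance): refuel when low."""
    fuel, distance = state
    return "refuel" if fuel < 0.3 * distance else "continue"

# Sample states, query the policy, and fit a small interpretable surrogate.
states = rng.uniform(0, 10, size=(1000, 2))
actions = np.array([policy(s) for s in states])
tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)

print(export_text(tree, feature_names=["fuel", "distance"]))
print("surrogate agreement with policy:", tree.score(states, actions))
```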