Co-creating a globally interpretable model with human input
- URL: http://arxiv.org/abs/2306.13381v1
- Date: Fri, 23 Jun 2023 09:03:16 GMT
- Title: Co-creating a globally interpretable model with human input
- Authors: Rahul Nair
- Abstract summary: We consider an aggregated human-AI collaboration aimed at generating a joint interpretable model.
The model takes the form of Boolean decision rules, where human input is provided in the form of logical conditions or as partial templates.
- Score: 4.435944192177403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider an aggregated human-AI collaboration aimed at generating a joint
interpretable model. The model takes the form of Boolean decision rules, where
human input is provided in the form of logical conditions or as partial
templates. This focus on the combined construction of a model offers a
different perspective on joint decision making. Previous efforts have typically
focused on aggregating outcomes rather than decision logic. We demonstrate the
proposed approach through two examples and highlight the usefulness and
challenges of the approach.
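The co-creation idea above can be sketched as a Boolean decision-rule model in disjunctive normal form (an OR of AND-clauses), where some clauses come from human logical conditions and others from a rule learner. This is a minimal illustrative sketch, assuming a DNF rule form; the feature names, thresholds, and helper functions are hypothetical and not taken from the paper.

```python
def make_condition(feature, op, threshold):
    """Build an atomic condition such as income > 50000."""
    ops = {">": lambda a, b: a > b,
           "<=": lambda a, b: a <= b,
           "==": lambda a, b: a == b}
    return lambda row: ops[op](row[feature], threshold)

def clause(*conditions):
    """AND of atomic conditions (one rule)."""
    return lambda row: all(c(row) for c in conditions)

def dnf_model(clauses):
    """OR of clauses: predict 1 if any rule fires."""
    return lambda row: int(any(cl(row) for cl in clauses))

# Human input: a rule supplied directly as a logical condition.
human_clause = clause(make_condition("income", ">", 50000),
                      make_condition("debt", "<=", 10000))

# Machine input: a clause a rule learner might extract from data.
learned_clause = clause(make_condition("age", ">", 30))

# The joint model simply pools both sources of clauses.
model = dnf_model([human_clause, learned_clause])

print(model({"income": 60000, "debt": 5000, "age": 25}))   # human clause fires -> 1
print(model({"income": 20000, "debt": 15000, "age": 40}))  # learned clause fires -> 1
print(model({"income": 20000, "debt": 15000, "age": 25}))  # no clause fires -> 0
```

In the paper's setting, human input may also be a partial template (a clause with some conditions fixed and others left for the learner to fill in); the pooled-clause structure above is one simple way such contributions compose.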
Related papers
- Modeling Boundedly Rational Agents with Latent Inference Budgets [56.24971011281947]
We introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly.
L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors.
We show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.
arXiv Detail & Related papers (2023-12-07T03:55:51Z)
- An attention model for the formation of collectives in real-world domains [78.1526027174326]
We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals.
We propose a general approach for the formation of collectives based on a novel combination of an attention model and an integer linear program.
arXiv Detail & Related papers (2022-04-30T09:15:36Z)
- Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation [47.102566259034326]
We propose conditional delegation as an alternative paradigm for human-AI collaboration.
We develop novel interfaces to assist humans in creating conditional delegation rules.
Our study demonstrates the promise of conditional delegation in improving model performance.
arXiv Detail & Related papers (2022-04-25T17:00:02Z)
- An Ample Approach to Data and Modeling [1.0152838128195467]
We describe a framework for modeling how models can be built that integrates concepts and methods from a wide range of fields.
The reference M* meta model framework is presented, which relies critically on associating whole datasets and respective models in terms of a strict equivalence relation.
Several considerations about how the developed framework can provide insights about data clustering, complexity, collaborative research, deep learning, and creativity are then presented.
arXiv Detail & Related papers (2021-10-05T01:26:09Z)
- Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution [34.2658286826597]
We propose a two-step method to interpret summarization model decisions.
We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes.
After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.
arXiv Detail & Related papers (2021-06-03T00:54:16Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- Paired Examples as Indirect Supervision in Latent Decision Models [109.76417071249945]
We introduce a way to leverage paired examples that provide stronger cues for learning latent decisions.
We apply our method to improve compositional question answering using neural module networks on the DROP dataset.
arXiv Detail & Related papers (2021-04-05T03:58:30Z)
- On Exploiting Hitting Sets for Model Reconciliation [53.81101846598925]
In human-aware planning, a planning agent may need to provide an explanation to a human user on why its plan is optimal.
A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences in its model and the human's model.
We present a logic-based framework for model reconciliation that extends beyond the realm of planning.
arXiv Detail & Related papers (2020-12-16T21:25:53Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- A Meta-Bayesian Model of Intentional Visual Search [0.0]
We propose a computational model of visual search that incorporates Bayesian interpretations of the neural mechanisms that underlie categorical perception and saccade planning.
To enable meaningful comparisons between simulated and human behaviours, we employ a gaze-contingent paradigm that requires participants to classify occluded MNIST digits through a window that follows their gaze.
Our model is able to recapitulate human behavioural metrics such as classification accuracy while retaining a high degree of interpretability, which we demonstrate by recovering subject-specific parameters from observed human behaviour.
arXiv Detail & Related papers (2020-06-05T16:10:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.