User-Oriented Smart General AI System under Causal Inference
- URL: http://arxiv.org/abs/2103.14561v1
- Date: Thu, 25 Mar 2021 08:34:35 GMT
- Title: User-Oriented Smart General AI System under Causal Inference
- Authors: Huimin Peng
- Abstract summary: A general AI system solves a wide range of tasks with high performance in an automated fashion.
The best general AI algorithm designed by one individual is different from that devised by another.
Tacit knowledge depends upon user-specific comprehension of task information and individual model design preferences.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A general AI system solves a wide range of tasks with high performance in an
automated fashion. The best general AI algorithm designed by one individual differs
from that devised by another, and the best performance records achieved by different
users also differ. An inevitable component of general AI is tacit knowledge, which
depends on user-specific comprehension of task information and on individual model
design preferences shaped by users' technical experience. Tacit knowledge affects
model performance but cannot be automatically optimized within general AI algorithms.
In this paper, we propose the User-Oriented Smart General AI System under Causal
Inference, abbreviated as UOGASuCI, where UOGAS stands for User-Oriented General AI
System and uCI for "under the framework of causal inference". User characteristics
that significantly influence tacit knowledge can be extracted from the model training
experiences of many users recorded in external memory modules. Under the framework of
causal inference, we identify the values of user characteristics associated with the
best model performance that users achieve. We then suggest to users how changing
these characteristics can improve their best attainable performance. By recommending
updates to user characteristics associated with individualized tacit knowledge and
technical preferences, UOGAS helps users design models with better performance.
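The abstract's core idea (extracting user characteristics from observed training logs and identifying values linked to better performance under causal inference) can be illustrated with a minimal covariate-adjustment sketch. This is not the paper's actual algorithm; the data, variable names (`experience`, `tuning_effort`, `performance`), and the simple linear backdoor adjustment are all hypothetical stand-ins for the paper's user characteristics and external memory modules.

```python
# Minimal sketch: estimating the effect of a user characteristic on best
# model performance from observational logs, with a confounder adjusted for.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical observational log aggregated across many users.
experience = rng.normal(size=n)                        # confounder: technical experience
tuning_effort = 0.8 * experience + rng.normal(size=n)  # user characteristic of interest
performance = (0.5 * tuning_effort + 0.7 * experience
               + rng.normal(scale=0.3, size=n))        # best performance achieved

# Naive regression of performance on the characteristic is confounded
# by experience, so its slope overstates the causal effect.
naive = np.polyfit(tuning_effort, performance, 1)[0]

# Backdoor adjustment: regress on the characteristic AND the confounder.
X = np.column_stack([tuning_effort, experience, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
adjusted = coef[0]  # recovers a value near the true causal effect (0.5)

print(f"naive slope: {naive:.2f}, adjusted slope: {adjusted:.2f}")
```

Under this toy model, the adjusted slope is what a recommendation would be based on: it estimates how much a user's best performance would actually improve if that characteristic were changed, rather than how the two merely co-vary.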
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations [0.6774524960721717]
iSee platform is designed for the intelligent sharing and reuse of explanation experiences.
Case-based Reasoning is used to advance best practices in XAI.
All knowledge generated within the iSee platform is formalised by the iSee ontology for interoperability.
arXiv Detail & Related papers (2024-08-23T09:44:57Z) - Establishing Knowledge Preference in Language Models [80.70632813935644]
Language models are known to encode a great amount of factual knowledge through pretraining.
Such knowledge might be insufficient to cater to user requests.
When answering questions about ongoing events, the model should use recent news articles to update its response.
When some facts are edited in the model, the updated facts should override all prior knowledge learned by the model.
arXiv Detail & Related papers (2024-07-17T23:16:11Z) - Beyond One-Size-Fits-All: Adapting Counterfactual Explanations to User Objectives [2.3369294168789203]
Counterfactual Explanations (CFEs) offer insights into the decision-making processes of machine learning algorithms.
Existing literature often overlooks the diverse needs and objectives of users across different applications and domains.
We advocate for a nuanced understanding of CFEs, recognizing the variability in desired properties based on user objectives and target applications.
arXiv Detail & Related papers (2024-04-12T13:11:55Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - I-CEE: Tailoring Explanations of Image Classification Models to User Expertise [13.293968260458962]
We present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise.
I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users.
Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions.
arXiv Detail & Related papers (2023-12-19T12:26:57Z) - LAMBO: Large AI Model Empowered Edge Intelligence [71.56135386994119]
Next-generation edge intelligence is anticipated to benefit various applications via offloading techniques.
Traditional offloading architectures face several issues, including heterogeneous constraints, partial perception, uncertain generalization, and lack of tractability.
We propose a Large AI Model-Based Offloading (LAMBO) framework with over one billion parameters for solving these problems.
arXiv Detail & Related papers (2023-08-29T07:25:42Z) - Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact the success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z) - Meta-Wrapper: Differentiable Wrapping Operator for User Interest Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models with the ability to automatically extract the user interest from his/her behaviors have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z) - The Impact of Explanations on AI Competency Prediction in VQA [3.149760860038061]
We evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model.
arXiv Detail & Related papers (2020-07-02T06:11:28Z) - Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.