Knowledge-augmented Risk Assessment (KaRA): a hybrid-intelligence
framework for supporting knowledge-intensive risk assessment of prospect
candidates
- URL: http://arxiv.org/abs/2303.05288v1
- Date: Thu, 9 Mar 2023 14:32:11 GMT
- Title: Knowledge-augmented Risk Assessment (KaRA): a hybrid-intelligence
framework for supporting knowledge-intensive risk assessment of prospect
candidates
- Authors: Carlos Raoni Mendes, Emilio Vital Brazil, Vinicius Segura, and Renato
Cerqueira
- Abstract summary: In many contexts, assessing the Probability of Success (PoS) of prospects heavily depends on experts' knowledge, often leading to biased and inconsistent assessments.
We developed the framework named KaRA to address these issues.
It combines multiple AI techniques that incorporate feedback from SMEs (Subject Matter Experts) on top of a structured domain knowledge base to support risk assessment processes for prospect candidates.
- Score: 2.3311636727756055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the potential of a prospective candidate is a common task in
multiple decision-making processes in different industries. We refer to a
prospect as something or someone that could potentially produce positive
results in a given context, e.g., an area where an oil company could find oil,
a compound that, when synthesized, results in a material with required
properties, and so on. In many contexts, assessing the Probability of Success
(PoS) of prospects heavily depends on experts' knowledge, often leading to
biased and inconsistent assessments. We have developed the framework named KaRA
(Knowledge-augmented Risk Assessment) to address these issues. It combines
multiple AI techniques that incorporate feedback from SMEs (Subject Matter
Experts) on top of a structured domain knowledge base to support risk
assessment processes for prospect candidates in knowledge-intensive contexts.
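The abstract does not detail KaRA's internals, so as a purely illustrative sketch (not the paper's method), the snippet below shows one simple way structured SME feedback could be pooled into a single Probability of Success: a reliability-weighted linear opinion pool. The function name, the weights, and the aggregation scheme are all assumptions made for illustration only.

```python
# Hypothetical sketch: reliability-weighted aggregation of expert PoS estimates.
# Neither the weighting scheme nor the names below come from the KaRA paper;
# this only illustrates the general idea of structuring SME feedback.

def aggregate_pos(estimates, reliabilities):
    """Combine experts' Probability-of-Success estimates via a linear
    opinion pool, weighting each expert by a reliability score."""
    if len(estimates) != len(reliabilities):
        raise ValueError("one reliability weight per estimate is required")
    total = sum(reliabilities)
    if total <= 0:
        raise ValueError("reliability weights must sum to a positive value")
    return sum(p * w for p, w in zip(estimates, reliabilities)) / total

# Three SMEs assess the same prospect; the most reliable expert (weight 3.0)
# pulls the pooled PoS toward their estimate of 0.5.
pos = aggregate_pos([0.2, 0.5, 0.4], [1.0, 3.0, 2.0])  # ≈ 0.417
```

In a framework like the one described, such weights would presumably be learned or updated from the knowledge base and expert track records rather than set by hand, which is where the "multiple AI techniques" would come in.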
Related papers
- Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s) [0.0]
The authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases.
They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing.
arXiv Detail & Related papers (2025-01-20T19:38:21Z)
- Misalignments in AI Perception: Quantitative Findings and Visual Mapping of How Experts and the Public Differ in Expectations and Risks, Benefits, and Value Judgments [0.20971479389679332]
This study examines how the general public and academic AI experts perceive AI's capabilities and impact across 71 scenarios.
Participants evaluated each scenario on four dimensions: expected probability, perceived risk, perceived benefit, and overall sentiment (or value).
The findings reveal significant quantitative differences: experts anticipate higher probabilities, perceive lower risks, report greater utility, and express more favorable sentiment toward AI compared to the non-experts.
arXiv Detail & Related papers (2024-12-02T12:51:45Z)
- Mapping Public Perception of Artificial Intelligence: Expectations, Risk-Benefit Tradeoffs, and Value As Determinants for Societal Acceptance [0.20971479389679332]
Using a representative sample of 1100 participants from Germany, this study examines mental models of AI.
Participants quantitatively evaluated 71 statements about AI's future capabilities.
We present rankings of these projections alongside visual mappings illustrating public risk-benefit tradeoffs.
arXiv Detail & Related papers (2024-11-28T20:03:01Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions [59.34177693293227]
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049]
The best way to reduce risks is to implement comprehensive AI lifecycle governance.
Risks can be quantified using metrics from the technical community.
This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.