A Framework for Institutional Risk Identification using Knowledge Graphs and Automated News Profiling
- URL: http://arxiv.org/abs/2109.09103v1
- Date: Sun, 19 Sep 2021 11:06:12 GMT
- Title: A Framework for Institutional Risk Identification using Knowledge Graphs and Automated News Profiling
- Authors: Mahmoud Mahfouz, Armineh Nourbakhsh, Sameena Shah
- Abstract summary: Organizations around the world face an array of risks impacting their global operations.
It is imperative to have a robust risk identification process to detect and evaluate the impact of potential risks before they materialize.
- Score: 5.631924211771643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations around the world face an array of risks impacting their global operations. It is imperative to have a robust risk identification process to detect and evaluate the impact of potential risks before they materialize. Given the nature of the task and the deep subject-matter expertise it requires, most organizations rely on a heavily manual process. In our work, we develop an automated system that (a) continuously monitors global news, (b) autonomously identifies and characterizes risks, (c) determines how close a risk's triggers are to being reached, indicating how near the risk impact is to materializing, and (d) identifies the organization's operational areas that may be most impacted by the risk. Other contributions include: (a) a knowledge graph representation of risks and (b) matching of relevant news to risks identified by the organization, using a neural embedding model to match the textual description of a given risk with multi-lingual news.
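The news-matching component lends itself to a compact illustration. The sketch below pairs risk descriptions with multilingual news via sentence embeddings; the paper does not name its embedding model, so the encoder, the example texts, and the similarity threshold are all assumptions.

```python
# Minimal sketch of risk-to-news matching with a neural embedding
# model, as described in the abstract. The encoder name and the
# similarity threshold are assumptions, not the authors' choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

risks = [
    "Supply chain disruption at key manufacturing sites",
    "Regulatory action affecting data privacy compliance",
]
news = [
    "Un incendie a fermé une grande usine de semi-conducteurs",  # French
    "New data protection rules announced by the EU regulator",
]

# Embed both sides into the same multilingual vector space.
risk_emb = model.encode(risks, convert_to_tensor=True)
news_emb = model.encode(news, convert_to_tensor=True)

# Cosine similarity for every (risk, article) pair.
scores = util.cos_sim(risk_emb, news_emb)

THRESHOLD = 0.4  # assumed relevance cut-off
for i, risk in enumerate(risks):
    for j, article in enumerate(news):
        s = float(scores[i][j])
        if s > THRESHOLD:
            print(f"'{risk}' matched article {j} (score={s:.2f})")
```

Because the encoder maps all supported languages into one vector space, French or German coverage of a risk can match an English risk description without a translation step.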
Related papers
- Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks.
We identify three key failure modes based on agents' incentives, as well as seven key risk factors.
We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- Supervision policies can shape long-term risk management in general-purpose AI models [0.0]
We develop a simulation framework parameterized by features extracted from the diverse landscape of risk, incident, or hazard reporting ecosystems.
We evaluate four supervision policies: non-prioritized (first-come, first-served), random selection, priority-based (addressing the highest-priority risks first), and diversity-prioritized (balancing high-priority risks with comprehensive coverage across risk types); a minimal sketch of these policies follows this entry.
Our results indicate that while priority-based and diversity-prioritized policies are more effective at mitigating high-impact risks, they may inadvertently neglect systemic issues reported by the broader community.
arXiv Detail & Related papers (2025-01-10T17:52:34Z)
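For concreteness, the four policies can be read as selection rules over a queue of incoming reports. The sketch below is a minimal rendering under assumed data structures; the paper's actual simulation framework is parameterized quite differently.

```python
# Hypothetical rendering of the four supervision policies as queue
# selection rules. Report, its fields, and the diversity boost are
# assumptions for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Report:
    risk_type: str   # e.g. "bias", "misuse", "privacy"
    priority: float  # higher means more severe (assumed scale)

def non_prioritized(queue: list) -> Report:
    """First-come, first-served: review the oldest report."""
    return queue.pop(0)

def random_selection(queue: list) -> Report:
    """Review a uniformly random report."""
    return queue.pop(random.randrange(len(queue)))

def priority_based(queue: list) -> Report:
    """Always review the highest-priority report first."""
    idx = max(range(len(queue)), key=lambda i: queue[i].priority)
    return queue.pop(idx)

def diversity_prioritized(queue: list, seen_types: set) -> Report:
    """Favor high priority, but boost risk types not yet reviewed
    so coverage stays broad (the boost of 1.0 is an assumption)."""
    def score(i: int) -> float:
        bonus = 1.0 if queue[i].risk_type not in seen_types else 0.0
        return queue[i].priority + bonus
    idx = max(range(len(queue)), key=score)
    report = queue.pop(idx)
    seen_types.add(report.risk_type)
    return report
```

The paper's finding, that priority-based and diversity-prioritized selection mitigate high-impact risks while potentially starving widely reported systemic issues, follows from how such rules treat low-priority but frequent reports.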
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers [3.4568218861862556]
This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems.
We have drawn on wide-ranging literature detailing risks to workers, and categorised risks according to whether they affect worker value, power, or wellbeing.
Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
arXiv Detail & Related papers (2023-12-08T17:05:40Z)
- RiskBench: A Scenario-based Benchmark for Risk Identification [4.263035319815899]
This work focuses on risk identification, the process of identifying and analyzing risks stemming from dynamic traffic participants and unexpected events.
We introduce RiskBench, a large-scale scenario-based benchmark for risk identification.
We assess the ability of ten algorithms to (1) detect and locate risks, (2) anticipate risks, and (3) facilitate decision-making; a hypothetical interface capturing these three axes is sketched after this entry.
arXiv Detail & Related papers (2023-12-04T06:21:22Z)
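The three evaluation axes suggest a simple interface that candidate algorithms would implement. The sketch below is hypothetical and does not reflect RiskBench's actual API; all names and signatures are placeholders.

```python
# Hypothetical interface mirroring RiskBench's three evaluation axes:
# detection/localization, anticipation, and decision support. Names
# and signatures are placeholders, not the benchmark's real API.
from abc import ABC, abstractmethod
from typing import Any, List

class RiskIdentifier(ABC):
    @abstractmethod
    def detect(self, frame: Any) -> List[Any]:
        """Detect and localize risks in the current scene frame."""

    @abstractmethod
    def anticipate(self, history: List[Any]) -> List[Any]:
        """Predict risks before they materialize, from past frames."""

    @abstractmethod
    def decide(self, frame: Any, risks: List[Any]) -> str:
        """Recommend a driving action (e.g. 'brake') given the risks."""
```

A benchmark harness would then score each method on all three calls across the curated scenarios.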
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness; a generic sketch of wrapping a model for risk-awareness follows this entry.
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
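The idea of extending models with risk-awareness can be illustrated with a generic wrapper. The sketch below is not capsa's API; it uses Monte Carlo dropout, one common uncertainty proxy, purely as an example.

```python
# Generic illustration of wrapping a model with risk-awareness in the
# spirit of capsa; this is NOT capsa's actual API. Risk is proxied here
# by predictive variance from Monte Carlo dropout.
import torch
import torch.nn as nn

class RiskAwareWrapper(nn.Module):
    def __init__(self, model: nn.Module, n_samples: int = 20):
        super().__init__()
        self.model = model
        self.n_samples = n_samples

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        self.model.train()  # keep dropout active at inference time
        preds = torch.stack([self.model(x) for _ in range(self.n_samples)])
        # Mean prediction plus per-output variance as the risk estimate.
        return preds.mean(dim=0), preds.var(dim=0)

# Usage: wrapped = RiskAwareWrapper(any_model_with_dropout)
#        y_hat, risk = wrapped(x)
```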
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to knowledge graph reasoning (KGR) according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, with an absolute improvement of 5.9% in safety classification accuracy.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- Ethical and social risks of harm from Language Models [22.964941107198023]
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs).
A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences.
arXiv Detail & Related papers (2021-12-08T16:09:48Z)
- Driver-centric Risk Object Identification [25.85690304998681]
We propose a driver-centric definition of risk, i.e., objects are risky if they influence driver behavior.
We formulate the task as a cause-effect problem and present a novel two-stage risk object identification framework.
A driver-centric Risk Object Identification dataset is curated to evaluate the proposed system.
arXiv Detail & Related papers (2021-06-24T17:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content presented (including all information) and is not responsible for any consequences of its use.