Expert System Gradient Descent Style Training: Development of a
Defensible Artificial Intelligence Technique
- URL: http://arxiv.org/abs/2103.04314v1
- Date: Sun, 7 Mar 2021 10:09:50 GMT
- Title: Expert System Gradient Descent Style Training: Development of a
Defensible Artificial Intelligence Technique
- Authors: Jeremy Straub
- Abstract summary: This paper presents the use of a machine learning expert system, which is developed with meaning-assigned nodes (facts) and correlations (rules).
The performance of these systems is compared to random and fully connected networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence systems, which are designed with a capability to
learn from the data presented to them, are used throughout society. These
systems are used to screen loan applicants, make sentencing recommendations for
criminal defendants, scan social media posts for disallowed content and more.
Because these systems don't assign meaning to their complex learned correlation
network, they can learn associations that don't equate to causality, resulting
in non-optimal and indefensible decisions being made. In addition to making
decisions that are sub-optimal, these systems may create legal liability for
their designers and operators by learning correlations that violate
anti-discrimination and other laws regarding what factors can be used in
different types of decision making. This paper presents the use of a machine
learning expert system, which is developed with meaning-assigned nodes (facts)
and correlations (rules). Multiple potential implementations are considered and
evaluated under different conditions, including different network error and
augmentation levels and different training levels. The performance of these
systems is compared to random and fully connected networks.
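The rule-fact architecture described above lends itself to a compact illustration. The following is a minimal sketch, assuming a network in which each rule blends two input facts with complementary trainable weights, and where training apportions the output error back to those weights with a simplified local update; it is not the paper's exact algorithm, and all fact and rule names are hypothetical.

```python
import random

class Rule:
    """A rule linking two input facts to an output fact with trainable weights."""
    def __init__(self, in1, in2, out):
        self.in1, self.in2, self.out = in1, in2, out
        self.w1 = random.random()   # complementary weights: w1 + w2 == 1
        self.w2 = 1.0 - self.w1

def forward(facts, rules):
    """Propagate fact values through the rules in order."""
    values = dict(facts)
    for r in rules:
        values[r.out] = r.w1 * values[r.in1] + r.w2 * values[r.in2]
    return values

def train_step(facts, rules, target_fact, target_value, lr=0.1):
    """Gradient-descent-style update: apportion the output error to each
    rule's weights (a simplified local update, not full backpropagation)."""
    values = forward(facts, rules)
    error = values[target_fact] - target_value
    for r in rules:
        # d(output)/d(w1) for a single rule with complementary weights.
        grad = error * (values[r.in1] - values[r.in2])
        r.w1 = min(max(r.w1 - lr * grad, 0.0), 1.0)
        r.w2 = 1.0 - r.w1
    return abs(error)

# Toy rule-fact network: (a, b) -> c, then (c, d) -> e.
facts = {"a": 0.9, "b": 0.2, "d": 0.4}
rules = [Rule("a", "b", "c"), Rule("c", "d", "e")]
for _ in range(200):
    err = train_step(facts, rules, "e", 0.7)
print(f"error after training: {err:.4f}")
```

Because every node and rule carries an assigned meaning, a trained network of this kind can be audited fact by fact, which is the defensibility property the paper targets.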
Related papers
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed
on AI-based Recruitment [66.91538273487379]
There is broad consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - GowFed -- A novel Federated Network Intrusion Detection System [0.15469452301122172]
This work presents GowFed, a novel network threat detection system that combines the usage of Gower Dissimilarity matrices and Federated averaging.
Two variants of GowFed have been developed based on state-of-the-art knowledge: (1) a vanilla version and (2) a version instrumented with an attention mechanism.
Overall, GowFed is intended as a first stepping stone towards the combined usage of Federated Learning and Gower Dissimilarity matrices to detect network threats in industrial-level networks.
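A minimal sketch of the two ingredients named in the summary may help: Gower dissimilarity over mixed categorical/numeric flow features, and FedAvg-style weighted parameter averaging. The feature names, ranges, and client setup below are illustrative assumptions, not GowFed's actual pipeline.

```python
import numpy as np

def gower_dissimilarity(x, y, is_categorical, ranges):
    """Gower dissimilarity for mixed-type records: categorical features
    contribute a 0/1 mismatch, numeric features a range-normalised distance."""
    parts = []
    for xi, yi, cat, rng in zip(x, y, is_categorical, ranges):
        if cat:
            parts.append(0.0 if xi == yi else 1.0)
        else:
            parts.append(abs(xi - yi) / rng if rng > 0 else 0.0)
    return sum(parts) / len(parts)

def federated_average(client_weights, client_sizes):
    """FedAvg: average each client's model parameters, weighted by its
    local sample count."""
    total = sum(client_sizes)
    return [
        sum((n / total) * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical flow records: (protocol, packets per second).
d = gower_dissimilarity(("tcp", 120.0), ("udp", 80.0),
                        is_categorical=(True, False), ranges=(0.0, 200.0))
print(f"dissimilarity: {d:.3f}")   # 0.5 * (1 + 40/200) = 0.6

# Three clients with single-array models of identical shape.
clients = [[np.full(4, v)] for v in (1.0, 2.0, 3.0)]
print(federated_average(clients, client_sizes=[10, 10, 20]))  # 2.25 each
```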
arXiv Detail & Related papers (2022-10-28T23:53:37Z) - Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic
Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (at the end user) of data transmission.
Neuro-symbolic (NeSy) AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure that generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z) - Learning from Heterogeneous Data Based on Social Interactions over
Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
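A minimal sketch of a textbook social-learning update of this kind follows, assuming each agent converts its own observations (of whatever dimension) into a likelihood over the hypotheses and then geometrically averages its neighbours' beliefs; the combination matrix and data are toy assumptions, not the paper's exact strategy.

```python
import numpy as np

def social_learning_step(beliefs, likelihoods, A):
    """One diffusion step: each agent updates its belief with the likelihood
    of its own observation, then takes a geometric (log-linear) average of
    its neighbours' beliefs via combination matrix A."""
    psi = beliefs * likelihoods                 # local Bayesian update
    psi /= psi.sum(axis=1, keepdims=True)
    mu = np.exp(A.T @ np.log(psi))              # combine over the graph
    return mu / mu.sum(axis=1, keepdims=True)

# Three agents, two hypotheses; every agent's data favours hypothesis 0.
A = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])                 # doubly stochastic graph
beliefs = np.full((3, 2), 0.5)
rng = np.random.default_rng(0)
for _ in range(50):
    # Each agent maps its own raw features to a likelihood; the feature
    # dimension never has to match across agents.
    likelihoods = np.column_stack([rng.uniform(0.5, 1.0, 3),
                                   rng.uniform(0.0, 0.5, 3)])
    beliefs = social_learning_step(beliefs, likelihoods, A)
print(beliefs.round(3))   # all agents concentrate on hypothesis 0
```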
arXiv Detail & Related papers (2021-12-17T12:47:18Z) - Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z) - Robustness testing of AI systems: A case study for traffic sign
recognition [13.395753930904108]
This paper presents how the robustness of AI systems can be practically examined and which methods and metrics can be used to do so.
The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
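One simple robustness metric of this kind is accuracy as a function of perturbation strength. The sketch below assumes Gaussian pixel noise as the perturbation and uses a stand-in brightness classifier with synthetic "signs"; a real evaluation would use a trained recognition model and the perturbation families the paper analyses.

```python
import numpy as np

def add_noise(img, sigma, rng):
    """Perturb an image with Gaussian pixel noise, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def robustness_curve(classify, images, labels, sigmas, rng):
    """Accuracy versus perturbation strength: a basic robustness metric."""
    return [
        float(np.mean([classify(add_noise(im, s, rng)) == y
                       for im, y in zip(images, labels)]))
        for s in sigmas
    ]

# Stand-in data and model: dark squares are "stop", bright squares "yield",
# classified by mean brightness.
rng = np.random.default_rng(0)
images = [np.full((16, 16), 0.2)] * 20 + [np.full((16, 16), 0.8)] * 20
labels = ["stop"] * 20 + ["yield"] * 20
classify = lambda im: "stop" if im.mean() < 0.5 else "yield"
print(robustness_curve(classify, images, labels,
                       sigmas=[0.0, 0.2, 0.5, 1.0], rng=rng))
```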
arXiv Detail & Related papers (2021-08-13T10:29:09Z) - Determining Sentencing Recommendations and Patentability Using a Machine
Learning Trained Expert System [0.0]
This paper presents two studies that use a machine learning expert system (MLES).
One study focuses on a system to advise U.S. federal judges regarding consistent federal criminal sentencing.
The other study aims to develop a system that could help the U.S. Patent and Trademark Office automate its patentability assessment process.
arXiv Detail & Related papers (2021-08-05T16:21:29Z) - Scholarly AI system diagrams as an access point to mental models [6.233820957059352]
Complex systems, such as Artificial Intelligence (AI) systems, are comprised of many interrelated components.
In order to represent these systems, demonstrating the relations between components is essential.
Diagrams, as "icons of relation", are a prevalent medium for signifying complex systems.
arXiv Detail & Related papers (2021-04-30T07:55:18Z) - Overcoming Failures of Imagination in AI Infused System Development and
Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - NERD: Neural Network for Edict of Risky Data Streams [0.0]
Cyber incidents can have a wide range of causes, from a simple connection loss to a persistent attack.
The developed system is enriched with information from multiple sources, such as intrusion detection systems and monitoring tools.
It uses over twenty key attributes like sync-package ratio to identify potential security incidents and to classify the data into different priority categories.
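As an illustration of such an attribute, the "sync-package ratio" presumably refers to the fraction of TCP packets with the SYN flag set. The sketch below, with hypothetical thresholds, shows how one such attribute could feed a priority bucketing; NERD itself combines over twenty attributes in a neural network.

```python
def syn_ratio(packets):
    """Fraction of packets in a window carrying the TCP SYN flag; an
    unusually high ratio can indicate scanning or SYN-flood activity."""
    if not packets:
        return 0.0
    return sum(1 for p in packets if p.get("syn")) / len(packets)

def priority(ratio, thresholds=(0.3, 0.6, 0.85)):
    """Bucket a single attribute into a priority category (0 = lowest)."""
    return sum(ratio >= t for t in thresholds)

window = [{"syn": True}, {"syn": True}, {"syn": False}, {"syn": True}]
r = syn_ratio(window)                 # 0.75
print(r, priority(r))                 # 0.75 -> category 2
```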
arXiv Detail & Related papers (2020-07-08T14:24:48Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
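One simple way to quantify such effects on synthetic profiles is a selection-rate (demographic parity) gap between sensitive groups; the metric choice and the synthetic data below are illustrative assumptions, not the paper's testbed.

```python
import random

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Difference in selection rates between sensitive groups: one basic
    way to quantify bias in a screening model's scores."""
    rate = {}
    for g in set(groups):
        sel = [s >= threshold for s, gg in zip(scores, groups) if gg == g]
        rate[g] = sum(sel) / len(sel)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

# Synthetic profiles: group B's scores are shifted down, mimicking a model
# that has absorbed a bias against that group.
random.seed(0)
scores, groups = [], []
for _ in range(1000):
    g = random.choice("AB")
    scores.append(random.random() - (0.15 if g == "B" else 0.0))
    groups.append(g)
print(f"parity gap: {demographic_parity_gap(scores, groups):.3f}")
```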
arXiv Detail & Related papers (2020-04-15T15:58:05Z)