Knowledge as Invariance -- History and Perspectives of
Knowledge-augmented Machine Learning
- URL: http://arxiv.org/abs/2012.11406v1
- Date: Mon, 21 Dec 2020 15:07:19 GMT
- Title: Knowledge as Invariance -- History and Perspectives of
Knowledge-augmented Machine Learning
- Authors: Alexander Sagel and Amit Sahu and Stefan Matthes and Holger Pfeifer
and Tianming Qiu and Harald Rueß and Hao Shen and Julian Wörmann
- Abstract summary: Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
- Score: 69.99522650448213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in machine learning is at a turning point. While supervised deep
learning has conquered the field at a breathtaking pace and demonstrated the
ability to solve inference problems with unprecedented accuracy, it still does
not quite live up to its name if we think of learning as the process of
acquiring knowledge about a subject or problem. Major weaknesses of present-day
deep learning models are, for instance, their lack of adaptability to changes
of environment or their incapability to perform other kinds of tasks than the
one they were trained for. While it is still unclear how to overcome these
limitations, one can observe a paradigm shift within the machine learning
community, with research interests shifting away from increasing the
performance of highly parameterized models on exceedingly specific tasks, and
towards employing machine learning algorithms in highly diverse domains. This
research question can be approached from different angles. For instance, the
field of Informed AI investigates the problem of infusing domain knowledge into
a machine learning model, by using techniques such as regularization, data
augmentation or post-processing.
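As a toy illustration (not from the paper; the non-negativity prior, the data, and all function names below are invented for this sketch), the regularization route mentioned above can be realized by adding a knowledge-violation penalty to the ordinary data loss:

```python
import numpy as np

def data_loss(w, X, y):
    """Ordinary mean-squared error on labeled data."""
    return np.mean((X @ w - y) ** 2)

def knowledge_penalty(w):
    """Domain knowledge as regularization: the (toy) prior that every
    input has a non-negative, i.e. monotone, effect on the output.
    Negative weights are penalized quadratically."""
    return np.sum(np.minimum(w, 0.0) ** 2)

def informed_loss(w, X, y, lam=10.0):
    """Informed-AI style objective: data fit plus knowledge term."""
    return data_loss(w, X, y) + lam * knowledge_penalty(w)

# Tiny gradient-descent fit on synthetic data whose true weights
# happen to satisfy the prior.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, 0.7]) + 0.1 * rng.normal(size=100)

w = np.zeros(2)
for _ in range(500):
    grad_data = 2 * X.T @ (X @ w - y) / len(y)   # gradient of data_loss
    grad_pen = 2 * np.minimum(w, 0.0)            # gradient of the penalty
    w -= 0.05 * (grad_data + 10.0 * grad_pen)
```

With the penalty weight set to zero the objective reduces to plain least squares; increasing it trades data fit for consistency with the prior, which is the general shape of regularization-based knowledge infusion.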
On the other hand, a remarkable number of works in recent years have
focused on developing models that by themselves guarantee a certain degree of
versatility and invariance with respect to the domain or problem at hand. Thus,
rather than investigating how to provide domain-specific knowledge to machine
learning models, these works explore methods that equip the models with the
capability of acquiring the knowledge by themselves. This white paper provides
an introduction and discussion of this emerging field in machine learning
research. To this end, it reviews the role of knowledge in machine learning,
and discusses its relation to the concept of invariance, before providing a
literature review of the field.
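To make the link between knowledge and invariance concrete, here is a minimal sketch (my own toy example in the spirit of DeepSets-style set functions; the names are invented): a representation built by sum pooling is invariant to the order of its inputs by construction, so permutation invariance is knowledge baked into the architecture rather than learned from data:

```python
import numpy as np

def sum_pool_features(points):
    """Permutation-invariant representation of a point set:
    an element-wise feature map followed by sum pooling."""
    feats = np.tanh(points)      # per-element feature map
    return feats.sum(axis=0)     # order-independent aggregation

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))      # a set of 5 points in R^3
perm = rng.permutation(5)

z1 = sum_pool_features(x)
z2 = sum_pool_features(x[perm])  # same set, different order
assert np.allclose(z1, z2)       # representation is unchanged
```

Any predictor stacked on top of such a representation inherits the invariance for free, which is the sense in which these architectures "acquire" the relevant knowledge themselves.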
Related papers
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing the forgetting mechanism.
We have found that integrating the forgetting mechanism significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Zero-knowledge Proof Meets Machine Learning in Verifiability: A Survey [19.70499936572449]
High-quality models rely not only on efficient optimization algorithms but also on the training and learning processes built upon vast amounts of data and computational power.
Due to various challenges such as limited computational resources and data privacy concerns, users in need of models often cannot train machine learning models locally.
This paper presents a comprehensive survey of zero-knowledge proof-based verifiable machine learning (ZKP-VML) technology.
arXiv Detail & Related papers (2023-10-23T12:15:23Z)
- Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities [1.2891210250935146]
We discuss and review the field of interpretable machine learning.
We outline the types of discoveries that can be made using Interpretable Machine Learning.
We focus on the grand challenge of how to validate these discoveries in a data-driven manner.
arXiv Detail & Related papers (2023-08-02T23:57:31Z)
- Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a model, a process called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Graph Enabled Cross-Domain Knowledge Transfer [1.52292571922932]
Cross-Domain Knowledge Transfer is an approach to mitigate the gap between good representation learning and the scarce knowledge in the domain of interest.
From the machine learning perspective, the paradigm of semi-supervised learning takes advantage of large amounts of data without ground truth and achieves impressive performance improvements.
arXiv Detail & Related papers (2023-04-07T03:02:10Z)
- A Roadmap to Domain Knowledge Integration in Machine Learning [21.96548398967003]
Integrating domain knowledge into a machine learning model can help to overcome such obstacles to a certain degree.
We will give a brief overview of these different forms of knowledge integration and their performance in certain machine learning tasks.
arXiv Detail & Related papers (2022-12-12T05:35:44Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
A theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- A Survey of Knowledge Representation in Service Robotics [10.220366465518262]
We focus on knowledge representations and how knowledge is typically gathered, represented, and reproduced to solve problems.
In accordance with the definition of knowledge representations, we discuss the key distinction between such representations and useful learning models.
We discuss key principles that should be considered when designing an effective knowledge representation.
arXiv Detail & Related papers (2018-07-05T22:18:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.