Open Environment Machine Learning
- URL: http://arxiv.org/abs/2206.00423v1
- Date: Wed, 1 Jun 2022 11:57:56 GMT
- Title: Open Environment Machine Learning
- Authors: Zhi-Hua Zhou
- Abstract summary: Conventional machine learning studies assume closed-world scenarios where important factors of the learning process hold invariant.
This article briefly introduces some advances in this line of research, focusing on techniques concerning emerging new classes, decremental/incremental features, changing data distributions, and varied learning objectives, and discusses some theoretical issues.
- Score: 84.90891046882213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional machine learning studies generally assume closed-world scenarios where important factors of the learning process hold invariant. With the great success of machine learning, more and more practical tasks, particularly those involving open-world scenarios where important factors are subject to change, called open environment machine learning (Open ML) in this article, are being presented to the community. Evidently, turning from the closed world to the open world is a grand challenge for machine learning. It becomes even more challenging because, in various big data tasks, data are usually accumulated over time, like streams, making it hard to train the machine learning model only after all data have been collected, as in conventional studies. This article briefly introduces some advances in this line of research, focusing on techniques concerning emerging new classes, decremental/incremental features, changing data distributions, and varied learning objectives, and discusses some theoretical issues.
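The abstract names several open-environment factors; the sketch below illustrates only one of them, emerging new classes in a data stream, using a deliberately simple nearest-centroid rule. The class name StreamingNewClassDetector, the fixed distance threshold, and the buffer-based confirmation step are illustrative assumptions, not the techniques surveyed in the article.

```python
# Minimal sketch of one open-environment challenge: detecting emerging new
# classes in a data stream. The nearest-centroid model, the fixed distance
# threshold, and the buffer size are illustrative assumptions only.
import numpy as np


class StreamingNewClassDetector:
    def __init__(self, threshold=3.0, buffer_size=20):
        self.centroids = {}          # known class label -> centroid vector
        self.threshold = threshold   # distance beyond which an instance is "suspicious"
        self.buffer = []             # suspicious instances awaiting confirmation
        self.buffer_size = buffer_size
        self.next_label = 0

    def fit_initial(self, X, y):
        """Build centroids for the classes seen in the closed-world training data."""
        for label in np.unique(y):
            self.centroids[int(label)] = X[y == label].mean(axis=0)
        self.next_label = max(self.centroids) + 1

    def process(self, x):
        """Classify one streaming instance or flag it as a potential new class."""
        labels = list(self.centroids)
        dists = [np.linalg.norm(x - self.centroids[c]) for c in labels]
        nearest = int(np.argmin(dists))
        if dists[nearest] <= self.threshold:
            return labels[nearest]               # looks like a known class
        self.buffer.append(x)                    # otherwise, hold the instance back
        if len(self.buffer) >= self.buffer_size:
            # Enough consistent outliers accumulated: register an emerging class.
            new_label = self.next_label
            self.centroids[new_label] = np.mean(self.buffer, axis=0)
            self.buffer.clear()
            self.next_label += 1
            return new_label
        return None                              # decision deferred


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, size=(50, 2)),   # known class 0
                   rng.normal(5.0, size=(50, 2))])  # known class 1
    y = np.array([0] * 50 + [1] * 50)

    det = StreamingNewClassDetector()
    det.fit_initial(X, y)
    for x in rng.normal(loc=(10.0, -10.0), size=(25, 2)):  # an unseen class arrives
        out = det.process(x)
    print("emerging class registered as label:", out)
```

Real open-environment methods would replace the fixed threshold with calibrated scores and would also have to cope with the other factors listed in the abstract, such as feature changes, distribution shift, and changing learning objectives.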
Related papers
- Towards Few-Shot Learning in the Open World: A Review and Beyond [52.41344813375177]
Few-shot learning (FSL) aims to mimic human intelligence by enabling strong generalization and transferability.
This paper presents a review of recent advancements designed to adapt FSL for use in open-world settings.
We categorize existing methods into three distinct types of open-world few-shot learning: those involving varying instances, varying classes, and varying distributions.
arXiv Detail & Related papers (2024-08-19T06:23:21Z)
- Open-world Machine Learning: A Review and New Outlooks [83.6401132743407]
This paper aims to provide a comprehensive introduction to the emerging open-world machine learning paradigm.
It aims to help researchers build more powerful AI systems in their respective fields, and to promote the development of artificial general intelligence.
arXiv Detail & Related papers (2024-03-04T06:25:26Z)
- Robust Computer Vision in an Ever-Changing World: A Survey of Techniques for Tackling Distribution Shifts [20.17397328893533]
AI applications are becoming increasingly visible to the general public.
There is a notable gap between the theoretical assumptions researchers make about computer vision models and the reality those models face when deployed in the real world.
One of the critical reasons for this gap is a challenging problem known as distribution shift.
arXiv Detail & Related papers (2023-12-03T23:40:12Z)
- Detecting and Learning Out-of-Distribution Data in the Open world: Algorithm and Theory [15.875140867859209]
This thesis makes contributions to machine learning in the context of open-world scenarios.
The research investigates two intertwined steps essential for open-world machine learning: out-of-distribution (OOD) detection and open-world representation learning (ORL).
arXiv Detail & Related papers (2023-10-10T00:25:21Z)
- Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities [1.2891210250935146]
We discuss and review the field of interpretable machine learning.
We outline the types of discoveries that can be made using Interpretable Machine Learning.
We focus on the grand challenge of how to validate these discoveries in a data-driven manner.
arXiv Detail & Related papers (2023-08-02T23:57:31Z)
- INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z)
- From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z)
- Open-world Machine Learning: Applications, Challenges, and Opportunities [0.7734726150561086]
Open-world machine learning deals with arbitrary inputs (data with unseen classes) to machine learning systems.
Traditional machine learning is static and therefore not well suited to such an active environment.
This paper presents a systematic review of various techniques for open-world machine learning.
arXiv Detail & Related papers (2021-05-27T21:05:10Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.