Unsupervised Learning in Complex Systems
- URL: http://arxiv.org/abs/2307.10993v1
- Date: Tue, 11 Jul 2023 19:48:42 GMT
- Title: Unsupervised Learning in Complex Systems
- Authors: Hugo Cisneros
- Abstract summary: This thesis explores the use of complex systems to study learning and adaptation in natural and artificial systems.
The goal is to develop autonomous systems that can learn without supervision, develop on their own, and become increasingly complex over time.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this thesis, we explore the use of complex systems to study learning and
adaptation in natural and artificial systems. The goal is to develop autonomous
systems that can learn without supervision, develop on their own, and become
increasingly complex over time. Complex systems are identified as a suitable
framework for understanding these phenomena due to their ability to exhibit
growth of complexity. Being able to build learning algorithms that require
limited to no supervision would enable greater flexibility and adaptability in
various applications. By understanding the fundamental principles of learning
in complex systems, we hope to advance our ability to design and implement
practical learning algorithms in the future. This thesis makes the following
key contributions: the development of a general complexity metric that we apply
to search for complex systems that exhibit growth of complexity, the
introduction of a coarse-graining method to study computations in large-scale
complex systems, and the development of a metric for learning efficiency as
well as a benchmark dataset for evaluating the speed of learning algorithms.
Our findings add substantially to our understanding of learning and adaptation
in natural and artificial systems. Moreover, our approach contributes to a
promising new direction for research in this area. We hope these findings will
inspire the development of more effective and efficient learning algorithms in
the future.
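The abstract names a general complexity metric used to search for systems that exhibit growth of complexity, but does not specify it here. As a hedged illustration of what such a metric can look like, the sketch below tracks the compressed size of a cellular automaton's state over time; the compression-based proxy, the zlib compressor, and the rule 110 test system are all illustrative assumptions, not necessarily the thesis's actual method.

```python
import zlib

import numpy as np


def compressed_complexity(state: np.ndarray) -> int:
    """Proxy for the complexity of a state: length of its zlib-compressed bytes."""
    return len(zlib.compress(state.tobytes()))


def rule110_step(state: np.ndarray) -> np.ndarray:
    """One synchronous update of elementary cellular automaton rule 110."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = 4 * left + 2 * state + right  # 3-bit pattern, values 0..7
    lookup = np.array([0, 1, 1, 1, 0, 1, 1, 0], dtype=np.uint8)  # bits of 110
    return lookup[neighborhood]


# Start from a single live cell and watch compressed size as structure spreads.
state = np.zeros(512, dtype=np.uint8)
state[256] = 1
trace = [compressed_complexity(state)]
for _ in range(200):
    state = rule110_step(state)
    trace.append(compressed_complexity(state))
print(f"compressed size: t=0 -> {trace[0]} bytes, t=200 -> {trace[-1]} bytes")
```

Under this proxy, a state that stays trivial compresses to a near-constant size, while a system that keeps producing new structure shows a rising curve; ranking candidate systems by such curves is one way a search for growth of complexity could be organized.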
Related papers
- Interpretable Meta-Learning of Physical Systems [4.343110120255532]
Recent meta-learning methods rely on black-box neural networks, resulting in high computational costs and limited interpretability.
We argue that multi-environment generalization can be achieved using a simpler learning model, with an affine structure with respect to the learning task.
We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems.
arXiv Detail & Related papers (2023-12-01T10:18:50Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning [18.918558716102144]
We will shed light on current approaches to tractably understanding and analyzing large-population systems.
We will survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems.
arXiv Detail & Related papers (2022-09-08T14:58:50Z)
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is typically high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component (a toy version of this factorization is sketched after this list).
arXiv Detail & Related papers (2022-06-09T05:19:32Z)
- Collective Intelligence for Deep Learning: A Survey of Recent Developments [11.247894240593691]
We will provide a historical context of neural network research's involvement with complex systems.
We will highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence.
arXiv Detail & Related papers (2021-11-29T08:39:32Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by the Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance of the agents' learning models than conventional federated learning (FL) algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
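For the Exogenous Decision Process entry above, the sketch below is a hedged toy construction of the factorization it describes: the state splits into a small endogenous component that actions control and that carries reward, and a large exogenous component that evolves on its own. The environment, the component sizes, and all names such as `ToyExoMDP` are illustrative assumptions; this is not the paper's ExoRL algorithm.

```python
import numpy as np


class ToyExoMDP:
    """Toy MDP whose state factors into (endogenous, exogenous) components.

    Only the small endogenous component responds to actions and determines
    reward; the large exogenous component drifts on its own and is irrelevant
    to control.
    """

    def __init__(self, n_endo: int = 5, n_exo: int = 10_000, seed: int = 0):
        self.n_endo, self.n_exo = n_endo, n_exo
        self.rng = np.random.default_rng(seed)
        self.endo, self.exo = 0, 0

    def step(self, action: int):
        # Endogenous dynamics: the action moves the controllable component.
        self.endo = (self.endo + (1 if action == 1 else -1)) % self.n_endo
        # Exogenous dynamics: independent of the action (irrelevant noise).
        self.exo = int(self.rng.integers(self.n_exo))
        reward = 1.0 if self.endo == self.n_endo - 1 else 0.0
        return (self.endo, self.exo), reward


env = ToyExoMDP()
for _ in range(3):
    obs, reward = env.step(action=1)
    print(obs, reward)
```

A learner that recovers this factorization only has to explore the endogenous component, which is why sample complexity can scale with its size rather than with the full product state space.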