Impossibility of Collective Intelligence
- URL: http://arxiv.org/abs/2206.02786v1
- Date: Sun, 5 Jun 2022 07:58:39 GMT
- Title: Impossibility of Collective Intelligence
- Authors: Krikamol Muandet
- Abstract summary: We show that it is theoretically impossible to design a rational learning algorithm that can learn across heterogeneous environments.
The only feasible algorithm compatible with all of the axioms is standard empirical risk minimization.
Our impossibility result identifies informational incomparability between environments as one of the foremost obstacles for researchers designing algorithms that learn from multiple environments.
- Score: 10.107996426462604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Democratization of AI involves training and deploying machine learning models
across heterogeneous and potentially massive environments. Diversity of data
opens up a number of possibilities to advance AI systems, but also introduces
pressing concerns such as privacy, security, and equity that require special
attention. This work shows that it is theoretically impossible to design a
rational learning algorithm that has the ability to successfully learn across
heterogeneous environments, which we call collective intelligence
(CI). By representing learning algorithms as choice correspondences over a
hypothesis space, we are able to axiomatize them with essential properties.
Unfortunately, the only feasible algorithm compatible with all of the axioms is
the standard empirical risk minimization (ERM) which learns arbitrarily from a
single environment. Our impossibility result reveals informational
incomparability between environments as one of the foremost obstacles for
researchers who design novel algorithms that learn from multiple environments,
which sheds light on prerequisites for success in critical areas of machine
learning such as out-of-distribution generalization, federated learning,
algorithmic fairness, and multi-modal learning.
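The abstract characterizes learning algorithms as choice correspondences over a hypothesis space. As a minimal sketch in generic notation (assumed here for illustration rather than taken from the paper), single-environment ERM and the multi-environment correspondence that the impossibility result concerns can be written as follows:

```latex
% Generic notation, assumed for illustration (not copied from the paper):
% a sample S = {(x_i, y_i)}_{i=1}^{n}, loss \ell, hypothesis space \mathcal{H}.
% Standard ERM as a choice correspondence over \mathcal{H}:
C_{\mathrm{ERM}}(S) \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr)

% A collective (multi-environment) learner would instead be a correspondence
% over samples S^{(1)}, \dots, S^{(m)} from m heterogeneous environments:
C\bigl(S^{(1)}, \dots, S^{(m)}\bigr) \;\subseteq\; \mathcal{H}
% The paper's impossibility result states that the only such correspondence
% compatible with all of its rationality axioms is standard ERM applied to an
% arbitrarily chosen single environment.
```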
Related papers
- Is Efficient PAC Learning Possible with an Oracle That Responds 'Yes' or 'No'? [26.334900941196082]
We investigate whether the ability to perform ERM, which computes a hypothesis minimizing empirical risk on a given dataset, is necessary for efficient learning.
We show that, in the realizable setting of PAC learning for binary classification, a concept class can be learned using an oracle that returns only a single bit.
Our results extend to the learning setting with a slight strengthening of the oracle, as well as to the partial concept, multiclass and real-valued learning settings.
arXiv Detail & Related papers (2024-06-17T15:50:08Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be applied to cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents [0.0]
I investigate the prospect of developing artificial general intelligence (AGI).
I compare living and algorithmic systems, with a special focus on the notion of "agency".
It is extremely unlikely that true AGI can be developed in the current algorithmic framework of AI research.
arXiv Detail & Related papers (2023-06-27T19:25:09Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- From Undecidability of Non-Triviality and Finiteness to Undecidability of Learnability [0.0]
We show that there is no general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data.
For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable.
There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful.
arXiv Detail & Related papers (2021-06-02T18:00:04Z)
- Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond [114.39616146985001]
In machine learning and computer vision, despite their different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies (a toy gradient-unrolling sketch appears after this list).
arXiv Detail & Related papers (2021-01-27T16:20:23Z)
- AI Centered on Scene Fitting and Dynamic Cognitive Network [4.228224431041357]
This paper briefly analyzes the advantages and problems of mainstream AI technology and argues that achieving stronger artificial intelligence requires moving beyond end-to-end function calculation.
It also discusses a concrete scheme, the Dynamic Cognitive Network model (DC Net).
arXiv Detail & Related papers (2020-10-02T06:13:41Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently (a minimal hierarchical-averaging sketch appears after this list).
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
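As a companion to the Bi-Level Optimization survey entry above, the following is a minimal, hypothetical sketch of one gradient-based BLO step: the inner problem is approximated by a few unrolled gradient steps, and the outer variable is updated by differentiating through that unrolling. The objectives, step sizes, and function names are illustrative assumptions, not code from the surveyed paper.

```python
# Toy gradient-based bilevel optimization (BLO) sketch -- a hypothetical example.
# Inner problem:  y*(x) = argmin_y 0.5 * (y - x)^2
# Outer problem:  min_x F(x) = (y*(x) - 3)^2
# The inner argmin is approximated by K unrolled gradient steps, and the outer
# gradient is propagated through the unrolled updates.

def inner_unrolled(x, y0=0.0, steps=20, lr=0.5):
    """Approximate y*(x) by unrolled gradient descent on f(x, y) = 0.5*(y - x)^2,
    tracking dy/dx through the unrolling."""
    y, dy_dx = y0, 0.0
    for _ in range(steps):
        grad_y = y - x                   # df/dy
        y = y - lr * grad_y              # inner gradient step
        dy_dx = (1.0 - lr) * dy_dx + lr  # chain rule through the update
    return y, dy_dx

def outer_step(x, lr=0.1):
    """One hyper-gradient step on the outer objective F(x) = (y*(x) - 3)^2."""
    y, dy_dx = inner_unrolled(x)
    dF_dx = 2.0 * (y - 3.0) * dy_dx      # outer gradient via the unrolled inner loop
    return x - lr * dF_dx

x = 0.0
for _ in range(200):
    x = outer_step(x)
print(f"outer variable x = {x:.3f} (analytic optimum is 3.0)")
```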
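Likewise, for the Dem-AI entry above, the hierarchical self-organization of specialized learning groups can be illustrated with a minimal two-level model-averaging sketch; the grouping, data, and helper names are hypothetical and are not the paper's reference design.

```python
# Minimal two-level (hierarchical) model-averaging sketch, loosely inspired by the
# Dem-AI entry above -- an illustrative assumption, not the paper's reference design.
# Each "agent" holds a parameter vector; agents average within their specialized
# group, then the group models are averaged into a global model.

from statistics import mean
from typing import Dict, List

def average(models: List[List[float]]) -> List[float]:
    """Element-wise average of a list of parameter vectors."""
    return [mean(params) for params in zip(*models)]

def hierarchical_average(groups: Dict[str, List[List[float]]]) -> List[float]:
    """First average within each specialized group, then across groups."""
    group_models = [average(agent_models) for agent_models in groups.values()]
    return average(group_models)

# Hypothetical agents grouped by specialization.
groups = {
    "vision_group": [[1.0, 2.0], [3.0, 2.0]],
    "language_group": [[5.0, 0.0]],
}
print(hierarchical_average(groups))  # -> [3.5, 1.0]
```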