Finding the unicorn: Predicting early stage startup success through a
hybrid intelligence method
- URL: http://arxiv.org/abs/2105.03360v1
- Date: Fri, 7 May 2021 16:16:36 GMT
- Title: Finding the unicorn: Predicting early stage startup success through a
hybrid intelligence method
- Authors: Dominik Dellermann, Nikolaus Lipusch, Philipp Ebel, Karl Michael Popp,
and Jan Marco Leimeister
- Abstract summary: We develop a Hybrid Intelligence method to predict the success of startups.
This method combines the strengths of both machine and collective intelligence, and we demonstrate its utility under extreme uncertainty.
- Score: 3.8471013858178424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence is an emerging topic and will soon be able
to make decisions better than humans. In more complex and creative contexts
such as innovation, however, the question remains whether machines are
superior to humans. Machines fail in two kinds of situations: processing and
interpreting soft information (information that cannot be quantified), and
making predictions in unknowable risk situations of extreme uncertainty, in
which the machine lacks representative information about the outcome. Humans,
by contrast, remain the gold standard for assessing soft signals and drawing
on intuition. To predict the success of startups, we therefore combine the
complementary capabilities of humans and machines in a Hybrid Intelligence
method. Following a design science research approach, we develop a Hybrid
Intelligence method that combines the strengths of both machine and collective
intelligence, and we demonstrate its utility for predictions under extreme
uncertainty.
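The abstract describes the method only at a high level; the full text details how machine and collective estimates are elicited and fused. As a rough, hypothetical sketch of the general idea (not the authors' actual procedure), the following Python snippet blends a model's success probability with aggregated human judgments, deferring more to the crowd when judges disagree:

```python
# Hypothetical sketch of a Hybrid Intelligence prediction: a machine
# estimate (e.g., from a classifier trained on structured startup data)
# is blended with a collective estimate aggregated from human judges.
# The weighting rule below is illustrative, not the paper's method.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class HybridPrediction:
    machine_p: float      # model's success probability from hard data
    collective_p: float   # aggregated human judgment on soft signals
    combined_p: float     # final blended estimate

def combine(machine_p: float, human_votes: list[float],
            machine_weight: float = 0.5) -> HybridPrediction:
    """Blend a machine probability with averaged human judgments.

    Hypothetical rule: when human judges disagree strongly (high vote
    dispersion), the machine weight is reduced, deferring to collective
    intuition on soft information the model cannot quantify.
    """
    collective_p = mean(human_votes)
    disagreement = pstdev(human_votes)           # 0.0 = full consensus
    w = machine_weight * (1.0 - min(disagreement, 1.0))
    combined = w * machine_p + (1.0 - w) * collective_p
    return HybridPrediction(machine_p, collective_p, combined)

# Example: the model is lukewarm; the expert crowd is optimistic but split.
print(combine(0.35, [0.8, 0.6, 0.9, 0.4]))
```

The weighting rule here is an arbitrary illustration; any scheme that lets human intuition dominate where quantified data is unrepresentative would fit the same pattern.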
Related papers
- Probabilistic Artificial Intelligence [42.59649764999974]
A key aspect of intelligence is not only to make predictions, but to reason about the uncertainty in these predictions and to consider this uncertainty when making decisions.
We discuss the distinction between "epistemic" uncertainty, due to lack of data, and "aleatoric" uncertainty, which is irreducible and stems, e.g., from noisy observations and outcomes (a toy sketch of this decomposition appears after this list).
arXiv Detail & Related papers (2025-02-07T14:29:07Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI [6.8894258727040665]
We explore the interplay between human and machine intelligence, focusing on the crucial role humans play in developing intelligent systems.
We propose future perspectives, capitalizing on the advantages of symbiotic designs to suggest a human-centered direction for next-generation developments.
arXiv Detail & Related papers (2024-09-24T12:02:20Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - Cognitive Architecture for Co-Evolutionary Hybrid Intelligence [0.17767466724342065]
The paper questions the feasibility of a strong (general) data-centric artificial intelligence (AI).
As an alternative, the concept of co-evolutionary hybrid intelligence is proposed.
An architecture that seamlessly incorporates a human into the loop of intelligent problem solving is considered.
arXiv Detail & Related papers (2022-09-05T08:26:16Z) - Co-evolutionary hybrid intelligence [0.3007949058551534]
The current approach to the development of intelligent systems is data-centric.
The article discusses an alternative approach to the development of artificial intelligence systems based on human-machine hybridization and their co-evolution.
arXiv Detail & Related papers (2021-12-09T08:14:56Z) - Hybrid Intelligence [4.508830262248694]
We argue that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence.
This concept aims at using the complementary strengths of human intelligence and AI, so that they can perform better than each of the two could separately.
arXiv Detail & Related papers (2021-05-03T08:56:09Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The aim in clarifying these particular principles is that they could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of
AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.