Could AI be the Great Filter? What Astrobiology can Teach the
Intelligence Community about Anthropogenic Risks
- URL: http://arxiv.org/abs/2305.05653v1
- Date: Tue, 9 May 2023 17:50:02 GMT
- Title: Could AI be the Great Filter? What Astrobiology can Teach the
Intelligence Community about Anthropogenic Risks
- Authors: Mark M. Bailey
- Abstract summary: The Fermi Paradox is the disquieting idea that, if extraterrestrial life is probable in the Universe, then why have we not encountered it?
One intriguing hypothesis is known as the Great Filter, which suggests that some event required for the emergence of intelligent life is extremely unlikely, hence the cosmic silence.
From an intelligence perspective, framing global catastrophic risk within the context of the Great Filter can provide insight into the long-term futures of technologies that we don't fully understand, like artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Where is everybody? This phrase distills the foreboding of what has come to
be known as the Fermi Paradox - the disquieting idea that, if extraterrestrial
life is probable in the Universe, then why have we not encountered it? This
conundrum has puzzled scholars for decades, and many hypotheses have been
proposed suggesting both naturalistic and sociological explanations. One
intriguing hypothesis is known as the Great Filter, which suggests that some
event required for the emergence of intelligent life is extremely unlikely,
hence the cosmic silence. A logically equivalent version of this hypothesis -
and one that should give us pause - suggests that some catastrophic event is
likely to occur that prevents life's expansion throughout the cosmos. This
could be a naturally occurring event, or more disconcertingly, something that
intelligent beings do to themselves that leads to their own extinction. From an
intelligence perspective, framing global catastrophic risk (particularly risks
of anthropogenic origin) within the context of the Great Filter can provide
insight into the long-term futures of technologies that we don't fully
understand, like artificial intelligence. For the intelligence professional
concerned with global catastrophic risk, this has significant implications for
how these risks ought to be prioritized.
Related papers
- Can a Bayesian Oracle Prevent Harm from an Agent? [48.12936383352277]
We consider estimating a context-dependent bound on the probability of violating a given safety specification.
Noting that different plausible hypotheses about the world could produce very different outcomes, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis.
We consider two forms of this result, in the iid case and in the non-iid case, and conclude with open problems towards turning such results into practical AI guardrails.
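As a rough intuition for what a context-dependent guardrail of this kind could look like, here is a generic "cautious over plausible hypotheses" sketch: it takes a posterior over candidate world-models and reports the worst-case violation probability among the sufficiently plausible ones. This is an illustration of the general idea only, not the bound derived in the paper; all names, thresholds, and numbers below are invented.

```python
# Generic conservative-guardrail sketch (illustrative only, not the paper's bound).
# Assumption: we already have a posterior over candidate world-models and each
# model's predicted probability that a proposed action violates the safety spec.

from typing import Dict

def conservative_violation_bound(
    posterior: Dict[str, float],        # hypothesis -> posterior weight (sums to 1)
    violation_prob: Dict[str, float],   # hypothesis -> P(safety violation | action)
    plausibility_cutoff: float = 0.05,  # ignore hypotheses with negligible weight
) -> float:
    """Worst-case violation probability among sufficiently plausible hypotheses."""
    plausible = [h for h, w in posterior.items() if w >= plausibility_cutoff]
    return max(violation_prob[h] for h in plausible)

# Example: reject the action if the conservative bound is too high.
posterior = {"benign_world": 0.70, "fragile_world": 0.25, "weird_world": 0.05}
violation = {"benign_world": 0.01, "fragile_world": 0.30, "weird_world": 0.90}
print(f"conservative bound = {conservative_violation_bound(posterior, violation):.2f}")
```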
arXiv Detail & Related papers (2024-08-09T18:10:42Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same task described in different ways.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, that human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Extinction Risks from AI: Invisible to Science? [0.0]
Extinction-level Goodhart's Law is the claim that "Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity."
This raises the possibility that whether the risk of extinction from artificial intelligence is real or not, the underlying dynamics might be invisible to current scientific methods.
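To make the flavour of this claim concrete, below is a minimal toy sketch of the general Goodhart dynamic: a proxy objective that keeps improving while the true objective eventually collapses. The functions and numbers are invented purely for illustration and are not taken from the paper.

```python
# Toy Goodhart-style divergence (illustrative sketch, not the paper's formalism).
# Assumption: the "true" and "proxy" objective functions below are invented.

def true_value(x: float) -> float:
    # True objective: benefits saturate and then reverse at high intensity.
    return x - 0.1 * x ** 2

def proxy_value(x: float) -> float:
    # Proxy goal specification: keeps rewarding ever more intensity.
    return x

x = 0.0
for t in range(31):
    if t % 10 == 0:
        print(f"t={t:2d}  x={x:5.1f}  proxy={proxy_value(x):6.1f}  true={true_value(x):7.2f}")
    x += 0.5  # naive policy: push the proxy objective harder every step
```

Pushed far enough, the proxy score keeps climbing while the true value turns negative, which is the qualitative pattern the "extinction-level" version extrapolates to its limit.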
arXiv Detail & Related papers (2024-02-02T23:04:13Z)
- Two Types of AI Existential Risk: Decisive and Accumulative [3.5051464966389116]
This paper contrasts the conventional "decisive AI x-risk hypothesis" with an "accumulative AI x-risk hypothesis".
The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly converge, undermining resilience until a triggering event results in irreversible collapse.
arXiv Detail & Related papers (2024-01-15T17:06:02Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- A Measure-Theoretic Axiomatisation of Causality [55.6970314129444]
We argue in favour of taking Kolmogorov's measure-theoretic axiomatisation of probability as the starting point towards an axiomatisation of causality.
Our proposed framework is rigorously grounded in measure theory, but it also sheds light on long-standing limitations of existing frameworks.
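For context, the axiomatisation being referenced is the standard one below; this is a textbook statement included as a reminder, not material from the paper itself. A probability space is a triple $(\Omega, \mathcal{F}, P)$ with sample space $\Omega$, a $\sigma$-algebra $\mathcal{F}$ of subsets of $\Omega$, and a measure $P : \mathcal{F} \to [0,1]$ satisfying Kolmogorov's axioms:

\begin{align*}
&\text{(K1) Non-negativity:} && P(A) \ge 0 \quad \text{for all } A \in \mathcal{F},\\
&\text{(K2) Normalisation:} && P(\Omega) = 1,\\
&\text{(K3) Countable additivity:} && P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_i \in \mathcal{F}.
\end{align*}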
arXiv Detail & Related papers (2023-05-19T13:15:48Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- Life in a random universe: Sciama's argument reconsidered [5.15018725021934]
We show that a random universe can masquerade as 'intelligently designed,' with the fundamental constants instead appearing to be fine-tuned to achieve the highest probability for life to occur.
For our universe, this mechanism may only require there to be around a dozen currently unknown fundamental constants.
arXiv Detail & Related papers (2021-09-10T23:15:31Z)
- On the Unimportance of Superintelligence [0.0]
I analyze the priority for allocating resources to mitigate the risk of superintelligences.
Part I observes that a superintelligence unconnected to the outside world carries no threat.
Part II proposes that biotechnology ranks high in risk among peripheral systems.
arXiv Detail & Related papers (2021-08-30T01:23:25Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Intelligence Primer [0.0]
Intelligence is a fundamental part of all living things, as well as the foundation for Artificial Intelligence.
In this primer we explore the ideas associated with intelligence and, by doing so, understand the implications and constraints.
We call this a Life, the Universe, and Everything primer, after the famous science fiction book by Douglas Adams.
arXiv Detail & Related papers (2020-08-13T15:47:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.