
Why Artificial Intelligence is Not an Existential Threat

OVER THE YEARS EXISTENTIAL THREAT WARNINGS have been sounded for global thermonuclear war, overpopulation, ecological destruction, species extinction, exhaustion of natural resources, global pandemics, biological weapons, asteroid strikes, ISIS and Islamism, nanotechnology, global warming, and even Vladimir Putin and Donald Trump. The modifier “existential” is usually meant to convey a threat to the survival of our country, civilization, or species. Here I will focus on fears about runaway Artificial Intelligence (AI). These concerns go beyond the Golem, Frankenstein’s monster, or Hollywood’s Skynet and Matrix, and yet they are still permutations of one of the oldest myths in history: the perils of humans playing God with their technologies and having matters get out of hand.

Before we consider the AI doomsayers, however, let’s recognize that not all AI experts are so pessimistic. In fact, most AI scientists are neither utopian nor dystopian, and instead spend most of their time thinking of ways to make our machines incrementally smarter and our lives gradually better. Think of cars becoming smart cars and, soon, fully autonomous vehicles. Each model is just another step toward making moving our atoms around the world safer and simpler. Then there are the AI Utopians, most notably represented by Ray Kurzweil in his book The Singularity Is Near, in which he demonstrates what he calls “the law of accelerating returns”: not just that change is accelerating, but that the rate of change is accelerating. This is Moore’s Law (the doubling of computer power roughly every two years since the 1960s) on steroids and applied to all science and technology. This has led the world to change more in the past century than it did in the previous 1,000 centuries. As we approach the Singularity, says Kurzweil, the world will change more in a decade than in 1,000 centuries, and as the acceleration continues and we reach the Singularity, the world will change more in a year than in all pre-Singularity history. Singularitarians project a future in which benevolent computers, robots, and replicators produce limitless prosperity, end poverty and hunger, conquer disease and death, achieve immortality, colonize the galaxy, and eventually spread throughout the universe by reaching the so-called Omega Point, where we/they become omniscient, omnipotent, and omnibenevolent deities.1
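Kurzweil’s claim is, at bottom, arithmetic: steady doubling compounds into staggering growth factors. The minimal Python sketch below illustrates the compounding; the two-year doubling period is an assumption chosen for illustration, not a figure taken from Kurzweil or this article.

    # Minimal sketch of Moore's-Law-style compounding.
    # ASSUMPTION: a two-year doubling period, chosen purely for illustration.
    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(years: float) -> float:
        """Total multiplicative growth after `years` of steady doubling."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for span in (10, 30, 60):
        print(f"After {span} years: x{growth_factor(span):,.0f}")
    # After 10 years: x32
    # After 30 years: x32,768
    # After 60 years: x1,073,741,824

Sixty years of steady doubling yields a factor of about a billion, which is why the assumed doubling period dominates every other detail in projections of this kind.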

By contrast, AI Dystopians envision a future in which: (1) amoral AI continues on its path of increasing intelligence to a tipping point beyond which its intelligence is so far beyond ours that we cannot stop it from inadvertently destroying us, or (2) malevolent computers and robots take us over, making us their slaves or servants, or driving us into extinction through techno-genocide.2 Computer scientist Stuart Russell of the University of California, Berkeley, a researcher at Cambridge University’s Centre for the Study of Existential Risk, for example, compares the growth of AI to the development of nuclear weapons: “From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy. The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”3

The go-to guy on the possible risks of AI is computer scientist Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), who asks: “How likely is it that Artificial Intelligence will cross all the vast gap from amoeba to village idiot, and then stop at the level of human genius?” He answers his own rhetorical question thus: “It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”4 It is hard to conceive how much smarter than a human a computer would be if it could do a thousand years of thinking in the equivalent of a single human day.
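Yudkowsky’s figures are easy to verify with a back-of-the-envelope calculation. The short Python sketch below simply redoes the arithmetic; the million-fold speedup is the figure he posits in the quote above.

    # Back-of-the-envelope check of Yudkowsky's speed-up arithmetic.
    SPEEDUP = 1_000_000                      # the million-fold factor he posits
    SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds

    # Physical time that elapses during one subjective year of thought:
    seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
    print(f"One subjective year = {seconds_per_subjective_year:.1f} physical seconds")

    # Physical time that elapses during a subjective millennium:
    hours_per_subjective_millennium = 1000 * SECONDS_PER_YEAR / SPEEDUP / 3600
    print(f"One subjective millennium = {hours_per_subjective_millennium:.2f} physical hours")

    # Output:
    # One subjective year = 31.6 physical seconds
    # One subjective millennium = 8.77 physical hours

Both of his quoted figures check out to within rounding.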
