Why Artificial Intelligence is Not an Existential Threat

OVER THE YEARS EXISTENTIAL THREAT WARNINGS have been sounded for global thermonuclear war, overpopulation, ecological destruction, species extinction, exhaustion of natural resources, global pandemics, biological weapons, asteroid strikes, ISIS and Islamism, nanotechnology, global warming, and even Vladimir Putin and Donald Trump. The modifier “existential” is usually meant to convey a threat to the survival of our country, civilization, or species. Here I will focus on fears about runaway Artificial Intelligence (AI). These concerns go beyond the Golem, Frankenstein’s monster, or Hollywood’s Skynet and Matrix, and yet they are still permutations on one of the oldest myths in history—the perils of humans playing God with their technologies, only to have matters get out of hand for the worse.

Before we consider the AI doomsayers, however, let’s recognize that not all AI experts are so pessimistic. In fact, most AI scientists are neither utopian nor dystopian, and instead spend most of their time thinking of ways to make our machines incrementally smarter and our lives gradually better. Think of cars becoming smart cars and, soon, fully autonomous vehicles. Each model is just another step toward making moving our atoms around the world safer and simpler. Then there are the AI Utopians, most notably represented by Ray Kurzweil in his book The Singularity is Near, in which he demonstrates what he calls “the law of accelerating returns”—not just that change is accelerating, but that the rate of change is accelerating. This is Moore’s Law—the doubling rate of computer power since the 1960s—on steroids and applied to all science and technology. This has led the world to change more in the past century than it did in the previous 1000 centuries. As we approach the Singularity, says Kurzweil, the world will change more in a decade than in 1000 centuries, and as the acceleration continues and we reach the Singularity the world will change more in a year than in all pre-Singularity history. Singularitarians project a future in which benevolent computers, robots, and replicators produce limitless prosperity, end poverty and hunger, conquer disease and death, achieve immortality, colonize the galaxy, and eventually even spread throughout the universe by reaching the so-called Omega point where we/they become omniscient, omnipotent, and omnibenevolent deities.1

By contrast, AI Dystopians envision a future in which: (1) amoral AI continues on its path of increasing intelligence to a tipping point beyond which its intelligence will be so far beyond ours that we can’t stop it from inadvertently destroying us, or (2) malevolent computers and robots take us over, making us their slaves or servants, or driving us into extinction through techno-genocide.2 The University of California, Berkeley computer scientist Stuart Russell, an advisor to Cambridge University’s Centre for the Study of Existential Risk, for example, compares the growth of AI to the development of nuclear weapons: “From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy. The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”3

The go-to guy on the possible risks of AI is computer scientist Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), who asks: “How likely is it that Artificial Intelligence will cross all the vast gap from amoeba to village idiot, and then stop at the level of human genius?” He answers his rhetorical question thus: “It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”4 A computer that could do a thousand years of thinking in the equivalent of a human’s day would be smarter than us to a degree that is literally inconceivable.
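Yudkowsky’s arithmetic checks out: a year contains about 31.5 million seconds, so a million-fold speedup compresses a subjective year into roughly 31 physical seconds. A minimal sanity check in Python, assuming only the million-fold factor from the quote above:

    # Sanity-check the figures for a mind running a million times faster.
    speedup = 1_000_000                    # hypothetical speed advantage from the quote
    seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds in a year

    # Physical seconds that elapse per subjective year of thinking
    print(seconds_per_year / speedup)      # ~31.6 seconds, matching "31 physical seconds"

    # Physical hours that elapse per subjective millennium
    print(1000 * seconds_per_year / speedup / 3600)  # ~8.8 hours, close to "eight-and-a-half hours"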

In this scenario, it is not that AI is evil so much as it is amoral. It just doesn’t care about humans, or about anything else for that matter. “The unFriendly AI has the ability to repattern all matter in the solar system according to its optimization target,” Yudkowsky notes. “This is fatal for us if the AI does not choose specifically according to the criterion of how this transformation affects existing patterns such as biology and people.” The paradigmatic example was proposed as a thought experiment by the Oxford University philosopher Nick Bostrom: the “paperclip maximizer.” This is an AI machine designed to make paperclips that apparently doesn’t have an off switch. After running through its initial supply of raw materials to make paperclips, it simply utilizes any available atoms that happen to be within its reach, including humans. From there, it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.”5 Before long the entire universe is made up of paperclips and paperclip makers.

Bostrom is also the Director of the Future of Humanity Institute, and in his book Superintelligence he outlines his concerns about humanity’s future if an Artificial Superintelligence (ASI) takes a “treacherous turn” toward an “existential catastrophe as the default outcome of an intelligence explosion.” He begins by defining an existential risk as “one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development.” We blithely go on making smarter and smarter AIs because they make our lives better, and thus the Cassandras are out-voiced by the Pollyannas, because AI manufacturers (and their lobbyists) stand to lose if the reins are pulled too hard. And so the checks-and-balances programs that should be built into an ASI (such as a way to turn it off) are not in place by the time it reaches the treacherous turn, when “smarter is more dangerous.” Bostrom suggests what might then happen:
