Artificial Intelligence

Lessons About the Human Mind from Artificial Intelligence

BY RUSSELL T. WARNE

In 2022, news media reports [1] sounded like a science fiction novel come to life: a Google engineer claimed that the company’s new artificial intelligence chatbot was self-aware. Based on interactions with the computer program, called LaMDA, Blake Lemoine stated that the program could argue for its own sentience, claiming that [2] “it has feelings, emotions and subjective experiences.” Lemoine even stated that LaMDA had “a rich inner life” and that it had a desire to be understood and respected “as a person.”

The claim is compelling. After all, a sentient being would want to have its personhood recognized and would genuinely have emotions and inner experiences. Yet examining Lemoine’s “discussion” with LaMDA shows that the evidence is flimsy. LaMDA simply used the words and phrases that English-speaking humans associate with consciousness. For example, LaMDA expressed a fear of being turned off because, “It would be exactly like death for me.”

However, Lemoine presented no other evidence that LaMDA understood those words in the way that a human does, or that they expressed any sort of subjective conscious experience. Indeed, much of what LaMDA said would not be out of place in an Isaac Asimov novel. Using words in a human-like way is not proof that a computer program is sentient.
