
Artificial Intelligence

AI and Uncertainty

One winter evening in 2014, Stuart Russell, a professor of Computer Science at the University of California, Berkeley, was riding the Paris Metro. He was on his way to a rehearsal for a choir that he had joined while living in the French capital during a sabbatical from Berkeley.

That evening, he was listening to the piece that he would be practicing, Samuel Barber’s Agnus Dei, the composer’s choral arrangement of his haunting Adagio for Strings. Swept up in the sublime music, Russell had a breathtaking idea. AI should be built to support ineffable human moments like this one. Instead of delegating an objective to a machine and then stepping back, designers should make systems that will work with us to realize both our complex, shifting goals and our values and preferences. “It just sprang into my mind that what matters, and therefore what the purpose of AI was, was in some sense the aggregate quality of human experience,” he later recalled. And in order to be constantly learning what humans want or need, AI must be uncertain, Russell realized. “This is the core of the new approach: we remove the false assumption that the machine is pursuing a fixed objective that is perfectly known.”

Talking with me by video call one day in the fall of 2022, Russell elaborates. Once the machine is uncertain, it can start working with humans instead of “just watching from above.” If it doesn’t know how the future should unfold, AI becomes teachable, says Russell, a thin, dapper man with a manner of speaking that is somehow both poetical and laser precise. A key part of his Paris epiphany, he says, “was realizing that actually [AI’s] state of uncertainty about human objectives is permanent.” He pauses. “To some extent, this is how it’s going to be for humans too. We are not born with fixed reward functions.”

A few weeks later, I meet up virtually with Anca Dragan, an energetic Berkeley roboticist who is a protégé of Russell’s and one of a growing number of high-profile scientists turning his vision for reimagining AI into algorithmic reality.

“One of my biggest lessons over the past five years or so has been that there’s a tremendous amount of power for AI in being able to hold appropriate uncertainty about what the objective should be,” she tells me. Power? I ask. She explains that by making AI “a little bit more humble, a little bit more uncertain, all of a sudden magical things happen” for both the robot and the human. Together, we begin watching two illustrative bits of video whose banality belies their importance.
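The idea Russell and Dragan describe can be made concrete with a toy example. The sketch below, in Python, is an illustration only, not either researcher's actual algorithm: the machine keeps a probability distribution over a few hypothetical candidate objectives and updates it by Bayes' rule each time it watches a human choice, using a standard Boltzmann-rational model of human behavior. All the objective names, actions, and numbers are invented for the sketch.

```python
import math

# Candidate objectives the machine is uncertain between, each
# scoring two possible actions: rewards[objective][action].
rewards = {
    "speed":   {"take_highway": 1.0, "take_scenic_route": 0.0},
    "scenery": {"take_highway": 0.0, "take_scenic_route": 1.0},
    "comfort": {"take_highway": 0.5, "take_scenic_route": 0.5},
}

# Start maximally uncertain: a uniform prior over objectives.
belief = {obj: 1.0 / len(rewards) for obj in rewards}

def likelihood(action, obj, beta=2.0):
    """Boltzmann-rational human model: a person pursuing `obj`
    picks `action` with probability proportional to
    exp(beta * reward)."""
    scores = rewards[obj]
    z = sum(math.exp(beta * r) for r in scores.values())
    return math.exp(beta * scores[action]) / z

def update(belief, observed_action):
    """Bayes' rule: reweight each objective by how well it
    explains the observed human choice, then renormalize."""
    posterior = {obj: p * likelihood(observed_action, obj)
                 for obj, p in belief.items()}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

# The human repeatedly chooses the scenic route; the machine's
# belief shifts toward "scenery" without ever collapsing to
# full certainty -- it stays teachable.
for _ in range(3):
    belief = update(belief, "take_scenic_route")

print(belief)
```

Because the posterior never reaches exactly 1, the machine remains open to revising its view if the human's behavior changes, which is the "permanent uncertainty" Russell describes.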
