10 MIN READ
Artificial Intelligence

Can We Trust AI to Make Decisions?

BY URS GASSER & VIKTOR MAYER-SCHÖNBERGER

Machine-based decision-making is a striking vision of the future: humanity, hampered by its own cognitive distortions, tries to improve its lot by outsourcing its decisions to adaptive machines, a kind of mental prosthetic.

For most of the twentieth century, artificial intelligence was based on representing explicit sets of rules in software and having the computer “reason” based on these rules—the machine’s “intelligence” involved applying the rules to a particular situation. Because the rules were explicit, the machine could also “explain” its reasoning by listing the rules that prompted its decision. Although AI had the ring of going beyond the obvious in reasoning and decision-making, traditional AI depended on our ability to make all relevant rules explicit and to translate them into some machine-digestible representation. It was transparent and explainable, but it was also static—in this way, it did not differ fundamentally from other decisional guardrails such as standard operating procedures (SOPs) or checklists. Progress on this kind of AI stalled because, in many everyday areas of human activity and decision-making, it is exceptionally hard to make rules explicit.
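The rule-based style of AI described above can be sketched in a few lines. This is an illustrative toy, not any system from the article: the medical-triage rule names and thresholds are invented. The key property it demonstrates is explainability, since the system can list exactly which explicit rules fired to produce its decision.

```python
# Minimal sketch of rule-based ("explicit rules") AI.
# Rules and the triage scenario are invented for illustration.

def decide(facts, rules):
    """Apply each rule whose condition matches the facts.

    Returns the decision plus the list of fired rules, so the
    system can 'explain' its reasoning by citing those rules.
    """
    fired = []
    decision = None
    for name, condition, outcome in rules:
        if condition(facts):
            fired.append(name)
            decision = outcome  # last matching rule wins in this sketch
    return decision, fired

RULES = [
    ("fever_rule",  lambda f: f["temp_c"] >= 38.0, "refer to doctor"),
    ("normal_rule", lambda f: f["temp_c"] < 38.0,  "no action"),
]

decision, explanation = decide({"temp_c": 39.2}, RULES)
print(decision)     # → refer to doctor
print(explanation)  # → ['fever_rule']
```

The transparency comes for free: every decision is traceable to named rules. The brittleness the authors describe also shows here, because any situation not anticipated by an explicit rule simply cannot be handled.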

In recent decades, however, AI has been used as a label for something quite different. The new kind of AI analyzes training data in sophisticated ways to uncover patterns that represent knowledge implicit in the data. The AI does not turn this hidden knowledge into explicit and comprehensible rules, but instead represents it as a huge and complex set of abstract links and dependencies within a network of nodes, a bit like neurons in a brain. It then “decides” how to respond to new data by applying the patterns from the training data. For example, the training data may consist of medical images of suspected tumors, and information about whether or not they in fact proved to be cancerous. When shown a new image, the AI estimates how likely that image is to be of a cancer. Because the system is learning from training data, the process is referred to as “machine learning.”
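The contrast with the rule-based approach can be sketched with a minimal learning model. This is an assumed, simplified stand-in for the tumor example: a tiny logistic regression trained by gradient descent on invented numeric features (not real medical images). No rule is ever written down; the "knowledge" lives implicitly in the learned weights, which is exactly why such systems resist explanation.

```python
# Minimal sketch of "machine learning": the model fits patterns in
# labeled training data, then scores new inputs. The two features
# (size, irregularity) and all data points are invented stand-ins
# for real image features.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Logistic regression via stochastic gradient descent.

    The returned weights encode the training data's implicit
    patterns; no human-readable rules are produced.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Estimated probability that a new sample is malignant."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Toy training set: (size, irregularity), label 1 = proved cancerous.
data   = [(0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.7)]
labels = [0, 0, 1, 1]

w, b = train(data, labels)
print(predict(w, b, (0.85, 0.8)))  # high: resembles the malignant examples
print(predict(w, b, (0.25, 0.15)))  # low: resembles the benign examples
```

Note what is missing compared with the rule-based sketch: there is no list of fired rules to report, only weights whose individual meaning is opaque, which previews the transparency problem the article goes on to discuss.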
