10 MIN READ
Artificial Intelligence

Can We Trust AI to Make Decisions?

BY URS GASSER & VIKTOR MAYER-SCHÖNBERGER

Machine-based decision-making is a compelling vision for the future: humanity, hampered by its own cognitive distortions, tries to improve its lot by choosing to outsource its decisions to adaptive machines—a kind of mental prosthetic.

For most of the twentieth century, artificial intelligence was based on representing explicit sets of rules in software and having the computer “reason” based on these rules—the machine’s “intelligence” involved applying the rules to a particular situation. Because the rules were explicit, the machine could also “explain” its reasoning by listing the rules that prompted its decision. Even if AI had the ring of going beyond the obvious in reasoning and decision-making, traditional AI depended on our ability to make explicit all relevant rules and to translate them into some machine-digestible representation. It was transparent and explainable, but it was also static—in this way, it did not differ fundamentally from other forms of decisional guardrails such as standard operating procedures (SOPs) or checklists. The progress of this kind of AI stalled because in many everyday areas of human activity and decision-making, it is exceptionally hard to make rules explicit.
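To make that contrast concrete, here is a minimal sketch, in Python, of how such a rule-based system might look. The loan-application scenario, the rule names, and the thresholds are illustrative assumptions rather than anything from the article; the point is only that the decision follows from explicit rules, and the “explanation” is simply the list of rules that fired.

# A hypothetical rule-based decider; the rules and thresholds are illustrative assumptions.
RULES = [
    ("income below 20000", lambda applicant: applicant["income"] < 20000),
    ("existing default on record", lambda applicant: applicant["has_default"]),
    ("debt ratio above 0.5", lambda applicant: applicant["debt_ratio"] > 0.5),
]

def decide(applicant):
    # Apply every explicit rule; any rule that fires argues for rejection.
    fired = [name for name, test in RULES if test(applicant)]
    decision = "reject" if fired else "approve"
    # The explanation is nothing more than the list of rules that were triggered.
    return decision, fired

decision, reasons = decide({"income": 15000, "has_default": False, "debt_ratio": 0.3})
print(decision, reasons)  # reject ['income below 20000']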

In recent decades, however, AI has been used as a label for something quite different. The new kind of AI analyzes training data in sophisticated ways to uncover patterns that represent knowledge implicit in the data. The AI does not turn this hidden knowledge into explicit and comprehensible rules, but instead represents it as a huge and complex set of abstract links and dependencies within a network of nodes, a bit like neurons in a brain. It then “decides” how to respond to new data by applying the patterns from the training data. For example, the training data may consist of medical images of suspected tumors, along with information about whether or not they in fact proved to be cancerous. When shown a new image, the AI estimates how likely it is that the image shows a cancer. Because the system is learning from training data, the process is referred to as “machine learning.”
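The sketch below, again in Python and using the scikit-learn library, shows this machine-learning approach in miniature. The two numeric features standing in for a medical image (say, lesion size and a texture score) and the handful of hand-made training cases are illustrative assumptions; a real system would learn from large collections of labeled images. What matters is that no rule is ever written down: the model is fitted to labeled examples and then returns an estimated probability for a new case.

# A minimal machine-learning sketch; the features and tiny training set are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Training data: each row stands in for a suspected tumor; the label says whether
# it in fact proved to be cancerous (1) or not (0).
X_train = [[2.1, 0.3], [1.8, 0.2], [6.5, 0.9], [7.2, 0.8], [3.0, 0.4], [5.9, 0.7]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# For a new, unseen case the model gives no rule-based explanation;
# it returns an estimated probability derived from patterns in the training data.
new_case = [[6.0, 0.6]]
print(model.predict_proba(new_case)[0][1])  # estimated probability of cancer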
