10 MIN READ TIME
Artificial Intelligence

Can We Trust AI to Make Decisions?

BY URS GASSER & VIKTOR MAYER-SCHÖNBERGER

Machine-based decision-making is an intriguing vision for the future: Humanity, hampered by its own cognitive distortions, tries to improve its lot by outsourcing its decisions to adaptive machines—a kind of mental prosthetic.

For most of the twentieth century, artificial intelligence was based on representing explicit sets of rules in software and having the computer “reason” based on these rules—the machine’s “intelligence” involved applying the rules to a particular situation. Because the rules were explicit, the machine could also “explain” its reasoning by listing the rules that prompted its decision. Even if AI had the ring of going beyond the obvious in reasoning and decision-making, traditional AI depended on our ability to make explicit all relevant rules and to translate them into some machine-digestible representation. It was transparent and explainable, but it was also static—in this way, it did not differ fundamentally from other forms of decisional guardrails such as standard operating procedures (SOPs) or checklists. The progress of this kind of AI stalled because in many everyday areas of human activity and decision-making, it is exceptionally hard to make rules explicit.
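To make the idea concrete, here is a minimal sketch, in Python, of the kind of rule-based system described above: explicit, human-written rules are applied to a case, and the system can “explain” its decision by pointing to the rule that fired. The rules and the example case are invented purely for illustration and do not come from the authors.

```python
# Illustrative rule-based decision system (hypothetical rules and data).
# Each rule is a human-readable description, a condition, and an action.
RULES = [
    ("fever above 38C and stiff neck",
     lambda p: p["temp_c"] > 38 and p["stiff_neck"],
     "refer for urgent evaluation"),
    ("fever above 38C",
     lambda p: p["temp_c"] > 38,
     "recommend rest and fluids"),
    ("no fever",
     lambda p: p["temp_c"] <= 38,
     "no action needed"),
]

def decide(case):
    """Apply the rules in order; return the first matching action
    together with the rule that triggered it (the 'explanation')."""
    for description, condition, action in RULES:
        if condition(case):
            return action, description
    return "no rule matched", None

action, reason = decide({"temp_c": 38.7, "stiff_neck": False})
print(f"Decision: {action} (because: {reason})")
# Decision: recommend rest and fluids (because: fever above 38C)
```

Because every decision traces back to an explicit rule, the system is transparent—but it can only ever do what its hand-written rules anticipate.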

In recent decades, however, AI has been used as a label for something quite different. The new kind of AI analyzes training data in sophisticated ways to uncover patterns that represent knowledge implicit in the data. The AI does not turn this hidden knowledge into explicit and comprehensible rules, but instead represents it as a huge and complex set of abstract links and dependencies within a network of nodes, a bit like neurons in a brain. It then “decides” how to respond to new data by applying the patterns from the training data. For example, the training data may consist of medical images of suspected tumors, and information about whether or not they in fact proved to be cancerous. When shown a new image, the AI estimates how likely that image is to be of a cancer. Because the system is learning from training data, the process is referred to as “machine learning.”
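The workflow of such a machine-learning system can be sketched in a few lines. The snippet below is a toy illustration, not the authors’ method: the numbers stand in for features extracted from medical images (say, lesion size and irregularity), and the model is a simple off-the-shelf classifier from scikit-learn.

```python
# Toy machine-learning sketch: the model is given no explicit rules;
# it infers patterns from labelled training examples and then estimates
# a probability for a new, unseen case. All numbers are invented.
from sklearn.linear_model import LogisticRegression

# Training data: each row is one past case; each label records whether
# the suspected tumour in fact proved cancerous (1) or not (0).
X_train = [
    [2.1, 0.3], [1.8, 0.2], [2.5, 0.4],   # benign cases
    [5.0, 0.9], [4.6, 0.8], [5.4, 0.7],   # malignant cases
]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# For a new case, the model outputs an estimated probability, not a rule.
new_case = [[4.1, 0.6]]
prob_cancer = model.predict_proba(new_case)[0][1]
print(f"Estimated probability of cancer: {prob_cancer:.2f}")
```

Unlike the rule-based sketch earlier, nothing here can list the “reasons” for the estimate; the learned pattern lives in the model’s internal parameters rather than in comprehensible rules.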
