The future of AGI
AGI could be revolutionary but where is it going to take us and are we ready?
While there’s much excitement, it’s fair to say there’s also a great deal of apprehension surrounding AGI, even among those working in the field. In 2023, OpenAI set out some of the benefits of AGI, including the ability to aid humans in the discovery of new scientific knowledge. But it admitted: “AGI would also come with serious risk of misuse, drastic accidents and societal disruption.” It’s important, then, that it is rolled out responsibly.
There’s certainly a lot at stake, which is why there are questions surrounding how far we dare let AGI go. Francesca Rossi, an AI researcher at IBM and president of the Association for the Advancement of Artificial Intelligence (AAAI), is unsure whether we really ought to be aiming for human-level intelligence. “AI should support human growth, learning and improvement, not replace us,” she told the journal Nature.
It may be too late for that, given the advances already made. For that reason, steps are being taken to help prevent the technology from falling prey to bad actors, and among those keen to assist is artificial intelligence researcher Dr Ben Goertzel. He is spearheading two projects: OpenCog Hyperon and SingularityNET. The former seeks to build the future of AGI “based on sound ethical principles and democratic decentralised governance”, while the latter offers a decentralised platform for AI and AGI systems aimed at preventing control by a single entity.