THE MACHINE LEARNING REVOLUTION
According to the hype, artificial intelligence will soon be capable of anything. Jeremy Laird examines the true nature of machine learning
IT’S THE NEXT BIG WAVE of computing. So says Intel of
artificial intelligence (AI). If anything, that’s underselling it. According to some evangelists, AI is the key to unlocking almost every major problem humanity faces, from a cure for cancer to limitless clean energy. Less optimistic observers, including philosopher and neuroscientist Sam Harris, see AI as one of the most pressing existential threats to the survival of mankind. Either way, as Dr. Emmett Brown would put it, it’s pretty heavy.
Even a brief analysis of the implications of AI quickly takes on epic proportions. Back at the more practical end of the epistemological scale, getting a grasp on current
commercial implementations of AI can be equally baffling. Machine learning, deep learning, neural networks, Tensor cores—keeping track of the processes and hardware, not to mention the jargon, associated with AI is a full-time job.
So, the long-term impact of AI may be anyone’s guess. But, in the here and now, there are plenty of questions that can at least begin to be addressed. What is AI in practical computing terms? What is it used for today? What kind of hardware is involved and how are AI workflows processed? And does it add up to anything that you as a computing enthusiast should care about? Or is it just a tool for Big Tech to bolster their balance sheets?
BUZZWORDS AND BULLCRAP or the greatest paradigm shift in the history of computing? What, exactly, is artificial intelligence, or AI? According to the hype, AI won’t just radically revolutionize computing. Eventually, it will alter almost every aspect of human life. Right now, however, defining AI and determining how relevant it really is to day-to-day computing is not so easy.
Put another way, we can all agree that when, for instance, the self-driving car is cracked, it’ll have a huge impact on the way we live. But more immediately, when a chip maker bigs up the “AI” abilities of its new CPU or GPU, does that actually mean much beyond the marketing? Whether it’s a graphics card or a smartphone chip, is the addition of “AI” fundamentally different from the usual generational improvements in computing performance?
Taken in its broadest sense, AI is any form of intelligence exhibited by machines. The meaning of “intelligence” obviously poses philosophical problems, but that aside, it’s a pretty straightforward concept. Drill down into the specifics, however, and it all gets a lot more complicated. How do you determine that any given computational process or algorithm qualifies as artificial intelligence?
WHAT DEFINES AI?
One way to define AI is the ability to adapt and improvise. If a given process or algorithm can’t do that to some degree, it’s not AI. Another common theme is the combination of large amounts of data with the absence of explicit programming. In simple terms, AI entails a system with an assigned task or a desired output, and a large set of data to sort through, but the precise parameters under which the data is processed aren’t defined. Instead, the algorithms are designed to spot patterns and statistical relationships, and to learn in a trial-and-error fashion. This is what is known as machine learning, and it’s usually what is meant when the term AI is used in a commercial computing context.
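To make that “no explicit rules” idea concrete, here’s a minimal sketch in Python, assuming the scikit-learn library and an invented toy task (flagging suspicious email from attachment size and link count). The point is that the programmer never writes the decision thresholds; the model infers them from labelled examples.

```python
# Minimal sketch: the programmer supplies examples and labels,
# not the rules. Assumes scikit-learn is installed; the task and
# numbers are made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [attachment size in MB, number of links in the message]
X = [[0.1, 1], [0.2, 0], [5.0, 12], [7.5, 20], [0.3, 2], [6.1, 15]]
y = [0, 0, 1, 1, 0, 1]  # 0 = legitimate, 1 = suspicious (training labels)

model = DecisionTreeClassifier().fit(X, y)

# The learned rule is queried, not hand-coded:
print(model.predict([[4.8, 9]]))  # e.g. [1] -> flagged as suspicious
```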
Nvidia’s new RTX 30-series graphics cards sport third-generation Tensor cores for accelerating AI workloads.
A good example of how this works in practice is natural language processing. A non-AI approach would involve meticulously coding the specific rules, syntax, grammar, and vocabulary of a given language. With machine learning, the algorithmic rules are much less specific and all about pattern spotting, while the system is fed huge amounts of sample data from which patterns eventually emerge.
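A hedged sketch of that contrast, again assuming scikit-learn and a made-up handful of labelled snippets: rather than writing grammar or vocabulary rules, we hand the system sample text and let it find the word patterns that separate the two classes.

```python
# Pattern-spotting instead of hand-coded language rules. Assumes
# scikit-learn; the training sentences and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

samples = [
    "this game runs brilliantly on the new card",
    "great performance and smooth frame rates",
    "constant crashes and terrible stuttering",
    "awful drivers ruined the experience",
]
labels = ["positive", "positive", "negative", "negative"]

# Count word occurrences, then fit a naive Bayes classifier on them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(samples, labels)

# No grammar or syntax was coded; the prediction comes from word
# patterns statistically associated with each label in the samples.
print(model.predict(["smooth and brilliant frame rates"]))  # ['positive']
```

With only four training sentences this is a toy, but scale the same approach up to millions of documents and you have the basic recipe behind commercial natural language systems.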
GPT-3
Generative Pre-trained Transformer 3 (GPT-3), developed by San Francisco-based OpenAI and released in 2020, is just such a machine learning natural language system. It was trained on billions of English-language articles harvested from the web. GPT-3 arrived to much acclaim, with The New York Times declaring it “terrifyingly good” at writing and reading. In fact, GPT-3 was so impressive that Microsoft acquired an exclusive license to use the technology to develop and deliver advanced AI-powered natural language solutions. The first of these is a tool that converts plain text into Microsoft Power Fx code, a programming language used for database queries and derived from Microsoft Excel formulas.
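For the curious, interacting with GPT-3 is as simple as sending it a text prompt. The sketch below assumes OpenAI’s original Python client from the period described here (the legacy `openai` package and its Completion endpoint); the API key, engine name, and prompt are placeholders, and the client’s interface has changed in later versions.

```python
# Sketch of prompting GPT-3 via the legacy openai Python client
# (pre-1.0) that accompanied the model's 2020 release. The key and
# prompt are placeholders, not working values.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI; placeholder here

response = openai.Completion.create(
    engine="davinci",  # the base GPT-3 engine name at launch
    prompt="Write a one-sentence summary of what machine learning is:",
    max_tokens=40,
)

# The model simply continues the prompt; no language rules were hand-coded.
print(response.choices[0].text.strip())
```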