Moore’s Law
MOORE, MOORE, MOORE
It’s not dead. It isn’t slowing down. It’s not even sick. Give it up for Moore’s Law, the computing paradigm that’s still very much alive and kicking. Jeremy Laird investigates how much longer our PCs can keep getting faster
BY SOME ESTIMATES, more than half of the world’s economic growth of the past 50 years has depended on Moore’s Law. What was once an esoteric observation involving transistor density in semiconductors is arguably now the most important economic driving force on the planet. It’s the technological gift that keeps on giving. More transistors. More computing performance. For less money. Year after year.
The net result has been unprecedented global economic growth lasting decades. In other words, Moore’s Law isn’t just about faster CPUs and GPUs every year or the inevitability that the PC you buy today will be hopelessly outclassed tomorrow. It’s about the huge impact that exponential increases in computing power have had on the way we all live.
At least, that was true for about 40 years following the inception of Moore’s Law in the mid-1970s. Over the past decade, however, the assumptions around Moore’s Law have been shaken, some argue shattered. Certainly, some of the biggest players in chip production have struggled to maintain the relentless technological cadence implicit in Moore’s Law. Five or six years ago, the widespread view was that Moore’s Law was well on its way to becoming as dead as the dodo.
Today, however, it’s a bit more complicated than that. The big boys in chip production, including Intel, TSMC, and Samsung, all have aggressive roadmaps involving ever smaller chip nodes that look conspicuously Moore-ish. TSMC in particular has a recent track record of actually delivering. So, is Moore’s Law back on course after a temporary blip? Or are its death throes merely being dragged out a little longer than expected? Time to find out.
TO BEGIN, let’s reflect on what Moore’s Law actually is. It comes down to the disarmingly simple observation that transistor densities in integrated circuits double every two years. In other words, Moore’s Law says that twice as many components are squeezed into a given area of computer chip every couple of years. This is the pace at which the chip industry has progressed for decades, and the result, in hardware terms, has been spectacular, exponential growth in computing complexity and capability.
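To get a feel for how quickly that doubling compounds, here’s a rough back-of-the-envelope sketch in Python. The starting count and time span are illustrative, and because die sizes vary, per-chip transistor counts only loosely track density:

```python
# Rough sketch: transistor count after a number of years of Moore's Law,
# assuming a clean doubling every two years (illustrative only).
def moores_law(start_count, years, doubling_period=2):
    return start_count * 2 ** (years / doubling_period)

# Starting from the 2,250 transistors of Intel's 4004 and running
# 50 years forward, the raw doubling alone predicts tens of billions:
print(f"{moores_law(2_250, 50):,.0f}")  # -> 75,497,472,000
```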
If that seems to imply a halving in cost per transistor over the same time frame, the reality isn’t quite so simple. Certainly, Moore’s Law proved accurate from 1975, when Intel co-founder Gordon Moore (the ‘Moore’ in Moore’s Law) revised his earlier observation from a doubling every year to one every two years. And it held until around 2010, when the first signs appeared that the wheels might be coming off one of the most remarkable runs in engineering and economic history.
To illustrate that with some numbers, way back in 1971, Intel’s first microprocessor, the 4004, packed 2,250 transistors, which was impressive for the day. Intel’s latest desktop chip, Alder Lake, contains over 20 billion transistors. Of course, you’d expect today’s CPUs to be dramatically more complex than that 50-year-old processor, so try this for a more recent and arguably more brain-bending comparison. The Intel 486 CPU of 1989 boasted 1.2 million transistors. Roughly speaking, that means 20,000 486s would fit inside the die space of a current Intel Core i9-12900K, were the 486 built on the same 10nm production node as the new chip. That’s a staggering notion for anyone who can remember a 486 when it was the latest and greatest desktop powerhouse.
Or how about the first “Willamette” Pentium 4, launched just over a decade after the 486 in 2000? That was a 42-million-transistor chip. Using the same rationale, you’re looking at around 500 of those to rack up the same transistor count as a 12900K. Even a fairly modern Core i7 processor from 2010 would fit inside an Alder Lake die 30 times over, again if it were produced on Intel’s latest node.
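Those ratios are easy to sanity-check. Here’s a minimal sketch using the round figures above, assuming the older designs are shrunk to the 12900K’s node so that transistor count stands in for die space:

```python
# Sanity-check the comparisons above using approximate figures.
# Assumes the older designs are shrunk to the 12900K's node, so
# transistor count stands in for die area.
ALDER_LAKE_TRANSISTORS = 20e9  # "over 20 billion"

older_chips = {
    "Intel 486 (1989)": 1.2e6,              # 1.2 million transistors
    "Pentium 4 'Willamette' (2000)": 42e6,  # 42 million transistors
}

for name, transistors in older_chips.items():
    ratio = ALDER_LAKE_TRANSISTORS / transistors
    print(f"{name}: ~{ratio:,.0f} copies per Core i9-12900K die")
# -> ~16,667 for the 486 (call it 20,000) and ~476 for the P4 (~500)
```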