The Continuing Evolution of Moore’s Law

More than half a century on, Moore’s Law continues to shape the tech landscape. We asked Intel CTO Mike Mayberry to share his view.

By Mike Mayberry, Intel Chief Technology Officer

Moore’s Law is dead – Long live Moore’s Law! This was the essence of the debate at the Defense Advanced Research Projects Agency’s (DARPA) Electronics Resurgence Initiative (ERI) Summit this year.

But to understand the debate, we need to agree on what is meant by Moore’s Law.


Gordon Moore’s original observation in 1965 was that as you packed more functions into an integrated circuit, the cost per function decreased. This observation is fundamentally about economics, and its essence has remained constant even as the underlying technology and the rate of improvement have evolved.
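To see why packing more functions lowers cost per function only up to a point, consider a toy yield-and-cost model in the spirit of Moore’s 1965 argument. The parameters here (wafer_cost, dies_per_wafer, defect_rate) are illustrative assumptions, not process data; the point is only the shape of the curve.

```python
# A toy cost model in the spirit of Moore's 1965 argument. All numbers
# (wafer cost, dies per wafer, defect rate) are illustrative assumptions.

def cost_per_function(functions, wafer_cost=5000.0, dies_per_wafer=200,
                      defect_rate=1e-5):
    """Cost per function on a die with the given function count.

    Each added function is one more chance for a fatal defect, so yield
    falls as integration rises. Cost per function therefore drops with
    integration at first, then climbs once yield loss dominates.
    """
    die_yield = (1.0 - defect_rate) ** functions  # crude per-function yield model
    die_cost = wafer_cost / dies_per_wafer
    return die_cost / (functions * die_yield)

# Sweep integration levels to see the cost minimum Moore described.
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} functions -> ${cost_per_function(n):.6f} per function")
```

Lowering defect_rate, that is, improving fabrication, moves the cost minimum to higher integration levels, which is how the economic curve keeps shifting node after node.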

Moore did not make an observation about performance. For that we turn to Robert Dennard and Fred Pollack. Dennard observed in 1974 that if you scaled a transistor’s features and voltage down together at the right rate, size, frequency, and power all improved at once, with power density holding constant. And Pollack of Intel observed that doubling the complexity of a microprocessor yielded roughly a square root of two (about 1.4x) increase in performance.
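Both rules are easy to state as formulas. Below is a minimal sketch of the textbook forms: constant-field Dennard scaling with linear scaling factor k, and Pollack’s square-root rule. These are the standard relations from the literature, not Intel-specific numbers.

```python
import math

def dennard_scaling(k):
    """Textbook Dennard scaling: dimensions and voltage both shrink by 1/k."""
    return {
        "area_per_transistor": 1 / k**2,   # more transistors per mm^2
        "frequency": k,                    # shorter channels switch faster
        "power_per_transistor": 1 / k**2,  # lower capacitance and voltage
        "power_density": 1.0,              # unchanged -- the key promise
    }

def pollack_speedup(complexity_ratio):
    """Pollack's rule: performance grows as the square root of complexity."""
    return math.sqrt(complexity_ratio)

print(dennard_scaling(1.4))    # a classic ~0.7x linear shrink (k ~= 1.4)
print(pollack_speedup(2.0))    # doubling a core's transistors -> ~1.41x
```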

With all of this we can construct a user value triangle relating price, integration, and performance.

So, what do people mean when they say Moore’s Law is dead? Often they mean that one or more sides of the triangle have stalled, and not specifically Moore’s (economic) side.

First, when people lament that CPU core frequencies are no longer scaling as they did in the ‘90s, they are implicitly referring to Dennard scaling. We never perfectly followed Dennard, but we came closest in the ‘90s.

Second, people may lament that PCs don’t feel faster. That lament maps to Pollack’s observation, which concerns the CPU alone and does not consider the problems of being network-bound or memory-bound. Most architectures today are memory-bound, and many everyday tasks are network-bound. Building only a faster CPU, without addressing memory or network constraints, yields only incremental gains.
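The “incremental gains” point is, in effect, Amdahl’s law. A minimal sketch, assuming for illustration a workload that is 40% CPU-bound:

```python
def overall_speedup(cpu_fraction, cpu_speedup):
    """Amdahl's-law bound: only the CPU-bound share of runtime benefits."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

# Illustrative assumption: 40% of the task is CPU-bound, the rest waits
# on memory or the network. Doubling CPU speed helps only a little:
print(overall_speedup(0.4, 2.0))   # ~1.25x, not 2x
# Even an infinitely fast CPU is capped by the non-CPU share:
print(overall_speedup(0.4, 1e9))   # ceiling of ~1.67x
```

The hard ceiling in the second call is why attacking memory and network bottlenecks matters as much as raw CPU speed.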

The third thread of the debate, though, is not captured by the triangle; it is economic: the rising cost of leading-edge design. There are those who cannot afford to participate in a new node because it is too expensive, which can lead them to claim, “we didn’t need it anyway.”

Elements of this debate have been going on since the early 2000s. Meanwhile, technologists ignore the debate and keep making progress. Here is one example of ten years of progress: a custom-built system the size of a microwave oven reduced to the size of a large paperback. And the new system outperforms the old! This creates economic value through integration, the essence of Moore’s Law, along with continued power-performance scaling despite the end of Dennard scaling.

CMOS scaling is not yet done, and we can see continued progress as we improve our ability to control fabrication. We are not so much limited by the physics as by our ability to fabricate in high volumes with high precision. It is difficult, but we expect it to continue.

We moved to 3D starting with Tri-Gate (FinFET) transistors at the 22 nm node, but an even better example is our announcement in May of a 96-layer, 4-bit-per-cell NAND flash that packs up to 1 terabit of information per die. This is a true post-Dennard example of packing more functions into a die without feature scaling. Over time, we expect logic to also move more toward 3D.
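The arithmetic behind that density is worth spelling out. The sketch below uses only the figures in the paragraph (96 layers, 4 bits per cell, 1 terabit); reading “terabit” as 2^40 bits is my rounding assumption.

```python
# Back-of-the-envelope arithmetic for the 96-layer, 4-bit-per-cell NAND.
# The layer count and bits per cell come from the announcement; treating
# "1 terabit" as 2**40 bits is an assumption made for round numbers.

capacity_bits = 2**40          # 1 terabit per die
bits_per_cell = 4              # QLC: four bits stored in each cell
layers = 96

cells_total = capacity_bits // bits_per_cell
cells_per_layer = cells_total // layers

print(f"{cells_total:,} cells in total")        # ~275 billion
print(f"{cells_per_layer:,} cells per layer")   # ~2.9 billion
# The density came from stacking (x96) and multi-bit cells (x4), a 384x
# multiplier over a one-layer, one-bit design -- no feature shrink needed.
print(f"multiplier vs. 1-layer, 1-bit design: {layers * bits_per_cell}x")
```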

We have some promising research devices, tunnel FETs and ferroelectric devices among them, which can drastically improve power-performance. Unfortunately, they are not a simple replacement for CMOS. So we expect to integrate them in a heterogeneous manner, likely as layers, combining the strengths of scaled CMOS with the novel functions these new devices provide.

CMOS scaling + 3D processing + Novel functions = Future of Moore’s Law
Heterogeneous Systems + Novel Data Processing = Future Product Evolution

And as the amount and types of data explode, we want to rapidly integrate novel architectures purpose-built for the new world of data. Doing so heterogeneously is not only faster but also lets us combine chiplets from multiple teams.

New architectures that combine memory and compute exemplify post-Pollack data processing. One example is Loihi, Intel’s neuromorphic research chip. Artificial intelligence workloads in general have different memory access patterns than traditional software workloads, and so can exploit different data processing architectures.

Pulling all of this together, we expect the economic benefits of Moore’s Law to continue even as the elements look different from when Moore made his original observation. We need not be distracted by the debate; we can keep delivering ever-better products over the next 50 years.

*Other names and brands may be claimed as the property of others.