Beyond DRAM and Flash, Part 1: The End Is Nigh

The history of computing is inextricably linked with Moore’s Law, which says that the number of components on a chip of a given area will double every 18 months, for the same price. That last clause is actually not part of Gordon Moore’s original observation, but it’s the reason why today your cellphone has more computing power than a mainframe computer from the 1960s, at a minuscule fraction of the size and an infinitesimal fraction of the cost. Remarkably, Moore’s Law has held true for four decades and has come to seem as reliable a descriptor of our industry as Ohm’s Law is of the current in a resistor.

Recently, I wrote about the memory hierarchy and how its days are numbered. Now, I want to dive a little deeper into what the future holds for two of the technologies we use to hold our data: DRAM and Flash. In short, that future is not good. We have to face up to the fact that both technologies are gently bumping up against a “scale wall” beyond which they will not get any better. The comfortable drumbeat of free progress is coming to an end. The implications are huge.

Why does this scale wall matter?

We’re in the middle of a data explosion[1]. Dealing with this deluge of data requires ever-more-capable computers to ingest and manipulate ever-larger data sets. We can no longer rely on our tried and trusted memory technologies to evolve to cope. If your IT budget is growing exponentially along with the data, this isn’t a problem. Is yours?

DRAM progress has almost stalled

The truth is that DRAM fell off the curve some years ago. As the figure shows, by the turn of the millennium we were already two years behind. Today, we’re nearly a decade in arrears.
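
To see how a gap like that opens up, here’s a small illustrative calculation in C. The doubling periods are assumed round numbers chosen only to show how a modest slowdown compounds over time; they are not measured industry figures.

/* Illustrative only: how a slower doubling period turns into a growing lag
 * behind the Moore's Law trendline. Both doubling periods below are assumed
 * round numbers, not measured industry data. */
#include <stdio.h>

int main(void) {
    const double moore_doubling = 1.5;  /* assumed trendline: doubling every 18 months */
    const double dram_doubling  = 2.5;  /* assumed slower DRAM doubling period, in years */

    for (int years = 5; years <= 25; years += 5) {
        /* lag = how many years earlier the trendline reached today's actual density:
         * solve 2^(t/Td) = 2^((t - lag)/Tm)  =>  lag = t * (1 - Tm/Td) */
        double lag = years * (1.0 - moore_doubling / dram_doubling);
        printf("after %2d years: ~%.1f years behind the trendline\n", years, lag);
    }
    return 0;
}

With these assumed inputs the lag reaches roughly a decade after twenty-five years, which is the kind of widening gap the figure illustrates.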

 

Figure 1. DRAM evolution lags Moore's Law (Netlist Blog)

Underlying Moore’s Law is a less well-known, but more exact, law called Dennard scaling (named, appropriately, after the inventor of DRAM itself). Dennard scaling describes how a transistor’s physical dimensions, voltage and current can all be shrunk together from one generation to the next while power density stays constant.

The physical scaling is the problem. DRAM defines its state by storing electrons in a capacitor. Dennard scaling has reduced the size of the “mouth” of the capacitor, and with it both the amount of charge that can be stored and the space available for the insulating layers that keep the electrons from escaping. With a shrinking mouth, the only way to keep the capacitance at a usable level is to increase the depth. Capacitors now have approximately the aspect ratio of a drinking straw! That we’re able to make them at all is a testament to the ingenuity and persistence of chipmakers everywhere, but at some point a physical limit must be reached.
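
To put rough numbers on that, here’s a back-of-the-envelope sketch in C. The cell capacitance, voltage and capacitor dimensions are assumed, order-of-magnitude values used purely for illustration; they do not describe any particular manufacturer’s process.

/* Back-of-the-envelope illustration of a DRAM cell capacitor.
 * Every value below is an assumed, order-of-magnitude number,
 * not data for any real process. */
#include <stdio.h>

int main(void) {
    const double cell_capacitance = 10e-15;  /* assumed ~10 fF per cell */
    const double cell_voltage     = 1.0;     /* assumed ~1 V stored on the cell */
    const double electron_charge  = 1.602e-19;

    const double mouth_diameter   = 30e-9;   /* assumed capacitor opening, ~30 nm */
    const double capacitor_depth  = 1.5e-6;  /* assumed capacitor depth, ~1.5 um */

    double stored_charge = cell_capacitance * cell_voltage;  /* Q = C * V */
    double electrons     = stored_charge / electron_charge;
    double aspect_ratio  = capacitor_depth / mouth_diameter;

    printf("charge per cell: ~%.0f electrons\n", electrons);
    printf("aspect ratio   : ~%.0f:1 (a typical drinking straw is around 35:1)\n", aspect_ratio);
    return 0;
}

Under those assumptions a cell holds on the order of tens of thousands of electrons, and the capacitor is fifty times deeper than it is wide, so the drinking-straw comparison is not much of an exaggeration.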

 

Figure 2. Simplified DRAM cell

The Hammer Test

Every time you use your credit card, call an ambulance or sell some stock, chances are your transaction will be handled by HP servers. These computing systems are crucial, so naturally we take reliability testing very seriously. Each time a new generation of DRAM chips comes out, we gather samples from every manufacturer and subject their chips to the “Hammer Test.” The Hammer is an automated test that puts DRAM chips through extreme conditions to make sure they’re up to the job. A couple of years ago, when 28nm DRAM devices were coming onto the market, we were shocked when every single one failed the Hammer. We were able to work with our memory vendors to achieve acceptable quality, but it was clear that scaling down had started to affect reliability. That’s when we accelerated our search for alternative technologies.
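
The Hammer Test itself is HP’s internal suite and isn’t reproduced here. Purely as an illustration of the kind of worst-case access pattern this sort of stress testing involves, the sketch below shows the well-known “row hammer” pattern, in which two addresses that map to different rows of the same DRAM bank are read and flushed in a tight loop so that every access reaches the DRAM array. The function name and its arguments are hypothetical.

/* NOT HP's Hammer Test: a minimal sketch of the generic "row hammer" style
 * DRAM stress pattern on x86. Two rows are activated alternately, and the
 * cache lines are flushed so each read goes all the way to the DRAM array. */
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush */

/* row_a and row_b are assumed to map to different rows of the same DRAM bank. */
void hammer_rows(volatile uint8_t *row_a, volatile uint8_t *row_b, long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*row_a;                      /* read forces a row activation */
        (void)*row_b;                      /* read from a second row in the bank */
        _mm_clflush((const void *)row_a);  /* evict so the next read misses the cache */
        _mm_clflush((const void *)row_b);
    }
}

A real test harness would also need to control the physical address mapping and the refresh behaviour; the point here is only the shape of the access pattern.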

Flash scaling is also starting to slow

Flash memory technology isn’t quite as close to its end of life. Whereas a DRAM cell consists of a transistor and a capacitor, a Flash cell is relatively simple: essentially just a four-pole transistor with the data stored as electrons trapped on the fourth, “floating” gate. This makes it much easier to scale (at the expense of read/write speed and the ability to address individual bits). However, the scaling problem persists: as Flash moves to finer manufacturing nodes, the number of electrons storing each bit drops and data retention suffers. Our assessment is that scaling Flash much beyond the 20nm node is problematic.

 

Figure 3. Simplified Flash memory cell

One way to keep increasing capacity is to store up to three bits per cell (eight possible charge levels). Unfortunately, that means dividing an already small number of electrons still further, and the lifetime of multilevel cells suffers.
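
To make that concrete, here’s a toy calculation in C. The 100-electron budget per floating gate is an assumed round number for a small planar node, used only to show how quickly the margin between charge levels shrinks.

/* Toy illustration: splitting a floating gate's electron budget across more
 * charge levels. The 100-electron budget is an assumed round number, not a
 * measurement of any real device. */
#include <stdio.h>

int main(void) {
    const double electrons_per_cell = 100.0;  /* assumed total electron budget */
    const int bits_per_cell[] = { 1, 2, 3 };  /* SLC, MLC, TLC */

    for (int i = 0; i < 3; i++) {
        int levels = 1 << bits_per_cell[i];              /* 2^bits charge levels */
        double gap = electrons_per_cell / (levels - 1);  /* electrons between adjacent levels */
        printf("%d bit(s) per cell: %d levels, ~%.0f electrons between levels\n",
               bits_per_cell[i], levels, gap);
    }
    return 0;
}

Under that assumption, going from one to three bits per cell cuts the separation between adjacent levels from the full budget down to little more than a dozen electrons, leaving far less room for charge leakage before a bit is misread.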

Manufacturers are turning to vertical scaling of cells—called 3D-NAND Flash—to fit more cells per square millimeter of silicon without making individual cells smaller. This is another remarkable piece of chip-making ingenuity but it’s not easy and thus not cheap to do. There is an expectation that price-per-bit will fall more slowly than in the past and that the lifetime of 3D-NAND will be short, reaching its end-of-life before the end of the decade.

In addition, performance—read and write times—is being sacrificed to scaling[2]. As in the case of DRAM, HP saw the need for a replacement technology coming some years ago. 

We need a step change—enter the Memristor

The industry could wait for breakthroughs in DRAM and Flash technology to steer us back onto the path of Moore’s Law. I don’t think that’s going to happen. However, we can steer memory as a whole back onto that path by replacing DRAM and Flash with a technology that does scale.

Our money, as everybody knows by now, is literally on the Memristor as a replacement for both DRAM and Flash.

You can read part two, “New Memory Technology for the Data Deluge,” here.

 

 

[1] Source: IDC, “EMC Digital Universe Study,” 2014.


[2] L. M. Grupp, J. D. Davis, and S. Swanson. The bleak future of NAND Flash memory. In Proceedings of the 10th USENIX Conference on File and Storage Technologies (FAST), 2012.