Moore's Law Is Dead

Do you like your iPhone? Today, you are walking around with the equivalent of what was a supercomputer not too long ago, right in your pocket.

The foundation of the modern semiconductor industry was laid in the late 1940s and 1950s. Electronics up until then depended on vacuum tubes. These weren’t, to say the least, very easy to miniaturize. But we did try to use them.

In 1946, the U.S. government built the first general-purpose electronic computer. It was called ENIAC (Electronic Numerical Integrator And Computer) and contained over 17,000 vacuum tubes, along with an additional 90,000 bulky components. It was enormous. Something better was obviously needed for computers to become practical.

That began to happen in 1947 when engineers at Bell Labs invented the transistor.

These components could replace vacuum tubes. They could also be made much smaller. Things started changing fast. By 1954, Texas Instruments was selling them.

Then, in 1958, Texas Instruments engineer Jack Kilby (who would go on to win the Nobel Prize) invented a way to put multiple transistors on a single piece of material. He used a chip made out of germanium for its semiconducting properties.
Some of the greatest inventions in history sure didn’t look the part at the start. In fact, at times, they were downright ugly. However, this ugly duckling was the world’s first integrated circuit:

[Image: Jack Kilby’s experimental first integrated circuit]

Not long after, Robert Noyce of Fairchild Semiconductor found a better way to build an integrated circuit using a silicon chip. His invention was so important that we have geography named after it: Silicon Valley. It was the beginning of a big industry that specialized in making things smaller. And things did get smaller, regularly.

By 1965, another Fairchild Semiconductor researcher, Gordon Moore, noticed something important. He published a paper with his observation: Roughly every year and a half, the semiconductor industry was learning how to pack twice as many transistors into a given amount of silicon real estate.

In other words, the power of computer chips was doubling on a predictable schedule. What’s more, costs were falling too. This observation, famously known as Moore’s law, eventually became prophetic.
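To make that compounding concrete, here is a minimal sketch in Python of what doubling on a schedule implies. The 64-transistor baseline is the most complex chip of Moore’s day (a figure that comes up again below); the doubling period is left as a parameter, since quoted cadences have ranged from one year to two. This is purely illustrative, not a density model:

```python
# A minimal sketch of Moore's law as compound doubling.
# The 64-transistor baseline is the most complex chip of 1965; the
# doubling period is a parameter, since quoted cadences range from
# one year to two.

def projected_transistors(year, base_year=1965, base_count=64, doubling_years=1.5):
    """Ideal Moore's-law projection of transistors per chip."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for cadence in (1.5, 2.0):
    count = projected_transistors(2015, doubling_years=cadence)
    print(f"Doubling every {cadence} years -> ~{count:.2e} transistors by 2015")
```

Run it and you’ll see how unforgiving the exponent is: at a two-year cadence the projection lands near today’s billion-transistor chips, while at 18 months it overshoots by more than two orders of magnitude.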

Since then, nonstop improvement in making computer components smaller has made it possible to pack more and more power into a piece of silicon real estate. ENIAC had close to 100,000 components. It also covered 1,800 square feet and weighed 30 tons. Today, the Intel processor (made by a company founded by Noyce and Moore) in your computer packs more than 10,000 times that many components — all on a chip less than 2 centimeters square.

The fulfillment of Moore’s law has led to some of the most quickly adopted technologies ever. In its time, the personal computer was the fastest adopted tech in history. It was only eclipsed by the smartphone — another product enabled by the integrated circuit. In both cases, competition on performance and price has given us cheaper and better computers.

As circuits have grown less expensive and more powerful, the products they enabled became ubiquitous.

Without the integrated circuit, we could never have all the electronic goodness we enjoy today. The invention of the integrated circuit has transformed every single economic activity we engage in.

You’re probably reading this on your smartphone right now, aren’t you?

But many are predicting that Moore’s law, as we’ve known it, is about to die. In a way, it is. What’s really going to happen, however, is that it’s going to be reborn. We are going to build chips in a new way, by moving into a new dimension, literally.

When Gordon Moore wrote his paper, the most complex chip had only 64 transistors on it. Back around 2000, the processor in my home-built PC was made using a 180-nanometer process technology. The one I’m using now, also home-built, uses a 22-nanometer process. The number of transistors on the chip has increased from 37 million to over 1 billion in only 15 years.
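Those two data points are enough to back out the doubling rate actually achieved. Transistor counts per chip are only a proxy for density, since die sizes change between generations, but as a back-of-the-envelope check:

```python
import math

# Back-of-the-envelope check on the figures above: 37 million
# transistors around 2000, about 1 billion some 15 years later.
# Counts per chip are only a proxy for density (die sizes differ).

initial, final, years = 37e6, 1e9, 15
doublings = math.log2(final / initial)                        # ~4.8 doublings
print(f"{doublings:.1f} doublings in {years} years")
print(f"Implied doubling time: {years / doublings:.1f} years")  # ~3.2 years
```

Roughly 4.8 doublings in 15 years works out to a doubling time of about 3.2 years, noticeably slower than the classic 18 to 24 months. That slowdown is exactly where this story goes next.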

Moore’s law is based on shrinkage. How far can you shrink the manufacturing process? The smaller you can make it, the more components you can fit on a silicon wafer. We’ve been really good at that for over 50 years.

But we’re hitting limits on how small we can make these components. In fact, over the past several years, it’s become harder and harder to shrink the manufacturing process. Some experts predict we’ll hit the end of the line by 2020; others say 2022. Either way, it’s going to happen pretty soon.

But it isn’t going to be the end of exponential improvement in processing power. In fact, the smartphone you buy in 2025 is going to be more powerful than a professional workstation you might be using today. More powerful not just in raw processing but in memory, too. Plus, just as we’ve seen for decades, you’ll get more bang for your buck.

That’s important, because we are already feeling the pain, most acutely in flash memory, the sort of data storage you have in your smartphone. As we’ve scaled the process down, this common computer memory has developed more and more problems.

The market demands more memory, and faster memory. That’s you. You want to have lots of browser tabs open, listen to music and have word processing software running at the same time. You also want enough storage to keep all your pictures, songs and files safely and conveniently stored.

And you probably want to be able to do it in something small enough to carry in your pocket.

But the market isn’t just you and others like you. It’s also big users with data centers. You see, we hit a limit on hard drive performance a long time ago. That’s an older storage technology. Your PC or laptop probably still uses it, although many newer laptops have switched to flash modules.

Big Data needs fast storage. Even with more powerful processors, you aren’t going to get very far if you can’t access data on a hard drive fast enough. It becomes a bottleneck. For that reason, hard drives have been going away for more performance-intensive applications. Flash memory has replaced them.

There’s a downside on the horizon, however. The tiny transistors on these memory chips become harder to erase and write accurately with each shrink. They also aren’t as good at storing data for long periods of time.

And it gets worse the smaller they become. They become less reliable, meaning we need more advanced memory controllers to make up the difference. These controllers use predictive logic to identify which memory cells will fail next and proactively move those bits of data somewhere else on the chip.
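To give a flavor of that predictive bookkeeping, here is a toy sketch of wear-aware block management, the kind of logic flash controller firmware runs. Everything in it, from the class name to the endurance threshold, is an illustrative assumption rather than any vendor’s actual design:

```python
# Toy sketch of wear-aware block management in a flash controller.
# Real firmware tracks program/erase (P/E) cycles per block and moves
# data away from blocks nearing their rated endurance. All names and
# numbers here are illustrative assumptions, not a real vendor design.

RATED_PE_CYCLES = 3_000   # assumed endurance, typical order for consumer flash
RELOCATE_AT = 0.9         # flag a block once it's 90% of the way to the limit

class ToyFlashController:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def record_erase(self, block: int) -> None:
        self.erase_counts[block] += 1

    def at_risk(self, block: int) -> bool:
        """Predict imminent failure from accumulated wear."""
        return self.erase_counts[block] >= RATED_PE_CYCLES * RELOCATE_AT

    def least_worn_block(self) -> int:
        """Wear leveling: steer new writes to the freshest block."""
        return min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)

ctrl = ToyFlashController(num_blocks=4)
for _ in range(2_800):
    ctrl.record_erase(0)           # block 0 takes heavy write traffic
print(ctrl.at_risk(0))             # True: time to relocate its data
print(ctrl.least_worn_block())     # 1: redirect new writes elsewhere
```

The smaller the cells, the lower their endurance, and the more of this predictive shuffling a controller must do just to hold the line on reliability.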

Yet even with better memory controllers, there’s a limit to how small we can go using current flash technology. Without a fix, they are going to hit a performance wall, just like hard drives did. Fortunately, these problems are on their way to a solution. Computer chips are going into the third dimension.

Instead of packing more electronic components into a two-dimensional space, we’re going to be building them up, layer on layer, into three-dimensional circuits. That way, we can still cram more transistors in a small package, while preserving reliability and performance.

Think of it as analogous to urban real estate. When an area gets built out, it starts getting built up. Instead of single-story structures, you start to see tall buildings. You go 3-D.
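The arithmetic behind going 3-D is simple: hold the two-dimensional density fixed and capacity grows linearly with the layer count. A quick illustration, with made-up figures:

```python
# Capacity scales linearly with layer count at a fixed 2-D density.
# The per-layer cell count and layer counts below are made up purely
# for illustration.

cells_per_layer = 1e9   # hypothetical cells in one layer's footprint
for layers in (1, 24, 48, 96):
    total = cells_per_layer * layers
    print(f"{layers:>3} layers -> {total:.1e} cells in the same footprint")
```

Better yet, each layer can be built at a larger, more forgiving geometry than a bleeding-edge planar chip would use, recovering through stacking the density given up per layer, and regaining reliability in the bargain.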

This wave of new technology is already starting to show up. In 2013, Samsung became the first semiconductor manufacturer to get around flash memory’s scaling limits with 3-D technology, bringing a vertically stacked design to market.

That’s just the beginning. The entire semiconductor industry is poised to move seriously into 3-D circuits in various forms. Samsung isn’t the only company working on it. All the big players are: Intel, Micron, Hynix, Taiwan Semiconductor and others are all moving to 3-D tech. We are also seeing plans to increase the number of layers stacked on a single module.

Chip foundries need new tooling to churn out these 3-D chips, and the market keeps growing. Demand for computer chips keeps going up, be it for microprocessors, memory or storage. All the big trends you’ve heard about, from Big Data to the Internet of Things to the continued growth in the smartphone and wearables market… all of them are driving demand for more equipment to build out their electronic guts.

Plus, there’s more than growth in this industry: There’s frequent turnover. These chip companies are always in a race to keep up with Moore’s law, making it a self-fulfilling prophecy. Any company that can’t keep pace gets left in the dust by the rest of the pack.

That means they are always in need of upgrades. Whenever semiconductor manufacturers need to shrink their process, they need to buy the equipment to do it. That makes the select companies that can supply chip foundries with the latest and greatest fabrication technology very attractive right now.

Ad lucrum per scientia (toward wealth through science),

Ray Blanco
for The Daily Reckoning


 
