Moore’s Law, the truism that the amount of raw computing power available per dollar tends to double roughly every eighteen months, has been part of computer science since 1965, when Gordon Moore first noticed the trend and wrote an article about it. At the time, the “Law” bit was a joke. 49 years later, no one is laughing.

Nowadays, computer chips are made using an extremely sophisticated, but decades-old, fabrication method. Wafers of very pure crystalline silicon are coated with various substances, patterned with high-precision laser light, etched with acid, bombarded with high-energy impurities, and electroplated.

This process is repeated across more than twenty layers, creating nanoscale components with a precision that is, frankly, staggering. Unfortunately, the trend cannot continue forever.

We are rapidly approaching the point where the transistors being etched will be so small that exotic quantum effects interfere with the basic operation of the machine. It is generally expected that silicon will hit this limit by around 2020, by which point computers will be roughly sixteen times faster than they are today. So, for the general trend of Moore’s Law to continue, we need to part ways with silicon, as we once did with vacuum tubes, and start making chips from new technologies that have more room to grow.

4. Neuromorphic Chips

As the electronics market moves toward smarter devices that adapt to their users and automate more of their work, many of the problems computers need to solve involve machine learning and optimization. One powerful technique for solving such problems is the neural network.

Neural networks mirror the structure of the brain: they have nodes that represent neurons, and weighted connections between those nodes that represent synapses. Information flows through the network, shaped by those weights, to solve problems. Simple rules govern how the weights between neurons change, and those changes are what produce learning and intelligent behavior. Simulating this kind of training on a conventional computer is computationally expensive.
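To make the weight-update idea concrete, here is a minimal sketch of a single artificial neuron learning the logical AND function. This is a toy illustration of the general principle, not IBM’s hardware or software; the learning rule shown is the classic perceptron update.

```python
# Toy illustration: one artificial neuron learning logical AND.
# The "simple rule" is the classic perceptron update: nudge each
# weight in proportion to the error it contributed to.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]   # one weight per input "synapse"
bias = 0.0
learning_rate = 0.1

def fire(inputs):
    """The neuron fires (outputs 1) if its weighted input exceeds 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

for epoch in range(20):
    for inputs, target in examples:
        error = target - fire(inputs)
        # Weight-change rule: strengthen or weaken each synapse
        # according to the error and the input that came through it.
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(weights, bias)
print([fire(x) for x, _ in examples])  # expect [0, 0, 0, 1]
```

Even in this tiny example, almost all of the work is spent repeatedly computing weighted sums and updating weights; dedicated hardware that does exactly that, in parallel, is the whole point of a neuromorphic chip.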

Neuromorphic chips attack this problem with dedicated hardware designed specifically to mimic the behavior of neurons and to train them. Large speedups become possible because the hardware neurons behave far more like real neurons in the brain than a simulation running on a general-purpose processor does.

IBM and DARPA have spearheaded research into neuromorphic chips through a project called SyNAPSE, which we mentioned before. SyNAPSE has the ultimate goal of creating a system equivalent to a complete human brain, implemented in hardware, and no larger than a real human brain. In the nearer term, IBM plans to include neuromorphic chips in its Watson systems to speed up the neural-network-dependent subtasks of its algorithms.

IBM’s current system includes a programming language for neuromorphic hardware that lets programmers take pre-trained neural network fragments (called “corelets”) and link them together to build powerful problem-solving machines. You probably won’t have a neuromorphic chip in your own machine for a long time, but within just a few years you will almost certainly be using web services backed by servers that do.

3. Micron’s Hybrid Memory Cube

One of the major bottlenecks in modern computer design is the time it takes to fetch from memory the data the processor needs to operate on. Talking to the ultra-fast registers inside the processor takes far less time than fetching data from RAM, which is in turn much faster than fetching data from a bulky, unwieldy hard drive.

As a result, the processor is often left waiting for long stretches before the data arrives and it can perform the next round of computation. CPU cache is about ten times faster than RAM, and RAM is about a hundred thousand times faster than a hard drive. In other words, if talking to the CPU cache is like walking to the next house for some information, then talking to RAM is like driving a couple of miles to the store for it, and talking to the hard drive is like going to the moon.
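To put rough numbers behind that analogy, here is a back-of-the-envelope sketch. The nanosecond figures are assumed, order-of-magnitude values chosen only to match the ratios quoted above; they are not measurements.

```python
# Back-of-the-envelope: turn the ratios quoted above into travel times.
# The nanosecond figures below are assumed for illustration; only the
# ratios between them matter.

latency_ns = {
    "CPU cache": 10,                   # assumed baseline
    "RAM": 10 * 10,                    # ~10x slower than cache
    "Hard drive": 10 * 10 * 100_000,   # ~100,000x slower than RAM
}

walk_next_door_minutes = 2             # "walking to the next house"
baseline = latency_ns["CPU cache"]

for name, ns in latency_ns.items():
    minutes = walk_next_door_minutes * ns / baseline
    days = minutes / (60 * 24)
    print(f"{name:10s}: {ns:>12,} ns -> {minutes:>12,.0f} min (~{days:,.1f} days)")
```

On those assumptions, the two-minute walk to the cache becomes a twenty-minute trip to RAM and a journey of nearly four years to the hard drive, which is why the moon comparison is not much of an exaggeration.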

Micron’s Hybrid Memory Cube technology could shake up an industry built on conventional DDR memory by replacing it with RAM dies stacked into cubes, connected over much higher-bandwidth links to speed up communication with those cubes. The cubes are built directly into the motherboard next to the processor rather than plugged into slots like conventional RAM. The hybrid memory cube architecture offers the processor five times the bandwidth of the DDR4 RAM coming out this year while consuming 70% less power. The technology is expected to hit the supercomputer market early next year, and the consumer market a few years later.

2. Memristor Storage

Another approach to the memory problem is to design memory that combines the advantages of more than one kind. Memory trade-offs typically come down to cost, access speed, and volatility (whether the memory needs a constant supply of power to retain its data). Hard drives are very slow, but cheap and non-volatile.

RAM is volatile, but fast and affordable. Cache and registers are volatile and very expensive, but also very fast. The ideal memory would be non-volatile, fast to access, and cheap to build. In theory, memristors offer a way to get all three.

Memristors are similar to resistors (devices that restrict the flow of current through a circuit), with the twist that they have memory. Pass current through one in one direction and its resistance increases; pass current through it in the other direction and its resistance decreases. As a result, you can build high-speed, RAM-style memory cells that are non-volatile and cheap to manufacture.
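A highly simplified software model can illustrate the “resistor with memory” idea. The sketch below is an idealized toy, not a model of any real device; the resistance limits, drift constant, and read threshold are all invented for illustration.

```python
# Idealized toy model of a memristor: a resistor whose resistance
# drifts with the direction of the current passed through it, and
# which keeps that resistance when the current is removed.

class ToyMemristor:
    def __init__(self, r_min=100.0, r_max=16_000.0):
        self.r_min, self.r_max = r_min, r_max
        self.resistance = (r_min + r_max) / 2   # ohms, starting mid-range

    def apply_current(self, amps, seconds=1.0, k=1e6):
        """Positive current raises resistance, negative current lowers it."""
        self.resistance += k * amps * seconds
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))

    def read_bit(self):
        """Treat a high-resistance state as 1 and a low-resistance state as 0."""
        return 1 if self.resistance > (self.r_min + self.r_max) / 2 else 0

cell = ToyMemristor()
cell.apply_current(+0.01)   # write a 1 by pushing current one way
print(cell.read_bit())      # -> 1, and the state persists with no power
cell.apply_current(-0.01)   # write a 0 by pushing current the other way
print(cell.read_bit())      # -> 0
```

The key property the toy captures is that the "write" changes a physical state rather than charging a cell that needs refreshing, which is why a memristor cell keeps its bit when the power goes off.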

This raises the prospect of banks of RAM-like memory as large as the hard drives that today hold a computer’s entire OS and file system (in effect, one huge non-volatile RAM disk), all of it accessible at RAM speed. No more hard drive. No more going to the moon.

HP has designed a computer around memristor technology and a specialized core design that uses photonics (light-based communication) to speed up communication between computing elements. This device, called “The Machine,” can perform complex processing on hundreds of terabytes of data in a fraction of a second. Memristor memory is 64 to 128 times denser than conventional RAM, so the physical footprint of the device is very small, and the whole shebang consumes much less power than the server rooms it is meant to replace. HP hopes to bring computers based on The Machine to market in the next two to three years.

1. Graphene Processors

Graphene is a material made of a tightly bonded lattice of carbon atoms, a close relative of the carbon nanotube. It has a number of remarkable properties, including tremendous physical strength and extraordinary electrical conductivity. Graphene has many potential uses, from space elevators to bulletproof vests to better batteries, but this article focuses on its potential role in computer architectures.

Apart from shrinking transistors, another way to make computers faster is simply to make those transistors switch faster. Unfortunately, silicon is not a very good conductor, so much of the energy pushed through a processor is converted into heat. Try to raise the clock speed of a silicon processor toward nine gigahertz and the heat starts to interfere with its operation; reaching such speeds requires extraordinary cooling measures (liquid nitrogen, in some cases), and most consumer chips run far slower. (To learn more about how conventional processors work, read our article on the subject.)

Graphene, on the other hand, is an excellent conductor. A graphene transistor could in theory run at up to 500 GHz without heat problems, and it can be etched much like silicon. IBM has already etched simple analog graphene chips using traditional chip lithography techniques. Until recently, the problem has been twofold: first, it is very hard to produce graphene in large quantities, and second, we have had no good way to make graphene transistors that completely block the flow of current in their “off” state.

The first problem was solved when the electronics giant Samsung announced that its research arm had found a way to mass-produce single crystals of graphene in high purity. The second problem is harder. While graphene’s extreme conductivity makes it attractive from a heat standpoint, it is a nuisance when you want to build transistors, devices that are designed to stop conducting billions of times per second. Graphene, unlike silicon, lacks a band “gap”: an energy threshold below which the material’s conductivity drops to zero. Luckily, it looks like there are a few options on that front.

Samsung has developed a transistor that exploits the properties of the silicon-graphene interface to get the switching behavior it needs, and has built a number of basic logic circuits with it. Such a circuit is not a pure graphene computer, but it retains many of graphene’s benefits. Another option is to exploit “negative resistance” to build a different type of transistor, which can be combined into logic gates that draw more power but need fewer components.

Of the technologies discussed in this article, graphene is the furthest from commercial reality. It may take a decade or more for the technology to mature enough to fully replace silicon. In the long term, however, it is very likely that graphene (or a related material) will become the basis of the computing platforms of the future.

The Next Ten Years

Our civilization and much of our economy have become heavily dependent on Moore’s Law, and large institutions are spending enormous sums to keep it from ending. A number of incremental improvements (such as 3D chip architectures and error-tolerant computing) will help stretch Moore’s Law beyond its theoretical six-year horizon, but that sort of thing can’t last forever.

At some point in the next decade we will need to switch to a new technology, and the smart money is on graphene. The change could seriously shake up the status quo of the computer industry, making some fortunes and breaking others. Of course, even graphene is not a permanent solution; we will likely be back here in a few decades, debating which new technology should take over once graphene reaches its limits.

Which direction do you think computer technology will take next? Which of these technologies do you think has the best chance of taking electronics and computing to the next level?

Image Credits: ESD Gloved Female Hand Via Shutterstock
