The history of computing is full of failures.

The Apple III had a nasty habit of cooking itself inside its badly designed case. The Atari Jaguar, an "innovative" game console that never lived up to its performance claims, simply couldn't capture the market. And Intel's flagship Pentium processor, designed for high-performance number crunching, had trouble dividing floating-point numbers.

But another kind of flop dominates the world of computing: the FLOPS measurement, which has long been held up as a fair way to compare different machines, architectures, and systems.

FLOPS is a measure of floating point operations per second. Simply put, it is a speedometer for a computing system. And it has been growing exponentially for decades.
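To make that number concrete, here is a very rough sketch (illustrative Python, not a serious benchmark) of what it measures: count the floating-point operations you perform and divide by how long they took.

```python
# Rough, illustrative FLOPS estimate: time a pile of floating-point
# multiply-adds and divide by the elapsed time. Not a rigorous benchmark.
import time
import numpy as np

n = 4_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
for _ in range(100):
    c = a * b + 1.0          # roughly 2 floating-point operations per element
elapsed = time.perf_counter() - start

flops = (2 * n * 100) / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS (single-threaded NumPy, very approximate)")
```

Real FLOPS figures come from tuned benchmarks such as LINPACK, so a toy loop like this will badly understate what the hardware can actually do.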

So what if I told you that in a few years you will have a system sitting on your desk, or under your TV, or in your phone, that will wipe the floor with today's supercomputers? Unbelievable? Think I'm crazy? Look at the history before you judge.

[Image: ASCI Red supercomputer (tflop4m)]

Supercomputer in the supermarket

A recent Intel i7 Haswell processor can perform around 177 billion FLOPS (177 GFLOPS), which is faster than the fastest supercomputer in the US in 1994, Sandia National Labs' XP/S140 with its 3,680 compute cores working together.

The PlayStation 4 can run at around 1.8 trillion FLOPS thanks to its AMD-based graphics architecture, and would outclass the $55 million ASCI Red supercomputer that led the global supercomputing league in 1998, nearly 15 years before the PS4 was released.

IBM's Watson AI system has a (current) peak performance of 80 TFLOPS, which is not nearly enough to earn a place on today's Top 500 supercomputer list, while China's Tianhe-2 has topped the Top 500 for the last three releases in a row with a peak performance of 54,902 TFLOPS, or almost 55 PetaFLOPS.

The big question is: where is the next desktop-sized supercomputer? And, more importantly, when will we get it?

[Image: CPU power density]

Another brick in the power wall

In recent history, the driving forces behind these impressive advances in speed have been materials science and architectural design: smaller, nanometer-scale manufacturing processes mean chips can be smaller and faster, and dissipate less energy as heat, making them cheaper to run.

On top of that, with the development of multi-core architectures in the late 2000s, many "processors" now fit on a single chip. This technology, combined with the growing maturity of distributed computing systems, in which many "computers" can operate as a single machine, means that the performance of the Top 500 has grown steadily, almost in step with Moore's famous law.

However, the laws of physics are starting to get in the way of all this growth; even Intel is worried about it, and many people around the world are hunting for the next big thing.

… in about ten years, we will see the collapse of Moore’s law. In fact, we are already seeing a slowdown in Moore’s Law. Computer power simply cannot sustain its rapid exponential growth using standard silicon technology. — Dr. Michio Kaku — 2012

The main problem with current processor circuitry is that each transistor is either on (1) or off (0). Every time a transistor gate "flips", it has to dump a certain amount of energy into the material of the gate for the flip to stick. As these gates get smaller and smaller, the ratio between the energy needed to use the transistor and the energy needed to flip it gets worse and worse, creating serious heat and reliability problems. Current systems are approaching, and in some cases exceeding, the raw thermal density of nuclear reactor cores, and the materials are starting to fail their designers. This is classically called the Power Wall.
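As a rough rule of thumb (the textbook first-order model for CMOS logic, not a figure quoted by anyone in this article), the switching power of a chip scales as:

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f
```

where α is how often the gates actually flip, C is the switched capacitance, V is the supply voltage and f is the clock frequency. Shrinking transistors reduces C, but packing ever more of them into the same area, on top of growing leakage current, is what keeps pushing the heat per square millimeter up against that wall.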

Recently, some people have started thinking very differently about how to perform useful computation. In particular, two companies have caught our attention with modern forms of quantum and optical computing: Canada's D-Wave Systems and Britain's Optalysys, which take extremely different approaches to very different sets of problems.


Time to change the tune

D-Wave has received a lot of press lately for its ominous black box, with its deeply chilled interior and razor-sharp cyberpunk styling wrapped around a mysterious bare chip of seemingly unimaginable power.

Essentially, the D2 system takes a completely different approach to problem solving, effectively throwing out the rulebook of cause and effect. So what problems are Google, NASA and Lockheed Martin targeting with it?

[Image: Travelling salesman problem (XKCD)]

The travelling salesman

Historically, if you want to solve an NP-hard or NP-intermediate problem, where there is an extremely large number of possible solutions spread across a huge search space, the brute-force classical approach just doesn't work. Take, for example, the travelling salesman problem: given N cities, find the shortest route that visits every city once. It is worth noting that TSP is a major factor in many fields, such as microarray manufacturing, logistics, and even DNA sequencing.

Yet all of these problems boil down to an apparently simple process: choose a starting point, generate a route around the N "things", measure its length, and if a shorter route already exists, discard the trial route and move on to the next one, until there are no more routes to test.

This sounds easy, and for small values it is: for 3 cities there are 3 × 2 × 1 = 6 routes to check; for 7 cities it is 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5,040, which is still no trouble for a computer. This is a factorial sequence, written "N!", so 5,040 is 7!.
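As a sketch of that brute-force process (illustrative Python with made-up coordinates, nothing to do with D-Wave's machinery), here is the whole algorithm for seven cities; the factorial blow-up is sitting right there in the permutation count.

```python
# Brute-force travelling salesman: try every route, keep the shortest.
# Fine for a handful of cities, hopeless beyond roughly ten.
import itertools
import math

cities = {
    "A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6),
    "E": (8, 3), "F": (2, 7), "G": (9, 9),
}

def route_length(route):
    # Total distance visiting the cities in order and returning to the start.
    legs = zip(route, route[1:] + route[:1])
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

names = list(cities)
best_route, best_len = None, float("inf")

for perm in itertools.permutations(names):   # N! candidate routes
    length = route_length(list(perm))
    if length < best_len:
        best_route, best_len = list(perm), length

print(f"Checked {math.factorial(len(names))} routes")
print(f"Shortest: {' -> '.join(best_route)} ({best_len:.2f})")
```

Add an eighth city and that loop takes eight times as long, and it only gets worse from there.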

However, by the time you go a little further and try to visit 10 cities, you need to test over 3 million routes. By the time you get to 100, the number of routes to check is a 9 followed by 157 digits. The only sensible way to plot these kinds of functions is on a log scale, where the y-axis runs 1 (10^0), 10 (10^1), 100 (10^2), 1,000 (10^3) and so on.


The numbers are simply getting too big to be processed on any machine that exists today or could exist using classical computing architectures. But what D-Wave does is quite different.

[Image: D-Wave 128-qubit chip]

Enter Vesuvius

The Vesuvius chip in the D2 uses around 500 "qubits", or quantum bits, to perform these calculations using a technique called quantum annealing. Instead of measuring one route at a time, Vesuvius's qubits are placed into a state of superposition (neither on nor off, but working together as a kind of potential field), and a series of increasingly complex algebraic descriptions of the solution (i.e., a series of Hamiltonians, descriptions of the shape of the solution, not the solution itself) is applied to the superposition field.

Essentially, the system tests the suitability of every potential solution at the same time, like a ball "deciding" which of several paths to take down a hill. When the superposition relaxes into a ground state, that ground state of the qubits should describe the optimal solution.
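You cannot reproduce quantum annealing on a laptop, but its classical cousin, simulated annealing, gives a feel for the "ball settling into a valley" picture: start from a random route, keep proposing small changes, and accept occasional worse routes less and less often as the system "cools". A minimal sketch (plain Python, purely an analogy for the annealing idea, not D-Wave's hardware or API):

```python
# Classical simulated annealing for the travelling salesman problem.
# A laptop-friendly analogue of the annealing idea, not quantum annealing:
# we "cool" a random route until it settles into a low-cost state.
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]

def route_length(order):
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

order = list(range(len(cities)))
start_length = route_length(order)
temperature = 1.0

while temperature > 1e-3:
    # Propose a small change: reverse a random segment of the route.
    i, j = sorted(random.sample(range(len(order)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    delta = route_length(candidate) - route_length(order)
    # Always accept improvements; accept worse routes with shrinking probability.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        order = candidate
    temperature *= 0.999  # gradually "cool" the system

print(f"Random route ~{start_length:.2f}, annealed route ~{route_length(order):.2f}")
```

The ground state D-Wave looks for plays the role of the lowest-cost route here; the difference is that the qubits explore the landscape as one superposed system rather than one trial at a time.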

Many people wonder what advantage the D-Wave system actually offers over a conventional computer. In a recent test of the platform against a typical travelling salesman task, a problem that took 30 minutes on a classical computer took only half a second on Vesuvius.

To be clear, though, this will never be the system you play Doom on. Some commentators have tried to compare this highly specialized system with a general-purpose processor. You might as well compare an Ohio-class submarine with an F-35 Lightning II: any metric you choose for one is so irrelevant to the other that it is useless.

For its specific class of problems, D-Wave is orders of magnitude faster than a standard processor, and FLOPS estimates range from a relatively impressive 420 GFLOPS to a mind-blowing 1.5 PetaFLOPS (which would have placed it in the top 10 of the 2013 supercomputer list at the time the last public prototype was released). If anything, this discrepancy highlights the beginning of the end for FLOPS as a universal measurement when applied to specialized problem areas.

This area of computing targets a very specific (and very interesting) set of problems. Worryingly, one of the problems in this space is cryptography, in particular public-key cryptography.

Luckily, D-Wave's implementation is focused on optimization algorithms, and D-Wave has made some design decisions (such as the hierarchical peer-to-peer structure on the chip) that indicate you cannot use Vesuvius to run Shor's algorithm, which could potentially unlock the Internet so thoroughly it would make Robert Redford proud.

Laser mathematics

The second company on our list is Optalysys. This UK-based company is turning computation on its head, using the analog superposition of light to perform certain classes of calculation with the very nature of light itself. The video below demonstrates some of the background and principles behind the Optalysys system, as presented by Prof. Heinz Wolff.

It is all a bit dizzying, but at its core it is a box that will hopefully one day sit on your desk and provide computational support for modeling, CAD/CAM and medical imaging (and maybe, just maybe, PC gaming). As with Vesuvius, the Optalysys solution won't handle everyday general-purpose computing tasks, but that is not what it is designed for.

A useful way to think about this style of optical processing is as a kind of physical graphics processing unit (GPU). A modern GPU uses many stream processors in parallel, performing the same calculation on different pieces of data drawn from different areas of memory. This architecture grew naturally out of the way computer graphics are generated, but that massively parallel design has since been used for everything from high-frequency trading to artificial neural networks.
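As a tiny illustration of that "same calculation, different data" pattern (NumPy standing in for the stream processors; the pixel data and luma weights are just for demonstration):

```python
# One formula applied to millions of data elements at once: the data-parallel
# pattern GPU stream processors exploit (NumPy's vectorized ops stand in here).
import numpy as np

pixels = np.random.rand(1920 * 1080, 3)      # a screen's worth of RGB values

# Scalar view: one pixel at a time, as a single stream processor would see it.
def to_luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Data-parallel view: the same formula across every pixel in one operation.
luma = pixels @ np.array([0.2126, 0.7152, 0.0722])

assert np.isclose(luma[0], to_luma(*pixels[0]))
print(luma.shape)                            # (2073600,) -- one result per pixel
```

A GPU bakes this idea into silicon, with one instruction driving thousands of arithmetic units; Optalysys pushes it further by letting the physics of light do the arithmetic.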

Optalysys takes similar principles and translates them into a physical medium: data splitting becomes beam splitting, linear algebra becomes optical interference, and MapReduce-style functions become optical filtering systems. And all of these operations run in effectively constant, near-instantaneous time.

The original prototype device uses a 500×500 element grid running at 20 Hz to perform fast Fourier transforms (basically, "what frequencies appear in this input stream?") and delivers an equivalent of around 40 GFLOPS. The developers are aiming for a 340 GFLOPS system by next year, which, given the expected power consumption, would be a remarkable result.
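For a sense of what a 500×500 Fourier transform actually tells you, here is the same job done digitally (a NumPy sketch with a made-up test pattern; the optical version performs the equivalent transform with lenses and light rather than arithmetic):

```python
# What an FFT answers: "what frequencies appear in this input?"
# A purely digital sketch of the transform Optalysys performs optically.
import numpy as np

n = 500
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
xx, yy = np.meshgrid(x, x)

# Test pattern: a strong 5-cycle ripple, a weaker 12-cycle ripple, plus noise.
grid = np.sin(5 * xx) + 0.5 * np.sin(12 * yy) + 0.1 * np.random.randn(n, n)

spectrum = np.fft.fftshift(np.fft.fft2(grid))   # 2-D fast Fourier transform
magnitude = np.abs(spectrum)

# The brightest spot in the spectrum marks the dominant spatial frequency.
peak_row, peak_col = np.unravel_index(np.argmax(magnitude), magnitude.shape)
print("Dominant frequency (cycles across the grid):",
      abs(peak_col - n // 2), "horizontal,", abs(peak_row - n // 2), "vertical")
```

Running this picks out the 5-cycle ripple as the strongest component, which is exactly the kind of question, asked many times a second over image-sized inputs, that the optical approach is built for.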

So where is my black box?

The history of computing shows us that what starts out as the preserve of research labs and government agencies quickly finds its way into consumer hardware. Unfortunately, the history of computing has not yet had to contend with the limits imposed by the laws of physics.

Personally, I don't think D-Wave and Optalysys will be the exact technologies sitting on our desks 5 to 10 years from now. Consider that the first recognizable "smart watch" was introduced in 2000 and failed miserably, yet the essence of the technology lives on today. In the same way, this research into quantum and optical computing accelerators is likely to end up as a footnote to the "next big thing."

Materials science is edging toward biological computers that use DNA-like structures to perform mathematical operations. Nanotechnology and "programmable matter" are approaching the point where, rather than processing "data", the material itself will contain, represent and process information.

All in all, it’s a brave new world for the computational scientist. Where do you think this is all going? Let’s talk about it in the comments!

Photo Credit: KL Intel Pentium A80501 by Konstantin Lanzet, ASCI Red (tflop4m) from USG, Sandia National Laboratory, D-Wave D2 from the Vancouver Sun, D-Wave 128chip from D-Wave Systems, Inc., Travelling Salesman Problem by Randall Munroe (XKCD)
