The history of computing is full of failures.
The Apple III had a nasty habit of cooking itself inside its warped case. The Atari Jaguar, an "innovative" game console sold on inflated performance claims, never managed to capture the market. And Intel's flagship Pentium processor, marketed for high-performance number crunching, infamously returned wrong answers for certain floating-point division operations.
But another kind of flop dominates the world of computing: the FLOPS measurement, which has long served as a fair way to compare different machines, architectures, and systems.
FLOPS stands for floating point operations per second. Simply put, it is a speedometer for a computing system, and it has been climbing exponentially for decades.
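Concretely, a FLOPS figure is just a count of floating point operations divided by the time they took. A deliberately naive Python sketch of the idea (the loop and numbers here are illustrative, not a real benchmark; interpreter overhead means this measures Python-level throughput, far below the hardware's true peak):

```python
import time

# Crude FLOPS estimate: time a tight loop of floating point additions.
ops = 10_000_000
x = 0.0
start = time.perf_counter()
for _ in range(ops):
    x += 1.0          # one floating point operation per iteration
elapsed = time.perf_counter() - start

flops = ops / elapsed
print(f"~{flops / 1e6:.1f} MFLOPS (Python-level, far below hardware peak)")
```

Real rankings such as the TOP500 use tuned linear-algebra benchmarks rather than a loop like this, but the arithmetic is the same: operations over seconds.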
So what if I told you that in a few years you will have a system sitting on your desk, under your TV, or in your phone that will wipe the floor with today's supercomputers? Unbelievable? Am I crazy? Look at the history before you judge.
A supercomputer in the supermarket
The PlayStation 4 can hit around 1.8 trillion FLOPS thanks to its AMD-based accelerated processing unit, which would let it outgun the $55 million ASCI Red supercomputer that topped the global supercomputing rankings in 1998, roughly 15 years before the PS4 was released.
The big question is: where is the next desktop-sized supercomputer? And, more importantly, when will we get it?
Another brick in the power wall
In recent history, the driving forces behind these impressive gains in speed have been materials science and architectural design: smaller, nanometer-scale manufacturing processes mean chips can be denser, faster, and shed less energy as heat, making them cheaper to run.
Also, with the rise of multi-core architectures in the late 2000s, many "processors" now fit on a single chip. This, combined with the growing maturity of distributed computing, in which many "computers" can run as one machine, means the TOP500 list has grown steadily, almost in step with Moore's famous law.
However, the laws of physics are starting to get in the way of all this growth. Even Intel is worried about it, and many people around the world are hunting for the next big thing.
… in about ten years, we will see the collapse of Moore's law. In fact, we are already seeing a slowdown in Moore's law. Computer power simply cannot sustain its rapid exponential growth using standard silicon technology. — Dr. Michio Kaku, 2012
The main problem with current processor circuits is that transistors are either on (1) or off (0). Every time a transistor's gate "flips", it must dump a certain amount of energy into the gate material for the "flip" to stick. As these gates get smaller and smaller, the ratio between the energy needed to do useful work with the transistor and the energy wasted in "flipping" it gets worse and worse, creating serious heat and reliability problems. Current systems are approaching, and in some cases even exceeding, the thermal power density of a nuclear reactor, and materials are starting to fail their designers. This is classically known as the Power Wall.
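The heat problem above is often summarized with the textbook first-order model of a chip's dynamic (switching) power, P ≈ α·C·V²·f. This model and the numbers below are a standard engineering approximation, not something from the original article:

```python
# First-order dynamic power model for CMOS switching: P = alpha * C * V**2 * f
#   alpha: activity factor (fraction of gates flipping per clock cycle)
#   c:     total switched capacitance, in farads
#   v:     supply voltage, in volts
#   f:     clock frequency, in hertz
def dynamic_power(alpha, c, v, f):
    return alpha * c * v**2 * f

# Illustrative, made-up numbers: pushing the same chip from 2 GHz to 4 GHz
# doubles the switching power (and hence the heat to dissipate).
p_2ghz = dynamic_power(0.2, 1e-9, 1.0, 2e9)
p_4ghz = dynamic_power(0.2, 1e-9, 1.0, 4e9)
print(p_2ghz, p_4ghz)  # the 4 GHz figure is exactly twice the 2 GHz one
```

Because voltage enters squared, shrinking transistors historically let designers lower V and keep power in check; once voltage scaling stalled, frequency had to stall too, which is why the industry pivoted to multiple cores instead of faster clocks.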
Recently, some have begun to think differently about how to perform useful computation. Two companies in particular have caught attention with modern forms of quantum and optical computing: Canada's D-Wave Systems and Britain's Optalysys, which take radically different approaches to very different sets of problems.
Time to change the tune
D-Wave has received a lot of press lately, with its ominous black box of a machine: a cryogenically chilled casing and a strikingly cyberpunk interior housing a mysterious bare chip of seemingly unimaginable powers.
Essentially, the D-Wave Two system takes a completely different approach to problem solving, effectively throwing out the rulebook of cause and effect. So what problems are its customers, Google, NASA, and Lockheed Martin, targeting with it?