Quantum computing is one of those technologies so mysterious that TV characters drop its name when they want to sound smart.

Quantum computing as an idea has been around for some time: the theoretical possibility was first put forward by Yuri Manin and Richard Feynman in 1982. In the past few years, however, the field has moved steadily closer to practicality.

Companies like Google and Microsoft, as well as government agencies like the NSA, have been feverishly tinkering with quantum computers for years. A company called D-Wave has built and is selling devices that (although they are not general-purpose computers and can only run a few specific algorithms) exploit quantum effects, and that represent another step toward a fully Turing-complete quantum machine.

It doesn’t seem unreasonable to expect the breakthroughs needed to build the first large-scale quantum computer within a decade.

So why all the interest? Why should you care? Computers are getting faster and faster. What is so special about quantum computers?

To explain why these machines matter so much, we need to step back and figure out what quantum computers are and how they work. First, let’s talk about a concept called "runtime complexity".

## What is Runtime Complexity?

One of the big surprises in the early days of computer science was the discovery that if you have a computer that solves a problem of a certain size in a certain amount of time, doubling the computer’s speed doesn’t necessarily let it solve problems twice as large.

For some algorithms, the total execution time grows very, very quickly as the size of the problem increases: an algorithm might finish quickly given 100 data points, yet require an Earth-sized computer running for a billion years to handle 1000 data points. Runtime complexity is a formalization of this idea: it looks at the curve of how an algorithm’s running time grows with problem size and classifies the algorithm by the shape of that curve.
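A minimal sketch of why this is surprising. Assume a hypothetical algorithm that needs 2^n steps for n data points, and a machine that executes a fixed number of steps per second; the function below (entirely illustrative, not from the original) finds the largest problem each machine can finish in one hour:

```python
# Hypothetical: an algorithm takes 2**n steps to process n data points.
# How large a problem can a machine finish in one hour, and how much
# does doubling the machine's speed actually buy you?
def max_n(steps_per_second, seconds, runtime=lambda n: 2 ** n):
    n = 0
    # Grow n while the next problem size still fits in the time budget.
    while runtime(n + 1) <= steps_per_second * seconds:
        n += 1
    return n

slow = max_n(1e9, 3600)   # a billion steps per second
fast = max_n(2e9, 3600)   # a machine twice as fast
print(slow, fast)         # → 41 42
```

Doubling the speed of the computer lets it handle exactly one additional data point, which is the phenomenon described above.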

Typically, these complexity classes are expressed as functions. An algorithm whose cost grows in proportion to the size of the dataset it operates on (for example, a simple count function) is said to have runtime complexity **n** (i.e., processing **n** data points takes **n** units of time).

Equivalently, such an algorithm is called "linear", because plotting its running time against input size yields a straight line. Other complexity functions may be **n^2**, **2^n**, or **n!** (n factorial). The first is polynomial; the last two are exponential or worse, and they grow so fast that in almost all cases the problem becomes impossible to solve for anything but very trivial examples.

## Runtime Complexity and Cryptography

If you’re hearing this stuff for the first time and it sounds abstract and pointless, let’s ground the discussion. Runtime complexity is critical to cryptography, which relies on making decryption much easier for people who know the secret key than for those who don’t. In an ideal cryptographic scheme, decryption should take linear time if you have the key and **2^k** time (where k is the number of bits in the key) if you don’t.

In other words, the best algorithm for decrypting a message without the key should be to simply try every possible key in turn, which is infeasible for keys only a few hundred bits long.
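A toy illustration of that brute-force search, with a hypothetical `is_correct` check standing in for a real decryption attempt. At k = 16 the loop finishes instantly; at a few hundred bits, the same loop would need 2^k iterations, which is astronomically infeasible:

```python
# Toy brute-force key search: try every candidate k-bit key until the
# (hypothetical) correctness check succeeds. Real attacks would test
# each candidate by attempting a decryption.
def brute_force(k, is_correct):
    for candidate in range(2 ** k):
        if is_correct(candidate):
            return candidate
    return None  # no key passed the check

secret = 0b1010101010101010  # a 16-bit "key" for demonstration
found = brute_force(16, lambda key: key == secret)
print(found == secret)       # → True
```

The exponential term 2^k is exactly why adding a single bit to the key doubles the attacker’s work while barely affecting the legitimate user.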

For symmetric-key cryptography (in which both parties can securely exchange a secret before they begin communicating), achieving this is fairly straightforward. For asymmetric cryptography, it is harder.