Quantum computers are slow


The title of this GMVBlog post may seem provocative, but bear with me; you’ll soon get my drift. Quantum computers are slow for two reasons:

  1. The way they work, by their very definition
  2. The way they will be accessed in the near future (in the cloud)

As we saw in an earlier GMVBlog post (GOOGLE AND ‘QUANTUM SUPREMACY’), the underlying principles of any quantum computer worthy of the name are, firstly, the Superposition Principle, according to which a particle may be in several different states at the same time, and, secondly, the Entanglement Principle, which lets us kill two birds with one stone: whatever we do to a qbit automatically and instantly determines what will happen to another qbit entangled with the first, Einstein’s ‘spooky action at a distance’[1]. But to describe these two principles mathematically and follow what comes later, we first need to see how a qbit, a quantum computer’s fundamental information-processing unit, is defined:

 |qbit⟩ = (probability of 0) · |state 0⟩ + (probability of 1) · |state 1⟩

(Oops! I hope you’re still with us!)
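
To make this concrete, here is a minimal sketch in plain Python/NumPy (names and numbers of my own choosing, not any particular quantum SDK) of a qbit stored as its two factors. Strictly speaking the factors are probability amplitudes; their squared moduli are the probabilities the post talks about.

    import numpy as np

    # A qbit holds two complex amplitudes: one in front of |state 0> and one in front of |state 1>.
    # Their squared moduli are the probabilities of measuring 0 or 1.
    qbit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)
    print(np.abs(qbit) ** 2)        # probability of 0 and probability of 1: [0.5 0.5]

    # Two entangled qbits share their amplitudes as a pair (here only |00> and |11> appear),
    # so measuring one of them instantly fixes what the other will give.
    pair = np.array([1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)], dtype=complex)   # order: 00, 01, 10, 11
    print(np.abs(pair) ** 2)        # [0.5 0.  0.  0.5]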

In a quantum computer we have only 2 options, |state 0⟩ or |state 1⟩, whereas in a general experiment we might have infinitely many possibilities. If, instead of 1 qbit, we have two or more entangled qbits, things get a bit more complicated, but the essence is still the same. How are quantum-mechanics experiments carried out? We start with a source of quanta (a lot of them); these are whittled down to the states we want to measure; they are allowed to evolve according to the laws of physics and then measured. And the result of a measurement can only be one of the specified options. But note that in front of each option there is a factor that says “probability of 0” or “probability of 1”. By the very definition of probability, for these numbers to make sense, the measurement has to be repeated many times. This is the key to the slowness of quantum computers. What does a quantum algorithm do? It forces one of the two probabilities to be very high and the other very low. But we don’t know beforehand which of the two possible options is the more probable one. After all, it is an algorithm; we want to calculate things we didn’t know beforehand.
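
A toy illustration of that last point (ordinary Python, with an illustrative probability of my own invention): a single measurement is just one random 0 or 1, so only repetition reveals which option the algorithm has favoured.

    import numpy as np

    rng = np.random.default_rng()

    # Suppose the algorithm has pushed the probability of the (unknown to us) right answer up to 0.9.
    qbit = np.array([np.sqrt(0.1), np.sqrt(0.9)], dtype=complex)
    probabilities = np.abs(qbit) ** 2

    # One measurement gives a single random 0 or 1 and tells us almost nothing on its own...
    print(rng.choice([0, 1], p=probabilities))

    # ...only the frequencies over many repetitions reveal which option is the probable one.
    shots = rng.choice([0, 1], size=1000, p=probabilities)
    print(np.bincount(shots, minlength=2) / len(shots))   # roughly [0.1 0.9]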

How is a quantum algorithm run? Just like any other experiment: 1) We prepare the qbits; this spadework also serves for entering the starting data. We set their initial values to |state 0⟩ or |state 1⟩, and we also decide whether to entangle each qbit with other qbits. 2) We let them evolve, and this evolution is what the quantum algorithm defines. While evolving, each qbit is described in the way shown above, without our knowing which option it is in. The entanglement evolves as part of this too. 3) We measure. Upon measurement, and only then, does the qbit choose between 0 and 1 and thereby become a bit. Until that moment it remains in a superposed state, the limbo of Schrödinger’s alive-and-dead cat or the spinning top in the film “Inception”.
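
Those three steps in miniature (a sketch in plain NumPy, not any real quantum SDK; a single Hadamard gate stands in for a whole algorithm):

    import numpy as np

    rng = np.random.default_rng()

    # 1) Prepare: a single qbit initialised to |state 0>.
    qbit = np.array([1.0, 0.0], dtype=complex)

    # 2) Evolve: the algorithm is a sequence of reversible (unitary) operations on the amplitudes;
    #    here one Hadamard gate creates an equal superposition of |state 0> and |state 1>.
    hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    qbit = hadamard @ qbit

    # 3) Measure: only now does the qbit choose 0 or 1 and become an ordinary bit.
    bit = rng.choice([0, 1], p=np.abs(qbit) ** 2)
    print(bit)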

But wait, do we measure only once? Not at all! One result is not enough, because then we wouldn’t know whether we have obtained the high-probability state or, by a serendipitous quantum twist of fate, the low-probability one. We have to measure many times. And that means repeating ALL the steps until we’re sure the machine’s result is the most probable one. On top of this we have to factor in quantum error-correcting codes, the great battleground of current quantum computing, there to ensure the dreaded blue screen doesn’t crop up at every turn due to decoherence. The computation takes long enough that the qbits can become entangled with the measuring apparatus and their surroundings, after which the experiment has to start again from scratch. Qbits are very frail.
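
Schematically, the repetition loop looks something like this (a sketch with an assumed per-run probability; error correction and restarts after decoherence are left out):

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng()

    def run_algorithm_once():
        """One full prepare-evolve-measure cycle. The evolution is assumed to have boosted
        the probability of the (still unknown) correct answer to 0.9."""
        return rng.choice([0, 1], p=[0.1, 0.9])

    # Repeat ALL the steps many times and keep the outcome that turns up most often.
    shots = [run_algorithm_once() for _ in range(1000)]
    answer, count = Counter(shots).most_common(1)[0]
    print(f"most frequent outcome: {answer} ({count} of 1000 runs)")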

The problem here is that each one of these steps takes time ranging from microseconds (I/O) to milliseconds (measurements). Any quantum computer therefore takes several milliseconds to run a quantum algorithm. To this we must add the fact that the first, prohibitively expensive quantum computers can be accessed only in the cloud, through companies that rent out time on them. Factor in too the network delay and the runtime of the corresponding web services. Running a comparable set of basic operations on a classical computer takes a few nanoseconds. Which is quicker?
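
A back-of-envelope comparison using the orders of magnitude above (the 1,000 repetitions and the 50 ms of cloud overhead are my own illustrative assumptions, not measurements):

    # Rough per-run costs taken from the paragraph above.
    prepare_and_io_s = 1e-6        # microseconds per run (I/O, preparation)
    measurement_s = 1e-3           # milliseconds per run (measurement)
    cloud_overhead_s = 50e-3       # assumed network delay plus web-service runtime per job
    runs = 1000                    # repetitions needed before we trust the statistics

    quantum_job_s = cloud_overhead_s + runs * (prepare_and_io_s + measurement_s)
    classical_op_s = 1e-9          # a basic classical operation: a few nanoseconds

    print(f"one quantum job: ~{quantum_job_s:.2f} s")
    print(f"in that time a classical CPU performs ~{quantum_job_s / classical_op_s:.0e} basic operations")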

In that case why are we so awed by the ostensible speed of quantum computers? Because the number of operations that would otherwise have to be run is much greater. Thanks to the superposition principle and the entanglement property we can bundle several algorithm steps into one or run several of them in parallel, and the number of repetitions needed compared with the equivalent classical algorithm is slashed. Thanks to quantum magic the result comes out as the most probable option allowed by the laws of physics. In other words, for certain problems a quantum algorithm needs far fewer operations than the best known classical algorithm, which may need exponentially many. Between waiting 10 billion years for a classical computer to break a sufficiently long cryptographic code and only a few milliseconds for a quantum computer, who cares if it’s slow?
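
To get a feel for “far fewer operations”, take code-breaking by factoring as the textbook example. The figures below use the standard asymptotic estimates for the best known classical method (the general number field sieve) and for Shor’s quantum algorithm, with constants ignored; they are scaling sketches, not benchmarks.

    import math

    n_bits = 2048                              # size of the key to be broken
    ln_N = n_bits * math.log(2)                # natural log of the number being factored

    # Best known classical factoring (general number field sieve): sub-exponential in the key size.
    classical_ops = math.exp(1.92 * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3))

    # Shor's quantum algorithm: roughly cubic in the key size.
    quantum_ops = n_bits ** 3

    print(f"classical: ~10^{math.log10(classical_ops):.0f} operations")
    print(f"quantum:   ~10^{math.log10(quantum_ops):.0f} operations")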

[1] Einstein’s ‘spooky action at a distance’ spotted in objects almost big enough to see, www.sciencemag.org, Apr 25, 2018.

Author: Fernando Labarga Ávalos


