My intuition for why quantum computing can outperform classical computing is that the wave-like nature of the wavefunction lets you interfere many states of information in a single operation, which in theory can allow for exponential speed-up.

But if it really is just constructive interference of complicated states, why not just perform this interference with classical waves?

And on that matter, if the figure of merit is simply how few steps something can be calculated in, why not start with a complicated dynamical system that has the desired computation embedded in it? (i.e., why not just create "analog simulators" for specific problems?)
Answers:
Your major premise is correct: the mathematics of waves mimics that of quantum mechanics. In fact, many of the pioneers of QM used to refer to it as wave mechanics for this exact reason. It is then natural to ask: "why can't we do quantum computation with waves?"

The short answer is that quantum mechanics allows us to work with an exponentially large Hilbert space while expending only polynomial resources. That is, the state space of $n$ qubits is a $2^n$-dimensional Hilbert space.

An exponentially large Hilbert space cannot be constructed out of polynomially many classical resources. To see why, let us look at two kinds of wave-mechanics-based computers.
The first way to build such a computer is to take $n$ two-level classical systems, so that each system by itself is represented by a 2D Hilbert space. For instance, imagine $n$ guitar strings in which only the first two harmonics are excited.

This setup fails to imitate quantum computation because there is no entanglement: any state of the system is a product state, so the combined system of guitar strings cannot be used to build up a $2^n$-dimensional Hilbert space.
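To make the counting concrete, here is a minimal NumPy sketch (my own illustration, not part of the original answer): $n$ independent two-level systems are described by only $2n$ numbers, and tensoring them together can only ever produce a product state, whereas a general state in the $2^n$-dimensional space needs exponentially many independent amplitudes.

```python
import numpy as np

n = 10  # number of two-level systems / qubits

# n independent "guitar strings": each needs only 2 amplitudes,
# so the whole collection is described by 2*n real parameters.
strings = [np.random.randn(2) for _ in range(n)]
product_state = strings[0]
for s in strings[1:]:
    product_state = np.kron(product_state, s)  # always a product state

# A general n-qubit state lives in a 2^n-dimensional Hilbert space:
# it needs 2^n independent amplitudes, almost all of which are
# unreachable by any setting of the 2*n per-string parameters.
general_state = np.random.randn(2**n)

print(f"parameters for {n} separate strings: {2 * n}")
print(f"amplitudes for a general {n}-qubit state: {2**n}")
```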
The second way one could try to construct an exponentially large Hilbert space is to take a single guitar string and identify its first $2^n$ harmonics with the basis vectors of the Hilbert space. This is what is done in @DaftWullie's answer. The problem with this approach is that the frequency of the highest harmonic that needs to be excited scales as $O(2^n)$. And since the energy of a vibrating string grows quadratically with its frequency, we would need an exponential amount of energy to excite the string. So in the worst case the energy cost of the computation grows exponentially with the problem size.
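As a quick back-of-the-envelope sketch of that energy argument (my own illustration, assuming the highest harmonic scales as $2^n$ and mode energy grows quadratically with frequency):

```python
# Encoding 2^n basis states in one string's harmonics:
# highest harmonic frequency ~ O(2^n); energy of a vibrating mode
# ~ frequency^2, so the worst-case energy cost grows like 4^n.
for n in [10, 20, 30]:
    f_max = 2**n        # frequency of the highest harmonic (arbitrary units)
    energy = f_max**2   # quadratic scaling in frequency => 4^n overall
    print(f"n={n:2d}: highest harmonic ~ 2^{n}, energy ~ 4^{n} = {energy:.1e}")
```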
So the key here is the lack of entanglement between the physically separable parts of a classical system. And without entanglement we cannot construct an exponentially large Hilbert space with polynomial overhead.
I myself often describe the source of the power of quantum mechanics as "destructive interference", which is to say the wave-like nature of quantum mechanics. From the point of view of computational complexity, it is clear that this is one of the most important and interesting features of quantum computation, as pointed out for instance by Scott Aaronson. But when we describe it this briefly ("the power of quantum computation is in destructive interference / the wave-like nature of quantum mechanics"), it is important to note that such a statement is a shorthand, and is necessarily incomplete.

Whenever you make a claim about the "power" or "advantage" of something, it is important to keep in mind: compared to what? In this case, what we are comparing against is specifically probabilistic computation: and what we have in mind is not merely that "something" acts like a wave, but specifically that something like probability acts like a wave.
It must be said that probability itself, in the classical world, does behave a little bit like a wave: specifically, it obeys a sort of Huygens' principle (you can understand the propagation of probabilities by summing the contributions from individual initial conditions; in other words, by a superposition principle). The difference, of course, is that probability is non-negative, so it can only accumulate, and its evolution is essentially a form of diffusion. Quantum mechanics manages to exhibit wave-like behaviour with probability-like quantities, the amplitudes, which can be non-positive, and so destructive interference of these amplitudes is possible.
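A minimal sketch of this contrast (my own illustration): a classical stochastic "coin" can only smear a probability distribution out, while the unitary Hadamard coin, applied twice, interferes destructively and returns all probability to the starting state.

```python
import numpy as np

# Classical probability: entries are non-negative, evolution is stochastic.
# A fair coin-flip step can only diffuse the distribution.
flip = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
p = np.array([1.0, 0.0])          # start definitely in state 0
print(flip @ flip @ p)            # [0.5, 0.5] -- and it stays spread out

# Quantum amplitudes: entries may be negative, evolution is unitary.
# The Hadamard "coin" applied twice destructively interferes and
# returns all probability to the starting state.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
a = np.array([1.0, 0.0])          # amplitude vector for |0>
print(np.abs(H @ H @ a)**2)       # [1.0, 0.0] -- interference undoes the spread
```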
In particular, because the things acting as waves are like probabilities, the "frequency space" in which the system evolves can be exponential in the number of particles involved in the computation. This general phenomenon is necessary if you want an advantage over conventional computation: if the frequency space scaled only polynomially with the number of systems, and the evolution itself obeyed a wave equation, the obstacles to simulation with classical computers would be easier to overcome. If you want to consider how to achieve a similar computational advantage with other kinds of waves, you have to ask yourself how you intend to squeeze an exponential number of distinguishable "frequencies" or "modes" into a bounded energy space.
Finally, on a practical note, there is the issue of fault tolerance. Another side effect of the wave-like behaviour of probability-like quantities is that you can perform error correction by testing parities, or more generally by coarse-grainings of marginal distributions. Without this facility, quantum computation would essentially be limited to a form of analog computation, which is useful for some purposes but is limited by its sensitivity to noise. We do not yet have fault-tolerant quantum computation in working hardware, but we know that it is possible in principle, and we are working towards it. It is unclear, by contrast, how something similar could be achieved with, say, water waves.
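As a toy illustration of the parity-testing idea (a classical sketch of my own, not the quantum codes themselves), the three-bit repetition code locates a single bit-flip purely from pairwise parities, without ever reading the encoded bit; quantum error-correcting codes measure analogous parities (stabilisers) without collapsing the encoded amplitudes.

```python
import random

def encode(bit):
    """Three-bit repetition code: 0 -> [0,0,0], 1 -> [1,1,1]."""
    return [bit, bit, bit]

def parity_syndrome(code):
    """Pairwise parities reveal where an error sits without
    revealing the encoded bit itself."""
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    s = parity_syndrome(code)
    if s == (1, 0):   code[0] ^= 1   # first bit flipped
    elif s == (1, 1): code[1] ^= 1   # middle bit flipped
    elif s == (0, 1): code[2] ^= 1   # last bit flipped
    return code

code = encode(1)
code[random.randrange(3)] ^= 1       # noise: flip one bit at random
print(correct(code))                 # [1, 1, 1] -- recovered
```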
Some of the other answers touch on this same feature of quantum mechanics: "wave-particle duality" expresses the fact that something like a probability for an individual particle behaves like a wave, and remarks about scalability and the exponential growth of the configuration space follow from it. But underlying these slightly higher-level descriptions is the fact that we have quantum amplitudes, which behave like the elements of a multivariate probability distribution, evolving linearly in time and accumulating, but which can be negative as well as positive.
I don't claim to have a complete answer (yet! I hope to update this, because it's an interesting question to try to explain well). But let me start with a couple of clarifying comments...
But if it really is just constructive interference of complicated states, why not just perform this interference with classical waves?
The glib answer is that it's not just interference. I think the real point is that quantum mechanics uses different axioms of probability than classical physics does (probability amplitudes), and these are not reproduced in the wave scenario.
When somebody writes "waves", I naturally think of water waves, but that is probably not the most helpful picture. Let's think about an idealised guitar string of length $L$ (fixed at both ends), with wave function
$$y(x,t)=\sum_{n=1}^{\infty}a_n\sin\left(\frac{n\pi x}{L}\right)\cos(\omega_n t),\qquad \omega_n=\frac{n\pi v}{L},$$
where $v$ is the speed of waves on the string.
However, they are not superposition and entanglement as we understand them in quantum theory. A key feature of quantum theory is that it contains indeterminism: the outcomes of some measurements are inherently unpredictable. We don't start or end our computation at these points, but we must pass through them somewhere during the computation. For example, experimental tests of Bell's theorem have shown that the world cannot be described by any local deterministic theory (and, so far, conforms to what quantum theory predicts). The wave-bit theory is entirely deterministic: I can look at my guitar string, whatever weird shape it might be in, and my looking at it does not change its shape. Moreover, I can even determine the values of the $a_n$ in a single shot, and therefore know what shape the string will be in at all later times. This is very different from quantum theory, where different bases can give me different information, but I can never access all of it (indeterminism).
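A short sketch of that "single shot" claim (my own illustration, with the oscillation phases set to zero for simplicity): one snapshot of the string's displacement determines every $a_n$ at once, by projecting onto the orthogonal sine modes.

```python
import numpy as np

L, N = 1.0, 8                        # string length, harmonics kept
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
a_true = np.random.randn(N)          # some arbitrary "weird shape"

# One snapshot of the string's displacement at t = 0:
y = sum(a_true[n] * np.sin((n + 1) * np.pi * x / L) for n in range(N))

# Recover every a_n from that single snapshot by projecting onto the
# orthogonal sine modes: no choice of basis, no disturbance, no trade-off.
a_rec = np.array([(2 / L) * np.sum(y * np.sin((n + 1) * np.pi * x / L)) * dx
                  for n in range(N)])
print(np.allclose(a_true, a_rec, atol=1e-3))   # True
```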
I don't have a complete proof of this. We know that entanglement is necessary for quantum computation, and that entanglement can demonstrate indeterminism, but that's not quite enough for a precise statement. Contextuality is a similar measure of indeterminism that applies even to single qubits, and results along those lines, covering broad classes of computations, have started to become available recently (see here).
Another way to think about this might be to ask what computational operations we can perform with these waves. Presumably, even if you allow some non-linear interactions, the operations can be simulated by a classical computer (after all, classical gates include non-linearity). I assume that the $a_n$ function like classical probabilities, not probability amplitudes.
This might be one way of seeing the difference (or at least heading in the right direction). There's a way of performing quantum computation called measurement-based quantum computation. You prepare your system in some particular state (which, we've already agreed, we could do with our w-bits), and then you measure the different qubits. Your choice of measurement basis determines the computation. But we can't do that here, because we don't have that choice of basis.
And on that matter, if the figure of merit is simply how few steps something can be calculated in, why not start with a complicated dynamical system that has the desired computation embedded in it? (i.e., why not just create "analog simulators" for specific problems?)
This is not the figure of merit. The figure of merit is really "how long does it take to perform the computation?" and "how does that time scale as the problem size changes?". If we choose to break everything down into elementary gates, then the first question is essentially how many gates there are, and the second is how the number of gates scales. But we don't have to break it down like that. There are plenty of "analog quantum simulators". Feynman's original specification of a quantum computer was one such analogue simulator. It's just that the time feature manifests in a different way. There, you're talking about implementing a Hamiltonian evolution $e^{-iHt}$ for a particular time $t$. Now, sure, you could implement $2H$ and replace $t$ with $t/2$, but practically, the coupling strengths in $H$ are limited, so there's a finite time that things take, and we can still ask how that scales with the problem size. Similarly, there's adiabatic quantum computation. There, the time required is determined by the energy gap between the ground state and the first excited state: the smaller the gap, the longer your computation takes. We know that all three models are equivalent in the time they take (up to polynomial conversion factors, which are essentially irrelevant if you're talking about an exponential speed-up).
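A small sketch of that rescaling point, using scipy.linalg.expm and a hypothetical two-level Hamiltonian (values chosen purely for illustration): doubling $H$ and halving $t$ gives exactly the same evolution, but bounded couplings mean the trick cannot be iterated for free.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-level Hamiltonian (hypothetical values, for illustration only).
H = np.array([[1.0, 0.3],
              [0.3, -1.0]])
t = 2.0

U_slow = expm(-1j * H * t)               # evolve under H for time t
U_fast = expm(-1j * (2 * H) * (t / 2))   # double the couplings, halve the time

print(np.allclose(U_slow, U_fast))       # True: the same evolution...
# ...but in the lab the coupling strengths in H are bounded, so the
# physical running time still matters and still scales with problem size.
```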
So, analog quantum simulators are certainly a thing, and there are those of us who think they're a very sensible thing at least in the short-term. My research, for example, is very much about "how do we design Hamiltonians so that their time evolution creates the operations that we want?", aiming to do everything we can in a language that is "natural" for a given quantum system, rather than having to coerce it into performing a whole weird sequence of quantum gates.
Regular waves can interfere, but cannot be entangled.
An example of an entangled pair of qubits, that cannot happen with classical waves, is given in the first sentence of my answer to this question: What's the difference between a set of qubits and a capacitor with a subdivided plate?
Entanglement is considered to be the crucial thing that gives quantum computers advantage over classical ones, since superposition alone can be simulated by a probabilistic classical computer (i.e. a classical computer plus a coin flipper).
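For instance (a minimal sketch of my own, not from the linked answer), the measurement statistics of the unentangled superposition $(|0\rangle+|1\rangle)/\sqrt{2}$ are reproduced exactly by a coin flipper; it is the correlations of entangled states that have no such classical substitute.

```python
import random
from collections import Counter

# Measuring (|0> + |1>)/sqrt(2) in the computational basis gives 0 or 1
# with probability 1/2 each. A classical computer with a coin flipper
# reproduces those statistics exactly -- no quantum hardware required.
samples = Counter(random.randint(0, 1) for _ in range(10_000))
print(samples)   # roughly {0: 5000, 1: 5000}
```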
"why not just perform this interference with classical waves?"
Yes, this is one way we simulate quantum computers on regular digital computers: we simulate the "waves" using floating-point arithmetic. The problem is that it does not scale. Every qubit doubles the number of dimensions. For 30 qubits you already need about 8 gigabytes of RAM just to store the "wave", i.e. the state vector. At around 40 qubits we run out of computers big enough to do this.
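To see the scaling concretely, here is a short sketch (assuming one single-precision complex number, 8 bytes, per amplitude, which matches the 8 GB figure above):

```python
# Memory needed to store an n-qubit state vector, one single-precision
# complex number (8 bytes) per amplitude -- it doubles with every qubit.
for n in [30, 40, 50]:
    amplitudes = 2**n
    gib = amplitudes * 8 / 2**30
    print(f"{n} qubits: 2^{n} amplitudes = {gib:,.0f} GiB")
```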
A similar question was asked here: What's the difference between a set of qubits and a capacitor with a subdivided plate?