In the early days of quantum mechanics, when people started looking at this theory that was really successful at doing things like predicting the spectra coming out of atoms, and began examining its implications for how we understand physical theories, there was this epic battle that went on for years between Einstein and Bohr, two giants of early 20th-century physics.
The problem was that as you look deeply into the theory, you realize that there is, for example, inherent randomness. You can prepare a couple of systems in identical ways, measure the same thing about each of them, and the outcomes are different. And they are different for very fundamental reasons. It’s not just, oh, we didn’t have enough information. No, even with perfect information, the outcomes of identical measurements can be different. This was not compatible with how physical theories were supposed to work.
Entanglement has made this progression from being an uncomfortable property of quantum systems — and a philosophical question — to something that constitutes the basis of quantum technologies.
It became a sort of hobby of Einstein’s to come up with little paradoxes based on these things that challenged the way we understand physical theories, from a very foundational point of view. In quantum mechanics, you can formally write something that we call an entangled state, and this is a state that involves more than one particle or more than one subsystem. It describes correlations between parts of that system.
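To make that concrete with a standard textbook example, which is not spelled out in the conversation itself: the simplest entangled state involves two particles, call them A and B, each of which gives 0 or 1 when measured. One such state is

\[
|\Phi^{+}\rangle \;=\; \frac{1}{\sqrt{2}}\left( |0\rangle_{A}|0\rangle_{B} \;+\; |1\rangle_{A}|1\rangle_{B} \right).
\]

Neither particle has a definite value on its own, and each individual result is random, but the two results always agree: if A gives 0, B gives 0, and if A gives 1, B gives 1.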
Einstein’s point was, if I take the two subsystems and separate them very far apart, then I can measure something on one and instantly know the outcome on the other, which seems to require some influence traveling faster than light, and that is not compatible with relativity. His conclusion was that there must be an underlying theory that “explained” the quantum-mechanical correlations and the apparent randomness of measurement outcomes.
Essentially, the results look random because there are these hidden variables that we don’t know about. But if we knew their values, the results would be predictable. These theories became known as “hidden variable” theories.
It wasn’t until the ’60s that John Bell wrote a beautiful, very simple theorem where he said, let’s assume that there is some underlying information, some underlying physical theory that works in this general way. Can I find a set of measurements whose predicted outcomes differ from those of the quantum-mechanical model?
This is called the Bell inequality. It says, if you measure a particular combination of correlations and you get a value below a certain threshold, then there could be an underlying local hidden-variable theory beneath quantum mechanics. But if the value is above this threshold, then quantum mechanics cannot be explained by local hidden variables. At this point, no matter how uncomfortable we are with the implications of quantum mechanics, we don’t get to pretend there is an underlying theory that will explain it all away.
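For reference, the most commonly tested form is the CHSH version of Bell’s inequality, a standard formulation the conversation does not spell out. One combines correlations E measured with two settings a, a′ on one side and b, b′ on the other:

\[
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad
|S| \le 2 \ \text{(local hidden variables)},
\qquad
|S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
\]

A measured value of S above 2, approaching 2√2 ≈ 2.83 for an entangled pair measured at suitably chosen angles, is what rules out an explanation in terms of local hidden variables.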
It really changes our perception of the nature of reality. The randomness in measurement results is fundamental to nature. It’s not just an artifact of our lack of information about hidden variables.
Eventually, people said, Well, okay, let’s test these things. For that, you need to make entangled particles, and you need to measure them in the way prescribed by Bell’s theorem. That’s essentially what John Clauser and Alain Aspect did during the ’70s and early ’80s, with increasing levels of sophistication. Anton Zeilinger wasn’t part of those Bell inequality measurements, but he used entanglement in a multitude of groundbreaking experiments, such as quantum teleportation, entanglement swapping, and the generation of tripartite entangled states.