The Unlikely Origins Of The First Quantum Computer

Within days of each other back in 1998, two teams published the results of the first real-world quantum computations. But the first quantum computers weren’t computers at all. They were biochemistry equipment, relying on the same science as MRI machines.

You might think of quantum computing as a hyped-up race between computer companies to build a powerful processing device that will make more lifelike AI, revolutionise medicine, and crack the encryption that protects our data. And indeed, the prototype quantum computers of the late 1990s indirectly led to the quantum computers built by Google and IBM.

But that’s not how it all began—it started with physicists tinkering with mathematics and biochemistry equipment for curiosity’s sake.

“It was not motivated in any way by making better computers,” Neil Gershenfeld, the director of MIT’s Center for Bits and Atoms and a member of one of the two teams that first experimentally realised quantum algorithms, told me. “It was understanding whether the universe computes, and how the universe computes.”

Computers are just systems that take an abstracted input and apply a series of instructions to it in order to produce an output. Today’s computers translate inputs, instructions, and outputs into switches, called bits, that equal either zero or one and whose values control other switches.
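
As a toy illustration (mine, not anything from the researchers), here’s that input-instructions-output loop at the level of bits, sketched in Python as a one-bit adder:

```python
# A one-bit "half adder": two input bits, a couple of logic gates as the
# instructions, and two output bits (the sum and the carry).
def half_adder(a: int, b: int) -> tuple[int, int]:
    s = a ^ b      # XOR gate: the sum bit
    carry = a & b  # AND gate: the carry bit
    return s, carry

print(half_adder(1, 1))  # input 1 and 1 -> output (0, 1), i.e. binary 10
```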

Scientists have long used computers to simulate the laws of physics, hoping to better understand how the universe works—for example, you can simulate how far a ball will go based on where it starts and how fast it is thrown.

But using bits to simulate physics didn’t make much sense to famed physicist Richard Feynman, since the laws of physics at the smallest scale are rooted in a set of rules called quantum mechanics.

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” Feynman famously said at a 1981 conference.

A small band of scientists spent the following decade theorising about using these rules to create better simulations. Instead of switches, the bits of their quantum simulations would be the dual particle-waves of quantum mechanics. Each quantum bit, or qubit, would still be restricted to two states, but as a wave, it can take on both states simultaneously with varying strengths, interacting with other qubits like ocean waves, either amplifying the strength of certain combinations of states or cancelling combinations out.

But once you measure these quantum bits, each one immediately snaps into a single state. Those strengths, or amplitudes, translate into the probability of ending up with each outcome.
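
To make that concrete, here’s a small Python sketch, an illustrative toy rather than either team’s code, that stores one qubit as two amplitudes and shows interference cancelling one outcome entirely:

```python
import numpy as np

# One qubit as a wave: an amplitude for each of its two states.
qubit = np.array([1, 1]) / np.sqrt(2)  # equal parts "0" and "1"

# A Hadamard gate mixes the states so the two paths to "1" cancel out
# while the two paths to "0" reinforce each other.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
qubit = H @ qubit

# Measuring snaps the qubit to one state; squared amplitudes are the odds.
print(np.abs(qubit) ** 2)  # ~[1, 0]: the "1" outcome has cancelled away
```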

Through the early 1990s, “people thought that quantum computing was essentially mad, and many had [supposedly] proved that it could never work,” Jonathan Jones, a physics professor at the University of Oxford who was one of the first to run quantum algorithms on a real quantum computer, told me. Mainly, people thought it was just a curiosity created by theoretical physicists who wondered whether they could understand the universe itself in the language of computers.

It also seemed that the finickiness of quantum mechanics, the fact that any slight jostle could snap fragile qubits into single-state particles, would make quantum computers impossible to realise in practice.

Two milestones busted those ideas. Physicist Peter Shor unveiled an algorithm in 1994 that showed that a computer based on qubits could factor large numbers near-exponentially faster than the best bit-based algorithms. If scientists could invent a quantum computer advanced enough to run the algorithm, then it could crack the popular modern-day encryption systems based on the fact that it’s easy for classical computers to multiply two large prime numbers together but very, very hard to factor the result back into primes.
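
You can get a feel for that asymmetry in a few lines of Python, using toy primes (real encryption uses primes hundreds of digits long):

```python
# Multiplying two primes is a single, instant step...
p, q = 104723, 104729
n = p * q

# ...but undoing it naively means trial-dividing all the way up to the
# smallest factor, a cost that explodes as the primes get longer. Shor's
# algorithm sidesteps this blow-up on a quantum computer.
def smallest_factor(n: int) -> int:
    d = 2
    while n % d:
        d += 1
    return d

print(smallest_factor(n))  # 104723, found after ~100,000 trial divisions
```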

The second turning point came in the mid-90s, when physicists started developing error correction: the idea of spreading a single qubit’s worth of information across a series of correlated qubits so that errors could be detected and undone.
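
The quantum schemes are subtler than this, but a classical “repetition code” sketch captures the basic idea of trading redundancy for reliability:

```python
import random

# Spread one bit across three copies; a single corrupted copy is then
# outvoted by the other two.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def add_noise(copies: list[int], flip_chance: float = 0.1) -> list[int]:
    return [b ^ (random.random() < flip_chance) for b in copies]

def decode(copies: list[int]) -> int:
    return int(sum(copies) >= 2)  # majority vote

print(decode(add_noise(encode(1))))  # almost always recovers the 1
```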

But even after that, the field was small; the physicists I spoke to described conferences at which most of the world’s quantum computing scientists could fit in a single room. Quantum computing forerunners like Charlie Bennett, Isaac Chuang, Seth Lloyd, and David DiVincenzo were coming up with lots of new ideas that percolated quickly through the community.

Almost simultaneously, several independent groups realised that the medical and biochemistry industries had long been using quantum computers in their research: Nuclear Magnetic Resonance, or NMR, spectrometers.

In NMR, the technology behind MRI, a molecule of interest is most commonly dissolved in a liquid solvent and placed in a strong magnetic field. The nuclei of the atoms in these molecules have an innate quantum mechanical property called “spin,” essentially the smallest unit of magnetic information, which can be in either of two states, “up” or “down.” These spins align with or against the direction of the field.

In medicine and biochemistry, scientists hit the molecules with additional, smaller oscillating magnetic fields, called radio-frequency pulses, causing the nuclei to emit characteristic signals that carry physical information about the molecule. Magnetic resonance imaging (MRI) machines instead use these signals to build up a picture.

But the physicists realised that they could treat certain molecules in this magnetic field as quantum computers: the nuclei served as qubits, the spin states were the qubit values, and the radio-frequency pulses were both the instructions and the controllers. The pulses carry out the operations of a quantum computer, called logic gates just as they are in classical computers.
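
In idealised form, the pulse-as-gate correspondence is easy to write down. The Python sketch below, a simplification that ignores the messy ensemble physics of real NMR, treats a pulse as a rotation whose angle depends on the pulse’s duration; a half-turn “pi pulse” acts like a NOT gate:

```python
import numpy as np

def rf_pulse(angle: float) -> np.ndarray:
    """Rotation of a single spin about the x-axis by `angle` radians."""
    return np.array([
        [np.cos(angle / 2), -1j * np.sin(angle / 2)],
        [-1j * np.sin(angle / 2), np.cos(angle / 2)],
    ])

spin_up = np.array([1, 0])            # the "up" qubit value
flipped = rf_pulse(np.pi) @ spin_up   # a pi pulse acts as a NOT gate

print(np.abs(flipped) ** 2)           # ~[0, 1]: the spin is now "down"
```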

“In a sense, NMR had actually been ahead of other fields for decades,” said Jones, a biochemist who teamed up with physicist Michele Mosca to perform one of the first quantum calculations. “They had done logic gates back in the 70s. They just didn’t know what they were doing and didn’t call it logic gates.”

Physicists including Chuang, Gershenfeld, and David Cory released papers in 1997 detailing how to realise these devices. A year later, two teams, one with Jones and Mosca and another with Chuang and Mark Kubinec, actually performed quantum algorithms on them.

The former team’s computer consisted of cytosine molecules in which two hydrogen atoms had been replaced with deuterium, hydrogen with an added neutron. The latter used chloroform molecules. Each team prepared its qubits in an initial state, performed a computation by applying specially crafted radio-frequency pulses, and measured the final states.
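
Here’s what that prepare-compute-measure cycle looks like in an idealised single-molecule simulation (a real NMR experiment reads out the averaged signal of trillions of molecules, and implements the gate below as a pulse sequence):

```python
import numpy as np

up, down = np.array([1, 0]), np.array([0, 1])

# Prepare: two nuclear-spin qubits in a known initial state.
state = np.kron(down, up)  # first spin "down", second spin "up"

# Compute: a controlled-NOT, a standard two-qubit logic gate, which
# flips the second spin whenever the first spin is "down".
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state

# Measure: squared amplitudes give the odds of each two-spin outcome.
print(np.abs(state) ** 2)  # [0 0 0 1]: both spins end up "down"
```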

We don’t often hear about NMR quantum computers today because, even then, physicists knew the technique had its limits, something every physicist I spoke with mentioned.

More qubits would mean ever more elaborate custom molecules. And the technique relied on workarounds that made each additional qubit’s signal harder to pick out of the background noise. “No one thought it would ever be used for more than a demonstration,” Jones said. The machines just weren’t scalable beyond a few qubits.
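
A crude toy model, assuming the usable signal simply halves with every added qubit, shows how fast the numbers turn hopeless:

```python
# Toy model: usable signal falls off roughly as 2**-n with qubit count n.
# (An assumed simplification; real NMR scaling involves many more details.)
for n in (2, 7, 30, 50):
    print(f"{n} qubits -> signal fraction ~ {2.0 ** -n:.0e}")
```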

Still, these were important experiments that physicists talk about today. NMR machines remain crucial to biochemistry and still have a place in quantum technology, and the early work left an important, if indirect, mark on the field: the science behind those radio-frequency pulses lives on in the pulses that Google, IBM, and other companies use to control the qubits of their quantum computers.

Quantum computers capable of running Shor’s algorithm at code-breaking scale are still likely decades away, but companies have begun unveiling real devices with dozens of qubits that can perform rudimentary, clearly quantum calculations.

Charlie Bennett, IBM fellow and quantum computing veteran, explained that these experiments weren’t enormous discoveries on their own; the NMR community had been advancing its radio-frequency pulse techniques long before quantum computing came along. The physicists I spoke with made clear that nobody “won” and that there was no “race” back in the late 1990s.

Instead, it was a transition point along a road of incremental advances, a point in time in which groups of scientists all came to realise that humans had the technology to control quantum states and use them for computations.

“Science is always like that. The whole evidence is more important than almost any one paper,” said Bennett. “There are important discoveries – but these rarely occur in single papers.”

