Four benchmarks for VCs investing in quantum

Quantum computing has seen significant capital inflows in recent years, with funding growing from $93.5 million in 2015 to $3.2 billion in 2021, and VC and private capital accounting for more than 70% of those investments. However, a major challenge for the emerging sector is benchmarking the value of innovation. Without a standardized way to measure “how well” a quantum computer is performing, this capital risks being misdirected.

This could undermine the credibility of quantum computing if unrealistic expectations trigger a hype cycle. Such criticism has already been leveled at the sector: Oxford physicist Nikita Gourianov recently argued in the FT that it suffers from “a grossly exaggerated perspective on the promise of quantum computing” and warned of “the formation of a classical bubble”.

But there are some measurable areas that generally correspond to performance improvements for a quantum computer. In this article, I will cover four benchmarks: gate fidelity, coherence time, scale potential, and error correction.

Gate fidelity

The digital circuits we see in traditional processors are built around “logic gates”: effectively, circuits that execute a series of instructions. A quantum logic gate is the quantum computing equivalent, a basic quantum circuit that operates on a small number of qubits.

However, quantum logic gates carry significant complexity compared to traditional logic gates. Without delving too deeply into the physics: under quantum mechanics, we forgo the idea that we can precisely predict the value of a property a particle possesses. Instead, until we measure that particle, it can take on a range of values for that property, with some values more likely than others. We call this range of probabilities for a particle’s state a “quantum state”.

This probabilistic element of quantum states is a challenge when trying to get a quantum gate to work reliably. In short, higher gate fidelity means more reliable operations through a quantum gate and a greater likelihood that a processing cycle will follow the instructions we give it.
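To make “fidelity” concrete, here is a minimal numpy sketch that scores a noisy single-qubit gate against its ideal counterpart using Nielsen’s average gate fidelity formula. The 2% over-rotation is an invented stand-in for a hardware imperfection, not a figure from any real device.

```python
import numpy as np

def average_gate_fidelity(u_ideal, u_actual):
    """Average gate fidelity between two unitaries (Nielsen's formula):
    F = (d + |Tr(U_ideal^dagger U_actual)|^2) / (d * (d + 1))."""
    d = u_ideal.shape[0]
    overlap = np.trace(u_ideal.conj().T @ u_actual)
    return (d + abs(overlap) ** 2) / (d * (d + 1))

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta."""
    return np.array([
        [np.cos(theta / 2), -1j * np.sin(theta / 2)],
        [-1j * np.sin(theta / 2), np.cos(theta / 2)],
    ])

ideal = rx(np.pi)         # a perfect X (NOT) gate, up to global phase
noisy = rx(np.pi * 1.02)  # the same gate, over-rotated by 2%
print(f"gate fidelity: {average_gate_fidelity(ideal, noisy):.6f}")
```

A fidelity of 1.0 would mean the gate does exactly what we instruct; the point for investors is that even small per-gate shortfalls compound quickly over the thousands of gates in a deep circuit.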

Coherence time

Imagine you have a very hot piece of metal. You can use that hot metal to do a lot of work for you, for example by converting its heat directly into electricity. But over time, interactions with ambient air particles will bleed away most of that thermal energy, to the point where the metal isn’t hot enough to power any work.

Something similar happens with the “quantum nature” of particles. Over time, as quantum particles interact with their environment, they lose their ability to do useful informational work, eventually rendering them unusable for a quantum computer.

A quantum particle that can still do useful work is said to be “coherent”. Quantum computers that can extend the time a particle remains coherent have more room to perform calculations useful to us.
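As a rough illustration, a common simplified model treats coherence as decaying exponentially with a characteristic time T2. The numbers below are hypothetical, chosen only to show how coherence time caps the depth of a computation.

```python
import numpy as np

# Hypothetical figures for illustration; not from any specific device.
T2 = 100e-6        # coherence time: 100 microseconds
GATE_TIME = 50e-9  # duration of one gate operation: 50 nanoseconds

# Simplified model: coherence decays roughly as exp(-t / T2).
def remaining_coherence(t: float) -> float:
    return float(np.exp(-t / T2))

print(f"coherence after one gate: {remaining_coherence(GATE_TIME):.6f}")

# Rough depth budget: how many sequential gates fit before
# coherence drops to half its initial value?
max_depth = int(T2 * np.log(2) / GATE_TIME)
print(f"~{max_depth:,} sequential gates before coherence halves")
```

Under these assumed numbers, doubling T2 (or halving gate time) directly doubles the depth budget, which is why both figures matter when comparing platforms.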

Scale potential

Some approaches to quantum computing have more scaling potential than others. For example, approaches that build qubits out of silicon can borrow heavily from existing semiconductor manufacturing processes while requiring little physical space per qubit. That translates into greater production potential and more qubits doing useful work per square inch of chip.

How will the chosen architecture behave when made 10 times larger? 100 times? 1,000 times? And is that economical? If we want to realize quantum computing in practice, we have to go beyond a handful of qubits; a simple sketch of the arithmetic follows below.
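The back-of-the-envelope calculation below shows why density matters at those scale factors. Every number in it (qubit pitch, chip size) is a made-up placeholder, since real figures vary widely across architectures.

```python
# Illustrative only: how qubit count scales with chip area for a
# hypothetical silicon architecture with a fixed qubit pitch.
QUBIT_PITCH_UM = 100  # assumed spacing between qubits (micrometres)
CHIP_SIDE_MM = 10     # assumed chip edge length (millimetres)

qubits_per_side = (CHIP_SIDE_MM * 1000) // QUBIT_PITCH_UM
base_count = qubits_per_side ** 2

for scale in (1, 10, 100, 1000):  # the area scale factors from the text
    print(f"{scale:>5}x area -> ~{base_count * scale:,} qubit sites")
```

Raw site count is only part of the story, of course: wiring, control electronics, and cooling also have to scale, and they are often the real bottleneck.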

Error correction

Another benchmark area is how we handle quantum error correction. Alongside the aforementioned loss of coherence, there is always some “background noise” surrounding quantum effects that can disrupt a calculation. Together, these mean that every operation carries a risk of error. Because of this, a quantum computer must find ways to detect errors and prevent their propagation, so they don’t undermine the overall performance of a process.


To achieve error correction, teams must understand how errors propagate through a system. A team must also be able to construct systems that compensate for, and where necessary correct, sources of error, be it faulty quantum gates, corruption of stored quantum information, or erroneous measurements.
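To see why redundancy helps, here is a toy, purely classical analogue of the simplest quantum error-correcting idea, the three-qubit bit-flip repetition code: encode one logical bit as three copies, flip each copy independently with probability p, and decode by majority vote. (A real quantum code must avoid measuring the data directly, so treat this as an intuition aid, not an implementation.)

```python
import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    """Estimate the logical error rate of a 3-copy repetition code
    under independent bit-flips with probability p."""
    errors = 0
    for _ in range(trials):
        copies = [1, 1, 1]  # encode logical "1" as three copies
        noisy = [bit ^ (random.random() < p) for bit in copies]
        decoded = 1 if sum(noisy) >= 2 else 0  # majority vote
        errors += decoded != 1
    return errors / trials

for p in (0.01, 0.05, 0.10):
    print(f"physical error {p:.0%} -> logical error ~{logical_error_rate(p):.4f}")
```

Below a threshold, the logical error rate (roughly 3p² here) is far smaller than the physical rate p, and that suppression is the basic bet behind quantum error correction.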

Benchmarking for realistic quantum growth

This has been a relatively cursory exploration of possible benchmarks in the quantum space, covering only four underlying technical benchmarks. As you might imagine, many other factors come into play when assessing the viability of a quantum computing startup.

To contend with the inherent complexity of the space, investors must be willing to dive into and grapple with the technological and engineering underpinnings of quantum. Rather than casually discussing quantum’s potential, investors need to be willing to understand the physical and technical challenges, and the solutions, in which they are investing. Only then can we improve capital allocation, select the most promising teams, and build the credibility of quantum computing.

Rick Hao is a partner at Speedinvest.