When will quantum computers outperform classical ones?
This question has hovered over the field for decades, shaping billion-dollar investments and driving scientific debate.
The question only has meaning in context, because the answer depends on the problem at hand. We already have estimates of the quantum computing resources needed for Shor’s algorithm, which offers a superpolynomial advantage over the best-known classical methods for integer factoring, threatening cryptographic protocols. Quantum simulation lets one glean insights into exotic materials and chemical processes that classical machines struggle to capture, especially when strong correlations are present. But even within these famous examples, estimates change surprisingly often, carving years off expected timelines. And outside them, the map to quantum advantage is hazy.
Researchers at ĢƵ have taken a fresh step toward drawing this map. In a new theoretical framework, Harry Buhrman, Niklas Galke, and Konstantinos Meichanetzidis introduce the concept of “queasy instances” (quantum easy) – problem instances that are comparatively easy for quantum computers but appear difficult for classical ones.
From Problem Classes to Problem Instances
Traditionally, computer scientists classify problems according to their worst-case difficulty. Consider Boolean satisfiability, or SAT: given a set of variables (each can be assigned a 0 or a 1) and a set of constraints, decide whether there exists an assignment of the variables that satisfies all the constraints. SAT is a canonical NP-complete problem, so in the worst case both classical and quantum algorithms are expected to perform badly, meaning their runtime scales exponentially with the number of variables. Factoring, on the other hand, is believed to be easier for quantum computers than for classical ones.
But real-world computing doesn’t deal only in worst cases. Some instances of SAT are trivial; others are nightmares. The same is true for optimization problems in finance, chemistry, or logistics. What if quantum computers have an advantage not across all instances, but only for specific “pockets” of hard instances? This could be very valuable, yet worst-case analysis is oblivious to it and simply declares that there is no quantum advantage.
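To make the setting concrete, here is a minimal brute-force SAT check in Python. It is purely illustrative (real SAT solvers are far more sophisticated), but it shows where the exponential worst case comes from: with n variables there are 2^n candidate assignments to try.

```python
from itertools import product

def is_satisfiable(n_vars, clauses):
    """Brute-force SAT check. Clauses are lists of literals:
    3 means x3, -3 means NOT x3. Trying all 2^n assignments is
    exactly why worst-case runtime scales exponentially."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True  # found a satisfying assignment
    return False

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(is_satisfiable(3, [[1, 2], [-1, 3], [-2, -3]]))  # True, e.g. x1=1, x2=0, x3=1
```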
To make that idea precise, the researchers turned to a tool from theoretical computer science: Kolmogorov complexity. This measures how “regular” a string of bits is by the length of the shortest program that generates it. A simple string like 0000000000 can be described by a tiny program (“print ten zeros”), while the shortest description of a random string, one exhibiting no pattern, is about as long as the string itself. From there, the notion of instance complexity was developed: instead of asking “how hard is it to describe this string?”, we ask “how hard is it to solve this particular problem instance (represented by a string)?” For a given SAT formula, for example, its polynomial-time instance complexity is the size of the smallest program that runs in polynomial time and decides whether the formula is satisfiable. This smallest program must never answer incorrectly on any other instance, although it is allowed to declare “I don’t know”.
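In symbols, paraphrasing the standard definition rather than quoting the paper’s exact formulation, the polynomial-time instance complexity of an instance $x$ of a problem $A$ is

$$\mathrm{ic}^{\mathrm{poly}}(x : A) = \min \{\, |p| \;:\; p \text{ runs in polynomial time},\; p(x) = A(x),\; \text{and } p(y) \in \{A(y), \bot\} \text{ for all } y \,\},$$

where $|p|$ is the length of program $p$ and $\bot$ stands for “I don’t know”.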
In their new work, the team extends this idea into the quantum realm by defining polynomial-time quantum instance complexity as the size of the shortest quantum program that solves a given instance and runs in polynomial time. This makes it possible to directly compare quantum and classical effort, in terms of program description length, on the very same problem instance. If the quantum description is significantly shorter than the classical one, that instance is what the researchers call “queasy”: quantum-easy and classically hard. Queasy instances are the precise places where quantum computers offer a provable advantage – one that worst-case analysis may overlook.
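Schematically, and again in our own notation rather than the paper’s, the quantum analogue swaps in polynomial-time quantum programs, and the gap between the two quantities measures how queasy an instance is:

$$\mathrm{qic}^{\mathrm{poly}}(x : A) = \min \{\, |q| \;:\; q \text{ is a polynomial-time quantum program consistent with } A \text{ that decides } x \,\},$$

$$\mathrm{queasiness}(x) = \mathrm{ic}^{\mathrm{poly}}(x : A) - \mathrm{qic}^{\mathrm{poly}}(x : A).$$

A large positive gap is precisely what makes an instance queasy.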
Why “Queasy”?
The playful name captures the imbalance between classical and quantum effort. A queasy instance is one that makes classical algorithms struggle: the shortest efficient classical program that decides it is long and unwieldy, while a quantum computer can handle the same instance with a much simpler, faster, and shorter program. In other words, these instances make classical computers “queasy,” while quantum computers find them easy. The key test of these definitions is whether they yield sensible results for well-known optimisation problems.
By carefully analysing a mapping from integer factoring to SAT (possible because factoring lies in NP and SAT is NP-complete), the researchers prove that there exist infinitely many queasy SAT instances. SAT is one of the most central and well-studied problems in computer science, with numerous real-world applications. The significant realisation the framework highlights is that SAT is not expected to yield a blanket quantum advantage; rather, within it lie islands of queasiness – special cases where quantum algorithms decisively win.
Algorithmic Utility
Finding a queasy instance is exciting in itself, but there is more to this story. Surprisingly, the new framework shows that when a quantum algorithm solves a queasy instance, it does much more than settle that single case. Because the program that solves it is so compact, the same program provably solves an exponentially large set of other instances as well. Interestingly, the size of this set grows exponentially with the queasiness of the instance!
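Schematically, as our paraphrase of the result rather than its precise statement: if an instance has queasiness $k$, the compact quantum program that decides it also correctly decides a set $S$ of further instances with

$$|S| \gtrsim 2^{k},$$

so each extra bit of queasiness roughly doubles the number of instances that come along for free.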
Think of it like discovering a special shortcut through a maze. Once you’ve found the trick, it doesn’t just solve that one path, but reveals a pattern that helps you solve many other similarly built mazes, too (even if not optimally). This property is called algorithmic utility, and it means that queasy instances are not isolated curiosities. Each one can open a doorway to a whole corridor with other doors, behind which quantum advantage might lie.
A North Star for the Field
Queasy instances are more than a mathematical curiosity; the framework around them provides a language for quantum advantage. Even though the quantities defined in the paper are theoretical, involving Turing machines and treating programs as abstract bitstrings, they can be approximated in practice through an experimental, engineering approach. This work lays a foundation for pursuing quantum advantage by targeting problem instances, and proves that in principle this can be a fruitful endeavour.
The researchers see a parallel with the rise of machine learning. The idea of neural networks existed for decades, along with small-scale analogue and digital implementations, but only when GPUs enabled large-scale trial and error did they explode into practical use. Quantum computing, they suggest, is on the cusp of its own heuristic era. Heuristics will be prominent in finding queasy instances, whose structure classical methods struggle with but quantum algorithms can exploit, eventually leading to solutions of typical real-world problems. After all, quantum computing is well suited to small-data, big-compute problems, and the framework quantifies exactly that: instance complexity captures both the size of an instance and the amount of compute required to solve it.
Most importantly, queasy instances shift the conversation. Instead of asking the broad question of when quantum computers will surpass classical ones, we can now rigorously ask where they do. The queasy framework provides a language and a compass for navigating a rugged computational landscape, pointing researchers, engineers, and industries toward quantum advantage.
About ĢƵ
ĢƵ, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. ĢƵ’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, ĢƵ leads the quantum computing revolution across continents.
March 25, 2026
Celebrating Our First Annual Q-Net Connect!
This month, ĢƵ welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.
Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits - all users of ĢƵ systems.
The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at ĢƵ, as well as our partners at NVIDIA, JPMorganChase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; national labs, including NIST, Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.
Congratulations to Q-Net Connect 2026 Award Recipients!
The mission of the ĢƵ Q-Net user community is to create a space for shared learning, collaboration and connection for those who adopt ĢƵ’s hardware, software and middleware platform. At this year’s Q-Net Connect, we presented awards to four organizations that made notable contributions to this mission.
JPMorganChase received the ‘Guppy Adopter Award’ for their exemplary adoption of our quantum programming language, Guppy, in their research workflows.
Phasecraft, a UK- and US-based quantum algorithms startup, received the ‘Rising Star’ award for demonstrating exceptional early impact and advancing science using ĢƵ hardware, work they published in December 2025.
Qedma, a quantum software startup, received the ‘Startup Partner Engagement’ award for their sustained engagement with ĢƵ platforms dating back to our first commercially deployed quantum computer, H1.
Anna Dalmasso from the University of Nottingham received our ‘New Student Award’ for her impressive debut project on ĢƵ hardware and for delivering outstanding results as a new Q-Net student user.
Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!
Become a Q-Net Member
Q-Net offers year‑round support through user access, developer tools, documentation, trainings, webinars, and events. Members enjoy exclusive benefits, including being the first to hear about new content, publications and promotional offers.
By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.
In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.
With the rapid rise of Large Language Models (LLMs), people started asking: what if AI agents could serve as on-demand algorithm factories? We have been working with Hiverge, an algorithm discovery company, AWS, and NVIDIA to explore how LLMs can accelerate quantum computing research.
Hiverge – named for Hive, an AI that can develop algorithms – aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, along with any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.
The AI is called a “Hive” because it is a collective of LLM agents, all editing the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude, and Llama, as well as models accessed through AWS’ Amazon Bedrock service. Many models are included because researchers know that diversity is a strength – just as with a team of human researchers, a variety of perspectives often leads to the strongest result.
Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest evolve into the next generation. This process repeats, much like evolution in nature.
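In pseudocode, the loop looks something like the sketch below. Every name here is a hypothetical illustration of the evolutionary scheme described above, not Hiverge’s actual API.

```python
import random

def evolve_algorithms(seed_programs, llm_agents, fitness,
                      generations=10, population=20):
    """Illustrative evolutionary loop: LLM agents mutate candidate
    programs; a fitness score decides which candidates survive."""
    pool = list(seed_programs)
    for _ in range(generations):
        # Each agent proposes an edit to an existing candidate
        # (no retraining of the underlying models is required).
        offspring = [agent.rewrite(random.choice(pool)) for agent in llm_agents]
        # Execute and score everyone; unfit programs do not survive.
        pool = sorted(pool + offspring, key=fitness, reverse=True)[:population]
    return pool[0]  # fittest algorithm, to be tested on unseen instances
```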
After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step as the researchers want to understand how well it can generalize.
In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power-grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction-pathway optimization in drug, material, and chemical discovery, where better solutions could translate into greater operational efficiency, dramatically lower costs, and faster innovation.
In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting "algorithm" is effectively ‘hidden’ within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.
This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial: the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on ĢƵ’s quantum computer, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm is discovered, it can then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.
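For a flavor of that workflow, here is a minimal CUDA-Q Python example that runs a small kernel on NVIDIA’s GPU-accelerated simulator. The kernel is a toy stand-in; the circuits evolved in this work were far more complex.

```python
import cudaq

@cudaq.kernel
def ghz(n: int):
    # Toy kernel: prepare an n-qubit GHZ state and measure.
    qubits = cudaq.qvector(n)
    h(qubits[0])
    for i in range(1, n):
        x.ctrl(qubits[i - 1], qubits[i])
    mz(qubits)

# GPU-accelerated statevector simulation; a different target
# would send the same kernel to quantum hardware.
cudaq.set_target("nvidia")
print(cudaq.sample(ghz, 25, shots_count=1000))
```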
More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much like human mathematicians, such as Peter Shor and Lov Grover, have done. After all, both humans and AI can learn from each other.
As quantum computing power grows, so does the difficulty of error correction. Meeting that demand requires tight integration with high-performance classical computing, which is why we’ve partnered with NVIDIA to push the boundaries of real-time decoding performance.
Realizing the full power of quantum computing requires more than just qubits: it requires error rates low enough to run meaningful algorithms at scale. Physical qubits are sensitive to noise, which limits their capacity to handle calculations beyond a certain scale. To move beyond these limits, physical qubits must be combined into logical qubits, with errors continuously detected and corrected in real time before they can propagate and corrupt the calculation. This approach, known as fault tolerance, is a foundational requirement for any quantum computer intended to solve problems of real-world significance.
Part of the challenge of fault tolerance is the computational complexity of correcting errors in real time. Doing so involves sending the error syndrome data to a classical co-processor, solving a complex mathematical problem on that processor, then sending the resulting correction back to the quantum processor - all fast enough that it doesn’t slow down the quantum computation. For this reason, Quantum Error Correction (QEC) is currently one of the most demanding use-cases for tight coupling between classical and quantum computing.
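To make that round trip concrete, here is a minimal Python sketch of one decoding cycle, using a three-qubit repetition code and a lookup-table decoder as a toy stand-in for production decoders. The interfaces and timings are illustrative, not our real-time stack.

```python
import numpy as np

# Parity-check matrix of the 3-qubit bit-flip repetition code:
# each row compares a pair of data qubits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

# Lookup table: syndrome -> most likely single-qubit error.
DECODE = {(0, 0): (0, 0, 0), (1, 0): (1, 0, 0),
          (1, 1): (0, 1, 0), (0, 1): (0, 0, 1)}

def decode_cycle(error):
    # 1. The QPU measures stabilizers, producing syndrome data...
    syndrome = tuple(H @ error % 2)
    # 2. ...the classical co-processor solves for a correction...
    correction = np.array(DECODE[syndrome])
    # 3. ...which is applied before errors propagate further.
    return (error + correction) % 2

print(decode_cycle(np.array([0, 1, 0])))  # -> [0 0 0]: error corrected
```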
Given the difficulty of the task, we have partnered with NVIDIA, leaders in accelerated computing. With the help of NVIDIA’s ultra-fast GPUs (and the GPU-accelerated BP-OSD decoder NVIDIA developed as part of its CUDA-Q QEC library), we were able to demonstrate real-time decoding of Helios’ qubits, all in a system that can be connected directly to our quantum processors.
While real-time decoding has been demonstrated before (notably, by our own scientists in this study), previous demonstrations were limited in their scalability and complexity.
In this demonstration, we used Bring’s code, a high-rate code made possible by our all-to-all connectivity, to encode our physical qubits into noise-resilient logical qubits. Once they were encoded, we ran gates and also let the qubits idle, to see whether we could catch and correct errors quickly and efficiently. We submitted the circuits through our own Guppy language as well as through third-party frameworks, underlining our commitment to accessible, ecosystem-friendly quantum computing.
The results were excellent: we were able to perform low-latency decoding that returned results in the time we needed, even for the faster clock cycles that we expect in future generation machines.
A key part of the achievement here is that we performed something called “correlated” decoding, in which work that would normally be performed on the QPU is offloaded onto the classical decoder. In ‘standard’ decoding, improving your error-correction capabilities takes more and more time on the QPU; correlated decoding elides this cost, saving QPU time for the tasks that only the quantum computer can do.
Stay tuned for our forthcoming paper with all the details.