How Quantinuum researchers used quantum tensor networks to measure the properties of quantum particles at a phase transition
Quantum tensor networks promise a potentially exponential reduction in both the time and memory needed to calculate critical-state properties on digital quantum computers
April 9, 2023
When thinking about changes in phases of matter, the first images that come to mind might be ice melting or water boiling. The critical point in these processes is located at the boundary between the two phases – the transition from solid to liquid or from liquid to gas.
Phase transitions like these get right to the heart of how large material systems behave and are at the frontier of research in condensed matter physics for their ability to provide insights into emergent phenomena like magnetism and topological order. In classical systems, phase transitions are generally driven by thermal fluctuations and occur at finite temperature. In contrast, quantum systems can exhibit phase transitions even at zero temperature; the residual fluctuations that drive these transitions are due to entanglement and are entirely quantum in origin.
Quantinuum researchers recently used the H1-1 quantum computer to computationally model a group of highly correlated quantum particles at a quantum critical point, on the border of a transition from a paramagnetic state (a state of magnetism characterized by a weak attraction) to a ferromagnetic one (characterized by a strong attraction).
Simulating such a transition on a classical computer is possible using tensor network methods, though it is difficult. However, generalizations of such physics to more complicated systems can pose serious problems to classical tensor network techniques, even when deployed on the most powerful supercomputers. On a quantum computer, on the other hand, such generalizations will likely only require modest increases in the number and quality of available qubits.
In a technical paper submitted to the arXiv, the Quantinuum team demonstrated how the powerful components and high fidelity of the H-Series digital quantum computers can be harnessed to tackle a 128-site condensed matter physics problem, combining a quantum tensor network method with qubit reuse to make highly productive use of the 20-qubit H1-1 quantum computer.
Reza Haghshenas, Senior Advanced Physicist and the lead author of the paper, said, “This is the kind of problem that appeals to condensed-matter physicists working with quantum computers, who are looking forward to revealing exotic aspects of strongly correlated systems that are still unknown to the classical realm. Digital quantum computers have the potential to become a versatile tool for working scientists, particularly in fields like condensed matter and particle physics, and may open entirely new directions in fundamental research.”
The role of quantum tensor networks
Abstract representation of the 128-site MERA used in this work
Tensor networks are mathematical frameworks whose structure enables them to represent and manipulate quantum states efficiently. Originally associated with the mathematics of quantum mechanics, tensor network methods now crop up in many places, from machine learning to natural language processing, and indeed in any model with a large number of interacting, high-dimensional mathematical objects.
The Quantinuum team described using a tensor network method, the multi-scale entanglement renormalization ansatz (MERA), to produce accurate estimates for the decay of ferromagnetic correlations and the ground state energy of the system. MERA is particularly well-suited to studying scale-invariant quantum states, such as ground states at continuous quantum phase transitions, where each layer in the mathematical model captures entanglement at a different length scale.
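To make that layered structure concrete, here is a minimal, self-contained numpy sketch of how a binary MERA builds an 8-site state scale by scale: an entangled top tensor is repeatedly expanded by isometries (one site becomes two) and then dressed with disentanglers across block boundaries. All tensors here are random stand-ins and the helper names are our own; this illustrates the shape of the ansatz, not the circuit used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    # Haar-ish random unitary via QR decomposition of a Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q

def apply_two_site(state, gate, site, n):
    # Apply a two-qubit gate on qubits (site, site+1) of an n-qubit state vector.
    psi = np.moveaxis(state.reshape((2,) * n), (site, site + 1), (0, 1))
    psi = (gate @ psi.reshape(4, -1)).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(psi, (0, 1), (site, site + 1)).reshape(-1)

def isometry_layer(state, n, w):
    # w is a (4, 2) isometry (w.conj().T @ w = I) mapping one qubit to an
    # entangled pair, so an n-qubit state becomes a 2n-qubit state.
    psi = state.reshape((2,) * n)
    for i in range(n):
        psi = np.tensordot(w, psi, axes=[[1], [2 * i]])     # old site i -> 4-dim axis in front
        psi = psi.reshape((2, 2) + psi.shape[1:])           # split it into two qubit axes
        psi = np.moveaxis(psi, (0, 1), (2 * i, 2 * i + 1))  # slot the pair where site i was
    return psi.reshape(-1)

# Top-down construction: a 2-qubit top tensor, then two scales of
# isometries and disentanglers to reach 8 sites.
state, n = random_unitary(4)[:, 0], 2
w = random_unitary(4)[:, :2]                 # a random (4, 2) isometry
while n < 8:
    state = isometry_layer(state, n, w)
    n *= 2
    for i in range(1, n - 1, 2):             # disentanglers across block boundaries
        state = apply_two_site(state, random_unitary(4), i, n)

print(abs(np.vdot(state, state)))            # ~1.0: every layer preserves the norm
```

In the paper's setting the tensors are optimized rather than random, and qubit reuse is what lets a 128-site network of this shape run on the 20-qubit machine.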
“By calculating the critical state properties with MERA on a digital quantum computer like the H-Series, we have shown that research teams can program the connectivity and system interactions into the problem,” said Dave Hayes, Lead of the U.S. quantum theory team at Quantinuum and one of the paper’s authors. “So, it can, in principle, go out and simulate any system that you can dream of.”
Simulating a highly entangled quantum spin model
In this experiment, the researchers wanted to accurately calculate the ground state of the quantum system in its critical state. The system, known as a quantum spin model, is composed of many tiny quantum magnets that interact with one another and can point in different directions. In the paramagnetic phase, the tiny individual magnets in the material are randomly oriented and only correlated with each other over short length scales. In the ferromagnetic phase, these individual atomic magnetic moments align spontaneously over macroscopic length scales due to strong magnetic interactions.
In the computational model, the quantum magnets were initially arranged in one dimension, along a line. To describe the critical point in this quantum magnetism problem, particles in the line needed to be entangled with one another in a complex way, making this a very challenging problem for a classical computer to solve, especially in higher-dimensional and non-equilibrium settings.
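For a feel of the physics, here is a small exact-diagonalization sketch (our own toy example, not the paper's 128-site calculation) of a transverse-field Ising chain, a standard quantum spin model of exactly this kind: for field strength h below the coupling J the ground state is ferromagnetic, for h above J it is paramagnetic, and h = J is the critical point.

```python
import numpy as np

# Toy transverse-field Ising chain, H = -J * sum Z_i Z_{i+1} - h * sum X_i.
# Dense diagonalization limits us to a handful of spins, which is exactly
# why larger instances call for tensor networks or quantum hardware.
N = 8
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def single(op, i):
    # Embed a one-site operator at position i of the N-site chain.
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

def hamiltonian(J, h):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= J * single(Z, i) @ single(Z, i + 1)
    for i in range(N):
        H -= h * single(X, i)
    return H

for h in (0.5, 1.0, 2.0):   # ferromagnetic, critical, paramagnetic
    vals, vecs = np.linalg.eigh(hamiltonian(J=1.0, h=h))
    gs = vecs[:, 0]
    corr = [gs @ single(Z, 0) @ single(Z, r) @ gs for r in (1, 3, 5, 7)]
    print(f"h = {h}: E0 = {vals[0]:.3f}, <Z0 Zr> =", np.round(corr, 3))
```

At h = 0.5 the Z-Z correlations stay near one at all distances, at h = 2.0 they collapse quickly, and near the critical point they fall off gradually, the slow decay MERA is built to capture. The 2^N scaling of this brute-force approach is what makes larger critical systems so hard classically.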
“That's as hard as it gets for these systems,” Dave explained. “So that's where we want to look for quantum advantage – because we want the problem to be as hard as possible on the classical computer, and then have a quantum computer solve it.”
To improve the results, the team used two error mitigation techniques: symmetry-based error heralding, which is made possible by the MERA structure, and zero-noise extrapolation, a method originally developed by researchers at IBM. The first involves enforcing a local symmetry in the model so that errors that break the symmetry of the state can be detected. The second involves deliberately adding noise to the qubits to measure its impact, and then using those results to extrapolate back to the result that would be expected with less noise than was present in the experiment.
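Both ideas fit in a few lines of Python. The sketch below is a schematic of the general techniques with made-up numbers, not the paper's analysis: heralding discards shots whose parity betrays an error, and zero-noise extrapolation fits measurements taken at amplified noise levels and reads off the zero-noise intercept.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) Symmetry-based heralding: if the ideal state lives in a fixed parity
#    sector, any shot with the wrong parity signals an error and is discarded.
shots = rng.integers(0, 2, size=(1000, 4))      # stand-in for raw measurement bitstrings
kept = shots[shots.sum(axis=1) % 2 == 0]        # keep even-parity outcomes only
print(f"heralding kept {len(kept)} of {len(shots)} shots")

# 2) Zero-noise extrapolation: measure at deliberately amplified noise
#    levels, fit, and extrapolate to the zero-noise limit.
noise_scale = np.array([1.0, 2.0, 3.0])         # 1.0 = native hardware noise
measured = np.array([0.82, 0.67, 0.55])         # hypothetical noisy expectation values
slope, intercept = np.polyfit(noise_scale, measured, deg=1)
print(f"zero-noise estimate: {intercept:.3f}")  # the fitted value at noise scale 0
```

In practice richer fit models (Richardson or exponential extrapolation) are common; a linear fit keeps the idea visible.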
Future applications
The Quantinuum team describes this sort of problem as a stepping-stone, which allows the researchers to explore quantum tensor network methods on today’s devices and compare them to simulations or analytical results produced using classical computers. It is a chance to learn how to tackle a problem really well, before quantum computers scale up in the future and begin to offer solutions that are not possible to achieve on classical computers.
“Potentially, our biggest applications over the next couple of years will include studying solid-state systems, physics systems, many-body systems, and modeling them,” said Jenni Strabley, Senior Director of Offering Management at Quantinuum.
The team now looks forward to future work, exploring more complex MERA generalizations to compute the states of 2D and 3D many-body and condensed matter systems on a digital quantum computer – quantum states that are much more difficult to calculate classically.
The H-Series allows researchers to simulate a much broader range of systems than analog devices, as well as to incorporate quantum error mitigation strategies, as demonstrated in this experiment. Plus, Quantinuum’s System Model H2 quantum computer, which was launched earlier this year, should scale this type of simulation beyond what is possible using classical computers.
About Quantinuum
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
March 25, 2026
Celebrating Our First Annual Q-Net Connect!
This month, Quantinuum welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.
Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits, all users of Quantinuum systems.
The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at Quantinuum, as well as our partners at NVIDIA, JPMorganChase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; national labs, including NIST, Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.
Congratulations to Q-Net Connect 2026 Award Recipients!
The mission of the Quantinuum Q-Net user community is to create a space for shared learning, collaboration and connection for those who adopt Quantinuum’s hardware, software and middleware platform. At this year’s Q-Net Connect, we recognized four award recipients for their notable efforts in championing this mission.
JPMorganChase received the ‘Guppy Adopter Award’ for their exemplary adoption of our quantum programming language, Guppy, in their research workflows.
Phasecraft, a UK and US-based quantum algorithms startup, received the ‘Rising Star’ award for demonstrating exceptional early impact and advancing science using Quantinuum hardware, results they published in a December 2025 paper.
Qedma, a quantum software startup, received the ‘Startup Partner Engagement’ award for their sustained engagement with Quantinuum platforms dating back to our first commercially deployed quantum computer, H1.
Anna Dalmasso from the University of Nottingham received our ‘New Student Award’ for her impressive debut project on Quantinuum hardware and for delivering outstanding results as a new Q-Net student user.
Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!
Become a Q-Net Member
Q-Net offers year‑round support through user access, developer tools, documentation, trainings, webinars, and events. Members enjoy many exclusive benefits, including being the first to hear about new content, publications and promotional offers.
By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.
In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.
With the rapid rise of Large Language Models (LLMs), people started asking: what if AI agents could serve as on-demand algorithm factories? We have been working with Hiverge, an algorithm discovery company, AWS, and NVIDIA to explore how LLMs can accelerate quantum computing research.
Hiverge, named for Hive, an AI that can develop algorithms, aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, written mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, as well as any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.
The AI is called a “Hive” because it is a collective of LLM agents, all editing the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude, and Llama, as well as models accessed through AWS’s Amazon Bedrock service. Many models are included because researchers know that diversity is a strength: just like a team of human researchers working in a group, a variety of perspectives often leads to the strongest result.
Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest ones evolve to the next generation. This process repeats, much like the evolutionary process of nature itself.
After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step, as the researchers want to understand how well the algorithm generalizes.
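In outline, the evolve-measure-select loop looks like the sketch below. Everything in it is hypothetical scaffolding of our own: llm_mutate stands in for a call to one of the Hive's LLM agents, and fitness for executing a candidate and scoring its solution quality.

```python
import random

def llm_mutate(program: str) -> str:
    # Placeholder: in the real system an LLM agent edits the program text.
    return program + f"  # tweak {random.randint(0, 999)}"

def fitness(program: str) -> float:
    # Placeholder: execute the candidate (e.g. in simulation) and score it.
    return random.random()

population = ["def solve(instance): ..."] * 8        # the researcher's initial sketches
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]           # unfit programs do not survive
    children = [llm_mutate(p) for p in survivors]    # the fittest evolve further
    population = survivors + children

best = max(population, key=fitness)                  # then tested on held-out instances
print(best)
```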
In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction pathway optimization in drug, material, and chemical discovery, where better solutions could translate into greater operational efficiency, dramatic reductions in cost, and rapid acceleration of innovation.
In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting ‘algorithm’ is effectively hidden within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.
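To illustrate what "human-interpretable" buys you, here is a toy heuristic of our own devising (not an algorithm from this collaboration): a greedy local search for MaxCut, a standard combinatorial optimization benchmark. Because it is plain code rather than neural network weights, a researcher can read it, audit it, and rerun it on any new instance.

```python
import random

def local_search_maxcut(edges, n_nodes, seed=0):
    # Greedy local search: start from a random partition and keep flipping
    # any node whose flip increases the number of cut edges.
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n_nodes)]

    def gain(v):
        # Net change in the number of cut edges if node v switches sides.
        g = 0
        for a, b in edges:
            if v in (a, b):
                other = b if v == a else a
                g += 1 if side[v] == side[other] else -1
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n_nodes):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    return side

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(local_search_maxcut(edges, n_nodes=4))   # a partition of the 4 nodes
```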
This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial; the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on Quantinuum’s quantum computer, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm was discovered, it could then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.
More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much like human mathematicians, such as Peter Shor and Lov Grover, have done. After all, both humans and AI can learn from each other.
As quantum computing power grows, so does the difficulty of error correction. Meeting that demand requires tight integration with high-performance classical computing, which is why we’ve partnered with NVIDIA to push the boundaries of real-time decoding performance.
Realizing the full power of quantum computing requires more than just qubits: it requires error rates low enough to run meaningful algorithms at scale. Physical qubits are sensitive to noise, which limits their capacity to handle calculations beyond a certain scale. To move beyond these limits, physical qubits must be combined into logical qubits, with errors continuously detected and corrected in real time before they can propagate and corrupt the calculation. This approach, known as fault tolerance, is a foundational requirement for any quantum computer intended to solve problems of real-world significance.
Part of the challenge of fault tolerance is the computational complexity of correcting errors in real time. Doing so involves sending the error syndrome data to a classical co-processor, solving a complex mathematical problem on that processor, and then sending the resulting correction back to the quantum processor, all fast enough that it doesn’t slow down the quantum computation. For this reason, Quantum Error Correction (QEC) is currently one of the most demanding use-cases for tight coupling between classical and quantum computing.
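Schematically, each QEC round is a tight loop with a hard deadline. The sketch below is our own illustration of that loop, with made-up interfaces and a placeholder decoder; a real system replaces decode() with a GPU decoder and the callbacks with a hardware link to the QPU.

```python
import time

LATENCY_BUDGET_S = 100e-6   # hypothetical per-round deadline

def decode(syndrome):
    # Placeholder: a real decoder maps syndrome bits to a Pauli correction.
    return [bit for bit in syndrome]

def qec_round(read_syndrome, apply_correction):
    syndrome = read_syndrome()                   # 1) syndrome data leaves the QPU
    start = time.perf_counter()
    correction = decode(syndrome)                # 2) classical co-processor decodes
    elapsed = time.perf_counter() - start
    if elapsed > LATENCY_BUDGET_S:
        raise RuntimeError("decoder too slow: the quantum computation would stall")
    apply_correction(correction)                 # 3) the correction returns to the QPU

qec_round(lambda: [0, 1, 0, 0], lambda c: None)  # one round with stub interfaces
```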
Given the difficulty of the task, we have partnered with NVIDIA, leaders in accelerated computing. With the help of NVIDIA’s ultra-fast GPUs (and the GPU-accelerated BP-OSD decoder developed by NVIDIA as part of its CUDA-Q QEC library), we were able to demonstrate real-time decoding of Helios’ qubits, all in a system that can be connected directly to our quantum processors using NVIDIA’s NVQLink interconnect.
While real-time decoding has been demonstrated before (notably, by our own scientists in this study), previous demonstrations were limited in their scalability and complexity.
In this demonstration, we used the Bring’s code, a high-rate code made possible by our all-to-all connectivity, to encode our physical qubits into noise-resilient logical qubits. Once they were encoded, we ran gates on the logical qubits and also let them idle, to see whether we could catch and correct errors quickly and efficiently. We submitted the circuits via both CUDA-Q and our own Guppy language, underlining our commitment to accessible, ecosystem-friendly quantum computing.
The results were excellent: we were able to perform low-latency decoding that returned results within the time we needed, even at the faster clock cycles we expect in future-generation machines.
A key part of the achievement here is that we performed something called “correlated” decoding, in which work that would normally be performed on the QPU is offloaded onto the classical decoder. In ‘standard’ decoding, improving your error correction capabilities costs more and more QPU time; correlated decoding avoids this cost, saving QPU time for the tasks that only the quantum computer can do.
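One simple classical-side ingredient that captures the flavor of this offloading is Pauli-frame tracking: instead of spending QPU time applying corrective gates, the decoder's corrections are recorded in software and folded into how later measurement outcomes are read. The sketch below is our own simplified illustration, not Quantinuum's implementation.

```python
class PauliFrame:
    # Track pending X corrections classically instead of applying them on the QPU.
    def __init__(self, n_qubits):
        self.x_flips = [0] * n_qubits

    def record_correction(self, qubit):
        # The decoder requests an X correction: just note it in software.
        self.x_flips[qubit] ^= 1

    def interpret(self, qubit, raw_outcome):
        # A pending X flip inverts a Z-basis measurement outcome.
        return raw_outcome ^ self.x_flips[qubit]

frame = PauliFrame(n_qubits=4)
frame.record_correction(2)                 # no quantum gate, no QPU time spent
print(frame.interpret(2, raw_outcome=0))   # prints 1 once the frame is applied
```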
Stay tuned for our forthcoming paper with all the details.