
Initiating Impact Today: Combining the World’s Most Powerful in Quantum and Classical Compute

Quantinuum and NVIDIA are ushering in a future where quantum and classical compute power unite to accelerate scientific discovery and unlock enormous commercial value.

March 20, 2025

Quantinuum and NVIDIA, world leaders in their respective sectors, are combining forces to fast-track commercially scalable quantum supercomputers, further bolstering the announcement Quantinuum made earlier this year about the exciting new potential in Generative Quantum AI.

Make no mistake about it, the global quantum race is on. With over $2 billion raised by companies in 2024 alone, and over 150 new startups in the past five years, quantum computing is no longer restricted to ‘the lab’.  

The United Nations proclaimed 2025 as the International Year of Quantum Science and Technology (IYQ), and as we march toward the end of the first quarter, the old maxim that quantum computing is still a decade (or two, or three) away is no longer relevant in today’s world. Governments, commercial enterprises and scientific organizations all stand to benefit from quantum computers, led by those built by Quantinuum.

That is because, amid the flurry of headlines and social media chatter filled with aspirational statements from those in the heat of this race, we at Quantinuum continue to lead by example. We demonstrate what that future looks like today, rather than relying solely on slide deck presentations.

Our quantum computers are the most powerful systems in the world. Our H2 system, the only quantum computer that cannot be classically simulated, is years ahead of any other system being developed today. In the coming months, we’ll introduce our customers to Helios, a trillion times more powerful than H2, further extending our lead beyond where the competition is still only planning to be. 

At Quantinuum, we have been convinced for years that quantum computers will have a real-world impact earlier than anticipated. We have also long known that this impact will come when powerful quantum computers and powerful classical systems work together.

This sort of hybrid ‘supercomputer’ has been referenced a few times in the past few months, and there is, rightly, a sense of excitement about what such an accelerated quantum supercomputer could achieve.

The Power of Hybrid Quantum and Classical Compute

In a revolutionary move on March 18th, 2025, at the GTC AI conference, NVIDIA announced the opening of a world-class accelerated quantum research center, with Quantinuum selected as a key founding collaborator to work on projects with NVIDIA at the center.

With details shared in accompanying announcements, the NVIDIA Accelerated Quantum Research Center (NVAQC), being built in Boston, Massachusetts, will integrate quantum computers with AI supercomputers to ultimately explore how to build accelerated quantum supercomputers capable of solving some of the world’s most challenging problems. The center will begin operations later this year.

As shared in Quantinuum’s accompanying statement, the center will draw on the NVIDIA CUDA-Q platform, alongside NVIDIA GB200 NVL72 systems containing 576 NVIDIA Blackwell GPUs dedicated to quantum research.

The Role of CUDA-Q in Quantum-Classical Integration  

Integrating quantum and classical hardware relies on a platform that allows researchers and developers to quickly shift context between these two disparate computing paradigms within a single application. The NVIDIA CUDA-Q platform will be the entry point for researchers to exploit the NVAQC’s quantum-classical integration.

In 2022, Quantinuum became the first company to bring CUDA-Q to its quantum systems, establishing a pioneering collaboration that continues today. Users of CUDA-Q are currently offered access to Quantinuum’s System H1 QPU and emulator for 90 days.
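
As a concrete illustration of what that entry point looks like, here is a minimal sketch using CUDA-Q’s Python API and its "quantinuum" target; the exact onboarding steps and credentials for the 90-day program are documented separately, and the emulate flag shown here is an assumption about the typical getting-started path.

```python
import cudaq

# Target Quantinuum systems through CUDA-Q. With emulate=True the program
# runs on a local emulator of the H-series device rather than real hardware;
# hardware access is configured with credentials obtained separately.
cudaq.set_target("quantinuum", emulate=True)

@cudaq.kernel
def bell():
    q = cudaq.qvector(2)
    h(q[0])
    x.ctrl(q[0], q[1])
    mz(q)

# Sample the two-qubit Bell state; counts concentrate on "00" and "11".
print(cudaq.sample(bell, shots_count=500))
```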

Quantinuum’s future systems will continue to support the CUDA-Q platform. Furthermore, Quantinuum and NVIDIA are committed to evolving and improving tools for quantum-classical integration to take advantage of the latest hardware features, for example on our upcoming Helios generation.

The GenQAI Moment

A few weeks ago, we disclosed high-level details about an AI system that we refer to as Generative Quantum AI, or GenQAI. We highlighted a timeline, between now and the end of this year, for delivering the first commercial systems that can accelerate both existing AI and quantum computers.

At a high level, an AI system such as GenQAI will be enhanced by access to information that has not previously been accessible: information generated by a quantum computer that cannot be classically simulated. This information and its effect can be likened to a powerful microscope that brings accuracy and detail to already powerful LLMs, bridging the gap from today’s impressive accomplishments to truly impactful outcomes in areas such as biology and healthcare, materials discovery, and optimization.

Through the integration of the most powerful quantum and classical systems, and by enabling tighter integration of AI with quantum computing, the NVAQC will enable the realization of the accelerated quantum supercomputer needed for GenQAI products and their rapid deployment and exploitation.

Innovating our Roadmap

The NVAQC will foster the tools and innovations needed for fully fault-tolerant quantum computing and will be an enabler of the roadmap Quantinuum released last year.

With each new generation of our quantum computing hardware and accompanying stack, we continue to scale compute capabilities through more powerful hardware and advanced features, accelerating the timeline for practical applications. To achieve these advances, we integrate the best CPU and GPU technologies alongside our quantum innovations. Our long-standing collaboration with NVIDIA drives these advancements forward and will be further enriched by the NVAQC. 

Here are a couple of examples: 

In quantum error correction, error syndromes detected by measuring "ancilla" qubits are sent to a "decoder." The decoder analyzes this information to determine whether any corrections are needed. These complex algorithms must be processed quickly and with low latency, requiring advanced CPU and GPU power to calculate and apply corrections that keep logical qubits error-free. Quantinuum has been collaborating with NVIDIA on the development of customized GPU-based decoders that can be coupled with our upcoming Helios system.
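
To make the decode step concrete, here is a minimal toy sketch in Python (not Quantinuum’s or NVIDIA’s decoder) for the 3-qubit bit-flip repetition code, where two ancilla parity measurements form a syndrome that a lookup table maps to a correction:

```python
# Toy decoder for the 3-qubit bit-flip repetition code. The two ancillas
# measure the parities Z0Z1 and Z1Z2; the syndrome pinpoints which data
# qubit (if any) needs an X correction.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only Z0Z1 flipped -> error on qubit 0
    (1, 1): 1,     # both parities flipped -> error on qubit 1
    (0, 1): 2,     # only Z1Z2 flipped -> error on qubit 2
}

def decode(syndrome):
    """Return the index of the data qubit to correct, or None."""
    return SYNDROME_TABLE[syndrome]

# Example: the ancillas report (1, 1), so the correction is X on qubit 1.
assert decode((1, 1)) == 1
```

Production decoders replace the lookup table with algorithms that scale to large codes, which is where GPU acceleration becomes essential.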

In our application space, we recently announced the integration of InQuanto v4.0, the latest version of Quantinuum’s cutting-edge computational chemistry platform, with NVIDIA accelerated computing to enable previously inaccessible tensor-network-based methods for large-scale, high-precision quantum chemistry simulations.

Our work with NVIDIA underscores the partnership between quantum computers and classical processors, maximizing the speed at which we reach scaled quantum computers. These systems offer error-corrected qubits for operations that accelerate scientific discovery across a wide range of fields, including drug discovery and delivery, financial market applications, and essential condensed matter physics, such as high-temperature superconductivity.

We look forward to sharing details with our partners and bringing meaningful scientific discovery to generate economic growth and sustainable development for all of humankind.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.

March 25, 2026
Celebrating Our First Annual Q-Net Connect!

This month, Quantinuum welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.

Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits - all users of Quantinuum systems.

The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at ĢƵ, as well as our partners at NVIDIA, JPMorganChase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; national labs, including NIST, Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.

Congratulations to Q-Net Connect 2026 Award Recipients! 

The mission of the Quantinuum Q-Net user community is to create a space for shared learning, collaboration and connection for those who adopt Quantinuum’s hardware, software and middleware platform. At this year’s Q-Net Connect, we awarded four organizations that made notable efforts to champion this mission.

  • JPMorganChase received the ‘Guppy Adopter Award’ for their exemplary adoption of our quantum programming language, Guppy, in their research workflows. 
  • Phasecraft, a UK- and US-based quantum algorithms startup, received the ‘Rising Star’ award for demonstrating exceptional early impact and advancing science using Quantinuum hardware, work they published in December 2025.
  • Qedma, a quantum software startup, received the ‘Startup Partner Engagement’ award for their sustained engagement with Quantinuum platforms dating back to our first commercially deployed quantum computer, H1.
  • Anna Dalmasso from the University of Nottingham received our ‘New Student Award’ for her impressive debut project on Quantinuum hardware and for delivering outstanding results as a new Q-Net student user.

Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!

Become a Q-Net Member

Q-Net offers year-round support through user access, developer tools, documentation, trainings, webinars, and events. Members enjoy many exclusive benefits, including being the first to hear about new content, publications and promotional offers.

By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.

March 16, 2026
We’re Using AI to Discover New Quantum Algorithms

In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.

With the rapid rise of Large Language Models (LLMs), people started asking: what if AI agents could serve as on-demand algorithm factories? We have been working with Hiverge, an algorithm discovery company, AWS, and NVIDIA to explore how LLMs can accelerate quantum computing research.

Hiverge – named for the Hive, an AI that can develop algorithms – aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, written mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, along with any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.

The AI is called a “Hive” because it is a collective of LLM agents, all of which edit the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude, and Llama, as well as models accessed through AWS’ Amazon Bedrock service. Many models are included because researchers know that diversity is a strength – just like a team of human researchers working in a group, a variety of perspectives often leads to the strongest result.

Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest ones evolve into the next generation. This process repeats, much like the evolutionary process of nature itself.

After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step, as the researchers want to understand how well it generalizes.
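
This pipeline maps onto a simple evolutionary loop, sketched below in Python. The sketch is schematic: llm_propose and fitness are hypothetical stand-ins for the Hive’s LLM agents and the simulator-based scoring, not Hiverge’s actual API.

```python
import random

def evolve(llm_propose, fitness, population_size=20, generations=10):
    # Seed the population with LLM-written candidate algorithms.
    population = [llm_propose() for _ in range(population_size)]
    for _ in range(generations):
        # Execute each candidate and measure its fitness
        # (how well it solves the problem).
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: population_size // 2]  # unfit programs die off
        # The LLM agents rewrite the fittest candidates
        # into the next generation.
        children = [llm_propose(parent=random.choice(survivors))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children
    # The researchers then test the fittest algorithm on other
    # problem instances to check generalization.
    return max(population, key=fitness)
```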

In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction pathway optimization in drug, material, and chemical discovery—where solutions could translate into maximizing operational efficiency, dramatic reduction in costs, and rapid acceleration in innovation.

In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting "algorithm" is effectively ‘hidden’ within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.

This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial: the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on Quantinuum’s quantum computer, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm is discovered, it can then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.
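
As a sketch of that simulate-first workflow, the snippet below assumes CUDA-Q’s Python API with its GPU-accelerated "nvidia" simulator target; the kernel and the fitness metric are toy stand-ins, not the team’s actual algorithms.

```python
import cudaq

# GPU-accelerated statevector simulation; swap this target for real
# hardware once a promising algorithm has been found.
cudaq.set_target("nvidia")

@cudaq.kernel
def candidate(theta: float):
    # A toy parameterized circuit standing in for an evolved algorithm.
    q = cudaq.qvector(4)
    for i in range(4):
        ry(theta, q[i])
    for i in range(3):
        x.ctrl(q[i], q[i + 1])
    mz(q)

def fitness(theta: float, shots: int = 1000) -> float:
    # Toy fitness: fraction of shots that return the all-ones bitstring.
    counts = cudaq.sample(candidate, theta, shots_count=shots)
    return counts.count("1111") / shots

print(fitness(0.8))
```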

More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much like human mathematicians, such as Peter Shor and Lov Grover, have done. After all, both humans and AI can learn from each other.

March 16, 2026
Real Time Error Correction at Increased Scale

As quantum computing power grows, so does the difficulty of error correction. Meeting that demand requires tight integration with high-performance classical computing, which is why we’ve partnered with NVIDIA to push the boundaries of real-time decoding performance.

Realizing the full power of quantum computing requires more than just qubits; it requires error rates low enough to run meaningful algorithms at scale. Physical qubits are sensitive to noise, which limits their capacity to handle calculations beyond a certain scale. To move beyond these limits, physical qubits must be combined into logical qubits, with errors continuously detected and corrected in real time before they can propagate and corrupt the calculation. This approach, known as fault tolerance, is a foundational requirement for any quantum computer intended to solve problems of real-world significance.

Part of the challenge of fault tolerance is the computational complexity of correcting errors in real time. Doing so involves sending the error syndrome data to a classical co-processor, solving a complex mathematical problem on that processor, then sending the resulting correction back to the quantum processor - all fast enough that it doesn’t slow down the quantum computation. For this reason, Quantum Error Correction (QEC) is currently one of the most demanding use-cases for tight coupling between classical and quantum computing.
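
Schematically, each QEC round must fit inside a strict latency budget. In the Python sketch below, the callbacks and the budget value are hypothetical placeholders, not Helios specifications:

```python
import time

ROUND_BUDGET_S = 50e-6  # assumed per-round budget; not a published spec

def qec_round(read_syndrome, decode, apply_correction):
    start = time.perf_counter()
    syndrome = read_syndrome()      # error syndrome data from ancilla qubits
    correction = decode(syndrome)   # solved on the classical co-processor
    apply_correction(correction)    # sent back to the quantum processor
    elapsed = time.perf_counter() - start
    if elapsed > ROUND_BUDGET_S:
        raise RuntimeError(f"Decoder missed the real-time budget: {elapsed:.1e} s")
    return correction
```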

Given the difficulty of the task, we have partnered with NVIDIA, leaders in accelerated computing. With the help of NVIDIA’s ultra-fast GPUs (and the GPU-accelerated BP-OSD decoder developed by NVIDIA as part of its CUDA-Q QEC library), we were able to demonstrate real-time decoding of Helios’ qubits, all in a system that can be connected directly to our quantum processors using NVIDIA NVQLink.

While real-time decoding has been demonstrated before (notably, by our own scientists in this study), previous demonstrations were limited in their scalability and complexity.

In this demonstration, we used the Bring’s code, a high-rate code made possible by our all-to-all connectivity, to encode our physical qubits into noise-resilient logical qubits. Once we had them encoded, we ran gates and also let the qubits idle, to see if we could catch and correct errors quickly and efficiently. We submitted the circuits via both CUDA-Q and our own Guppy language, underlining our commitment to accessible, ecosystem-friendly quantum computing.

The results were excellent: we were able to perform low-latency decoding that returned results within the required time, even for the faster clock cycles that we expect in future generations of machines.

A key part of the achievement here is that we performed something called “correlated” decoding. In correlated decoding, work that would normally be performed on the QPU is offloaded onto the classical decoder. In ‘standard’ decoding, improving your error correction capabilities consumes more and more QPU time; correlated decoding elides this cost, saving QPU time for the tasks that only the quantum computer can do.

Stay tuned for our forthcoming paper with all the details.
