

Among other research, the Global Technology Applied Research (GTAR) Center at JPMorgan Chase is experimenting with quantum algorithms for constrained optimization to perform Natural Language Processing (NLP) tasks such as document summarization, an application with relevance across the firm.
Marco Pistoia, Ph.D., Managing Director, Distinguished Engineer, and Head of GT Applied Research recently led the research effort around a constrained version of the Quantum Approximate Optimization Algorithm (QAOA) that can extract and summarize the most important information from legal documents and contracts. This work was recently published in Nature Scientific Reports and deemed the “largest demonstration to date of constrained optimization on a gate-based quantum computer.”
JPMorgan Chase was one of the early-access users of the ĢƵ H1-1 system when it was upgraded from 12 qubits with 3 parallel gating zones to 20 qubits with 5 parallel gating zones. The research team at JPMorgan Chase found that the 20-qubit machine returned significantly better results than random guessing, without any error mitigation, despite the circuit depth exceeding 100 two-qubit gates. The circuits used were deeper than any quantum optimization circuits previously executed for any problem. “With 20 qubits, we could summarize bigger documents and the results were excellent,” Pistoia said. “We saw a difference, both in terms of the number of qubits and the quality of qubits.”
JPMorgan Chase has been working with ĢƵ’s quantum hardware since 2020 (pre-merger) and Pistoia has seen the evolution of the machine over time, as companies raced to add qubits. “It was clear early on that the number of qubits doesn't matter,” he said. “In the short term, we need computers whose qubits are reliable and give us the results that we expect based on the reference values.”
Jenni Strabley, Senior Director of Offering Management for ĢƵ, stated, “Quality counts when it comes to quantum computers. We know our users, like JPMC, expect that every time they use our H-Series quantum computers, they get the same, repeatable, high-quality performance. Quality isn’t typically part of the day-to-day conversation around quantum computers, but it needs to be for users like Marco and his team to progress in their research.”
More broadly, the researchers claimed that “this demonstration is a testament to the overall progress of quantum computing hardware. Our successful execution of complex circuits for constrained optimization depended heavily on all-to-all connectivity, as the circuit depth would have significantly increased if the circuit had to be compiled to a nearest-neighbor architecture.”
The objective of the experiment was to produce a condensed text summary by selecting sentences verbatim from the original text. The specific goal was to maximize the centrality and minimize the redundancy of the sentences in the summary, and to do so with a limited number of sentences.
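As a rough illustration (not the exact formulation from the paper), this kind of objective can be written as a constrained binary optimization problem: one binary variable per sentence, a reward for centrality, a penalty for pairwise redundancy, and a cap on how many sentences may be selected. The sketch below uses made-up centrality and similarity scores and a brute-force solver purely to make the objective concrete.

```python
# Hypothetical sketch: extractive summarization as constrained binary optimization.
# x[i] = 1 if sentence i is included in the summary, 0 otherwise.
# All names and values here (centrality, similarity, budget) are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_sentences = 8          # one binary variable per sentence (the experiment used 20 qubits)
budget = 3               # maximum number of sentences allowed in the summary
centrality = rng.random(n_sentences)                  # how "important" each sentence is
similarity = rng.random((n_sentences, n_sentences))   # pairwise redundancy
similarity = (similarity + similarity.T) / 2

def objective(x):
    """Reward total centrality, penalize redundancy between selected sentences."""
    gain = centrality @ x
    redundancy = sum(similarity[i, j] * x[i] * x[j]
                     for i, j in itertools.combinations(range(n_sentences), 2))
    return gain - redundancy

# Brute-force reference over all selections that satisfy the cardinality constraint.
best = max((np.array(bits) for bits in itertools.product([0, 1], repeat=n_sentences)
            if sum(bits) == budget), key=objective)
print("best selection:", best, "objective:", round(objective(best), 3))
```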
The JPMorgan Chase researchers used all 20 qubits of the H1-1 and executed circuits with two-qubit gate depths of up to 159 and two-qubit gate counts of up to 765. The team used IBM’s Qiskit for circuit manipulation and noiseless simulation. For the hardware experiments, the circuits were optimized for the H1-1’s native gate set. They also ran the quantum circuits on an emulator of the H1-1 device.
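For readers unfamiliar with that workflow, the following is a minimal Qiskit sketch of building a small parameterized ansatz and checking it with a noiseless statevector simulation; the actual 20-qubit circuits, angles, and cost functions used in the study are of course different.

```python
# Minimal Qiskit sketch of circuit construction plus noiseless simulation.
# The ansatz, angles, and size are toy values, not the circuits from the paper.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 4
gamma, beta = 0.7, 0.3
qc = QuantumCircuit(n)
qc.h(range(n))                        # uniform superposition over sentence selections
for i in range(n - 1):
    qc.rzz(2 * gamma, i, i + 1)       # toy cost layer (ZZ interactions)
qc.rx(2 * beta, range(n))             # toy mixer layer

probs = Statevector(qc).probabilities_dict()
print(max(probs, key=probs.get))      # most likely bitstring under this ansatz
```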
The JPMorgan Chase research team tested three algorithms: L-VQE, QAOA, and XY-QAOA. L-VQE was easy to execute on the hardware but difficult to find good parameters for. For the other two algorithms, good parameters were easier to find, but the circuits were more expensive to execute. The XY-QAOA algorithm provided the best results.
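One way to see why XY-QAOA suits this problem: unlike the standard X mixer, an XY mixer swaps excitations between qubits and therefore preserves the number of selected sentences, so the cardinality constraint is respected by construction. The snippet below is my own illustration of one XY mixer layer in Qiskit, not the circuit from the paper.

```python
# Illustrative XY mixer layer on a ring of qubits (not the paper's exact construction).
# Each pair gets exp(-i*beta*(XX+YY)/2), which exchanges |01> and |10> amplitudes
# and so conserves the Hamming weight (the number of selected sentences).
from qiskit import QuantumCircuit

def xy_mixer_ring(n_qubits, beta):
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        j = (i + 1) % n_qubits
        qc.rxx(beta, i, j)   # RXX(beta) = exp(-i*beta*XX/2)
        qc.ryy(beta, i, j)   # RYY(beta) = exp(-i*beta*YY/2)
    return qc

print(xy_mixer_ring(4, 0.5).draw())
```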
Dr. Pistoia noted that constrained optimization problems, such as extractive summarization, are ubiquitous in banking, so finding high-quality solutions to them can positively impact customers across all lines of business. It is also important to note that the optimization algorithm built for this experiment can be used in other industries (e.g., transportation), because the underlying algorithm is the same in many cases.
Even with the quality of the results from this extractive summarization work, the NLP algorithm is not ready to roll out just yet. “Quantum computers are not yet that powerful, but we're getting closer,” Pistoia said. “These results demonstrate how algorithm and hardware progress is bringing the prospect of quantum advantage closer, which can be leveraged across many industries.”
ĢƵ, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. ĢƵ’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, ĢƵ leads the quantum computing revolution across continents.

This month, ĢƵ welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.
Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits - all users of ĢƵ systems.
The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at ĢƵ, as well as our partners at NVIDIA, JPMorgan Chase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; national labs, including NIST, Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.
The mission of the ĢƵ Q-Net user community is to create a space for shared learning, collaboration, and connection for those who adopt ĢƵ’s hardware, software, and middleware platform. At this year’s Q-Net Connect, we recognized four organizations that made notable efforts to champion this mission.
Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!
Q-Net offers year‑round support through user access, developer tools, documentation, trainings, webinars, and events. Members enjoy many exclusive benefits, including being the first to hear about new content, publications, and promotional offers.
By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.

In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.
With the rapid rise of Large Language Models (LLMs), people started asking: what if AI agents could serve as on-demand algorithm factories? We have been working with Hiverge, an algorithm discovery company, AWS, and NVIDIA to explore how LLMs can accelerate quantum computing research.
Hiverge – named for Hive, an AI that can develop algorithms – aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, written mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, as well as any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.
The AI is called a “Hive” because it is a collective of LLM agents, all of which edit the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude, and Llama, as well as models accessed through AWS’ Amazon Bedrock service. Many models are included because researchers know that diversity is a strength – just like a team of human researchers working in a group, a variety of perspectives often leads to the strongest result.
Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The candidate algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest ones evolve into the next generation. This process repeats, much like the evolutionary process of nature itself.
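Schematically, the loop looks something like the sketch below. The helper callables (ask_llm_to_mutate, fitness) are hypothetical placeholders standing in for the LLM agents and the simulation-based scoring; this is not Hiverge’s actual API.

```python
# Schematic evolutionary loop: LLM agents propose program edits, unfit candidates are
# dropped, and survivors seed the next generation. Helper callables are placeholders.
import random

def evolve(seed_program, ask_llm_to_mutate, fitness,
           population_size=8, generations=5, survivors=3):
    population = [seed_program]
    for _ in range(generations):
        # Each agent in the "hive" proposes a mutated version of a surviving program.
        while len(population) < population_size:
            parent = random.choice(population)
            population.append(ask_llm_to_mutate(parent))
        # Execute/score every candidate and keep only the fittest ones.
        population.sort(key=fitness, reverse=True)
        population = population[:survivors]
    return population[0]

# Toy usage with stand-in functions (a real run would call LLM agents and simulate circuits).
best = evolve("baseline_algorithm()",
              ask_llm_to_mutate=lambda program: program + "  # small edit",
              fitness=lambda program: -len(program))
print(best)
```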
After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step as the researchers want to understand how well it can generalize.
In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction pathway optimization in drug, material, and chemical discovery—where solutions could translate into maximizing operational efficiency, dramatic reduction in costs, and rapid acceleration in innovation.

In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting "algorithm" is effectively ‘hidden’ within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.
This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial; the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on ĢƵ’s quantum computer, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm was discovered, it could then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.
More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much like human mathematicians, such as Peter Shor and Lov Grover, have done. After all, both humans and AI can learn from each other.
As quantum computing power grows, so does the difficulty of error correction. Meeting that demand requires tight integration with high-performance classical computing, which is why we’ve partnered with NVIDIA to push the boundaries of real-time decoding performance.
Realizing the full power of quantum computing requires more than just qubits; it requires error rates low enough to run meaningful algorithms at scale. Physical qubits are sensitive to noise, which limits their capacity to handle calculations beyond a certain scale. To move beyond these limits, physical qubits must be combined into logical qubits, with errors continuously detected and corrected in real time before they can propagate and corrupt the calculation. This approach, known as fault tolerance, is a foundational requirement for any quantum computer intended to solve problems of real-world significance.
Part of the challenge of fault tolerance is the computational complexity of correcting errors in real time. Doing so involves sending the error syndrome data to a classical co-processor, solving a complex mathematical problem on that processor, then sending the resulting correction back to the quantum processor - all fast enough that it doesn’t slow down the quantum computation. For this reason, Quantum Error Correction (QEC) is currently one of the most demanding use-cases for tight coupling between classical and quantum computing.
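In pseudocode terms, each error-correction round must complete within a fixed time budget. The sketch below is a deliberately simplified illustration of that loop, with placeholder callables for the QPU and decoder; it is not ĢƵ’s or NVIDIA’s actual software stack, and the deadline value is made up.

```python
# Simplified real-time QEC loop: read a syndrome from the QPU, decode it on a classical
# co-processor (e.g., a GPU), and send the correction back before the deadline expires.
# All callables and the deadline value are placeholders for illustration only.
import time

def decoding_round(read_syndrome, decode, apply_correction, deadline_us=100.0):
    start = time.perf_counter()
    syndrome = read_syndrome()        # error syndrome data streamed off the QPU
    correction = decode(syndrome)     # complex classical computation (e.g., BP-OSD)
    apply_correction(correction)      # feed the result back to the quantum processor
    elapsed_us = (time.perf_counter() - start) * 1e6
    if elapsed_us > deadline_us:
        raise RuntimeError(f"decoder missed the real-time deadline ({elapsed_us:.1f} us)")
```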
Given the difficulty of the task, we have partnered with NVIDIA, leaders in accelerated computing. With the help of NVIDIA’s ultra-fast GPUs, and the GPU-accelerated BP-OSD decoder developed by NVIDIA, we were able to demonstrate real-time decoding of Helios’ qubits, all in a system that can be connected directly to our quantum processors.
While real-time decoding has been demonstrated before (notably, by our own scientists in an earlier study), previous demonstrations were limited in their scalability and complexity.
In this demonstration, we used Bring’s code, a high-rate code made possible by our all-to-all connectivity, to encode our physical qubits into noise-resilient logical qubits. Once the logical qubits were encoded, we ran gates on them and also let them idle, to see whether we could catch and correct errors quickly and efficiently. We submitted the circuits through our own Guppy language as well as through other supported frameworks, underlining our commitment to accessible, ecosystem-friendly quantum computing.
The results were excellent: we were able to perform low-latency decoding that returned results within the time we needed, even for the faster clock cycles that we expect in future-generation machines.
A key part of the achievement here is that we performed something called “correlated” decoding, which offloads work that would normally be performed on the QPU onto the classical decoder. In ‘standard’ decoding, improving your error correction capabilities requires more and more time on the QPU; correlated decoding avoids this cost, saving QPU time for the tasks that only the quantum computer can do.
Stay tuned for our forthcoming paper with all the details.