
λambeq Gen II: A Quantum-Enhanced Interpretable and Scalable Text-based NLP Software Package

May 22, 2025
By Bob Coecke and Dimitri Kartsaklis
Introduction

Today we announce the next generation of λambeq, Quantinuum’s quantum natural language processing (QNLP) package.

Incorporating recent developments in both quantum NLP and quantum hardware, λambeq Gen II allows users not only to model the semantics of natural language (in terms of vectors and tensors), but also to convert linguistic structures and meaning directly into quantum circuits for real quantum hardware.

Five years ago, our team laid the foundations of Quantum Natural Language Processing (QNLP). In that work, the team realized that there is a direct correspondence between the meanings of words and quantum states, and between grammatical structures and quantum entanglement. As the original article put it: “Language is effectively quantum native”.

Our team then realized an NLP task on quantum hardware and made the data and code publicly available, attracting the interest of a then-nascent quantum NLP community, which has since grown around successive releases of λambeq and the accompanying papers on the arXiv.

λambeq: an open-source Python library that turns sentences into quantum circuits and then feeds these to quantum computers using variational quantum circuit (VQC) methodologies. Initial release: October 2021.
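For readers new to the package, the core workflow has remained recognisable across releases: parse a sentence into a diagram, then apply an ansatz that maps the diagram onto a parameterised quantum circuit. Here is a minimal sketch using class names from the λambeq documentation (exact APIs vary between releases):

```python
# Minimal λambeq sketch: sentence -> diagram -> parameterised circuit.
from lambeq import AtomicType, BobcatParser, IQPAnsatz

# Parse the sentence into a (DisCoCat-style) string diagram.
parser = BobcatParser()
diagram = parser.sentence2diagram("Alice runs a quantum experiment")

# Map the diagram onto an IQP-style variational circuit, assigning
# one qubit to the noun type and one to the sentence type.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=2)
circuit = ansatz(diagram)
circuit.draw()
```

The resulting circuit’s parameters can then be trained against a dataset using λambeq’s trainer classes, or the circuit can be exported to a backend of choice.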

From that moment onwards, anyone could experiment with QNLP on the then freely available quantum hardware. Our λambeq software has been downloaded over 50,000 times, and the user community is supported by an active discussion forum, where practitioners can interact with each other and with our development team.

The QNLP Back-Story

To demonstrate that QNLP was possible even on the hardware available in 2021, we focused exclusively on small, noisy quantum computers. Our motivation was to produce exploratory findings pointing towards a potential quantum advantage for natural language processing. We had published a paper in 2016 detailing a quadratic speedup over classical computers (in certain circumstances), and we are strongly convinced that there is far more potential than indicated in that paper.

That first realization of QNLP marked a shift away from brute-force machine learning, which has since taken the world by storm in the shape of large language models (LLMs) built on the “transformer” architecture.

Instead of the transformer approach, we encoded linguistic structure using a compositional theory of meaning. With deep roots in computational linguistics, our approach was also inspired by quantum protocols such as quantum teleportation. As we continued our work, it became clear that our approach reduced training requirements in practice, by exploiting a natural relationship between linguistic structure and quantum structure.

Embedding recent progress in λambeq Gen II

We haven’t sat still, and neither have the teams working on quantum hardware. Quantinuum’s stack now performs at a level we only dreamed of in 2020. While we look forward to continued progress on the hardware front, we are getting ahead of these developments by shifting the focus of our algorithms and software packages, to ensure that we and λambeq’s users are ready to chase far more ambitious goals!

We moved away from the compositional theory of meaning that was the focus of our earlier work, called DisCoCat, to a newer framework called DisCoCirc. This enabled us to establish a correspondence between text generation and text circuits, concluding that “text circuits are generative for text”.

Formally speaking, DisCoCirc embraces substantially more of the compositional structure present in language than DisCoCat does, and that pays off in many ways:

  • Firstly, the new theoretical backbone enables one to compose the structure of sentences into text structure, so we can now deal with large texts.
  • Secondly, the compositional structure of language is represented in a compressed manner that, in fact, makes the formalism language-neutral, as we have reported previously.
  • Thirdly, the augmented compositional linguistic structure, together with the requirement of learnability, makes a quantum model canonical, and we now have solid theoretical evidence for genuinely enhanced performance on quantum hardware.
  • Fourthly, the problems associated with the trainability of quantum machine learning models vanish, thanks to compositional generalization, which we have studied in earlier work.
  • Lastly, and surely not least, we developed a notion of compositional interpretability and explored the myriad ways that it supports explainable AI (XAI), which we have also discussed extensively elsewhere.

Today, our users can benefit from these recent developments with the release of λambeq Gen II. Our open-source tools have always benefited from the attention and feedback of our users, so please give it a try; we look forward to hearing what you think of λambeq Gen II.

Enjoy!

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.

Celebrating Our First Annual Q-Net Connect!

March 25, 2026

This month, Quantinuum welcomed its global user community to the first-ever Q-Net Connect, an annual forum designed to spark collaboration, share insights, and accelerate innovation across our full-stack quantum computing platforms. Over two days, users came together not only to learn from one another, but to build the relationships and momentum that we believe will help define the next chapter of quantum computing.

Q-Net Connect 2026 drew over 170 attendees from around the world to Denver, Colorado, including representatives from commercial enterprises and startups, academia and research institutions, and the public sector and non-profits - all users of Quantinuum systems.

The program was packed with inspiring keynotes, technical tracks, and customer presentations. Attendees heard from leaders at Quantinuum, as well as our partners at NVIDIA, JPMorganChase and BlueQubit; professors from the University of New Mexico, the University of Nottingham and Harvard University; national labs, including NIST, Oak Ridge National Laboratory, Sandia National Laboratories and Los Alamos National Laboratory; and other distinguished guests from across the global quantum ecosystem.

Congratulations to Q-Net Connect 2026 Award Recipients! 

The mission of the Quantinuum Q-Net user community is to create a space for shared learning, collaboration and connection for those who adopt Quantinuum’s hardware, software and middleware platform. At this year’s Q-Net Connect, we presented awards to four recipients who have notably championed this mission.

  • JPMorganChase received the ‘Guppy Adopter Award’ for their exemplary adoption of our quantum programming language, Guppy, in their research workflows. 
  • Phasecraft, a UK- and US-based quantum algorithms startup, received the ‘Rising Star’ award for demonstrating exceptional early impact and advancing science using Quantinuum hardware, work they published in December 2025.
  • Qedma, a quantum software startup, received the ‘Startup Partner Engagement’ award for their sustained engagement with Quantinuum platforms dating back to our first commercially deployed quantum computer, H1.
  • Anna Dalmasso from the University of Nottingham received our ‘New Student Award’ for her impressive debut project on Quantinuum hardware and for delivering outstanding results as a new Q-Net student user.

Congratulations, again, and thank you to everyone who contributed to the success of the first Q-Net Connect!

Become a Q-Net Member

Q-Net offers year‑round support through user access, developer tools, documentation, trainings, webinars, and events. Members enjoy many exclusive benefits, including being the first to hear about new content, publications and promotional offers.

By joining the community, you will be invited to exclusive gatherings to hear about the latest breakthroughs and connect with industry experts driving quantum innovation. Members also get access to Q‑Net Connect recordings and stay connected for future community updates.

We’re Using AI to Discover New Quantum Algorithms

March 16, 2026

In a follow-up to our recent work with Hiverge using AI to discover algorithms for quantum chemistry, we’ve teamed up with Hiverge, Amazon Web Services (AWS) and NVIDIA to explore using AI to improve algorithms for combinatorial optimization.

With the rapid rise of Large Language Models (LLMs), people started asking: what if AI agents could serve as on-demand algorithm factories? We have been working with Hiverge, an algorithm discovery company, together with AWS and NVIDIA, to explore how LLMs can accelerate quantum computing research.

Hiverge – named for the Hive, an AI that can develop algorithms – aims to make quantum algorithm design more accessible to researchers by translating high-level problem descriptions, written mostly in natural language, into executable quantum circuits. The Hive takes the researcher’s initial sketch of an algorithm, together with any special constraints the researcher enumerates, and evolves it into a new algorithm that better meets the researcher’s needs. The output is expressed in a familiar programming language, like Guppy or CUDA-Q, making it particularly easy to implement.

The AI is called a “Hive” because it is a collective of LLM agents, all of whom are editing the same codebase. In this work, the Hive was made up of LLM powerhouses such as Gemini, ChatGPT, Claude and Llama, as well as models accessed through AWS’ Amazon Bedrock service. Many models are included because researchers know that diversity is a strength – just like a team of human researchers working in a group, a variety of perspectives often leads to the strongest result.

Once the LLMs are assembled, the Hive calls on them to do the work of writing the desired algorithm; no new training is required. The algorithms are then executed and their ‘fitness’ (how well they solve the problem) is measured. Unfit programs do not survive, while the fittest ones evolve to the next generation. This process repeats, much like the evolutionary process of nature itself.

After evolution, the fittest algorithm is selected by the researchers and tested on other instances of the problem. This is a crucial step as the researchers want to understand how well it can generalize.
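For intuition, the evolve-score-select loop described above can be sketched in a few lines of Python. This is purely illustrative: the callables propose_variant (an LLM agent editing a candidate program) and fitness (executing or simulating a candidate and scoring it) are hypothetical stand-ins for Hiverge’s actual machinery.

```python
import random

def evolve(seed_program: str, propose_variant, fitness,
           population_size: int = 8, generations: int = 20) -> str:
    """Evolve a researcher's initial sketch into a fitter algorithm."""
    population = [seed_program]
    for _ in range(generations):
        # Each agent in the "hive" edits one of the surviving candidates.
        offspring = [propose_variant(random.choice(population))
                     for _ in range(population_size)]
        # Execute (or simulate) every candidate and keep only the fittest;
        # unfit programs do not survive to the next generation.
        survivors = sorted(offspring + population, key=fitness, reverse=True)
        population = survivors[:population_size]
    # The single fittest algorithm is then tested on fresh problem
    # instances to check how well it generalizes.
    return population[0]
```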

In this most recent work, the joint team explored how AI can assist in the discovery of heuristic quantum optimization algorithms, a class of algorithms aimed at improving efficiency across critical workstreams. These span challenges like optimal power-grid dispatch and storage placement, arranging fuel inside nuclear reactors, and molecular design and reaction-pathway optimization in drug, material and chemical discovery, where better solutions could translate into greater operational efficiency, dramatic reductions in cost, and rapid acceleration in innovation.

In other AI approaches, such as reinforcement learning, models are trained to solve a problem, but the resulting "algorithm" is effectively ‘hidden’ within a neural network. Here, the algorithm is written in Guppy or CUDA-Q (or Python), making it human-interpretable and easier to deploy on new problem instances.

This work leveraged the NVIDIA CUDA-Q platform, running on powerful NVIDIA GPUs made accessible by AWS. Its state-of-the-art accelerated computing was crucial: the research explored highly complex problems, challenges that lie at the edge of classical computing capacity. Before running anything on Quantinuum’s quantum computers, the researchers first used NVIDIA accelerated computing to simulate the quantum algorithms and assess their fitness. Once a promising algorithm was discovered, it could then be deployed on quantum hardware, creating an exciting new approach for scaling quantum algorithm design.

More broadly, this work points to one of many ways in which classical compute, AI, and quantum computing are most powerful in symbiosis. AI can be used to improve quantum, as demonstrated here, just as quantum can be used to extend AI. Looking ahead, we envision AI evolving programs that express a combination of algorithmic primitives, much like human mathematicians, such as Peter Shor and Lov Grover, have done. After all, both humans and AI can learn from each other.

Real-Time Error Correction at Increased Scale

March 16, 2026

As quantum computing power grows, so does the difficulty of error correction. Meeting that demand requires tight integration with high-performance classical computing, which is why we’ve partnered with NVIDIA to push the boundaries of real-time decoding performance.

Realizing the full power of quantum computing requires more than just qubits: it requires error rates low enough to run meaningful algorithms at scale. Physical qubits are sensitive to noise, which limits their capacity to handle calculations beyond a certain scale. To move beyond these limits, physical qubits must be combined into logical qubits, with errors continuously detected and corrected in real time before they can propagate and corrupt the calculation. This approach, known as fault tolerance, is a foundational requirement for any quantum computer intended to solve problems of real-world significance.

Part of the challenge of fault tolerance is the computational complexity of correcting errors in real time. Doing so involves sending the error syndrome data to a classical co-processor, solving a complex mathematical problem on that processor, then sending the resulting correction back to the quantum processor - all fast enough that it doesn’t slow down the quantum computation. For this reason, Quantum Error Correction (QEC) is currently one of the most demanding use-cases for tight coupling between classical and quantum computing.
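To make that round-trip concrete, here is a toy version of the loop for the simplest possible case: a 3-qubit bit-flip repetition code with a lookup-table decoder. Real systems use far richer codes and decoders, but the syndrome-to-correction flow is the same in spirit.

```python
# Toy decode round-trip for a 3-qubit bit-flip repetition code.
# Syndrome bits: s1 = q0 XOR q1, s2 = q1 XOR q2.
LOOKUP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # single flip on qubit 0
    (1, 1): 1,     # single flip on qubit 1
    (0, 1): 2,     # single flip on qubit 2
}

def decode_round_trip(data: list[int]) -> list[int]:
    """Measure the syndrome, run the classical decoder, apply the fix."""
    syndrome = (data[0] ^ data[1], data[1] ^ data[2])  # sent to co-processor
    flip = LOOKUP[syndrome]                            # solve decoding problem
    if flip is not None:
        data[flip] ^= 1                                # correction back to QPU
    return data

assert decode_round_trip([0, 1, 0]) == [0, 0, 0]  # a single bit-flip is caught
```

In a real device, the ‘solve’ step is a hard optimization problem (hence decoders like BP-OSD), and the whole loop must complete within a fraction of the machine’s logical clock cycle.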

Given the difficulty of the task, we have partnered with NVIDIA, leaders in accelerated computing. With the help of NVIDIA’s ultra-fast GPUs (and the GPU-accelerated BP-OSD decoder developed by NVIDIA), we were able to demonstrate real-time decoding of Helios’ qubits, all in a system that can be connected directly to our quantum processors.

While real-time decoding has been demonstrated before (notably, by our own scientists in an earlier study), previous demonstrations were limited in their scalability and complexity.

In this demonstration, we used Bring’s code, a high-rate code made possible by our all-to-all connectivity, to encode our physical qubits into noise-resilient logical qubits. Once they were encoded, we ran gates and also let the qubits idle, to see whether we could catch and correct errors quickly and efficiently. We submitted the circuits via external frameworks as well as via our own Guppy language, underlining our commitment to accessible, ecosystem-friendly quantum computing.

The results were excellent: we were able to perform low-latency decoding that returned results within the required time, even for the faster clock cycles we expect in future generations of machines.

A key part of the achievement here is that we performed something called “correlated” decoding. In correlated decoding, you offload work that would normally be performed on the QPU onto the classical decoder. This matters because, in ‘standard’ decoding, improving your error-correction capabilities consumes more and more QPU time. Correlated decoding elides this cost, saving QPU time for the tasks that only the quantum computer can do.
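A familiar ingredient of this kind of offloading is classical Pauli-frame tracking: instead of physically applying every correction on the quantum processor, the classical side records pending corrections and reinterprets later measurement results through them. The sketch below illustrates that general mechanism only; it is not Quantinuum’s decoder, and the class and method names are hypothetical.

```python
class PauliFrame:
    """Track pending X corrections in software instead of applying gates."""

    def __init__(self, n_qubits: int):
        self.x_flips = [0] * n_qubits  # pending X corrections per qubit

    def record_correction(self, qubit: int) -> None:
        # Decoder output: note the X correction classically; no QPU gate.
        self.x_flips[qubit] ^= 1

    def adjust(self, qubit: int, raw_result: int) -> int:
        # Reinterpret a Z-basis measurement result through the frame.
        return raw_result ^ self.x_flips[qubit]

frame = PauliFrame(3)
frame.record_correction(1)            # decoder: qubit 1 needs an X flip
print(frame.adjust(1, raw_result=0))  # -> 1, corrected entirely in software
```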

Stay tuned for our forthcoming paper with all the details.
