Welcome! This is The Quantum Stack. Created by Travis L. Scholten, it brings clarity to the topic of quantum computing, aiming to separate hype from reality and help readers stay on top of trends and opportunities. To subscribe, click the button below.
Hard to believe summer is about halfway over, isn’t it? Though perhaps for readers in the Southern Hemisphere, this is good news…!
This article compiles some key insights from 3 recent speaking engagements, diving into the advancements shaping the future of quantum computing. We’ll explore why quantum algorithms matter right now, assess the current state of the quantum industry, and examine the push to build national infrastructure supporting the convergence of AI and quantum computing. For each topic, I’ve pulled out essential take-aways, and included relevant links in case you want to go deeper. A subsequent article – drawing on other engagements – will look at broader topics such as career development as a STEM PhD, the evolving intersection of quantum and HPC, and quantum computing’s regional impact.
Event: Womanium/WISER Quantum Summer School
Topic: “Why Quantum Algorithms Matter Now”
First up is a keynote at the Quantum Summer School organized by Womanium and WISER. This year’s school – “Quantum solvers: algorithms for the world's hardest problems” – focused on quantum algorithms, from foundational topics to software development kits.
My talk, “Why Quantum Algorithms Matter Now”, was intended to set the scene for why quantum algorithms are critical today, and to discuss how IBM is bringing useful quantum computing to the world.
Key take-aways
A small number of quantum algorithms exist. The absolute number of quantum algorithms is actually quite low: the most comprehensive catalog I am aware of, “The Quantum Algorithm Zoo”, has only 79 unique entries! So in terms of the fundamental computational primitives that can be run on quantum computers, there’s likely a lot left to discover.
Quantum algorithms always involve using classical computers. Why? Because you need: (a) external classical controls to run the quantum computer, and (b) classical post-processing of the results obtained. So in that sense, the notion of “hybrid quantum-classical algorithms” is a bit redundant – all quantum algorithms are hybrid! It’s just a question of how much classical computing is required, and for what purposes. Further, when designing new quantum algorithms, leveraging best-in-class classical computational capabilities makes a lot of sense: why use a quantum computer for a task a classical computer does better?
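To make the “all quantum algorithms are hybrid” point concrete, here is a minimal sketch of the pattern, assuming Qiskit 1.x and its local, simulator-backed primitives (the specific circuit and post-processing are illustrative, not drawn from the talk):

```python
# Classical pre-processing: build a 2-qubit Bell-state circuit.
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler  # local, classically-emulated primitive

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# "Quantum" execution (here emulated on a classical simulator).
result = StatevectorSampler().run([qc], shots=1024).result()
counts = result[0].data.meas.get_counts()

# Classical post-processing: estimate the two-qubit correlation from the counts.
shots = sum(counts.values())
correlation = sum((1 if b in ("00", "11") else -1) * n for b, n in counts.items()) / shots
print(counts, correlation)  # correlation should be close to +1 for a Bell state
```

Note the sandwich structure: classical code on both ends, with the quantum (here, simulated) execution in the middle.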
Attaining quantum advantage requires looking at not just the right problems, but also the right instances of them. Related to the above point, when thinking about quantum advantage, at least 2 desiderata must be considered. First, the quantum circuits required for the problem need to be at a scale and complexity for which purely-classical methods of emulating a quantum computer (i.e., brute-force or approximate circuit simulation methods) fail. Second, the problem itself needs to be of a scale and complexity wherein purely-classical approximation methods break down. The former ensures that using a quantum computer is the only way to get the most reliable quantum-related answer, whereas the latter ensures that using a quantum computer could be necessary in the first place.
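The first desideratum has a useful back-of-the-envelope illustration. Brute-force statevector simulation stores one complex amplitude per basis state – 2^n amplitudes for n qubits – so memory grows exponentially. The sketch below (my own illustration, assuming 16 bytes per complex128 amplitude) shows roughly where exact simulation runs out of room:

```python
# Memory required for brute-force statevector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (complex128).
for n in (30, 40, 50):
    gib = 16 * 2**n / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB         (a laptop can manage)
# 40 qubits: 16,384 GiB     (a large cluster)
# 50 qubits: 16,777,216 GiB (beyond any machine on Earth)
```

Of course, approximate methods (tensor networks and the like) can push well past these limits for suitably structured circuits – which is exactly why circuit scale and complexity, not just qubit count, determine when classical emulation truly fails.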
Event: The Trilateral Technology Leaders Training Program at the Johns Hopkins School of Advanced International Studies
Topic: “The State of Quantum Computation”
I was invited to speak with the 2nd cohort of “Trilateral Technology Leaders Training Program” participants. This program is organized by the Johns Hopkins School of Advanced International Studies, and its purpose is to help mid-level government officials from the US, Japan, and Korea “deepen their understanding of global technological trends, policy-making frameworks, and the challenges and opportunities in governing emerging technologies.” That quantum computing was included is a testament to its geostrategic importance across these 3 nations, and to the deep ties amongst them in using quantum computers as a tool for technical, economic, and social impact. Of special note for IBM: a couple of years ago, we announced plans to train nearly 40K students in quantum within 10 years, and thankfully, we’re making good progress!
This talk was much more of an overview: the state of the industry as a whole, what IBM has been doing to foster global collaboration, and the advances industry plans to make by decade’s end.
Key take-aways
The nascent quantum industry is accelerating. End-user exploration continues to grow, and venture capital continues to fund startups. Large enterprises such as IBM, Amazon, Google, and Microsoft continue to advance their approaches. Increasingly, the answer to “Can quantum computers be built at useful scales?” is a matter of “when”, not “if”.
Open software has provided a strong base upon which a community can be nucleated and talent development accelerated. Regardless of which package or tool is being used, open software has grabbed the attention of enthusiasts, developers, and researchers. IBM’s support of Qiskit has established it as the de facto software development kit for quantum computing, with nearly 75% of developers in a recent survey indicating they use it. And while the Unitary Foundation didn’t make it into the talk, I would be remiss not to mention the work they have done in building public goods for quantum technologies and, through their microgrant program, supporting “quantum explorers” working on interesting projects.
Industry roadmaps clearly suggest an inflection point by 2030 with the deployment of fault-tolerant quantum computers. Several companies – including IBM – have published forward-looking roadmaps outlining various advances in their technologies. Most of them point to developing fault-tolerant quantum computers (i.e., computers for which the occurrence of faults or errors doesn’t completely destroy the computation) by 2030. On IBM’s roadmap, such a computer is envisioned to be made available to end-users by 2029. As the end of the decade approaches, there is likely to be an increased focus by governments on quantum-safe cryptography, securing relevant information systems, and the like.
Event: A “Government-University-Industry-Philanthropy Research Roundtable” Event on AI Infrastructure
Topic: “Building National Infrastructure to Support the Convergence of AI and Quantum Technologies”
The Government-University-Industry-Philanthropy Research Roundtable (GUIPRR) is an organization within the National Academies with a mission “to convene senior leaders from government, universities, and industry to define and explore critical issues related to the national and global science and technology agenda that are of shared interest; to frame the next critical question stemming from current debate and analysis; and to incubate activities of on-going value to the stakeholders.” I joined a panel at a GUIPRR-organized workshop on “AI Infrastructure to Accelerate AI Convergence and Catalyze U.S. Scientific Innovation”.
Given the discussion in the news these days about AI, data centers, energy, etc., attending such a workshop was extremely interesting. One of my key take-aways was realizing just how much the discussion around AI, and American leadership thereof, brings with it related discussions on construction, national infrastructure, democratization of advanced compute technologies, and so on. Viewed through that lens, including a panel on quantum computing actually makes a lot of sense: it’s an advanced compute technology, there’s a need to activate the scientific/researcher base to leverage it for American innovation, and large-scale quantum computers can be thought of as a piece of that infrastructure puzzle.
And you may recall that one of my predictions for this year was that AI + quantum would continue to gain steam.
The theme of the panel was at the intersection of quantum and AI, which made for a rather dynamic conversation. Panel participants included:
(Moderator) Prachi Vakharia, Co-Founder, Womanium Foundation
Geetha Senthil, Deputy Director, Office of Special Initiatives, National Center for Advancing Translational Sciences, NIH
Zachary Dutton, Vice President of Defense Technology, RTX Corporation
Robert Visser, Vice President, Engineering, Office of the CTO, Applied Materials, Inc.
This panel was a bit free-wheeling (in a good way!).
Key take-aways
Quantum use cases matter a lot. Much of our discussion focused on what quantum computers could be used for. It feels like there is still a big gap between those on the “inside” of the industry and those on the “outside” as far as having realistic expectations about use cases is concerned. The more clarity the industry can give on near-future use cases and on the kinds of computational problems the computers it is building will be particularly useful for, the more it de-risks the investment end-users must make to get quantum ready.
Quantum-centric supercomputers are a kind of “future of computing” infrastructure that the Nation should build. As mentioned earlier, quantum computers will always need classical ones. So when thinking about the future of computing, figuring out how to use quantum as an accelerator for classical – and vice-versa – is extremely important. One concept – “quantum-centric” supercomputers – has gotten a lot of traction in the industry (IBM, Pasqal, Microsoft, NVIDIA all have takes on what this concept means and what such supercomputers could look like).
Quantum + AI has many facets; AI for quantum seems a bit more feasible near-term than quantum for AI. As mentioned, use cases matter a lot. In the context of quantum and AI, you can break use cases down into 3 categories: those in which quantum and AI are used together, AI-based use cases which can be enhanced by quantum computing, and uses of AI to accelerate quantum computing. Some examples of each:
Quantum and AI (supply chain context): an AI model uses information provided by documents, shipping records, weather forecasts, etc., to come up with an optimization problem whose solution can help ensure robust supply chains. That optimization problem is solved using a quantum computer, and the results of that quantum computation are sent back to the AI model to give a human-interpretable result. (A code sketch of this pattern follows the list below.)
Quantum for AI (chemistry context): consider an AI model which uses information about a chemical or molecule to, e.g., predict toxicity to human beings. You could use a quantum computer to more accurately model certain properties of that chemical or molecule with the hope that giving more accurate information to the AI model gives more accurate predictions.
AI for quantum (use case development context): AI models could help summarize relevant research information to help subject-matter experts develop an understanding of the state of the field and develop their own use cases. (Austin-based startup Strangeworks has some interesting work in this vein.) And AI models for software development can become quantum-literate to help software developers write code. For example, the Qiskit Code Assistant helps developers write Qiskit code.
AI for quantum (system development and deployment): AI models could be used to facilitate the development of quantum chips (qubit layout, coupler properties, etc.), and AI agents could be used to autonomously monitor the state/quality of a given system and perform system calibrations or maintenance automatically.
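To make the first (“Quantum and AI”) pattern concrete, here is a minimal, runnable sketch of the workflow. Every function in it is a hypothetical stand-in of my own devising: a real pipeline would use an AI model for the extraction and summarization steps, and would dispatch the optimization (e.g., encoded as a QUBO) to a quantum solver rather than brute-forcing it classically.

```python
from itertools import product

def extract_problem(records):
    # Hypothetical stand-in for an AI model that turns unstructured
    # shipping records into an optimization problem: pick one route
    # per shipment, minimizing the total cost-plus-risk score.
    return {r["shipment"]: r["route_scores"] for r in records}

def solve_optimization(problem):
    # Hypothetical stand-in for the quantum step. Here the tiny
    # instance is brute-forced classically; a real workflow would
    # encode it (e.g., as a QUBO) and send it to quantum hardware.
    shipments = list(problem)
    best = min(
        product(*(problem[s] for s in shipments)),
        key=lambda choice: sum(score for _, score in choice),
    )
    return dict(zip(shipments, (route for route, _ in best)))

def summarize(assignment):
    # Hypothetical stand-in for an AI model rendering the raw
    # solution as a human-interpretable recommendation.
    return "; ".join(f"ship {s} via {r}" for s, r in assignment.items())

records = [
    {"shipment": "A", "route_scores": [("sea", 3.0), ("air", 5.5)]},
    {"shipment": "B", "route_scores": [("rail", 2.0), ("air", 4.0)]},
]
print(summarize(solve_optimization(extract_problem(records))))
# -> ship A via sea; ship B via rail
```

The shape is what matters here: AI on the outside, translating between messy human inputs and a well-posed optimization problem, with a (potentially quantum) solver in the middle.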
The QED-C recently released a report on the topic of Quantum and AI which might also be of interest; see here.
Wrap Up
The next several years are likely some of the most critical for the quantum computing community and industry. Tremendous progress has been made in de-risking the development of quantum computers, but whether some near-term advantage is discovered will be crucial for building continued momentum around this technology. Each of the topics covered in this article can contribute in some small way to that:
New quantum algorithms can help make current and near-future quantum computers more useful, by reconsidering the ways in which classical and quantum computers work together to solve practical problems.
Policy makers and governmental stakeholders have a role to play in nurturing the development of the industry & regional/national ecosystems, facilitating the adoption of quantum-safe cryptography by all organizations, and getting quantum computers into the hands of scientists and researchers (academic and corporate alike).
AI can help accelerate the development of quantum computers, enable domain experts to become quantum-literate, and facilitate the development of use cases and software.
As mentioned at the start of this article, another article is in the works – based on other speaking engagements – that will look at broader, more social themes.
Was this article useful? Please share it.
P.S. In addition to The Quantum Stack, you can find me online here.
NOTE: All opinions and views expressed are my own, are not representative or indicative of those of my employer, and in no way are intended to indicate prospective business approaches, strategies, and/or opportunities.
Copyright 2025 Travis L. Scholten