With the end of 2023 upon us, I thought it might be fun to take a look at the past year through the lens of the research literature. Note that, according to the arXiv, in 2023 to date just over 15,000 papers have been published in the “quantum physics” category! So we are going to just skim the surface here.
To that end, I’ve compiled a collection of some of the most “impactful” papers of 2023, as measured by ‘scites’ over on SciRate (kind of a physicist’s version of reddit: papers posted to the arXiv get listed, you can upvote [‘scite’] them, and there’s a comments box). This list is not intended to be exhaustive; rather, I’ve simply pulled some I found interesting from the top ~50 papers. After each is a short commentary, along with suggestions for further reading drawn from other papers published in 2023 on similar and adjacent topics.
Some (9) Top Quantum Computing Papers of 2023
* Quantum algorithms: A survey of applications and end-to-end complexities
Clocking in at 337 pages, this survey is a serious undertaking, and the Amazon quantum computing team deserves major kudos for pulling it together! It can be a go-to resource for anyone looking to improve their technical understanding of what is currently known about quantum applications and algorithms. For each application, the survey discusses what is currently known about fault-tolerant resource estimates, the availability (and applicability) of near-term heuristic algorithms, the expected speedups, and any caveats.
Going forward, I expect the field to turn a more critical eye toward the nuances and subtleties of implementing quantum algorithms for applications, helping end-users get a handle on when and where quantum could make an impact.
For further reading, check out these papers, which explore how quantum computing could make an impact across various domains. These papers are the result of activities inaugurated by IBM Quantum as part of its Technical Working Groups initiative:
[2307.05734] Towards quantum-enabled cell-centric therapeutics
[2312.02279] Quantum Optimization: Potential, Challenges, and the Path Forward
* Fast classical simulation of evidence for the utility of quantum computing before fault tolerance & Efficient tensor network simulation of IBM's Eagle kicked Ising experiment
This past summer, the IBM Quantum team published a result in Nature called Evidence for the utility of quantum computing before fault tolerance. This work (summarized here) presented results from a quantum computer at a scale (number of qubits) and complexity (number of operations) that would elude direct, brute-force classical methods. However (and as the paper itself noted), approximate classical methods could have been expected to perform about as well as the quantum computer. So while there is no notion of quantum being ‘advantageous’, per se, for this task, it does have utility as a tool for discovery.
Both of these papers present new classical methods for simulating the particular experiment done by the IBM Quantum team. The first presents a general-purpose approximate classical method for simulation, and shows good agreement between the numerical results obtained by it and the experimental results. This method has the advantage that it can be run on a laptop (whereas the method used in the original paper could not). The second paper presents a method based on tensor networks wherein the structure of the tensor network is designed in such a way as to mimic the connectivity of the qubits used in the experiment. This method also shows good agreement with the experimental results.
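To get a feel for the dynamics being simulated, here is a toy statevector simulation of a kicked-Ising circuit (my own sketch: the qubit count, angles, and simple chain connectivity are illustrative stand-ins, not the 127-qubit heavy-hex parameters of the actual experiment):

```python
import numpy as np

n = 6          # qubits (the real experiment used 127)
theta_x = 0.7  # transverse "kick" angle (illustrative choice)

def apply_rx(state, q, theta):
    """Apply a single-qubit RX(theta) rotation to qubit q."""
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    state = state.reshape([2] * n)
    a = np.take(state, 0, axis=q)  # amplitudes with qubit q in |0>
    b = np.take(state, 1, axis=q)  # amplitudes with qubit q in |1>
    return np.stack([c * a + s * b, s * a + c * b], axis=q).reshape(-1)

def apply_rzz(state, q1, q2, theta):
    """Apply exp(-i theta/2 Z_q1 Z_q2): a diagonal two-qubit gate."""
    idx = np.arange(2 ** n)
    z1 = 1 - 2 * ((idx >> (n - 1 - q1)) & 1)
    z2 = 1 - 2 * ((idx >> (n - 1 - q2)) & 1)
    return state * np.exp(-1j * theta / 2 * z1 * z2)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00...0>

for _ in range(5):  # Trotter steps: alternate kicks and couplings
    for q in range(n):
        state = apply_rx(state, q, theta_x)
    for q in range(n - 1):  # nearest-neighbor chain couplings
        state = apply_rzz(state, q, q + 1, -np.pi / 2)

# A local observable of the kind reported in the experiment: <Z_0>
probs = np.abs(state) ** 2
z0 = 1 - 2 * ((np.arange(2 ** n) >> (n - 1)) & 1)
print(np.sum(z0 * probs))
```

Brute force like this scales as 2^n, which is exactly why the approximate methods in these papers (sparse Pauli dynamics, tensor networks) are needed at 127 qubits.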
It should be expected that state-of-the-art quantum computing experiments get pitted against innovations in purely-classical methods. These developments are a good thing for the field! Going forward, it will be increasingly important to pay attention to how researchers quantify the gap between what quantum and classical computers can do, the kinds of experiments used to demonstrate such a gap, and how that gap changes over time.
For further reading, check out these papers, which also provide new approximate classical methods for simulating this experiment:
[2306.17839] Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime
[2308.03082] Simulation of IBM's kicked Ising experiment with Projected Entangled Pair Operator
[2308.01339] Dissipative mean-field theory of IBM utility experiment
[2308.09109] Classical surrogate simulation of quantum systems with LOWESA
[2309.15642] Efficient tensor network simulation of IBM's largest quantum processors
In addition, this paper does a nice job surveying what is known about classically simulating quantum computers, and is a useful primer.
* Constant-Overhead Fault-Tolerant Quantum Computation with Reconfigurable Atom Arrays & Logical quantum processor based on reconfigurable atom arrays
Recall from the first Quantum Quickies article that there are a variety of modalities (types) of qubits. This year was a kind of breakout year for the neutral-atom modality. These papers showed progress in this modality toward realizing various aspects of fault-tolerant quantum computation. The first paper studies how a particular kind of error-correcting code (“quantum low-density parity check” [qLDPC] codes) can be realized in this modality by shuttling (moving) neutral atoms around. It is generally expected that such codes will be used in the future for fault-tolerant quantum computation. The second paper, using a few different error-correcting codes (but not qLDPC codes), experimentally demonstrates pieces of fault-tolerant computation. If you are interested in this work, here’s a talk by the first author on YouTube.
Although these results do not unambiguously demonstrate a fault-tolerant quantum computer, they do show the field is making substantial strides towards realizing such a system.
For further reading, check out these two papers on neutral-atom-based quantum computing.
* High-threshold and low-overhead fault-tolerant quantum memory & Hierarchical memories: Simulating quantum LDPC codes with local gates
Speaking of quantum error correction, 2023 was also a good year for the development of new error-correcting codes! As mentioned above, qLDPC codes (or similar codes) are where the field as a whole is moving. These codes offer more favorable properties for realizing fault-tolerance than the surface code (one of the most widely studied codes).
These papers both propose new error-correcting codes. The first presents a new qLDPC code which can be used to realize a fault-tolerant quantum memory (into which you could encode and preserve quantum information). This idea is getting baked into the IBM Quantum development roadmap, and is anticipated to show up circa 2026. The second paper also presents a new error-correcting code for quantum memory; in this paper, the code is constructed by joining a given qLDPC code with the surface code (though it should be noted the resulting code is not a qLDPC code itself).
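To make the parity-check idea behind all of these codes concrete, here is a classical toy (my own example, not a construction from either paper): each check measures the parity of a few bits, and the pattern of violated checks (the syndrome) locates an error. “Low-density” refers to each check touching only a few bits, even as the code grows.

```python
import numpy as np

# Parity-check matrix for a classical 5-bit repetition-style code:
# each row (check) touches only 2 bits, so the matrix is sparse.
H = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])

codeword = np.zeros(5, dtype=int)  # logical 0 encoded as 00000
received = codeword.copy()
received[2] ^= 1                   # a single bit-flip error on bit 2

syndrome = H @ received % 2        # which checks are violated?
print(syndrome)                    # checks 1 and 2 both touch bit 2,
                                   # so the pattern locates the error
```

Quantum codes like the surface code and qLDPC codes generalize this: stabilizer measurements play the role of the parity checks, and a decoder infers the error from the syndrome without ever measuring the encoded data directly.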
If qLDPC codes are “where it’s at” for quantum error correction, I expect more advances and innovations that discover and define new such codes.
For further reading, check out this paper showing another construction for a fault-tolerant quantum memory:
* An Efficient Quantum Factoring Algorithm
The discovery of Shor’s algorithm for factoring integers in 1994 helped ignite the entire field of quantum computing. Now, almost 30 years later, researcher Oded Regev discovered how to improve upon it. This improvement reduces the circuit complexity (specifically, the number of operations needed), but increases the number of qubits required. In addition, Regev’s algorithm requires running more circuits than Shor’s. Quanta Magazine has a nice article describing this result here.
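To see the number-theoretic core that both Shor’s and Regev’s algorithms exploit, here is a purely classical toy (a sketch of the standard order-finding reduction, not Regev’s method): finding the multiplicative order r of a modulo N lets you split N, and the quantum computer’s entire job is finding r quickly. This sketch finds it by brute force, which takes time exponential in the bit-length of N.

```python
import math

def find_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), by exponential-time brute
    force -- this is the step a quantum computer does efficiently."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    """Find a nontrivial factor of an odd composite N via order finding."""
    for a in range(2, N):
        g = math.gcd(a, N)
        if g > 1:
            return g  # lucky: a already shares a factor with N
        r = find_order(a, N)
        if r % 2:
            continue  # need an even order for the splitting trick
        y = pow(a, r // 2, N)  # y^2 = 1 (mod N), so N divides (y-1)(y+1)
        for g in (math.gcd(y - 1, N), math.gcd(y + 1, N)):
            if 1 < g < N:
                return g

print(factor(15))  # → 3 (via a = 2, whose order mod 15 is 4)
```

Shor replaced the brute-force loop in `find_order` with quantum phase estimation on one modular exponentiation; Regev’s improvement restructures the quantum part to use smaller exponentiations across more circuit runs.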
It should be noted that, unlike Shor’s algorithm, the exact costs for Regev’s algorithm (in terms of numbers of qubits and circuit complexity) are currently unknown. (Regev’s result is asymptotic, meaning there are constant factors which have not been determined.) Hence, whether Regev’s result substantially changes the estimated time horizons for realizing a quantum-computing-based threat to cryptography remains to be determined.
I expect the community will take a determined look at getting a full handle on this algorithm and its implications for cybersecurity. I will also note the US National Institute of Standards and Technology (NIST) is expected to release its standardized versions of quantum-safe cryptographic algorithms in 2024.
For further reading, see these two papers. One provides a new implementation of Regev’s algorithm, and the other extends the algorithm to the task of computing discrete logarithms, the computational primitive which secures elliptic-curve cryptography (ECC).
[2310.00899] Optimizing Space in Regev's Factoring Algorithm
[2311.05545] Extending Regev's factoring algorithm to compute discrete logarithms
* Certified Randomness from Quantum Supremacy
Google’s 2019 experiment described in the paper Quantum supremacy using a programmable superconducting processor triggered a lot of press around both the topic itself and the field of quantum computing. (Just search for “Google quantum supremacy”.) The core idea of the task performed in this experiment is to generate samples (bitstrings) from the quantum computer with the property that trying to spoof such samples would be extremely costly for a classical computer. At the time of the experiment, it wasn’t so clear what one could do with a system capable of running such an experiment. One of the ideas floating around at the time (and which helped motivate the experiment) was using the samples generated by the quantum computer as inputs for generating random bits. In this paper, one of the originators of this idea, Scott Aaronson, fleshes out the idea and proves its soundness. So now, at long last, we have something practical (and provably sound!) to do with quantum supremacy experiments!
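The flavor of the certification can be sketched with linear cross-entropy scoring, the kind of benchmark used to grade samples in these experiments (the snippet is my own illustration, using a random stand-in for the ideal output distribution, not the actual protocol from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
dim = 2 ** n

# Stand-in for the ideal output distribution of a random circuit:
# complex-Gaussian amplitudes give the characteristic Porter-Thomas
# spread of output probabilities.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p = np.abs(amps) ** 2
p /= p.sum()

def xeb(samples):
    """Linear cross-entropy score: near 1 for samples drawn from the
    ideal distribution, near 0 for uniformly random 'spoofed' bitstrings."""
    return dim * p[samples].mean() - 1

honest = rng.choice(dim, size=20_000, p=p)  # sampler that knows p
spoof = rng.integers(dim, size=20_000)      # uniform classical guesses
print(round(xeb(honest), 2), round(xeb(spoof), 2))
```

An honest sampler preferentially lands on high-probability bitstrings, pushing the score toward 1; a spoofer who cannot compute p classically has no way to do the same, and (per the hardness argument) the high-scoring samples must carry genuine entropy.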
It would be interesting to see whether the idea presented in this paper gets developed into a product by Google and/or the Google Quantum AI team.
For further reading, check out these two papers. The first is the Google Quantum AI team’s extension of its 2019 work (scaling from 53 qubits to 70), and the second is an analysis of properties of the output distribution of bitstrings for such experiments in the presence of a particular kind of (hardware-realistic) noise. While this work by itself is by no means definitive, it does suggest that the consensus view on the difficulty of these experiments may need to be updated. (In particular, the paper leaves open the question of whether the samples generated by a quantum computer experiencing this kind of noise are easy to spoof classically.)
As you can see, 2023 has been quite a year for the quantum computing community. While research isn’t linear, and no doubt new twists and turns will abound in 2024, it feels like this year was a great one for quantum algorithms, applications, and hardware. I can’t wait to see what 2024 brings!