Description

An erasure qubit is a qubit encoding paradigm in which the dominant physical errors are converted into detectable erasure errors: the qubit leaks to a known non-computational state that can be identified by a non-destructive check, revealing the location (but not the content) of the error. The key insight, established in the theory of the quantum erasure channel, is that an error whose location is known costs dramatically less to correct than a Pauli error at an unknown location: the threshold error rate for erasure errors in the surface code is approximately 50%, compared to roughly 1% for depolarizing noise. The resulting 3–10× reduction in overhead motivates engineering qubits whose dominant failure mode is erasure.

Erasure qubits have been demonstrated across multiple platforms:

Neutral atoms (alkaline earth): In optical tweezer arrays, the metastable clock state and ground state encode the qubit, while Rydberg-mediated gate errors predominantly result in atom loss — a detectable erasure (Wu et al. 2022, Ma et al. 2023).

Dual-rail superconducting: Two coupled transmons or cavities encode |0_L⟩ = |01⟩ and |1_L⟩ = |10⟩; photon loss sends the system to |00⟩, a detectable erasure outside the code space (Levine et al. 2024).

Trapped ions: Metastable shelving states can convert decay errors to detectable leakage events.

Hamiltonian

The erasure qubit is an encoding paradigm, not a single physical system. The general structure is a logical qubit encoded in a code subspace C such that the dominant error channel maps C to an orthogonal, detectable subspace E. The open-system dynamics are

dρ/dt = −i[H, ρ] + Σ_k ( L_k ρ L_k† − ½ {L_k† L_k, ρ} ),

where the Lindblad jump operators satisfy P_C L_k P_C = 0, meaning errors always leave the code space and can be detected by measuring the projector P_C onto C.

For the dual-rail superconducting encoding, the effective Hamiltonian of the two coupled modes a and b is:

H = ω_a a†a + ω_b b†b + g (a†b + a b†),

with the code space spanned by {|01⟩, |10⟩} and the dominant error, single photon loss (jump operators √κ_a a and √κ_b b), producing the detectable state |00⟩.
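The erasure condition P_C L P_C = 0 can be verified numerically for the dual-rail encoding. A minimal numpy sketch, using a two-photon truncation; the operator and variable names are illustrative, not from any cited implementation:

```python
import numpy as np

# Two modes truncated to {0, 1} photons; basis ordering |n_a n_b> = |00>, |01>, |10>, |11>.
def mode_ops(dim=2):
    a1 = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # single-mode annihilation operator
    I = np.eye(dim)
    return np.kron(a1, I), np.kron(I, a1)          # loss on mode a, loss on mode b

a, b = mode_ops()

# Code space spanned by |01> and |10> (indices 1 and 2 in the 4-dim basis).
ket01 = np.zeros(4); ket01[1] = 1.0
ket10 = np.zeros(4); ket10[2] = 1.0
P_C = np.outer(ket01, ket01) + np.outer(ket10, ket10)

# Erasure condition: every jump operator maps the code space entirely outside itself.
for L in (a, b):
    assert np.allclose(P_C @ L @ P_C, 0)

# A photon-loss event on either mode sends the occupied code state to |00> (index 0),
# which lies outside the code space and is flagged by measuring P_C.
ket00 = np.zeros(4); ket00[0] = 1.0
assert np.allclose(a @ ket10, ket00)
assert np.allclose(b @ ket01, ket00)
print("P_C L P_C = 0 for L in {a, b}; photon loss maps code states to |00>")
```

Measuring P_C without distinguishing |01⟩ from |10⟩ is exactly the non-destructive erasure check: it reveals that a loss occurred without learning (or disturbing) the logical state.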

Motivation

Quantum error correction overhead is dominated by the rate and type of physical errors. Unheralded Pauli errors require O(d²) physical qubits per logical qubit, with the code distance d set by the ratio of the physical error rate to the threshold. Erasure errors, because their locations are known, effectively double the code distance for free: a distance-d code corrects up to d − 1 erasures versus ⌊(d − 1)/2⌋ Pauli errors. Converting dominant errors to erasures can reduce the physical-to-logical qubit ratio by a factor of 3–10×, potentially bringing practical fault-tolerant computing to nearer-term hardware scales.
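The counting argument above can be made concrete with a toy calculation (the ~2d² physical-qubit count is a rough surface-code estimate, used here only for illustration):

```python
# A distance-d code corrects up to d-1 located erasures but only
# floor((d-1)/2) unlocated Pauli errors: known locations roughly
# double the effective distance.
def correctable(d):
    return {"erasures": d - 1, "pauli": (d - 1) // 2}

for d in (3, 5, 7):
    print(d, correctable(d))

# Rough overhead comparison: tolerating t errors needs d = t + 1 for
# erasures vs d = 2t + 1 for Pauli errors; qubits scale as ~2*d**2.
t = 3
qubits_erasure = 2 * (t + 1) ** 2      # 32
qubits_pauli = 2 * (2 * t + 1) ** 2    # 98
print(qubits_pauli / qubits_erasure)   # ~3x fewer qubits in this toy count
```

The ~3× figure from this crude count sits at the low end of the 3–10× range quoted above; the larger savings come from the exponential suppression of logical errors with effective distance.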

Experimental Status

Erasure conversion theory — Stace, Barrett, and Doherty (2009):

  • Established that the surface code threshold for erasure (loss) errors is ~50%, compared to ~1% for depolarizing noise.

Neutral atom erasure — Wu et al. (2022):

  • Proposed erasure conversion for alkaline earth Rydberg atom arrays using the metastable clock state.
  • Theoretical framework for converting dominant Rydberg gate errors to detectable erasures.

High-fidelity Rydberg erasure — Scholl et al. (2023):

  • Demonstrated erasure conversion in tweezer arrays with >98% erasure detection efficiency.
  • Mid-circuit erasure detection compatible with real-time decoding.

Dual-rail superconducting — Levine et al. (2024):

  • Demonstrated a long-coherence dual-rail erasure qubit using tunable transmons.
  • Achieved erasure fractions exceeding 99% of total errors.

Key Metrics

| Metric | Value | Notes | Reference |
| --- | --- | --- | --- |
| Erasure fraction | >99% | Fraction of total errors that are detectable erasures | Levine et al. 2024 |
| Erasure detection efficiency | >98% | Probability of detecting an erasure event | Scholl et al. 2023 |
| Surface code erasure threshold | ~50% | Vs. ~1% for depolarizing Pauli errors | Stace et al. 2009 |
| QEC overhead reduction | 3–10× | Compared to equivalent unheralded error rate | |
| 2Q gate fidelity (erasure-converted) | 99.0–99.5% | After post-selecting on no erasure detected | Scholl et al. 2023 |
| Residual Pauli error rate | 0.1–0.5% | Errors that are NOT converted to erasure | Levine et al. 2024 |
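The threshold gap between erasure and Pauli errors translates into a much smaller required code distance. A toy scaling model, assuming logical error rate p_L ~ (p/p_th)^d_eff with effective distance d for located erasures and (d+1)/2 for unlocated Pauli errors (the threshold values and exponents are illustrative simplifications):

```python
# Smallest odd code distance reaching a target logical error rate,
# under the toy scaling model p_L ~ (p/p_th)**d_eff.
def min_distance(p, p_th, target, located):
    d = 3
    while True:
        d_eff = d if located else (d + 1) // 2  # known locations ~double distance
        if (p / p_th) ** d_eff < target:
            return d
        d += 2

p, target = 1e-3, 1e-12
d_pauli = min_distance(p, p_th=1e-2, target=target, located=False)
d_erasure = min_distance(p, p_th=0.5, target=target, located=True)
print(d_pauli, d_erasure)
```

In this pure-erasure limit the distance (and qubit-count) saving exceeds the 3–10× quoted above; in practice the residual Pauli error rate in the last table row sets a floor that limits the achievable gain.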

References

Theory

  • Stace, T. M., Barrett, S. D., Doherty, A. C., "Thresholds for topological codes in the presence of loss," Phys. Rev. Lett. 102, 200501 (2009).
  • Wu, Y., Kolkowitz, S., Puri, S., Thompson, J. D., "Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays," Nat. Commun. 13, 4657 (2022).

Experimental demonstrations

  • Ma, S., et al., "High-fidelity gates and mid-circuit erasure conversion in an atomic qubit," Nature 622, 279 (2023).
  • Scholl, P., et al., "Erasure conversion in a high-fidelity Rydberg quantum simulator," Nature 622, 273 (2023).
  • Levine, H., et al., "Demonstrating a long-coherence dual-rail erasure qubit using tunable transmons," Phys. Rev. X 14, 011051 (2024).