Description
An erasure qubit is a qubit encoding paradigm in which the dominant physical errors are converted into detectable erasure errors: the qubit leaks to a known non-computational state that can be identified by a non-destructive check, revealing the location (but not the content) of the error. The key insight, established by quantum erasure channel theory, is that an error whose location is known costs dramatically less to correct than a Pauli error at an unknown location: the threshold error rate for erasure errors in the surface code is approximately 50%, compared to roughly 1% for depolarizing noise. This 2–10× reduction in overhead motivates engineering qubits whose dominant failure mode is erasure.
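A classical analogy makes the gap concrete: the binary erasure channel (losses are flagged) has capacity 1 − p, while the binary symmetric channel (flips are silent) has the smaller capacity 1 − H₂(p). The sketch below is illustrative only and not tied to any cited result:

```python
import math

def bec_capacity(p):
    """Binary erasure channel: capacity 1 - p (erased bits are flagged)."""
    return 1.0 - p

def bsc_capacity(p):
    """Binary symmetric channel: capacity 1 - H2(p) (flips are silent)."""
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h2

# At the same physical error rate, known-location errors cost far less capacity.
for p in (0.05, 0.10, 0.20):
    print(f"p={p:.2f}: erasure {bec_capacity(p):.3f}, flip {bsc_capacity(p):.3f}")
```

At p = 0.1 the erasure channel retains capacity 0.900 while the flip channel retains only about 0.531, the same qualitative gap that separates the ~50% and ~1% surface-code thresholds.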
Erasure qubits have been demonstrated across multiple platforms:
- Neutral atoms (alkaline earth): In optical tweezer arrays, the metastable clock state and ground state encode the qubit, while Rydberg-mediated gate errors predominantly result in atom loss, a detectable erasure (Wu et al. 2022; Ma et al. 2023).
- Dual-rail superconducting: Two coupled transmons or cavities encode $|0_L\rangle = |01\rangle$ and $|1_L\rangle = |10\rangle$; photon loss sends the system to $|00\rangle$, a detectable erasure outside the code space (Kubica et al. 2023; Levine et al. 2024).
- Trapped ions: Metastable shelving states can convert decay errors to detectable leakage events.
Hamiltonian
The erasure qubit is an encoding paradigm, not a single physical system. The general structure is a logical qubit encoded in a subspace $\mathcal{C}$ such that the dominant error channel maps $\mathcal{C}$ to an orthogonal, detectable subspace $\mathcal{E}$:

$$\dot{\rho} = -i[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k, \rho \} \right),$$

where the Lindblad operators satisfy $P_{\mathcal{C}} L_k P_{\mathcal{C}} = 0$, meaning errors always leave the code space and can be detected by measuring the projector $P_{\mathcal{C}}$.
For the dual-rail superconducting encoding, the effective Hamiltonian is:

$$H = \omega_a a^\dagger a + \omega_b b^\dagger b + g\left( a^\dagger b + a b^\dagger \right),$$

with the code space spanned by $\{|01\rangle, |10\rangle\}$ and the dominant error (single photon loss) producing the detectable state $|00\rangle$.
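Assuming the dual-rail convention above (each mode truncated to {0, 1} photons, basis $|n_a n_b\rangle$), a small NumPy sketch can verify the erasure condition $P L P = 0$ and show that photon loss exits the code space:

```python
import numpy as np

# Single-mode annihilation operator truncated to {0, 1} photons.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
I = np.eye(2)

# Two-mode loss operators on the 4-dim space; basis |n_a n_b>, index = 2*n_a + n_b.
A = np.kron(a, I)   # photon loss from mode a
B = np.kron(I, a)   # photon loss from mode b

# Dual-rail code space: |0_L> = |01> (index 1), |1_L> = |10> (index 2).
ket01 = np.zeros(4); ket01[1] = 1.0
ket10 = np.zeros(4); ket10[2] = 1.0
P = np.outer(ket01, ket01) + np.outer(ket10, ket10)  # code-space projector

# Erasure condition P L P = 0: loss never acts *within* the code space.
assert np.allclose(P @ A @ P, 0)
assert np.allclose(P @ B @ P, 0)

# A single photon loss maps the logical states to |00>, outside the code space.
out = A @ ket10
print(out)       # the |00> state (index 0)
print(P @ out)   # zero vector: measuring P flags the erasure
```

The final check is the operational content of the paradigm: the post-loss state has zero overlap with the code space, so a non-destructive measurement of $P$ reveals the error's location without reading out the qubit.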
Motivation
Quantum error correction overhead is dominated by the rate and type of physical errors. Unheralded Pauli errors require $O(d^2)$ physical qubits per logical qubit, with code distance $d$ set by the target logical error rate. Erasure errors, because their locations are known, effectively double the code distance for free: the same distance-$d$ code corrects $d-1$ erasures vs. $\lfloor (d-1)/2 \rfloor$ Pauli errors. Converting dominant errors to erasures can reduce the physical-to-logical qubit ratio by factors of 3–10, potentially bringing practical fault-tolerant computing to nearer-term hardware scales.
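A back-of-the-envelope sketch of this distance-doubling argument, using the standard $2d^2 - 1$ qubit count for a distance-$d$ surface code patch (illustrative arithmetic, not a result from the cited papers):

```python
# Distance-d code: corrects d-1 erasures (locations known) but only
# floor((d-1)/2) Pauli errors (locations unknown).
def correctable_erasures(d):
    return d - 1

def correctable_paulis(d):
    return (d - 1) // 2

def surface_code_qubits(d):
    """Data + ancilla qubits in a distance-d rotated surface code patch."""
    return 2 * d * d - 1

# To tolerate t faults: erasures need d = t + 1, unheralded Paulis d = 2t + 1.
for t in (1, 2, 4):
    d_er, d_pauli = t + 1, 2 * t + 1
    print(f"t={t}: erasure patch {surface_code_qubits(d_er)} qubits, "
          f"Pauli patch {surface_code_qubits(d_pauli)} qubits")
```

For t = 2 this gives 17 vs. 49 physical qubits, roughly a 3× saving from the halved distance alone, consistent with the 3–10× overhead reductions quoted below once noise-model details are included.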
Key Findings
- Surface code threshold for erasure errors is ~50%, compared to ~1% for depolarizing noise, giving massive headroom for error correction (Stace et al. 2009).
- In tweezer arrays, Rydberg gate errors were converted to erasures with detection efficiency >98% (Wu et al. 2022).
- Dual-rail superconducting erasure qubits demonstrated erasure fractions of >99% of total errors (Levine et al. 2024).
- Theoretical analysis shows erasure conversion reduces surface code overhead by 3–10× for realistic noise models (Kubica et al. 2023).
- Mid-circuit erasure detection compatible with real-time decoding demonstrated in neutral atom arrays (Ma et al. 2023).
Key Metrics
| Metric | Value | Notes | Reference |
|---|---|---|---|
| Erasure fraction | >99% | Fraction of total errors that are detectable erasures | Levine et al. 2024 |
| Erasure detection efficiency | >98% | Probability of detecting an erasure event | Wu et al. 2022 |
| Surface code erasure threshold | ~50% | Vs. ~1% for depolarizing Pauli errors | Stace et al. 2009 |
| QEC overhead reduction | 3–10× | Compared to equivalent unheralded error rate | Kubica et al. 2023 |
| 2Q gate fidelity (erasure-converted) | 99.0–99.5% | After post-selecting on no erasure detected | Wu et al. 2022 |
| Residual Pauli error rate | 0.1–0.5% | Errors that are NOT converted to erasure | Levine et al. 2024 |