Randomized Compiling (RC)

“I couldn’t imagine building a quantum computer without this.”

— John Martinis, Google AI

Randomized compiling\(^{a,b}\) (RC) takes a circuit and randomizes its gates using virtual twirling groups while preserving the overall ideal unitary of the circuit (see [6]). This converts coherent and non-Markovian error sources into stochastic errors, resulting in dramatically improved run-time performance and overall stability for quantum algorithms. Experimental data illustrating these improvements is available upon request.

More concretely, randomized compiling\(^{a,b}\) inserts virtual random gates into a circuit and compiles them as shown in the figure below so that:

  1. The logical circuit is unchanged

  2. The depth of the circuit is unchanged

  3. The probability of obtaining a correct output for an algorithm run on noisy quantum hardware is significantly increased

  4. The error-prone quantum hardware can solve a given problem instance with greater accuracy and precision

  5. The error-prone quantum hardware can solve bigger problem instances with the same cumulative error budget

  6. Algorithm output errors are more stable and predictable
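The first property above, that the logical circuit is unchanged, can be illustrated with a minimal single-qubit sketch: dress a "hard" Clifford cycle \(G\) with a random Pauli \(P\) and the correction Pauli \(G P G^\dagger\), so that the dressed cycle equals \(G\) exactly. This is a toy model only; actual randomized compiling twirls multi-qubit cycles and compiles the corrections into the adjacent easy single-qubit cycles, and the function names here are hypothetical.

```python
import numpy as np

# Single-qubit Paulis (toy example; real RC twirls n-qubit cycles)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
Y = 1j * X @ Z  # Y = iXZ
PAULIS = [I, X, Y, Z]

def twirl_cycle(G, rng):
    """Dress a 'hard' Clifford cycle G with a random Pauli P and its
    correction G P G^dagger, so the dressed cycle equals G exactly."""
    P = PAULIS[rng.integers(4)]
    Pc = G @ P @ G.conj().T   # correction Pauli (since G is Clifford)
    return Pc @ G @ P         # = G P^2 = G for any Pauli P

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])

circuit = [H, S, H]                                   # gates applied left to right
ideal = np.linalg.multi_dot(circuit[::-1])            # rightmost matrix acts first
compiled = np.linalg.multi_dot([twirl_cycle(G, rng) for G in circuit][::-1])
assert np.allclose(compiled, ideal)                   # logical circuit unchanged
```

Because each dressed cycle equals the original cycle exactly, the randomization changes nothing in the ideal (noiseless) case; its benefit appears only when gate errors are present, where averaging over many such randomizations tailors the noise into a stochastic channel.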


These improvements arise because coherent errors are suppressed (converted into stochastic errors) and many non-Markovian errors are eliminated. The main benefits of suppressing coherent and non-Markovian errors through randomized compiling\(^{a,b}\) include:

  1. Dramatically improved run-time performance and overall performance stability for quantum algorithms

  2. Closing the gap between standard performance-assessment data, obtained via methods such as Streamlined Randomized Benchmarking (SRB) and Cycle Benchmarking (CB), and the performance of quantum algorithms predicted from that data

These technical features imply the above performance improvements for the following theoretical reasons. Both (i) the overall success probability with which noisy quantum hardware can execute a quantum algorithm of interest and (ii) the error threshold for fault-tolerant quantum error correction are bounded by norm-based quantities that quantify the impact of errors, in particular the diamond norm, defined by:

\[\epsilon_\diamond = ||\mathcal{E} - \mathcal{I}||_\diamond\]

The relationship between randomized benchmarking error rates and the diamond norm is very loose in the presence of coherent errors [9][10]:

\[r(\mathcal{E})(1 + 2^{-N}) \leq \frac{1}{2}\epsilon_\diamond(\mathcal{E}) \leq 2^N \sqrt{r(\mathcal{E})}\sqrt{1 + 2^{-N}}\]

In particular, the exponential scaling with the number \(N\) of qubits and the \(\sqrt{r(\mathcal{E})}\) scaling are tight. Crucially, the \(\sqrt{r(\mathcal{E})}\) scaling can inflate a reported error rate of \(10^{-6}\) to a worst-case error of order \(10^{-3}\). The diamond norm also bounds the increase in the total variation distance (TVD) of a circuit's output distribution due to an error in a gate [11], for both worst-case and typical circuits.
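The looseness of these bounds can be checked numerically. The sketch below, a hypothetical helper written for illustration, evaluates the lower and upper bounds on \(\epsilon_\diamond\) implied by the inequality above for a reported single-qubit error rate \(r = 10^{-6}\), showing the roughly three-orders-of-magnitude gap the text describes.

```python
import math

def diamond_norm_bounds(r, N):
    """Bounds on epsilon_diamond from an RB error rate r on N qubits,
    obtained by rearranging r(1 + 2^-N) <= eps/2 <= 2^N sqrt(r) sqrt(1 + 2^-N)."""
    scale = 1 + 2 ** (-N)
    lower = 2 * r * scale                      # from the left-hand inequality
    upper = 2 * (2 ** N) * math.sqrt(r * scale)  # from the right-hand inequality
    return lower, upper

lo, hi = diamond_norm_bounds(1e-6, 1)
print(f"epsilon_diamond in [{lo:.1e}, {hi:.1e}]")  # roughly [3.0e-06, 4.9e-03]
```

An RB error rate of \(10^{-6}\) therefore only guarantees a worst-case (diamond-norm) error below about \(5 \times 10^{-3}\) when coherent errors are present, which is precisely the gap that randomized compiling closes by tailoring coherent errors into stochastic ones.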

a U.S. Patent 10,360,088 B2.
b U.S. Patent 10,031,791 B2.