9 Minutes to Crack Bitcoin? The Technical Boundaries and Industry Misreading of Google’s Quantum White Paper

By Max He

Key Takeaways

  • Google’s white paper significantly advances the engineering assessment of quantum risk, but does not demonstrate that a CRQC is close to practical deployment
  • A reduction in resource estimates does not by itself mean a practical attack is ready; substantial unresolved engineering challenges remain in between
  • What the industry needs to build is not merely the capacity to “adopt post-quantum algorithms,” but the capacity to “adapt to continuous cryptographic change”
  • 2030–2035 is the critical reference window for working backward on migration readiness, not a precise timestamp for the arrival of quantum attacks

Introduction

On March 30, 2026, Google Quantum AI, in collaboration with researchers from the Ethereum Foundation and Stanford University, released a landmark white paper [1]. This 57-page paper systematically analyzes the threat that quantum computing poses to cryptocurrencies, and presents the most aggressive resource estimates to date: breaking the 256-bit elliptic curve cryptography underlying Bitcoin and Ethereum would require fewer than 500,000 physical qubits — a reduction of nearly 20x compared to previous best estimates.

The paper also broadens the discussion of quantum attacks beyond Bitcoin to the wider cryptocurrency ecosystem, pointing out that quantum attack surfaces exist within Ethereum’s smart contracts, staking consensus mechanisms, and data availability sampling. This means the paper is no longer addressing the narrow question of “whether quantum computers can break Bitcoin private keys,” but is driving the entire industry to reexamine which security assumptions in existing blockchain systems remain sound as quantum capabilities advance.

The white paper has sent a visible shockwave through the blockchain industry. Claims that “quantum computing could crack Bitcoin in minutes” spread rapidly, prompting many practitioners to reconsider their existing security assumptions. The reaction was so strong not only because resource estimates dropped further, but because this paper was the first to place “whether a window attack on a live transaction is possible” and “whether blockchain systems have sufficient time to complete migration” on the same plane of discussion. The question has shifted from the academic “can it be broken?” to the engineering and governance question of “is there still enough time to prepare?”

Behind these reactions, however, a more fundamental question deserves scrutiny: What has Google actually proven? And what has it not proven? To what extent does this work change our understanding of quantum risk?

It is worth noting that the white paper’s scope extends beyond the key-exposure problem familiar from Bitcoin-style discussions, reaching into a broader attack surface across cryptocurrency systems. This article, however, focuses primarily on how this work changes the overall assessment of quantum risk, rather than examining the specific implications for each individual on-chain mechanism.

What Did Google Actually Do?

ECDLP: The Foundational Assumption of Blockchain Security

The security of today’s mainstream cryptocurrencies rests on the Elliptic Curve Discrete Logarithm Problem (ECDLP) [2]. Taking the secp256k1 curve used by Bitcoin and Ethereum as an example [3], the core assumption is: under classical computing conditions, it is computationally infeasible to derive a private key from a given public key (a point on the elliptic curve).

This assumption has been widely accepted for decades and forms the foundational security premise of the entire blockchain system. However, Shor’s algorithm [4] shows that under an idealized quantum computing model, ECDLP can be solved efficiently — theoretically undermining this security foundation.
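To make the asymmetry concrete, here is a minimal pure-Python sketch of secp256k1 key derivation. The curve constants are the standard published domain parameters; the toy private key is arbitrary. Computing a public key from a private key takes roughly 256 point doublings, while inverting that map is exactly the ECDLP.

```python
# Illustrative sketch: deriving a secp256k1 public key is fast, but the
# reverse direction (public key -> private key) is the ECDLP.
p  = 2**256 - 2**32 - 977   # field prime of secp256k1
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)               # standard base point

def point_add(P, Q):
    """Group law on y^2 = x^3 + 7 over GF(p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0]) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P=G):
    """Double-and-add: the 'easy' direction that wallets compute."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

priv = 0xC0FFEE                 # toy private key for illustration
pub  = scalar_mult(priv)        # fast: ~256 doublings and additions
# Recovering `priv` from `pub` classically costs on the order of 2^128
# operations; Shor's algorithm solves it in polynomial time on an
# idealized fault-tolerant quantum computer.
assert (pub[1]**2 - pub[0]**3 - 7) % p == 0   # pub lies on the curve
```

The one-line assertion at the end checks the derived point satisfies the curve equation, which is easy; nothing in classical computing is known to efficiently run the derivation backward.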

Resource Estimation: How Much Quantum Computing Power Does Breaking It Require?

The core contribution of Google’s work is not a new attack method, but a revisited answer to a long-standing question:

If a quantum computer large enough, stable enough, and capable enough to run such quantum algorithms were actually built, how much computational resource would be required to break ECDLP?

The paper constructs and optimizes quantum circuits targeting secp256k1, presenting two implementation paths with different optimization objectives: one minimizes the number of logical qubits, the other minimizes the number of non-Clifford gates (such as Toffoli gates). Under a clearly specified set of hardware and error-correction assumptions, these circuits can be executed with fewer than 500,000 physical qubits.

Compared to prior leading estimates [5][6], this result shows meaningful improvement on the composite metric of “spacetime volume.” More importantly, it transforms what was previously a largely theoretical discussion into a set of engineering parameters that can be compared and tracked.

“9 Minutes”: Where Does This Number Come From?

Beyond resource estimation, the paper also provides an intuitive order-of-magnitude figure for attack duration.

Assuming quantum gate operation times on the order of microseconds and accounting for certain execution overheads, running the full quantum circuit would take on the order of tens of minutes. Given that a portion of the quantum algorithm’s computation can be precomputed before the target public key is known, the time attributable specifically to the target key can be compressed to roughly half — yielding the “approximately 9 minutes” estimate.

This figure attracted widespread attention because it is close to Bitcoin’s average block time of approximately 10 minutes. This implies that, under certain assumptions, an attacker could theoretically complete private key recovery before a transaction is confirmed.

It must be emphasized that this time estimate depends on a comprehensive set of idealized assumptions. Its significance is more as an order-of-magnitude reference than as a direct reflection of real-world attack capability.
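The arithmetic behind this kind of figure can be sketched with assumed round numbers. The cycle time and total cycle count below are illustrative placeholders chosen to land in the range the paper describes, not the paper’s exact parameters:

```python
# Back-of-envelope version of the minutes-scale estimate. All numbers
# here are illustrative assumptions, not the white paper's parameters.
cycle_time_s  = 1e-6          # assumed error-correction cycle: ~1 microsecond
total_cycles  = 1.1e9         # assumed cycles to run the full circuit
total_minutes = cycle_time_s * total_cycles / 60   # tens of minutes

# Roughly half the computation is key-independent and can be done before
# the target public key is known, so the online phase is about half.
online_minutes = total_minutes / 2

block_time_minutes = 10       # Bitcoin's average block interval
window_attack_conceivable = online_minutes < block_time_minutes
print(f"total ~{total_minutes:.0f} min, online ~{online_minutes:.0f} min, "
      f"inside block window: {window_attack_conceivable}")
```

Under these assumed inputs the online phase lands near nine minutes, just inside the ten-minute block interval, which is why the comparison drew so much attention; change the assumed cycle time by one order of magnitude and the conclusion flips.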

Zero-Knowledge Proofs: Why the Circuit Is Not Published

Another notable feature of the paper is its use of “verifiable disclosure” without revealing the specific quantum circuits [7].

The research team committed to the circuit via a hash, and within a public verification procedure, checked the circuit’s behavior on a set of random inputs and verified the resource upper bound. The entire verification process is encapsulated as a zero-knowledge proof, allowing any third party to confirm the validity of the stated claims without accessing the circuit details.

This approach strikes a balance between “protecting attack specifics” and “increasing the credibility of the conclusions,” and elevates the resource estimates beyond mere researchers’ assertions to claims with cryptographic verifiability.
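The commitment step of this disclosure pattern can be sketched with a standard hash commitment. This is only the commit/reveal primitive; the paper’s scheme goes further and wraps the behavioral and resource checks in a zero-knowledge proof, which this sketch does not attempt to reproduce:

```python
import hashlib
import secrets

# Minimal commit/reveal sketch of "committing to the circuit via a hash."
# The committed data never has to be published for the commitment to bind.
def commit(data: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(32)                    # blinding randomness
    return hashlib.sha256(r + data).digest(), r

def verify(commitment: bytes, r: bytes, data: bytes) -> bool:
    return hashlib.sha256(r + data).digest() == commitment

circuit_description = b"<serialized quantum circuit (kept private)>"
c, r = commit(circuit_description)
assert verify(c, r, circuit_description)           # opens correctly
assert not verify(c, r, b"some other circuit")     # binding: other data fails
```

The binding property is what lets third parties trust later claims about the committed object: the team cannot quietly swap in a different circuit after the fact.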

How Should We Interpret This?

Before going further, one concept is worth clarifying upfront.

The paper repeatedly references CRQC (Cryptographically Relevant Quantum Computer). This term literally means “a quantum computer relevant to cryptography,” but it is not a general term for quantum computers — it refers specifically to quantum computing systems that already possess practical cryptanalytic capability. In other words, what the blockchain industry should be watching for is not simply whether quantum computing continues to progress, but when it reaches the point of being able to break cryptographic problems like ECDLP under real-world conditions.

Viewed this way, the significance of Google’s paper lies not merely in showcasing quantum computing progress, but in more concretely answering: what scale of resources, execution capability, and time characteristics would a quantum computer need in order to pose a genuine threat to real-world cryptographic systems?

This question can be understood along three dimensions: the execution characteristics of quantum computing systems, the different paths that technological evolution might take, and the attack modalities those capabilities ultimately enable.

Fast Clocks and Slow Clocks: Quantum Computers Are Not All Alike

One important perspective introduced in the paper is the distinction between different types of quantum computing architectures.

Some platforms (such as superconducting qubits [13]) have faster fundamental operation speeds and shorter error-correction cycles, enabling the execution of deep circuits in shorter timeframes. Others (such as trapped ions [14] or neutral atoms [15]) operate more slowly but may hold advantages in other respects.

This distinction means that “quantum computing capability” is not a single metric. Quantum systems of equivalent scale may differ by orders of magnitude in their practical cryptanalytic capability depending on their architecture.

These differences in execution characteristics directly affect how and on what timeline a CRQC might emerge: some systems are better suited to completing computations within short time windows, while others are better suited to long-duration runs.

Two Possible Evolutionary Paths

Building on these architectural distinctions, it is useful to consider possible paths for the evolution of quantum computing capability.

In one scenario, fast-execution quantum systems are the first to achieve fault-tolerant scale — in which case, real-time attacks on live transactions (such as recovering a private key before a transaction is confirmed) would be the primary risk. In another scenario, slower but more stable systems achieve the breakthrough first — in which case, attacks would more likely target long-exposed public keys, such as historical addresses or reused keys.

These two paths are not mutually exclusive, but the risk timelines and defensive priorities they imply differ significantly.

From this perspective, the emergence of a CRQC does not necessarily correspond to a single definitive moment, but is more likely to manifest as a gradual accumulation of distinct capabilities.

Three Attack Modalities

Within this framework, quantum attacks can be roughly categorized into three types.

The first is the “on-spend attack,” which involves recovering a private key within the time window after a transaction enters the mempool but before it is written into a block. The second is the “at-rest attack,” targeting public keys that have been exposed on-chain for extended periods — giving attackers more generous computation time. The third is the “on-setup attack,” targeting protocols that rely on public parameters, using a one-time quantum computation to obtain a reusable backdoor.

What these three attack types share is that they all depend on the same foundational capability — solving ECDLP within an acceptable timeframe — but each places different demands on time windows and system structure.

Taken together, these three attack types are simply different expressions of the same underlying reality: once quantum computing capability reaches the level represented by a CRQC, what differs is only how that capability plays out under different system conditions and time constraints.
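A toy classifier makes the time-window dependence explicit. The ten-minute threshold is Bitcoin’s average block interval; treating the other two modalities as effectively unconstrained in time is a simplifying assumption:

```python
# Sketch: which attack modalities a given ECDLP solve time enables.
# Thresholds are simplifying assumptions, not figures from the paper.
def feasible_modalities(solve_time_minutes: float) -> list[str]:
    modalities = []
    if solve_time_minutes < 10:          # must beat transaction confirmation
        modalities.append("on-spend")
    modalities.append("at-rest")         # exposed keys allow arbitrarily long runs
    modalities.append("on-setup")        # one-time computation; timing not critical
    return modalities

assert feasible_modalities(9)  == ["on-spend", "at-rest", "on-setup"]
assert feasible_modalities(60) == ["at-rest", "on-setup"]
```

Note that only the on-spend modality is gated by the confirmation window; the other two remain available to any CRQC regardless of how slowly it runs, which is why slower architectures still matter for risk assessment.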

How Far Are We from a Real Quantum Attack?

What This White Paper Does Not Prove

It bears emphasizing that while this white paper significantly advances the engineering assessment of quantum risk, it does not demonstrate that a CRQC is close to practical realization, nor that existing blockchain systems face a viable quantum attack in the near term.

What the paper actually does is further compress the resource estimates for breaking secp256k1 under a clearly specified set of assumptions, and advance a previously abstract risk discussion to a position better suited for engineering evaluation. It demonstrates that the problem is more concrete and more worth tracking than previously understood — but it does not demonstrate that the large-scale fault-tolerant quantum systems needed to mount such attacks are imminent.

Resource Requirements Are Falling, But the Engineering Gap Remains Substantial

To go further: the path from “quantum algorithms can theoretically break ECDLP” to “quantum computing capability sufficient to threaten real cryptographic systems actually exists in the world” is not simply a matter of engineering scale-up. What ultimately determines whether a quantum attack can be realized is not just the resource estimates on paper, but also fault-tolerant architecture, error correction, real-time decoding, control systems, and the overall system capability required to execute deep circuits stably over extended periods.

Some of these conditions do fall within the domain of engineering implementation problems. But they cannot simply be understood as “inevitably solvable given sufficient investment and time.” Quantum error correction and fault-tolerant computation provide a theoretically scalable path, but whether the real world can actually integrate all of these conditions into a continuously operational CRQC capable of threatening real cryptographic systems remains genuinely uncertain.

From this perspective, the more accurate significance of Google’s white paper is not that it heralds the imminent arrival of quantum attacks, but that it gives the industry, for the first time, more concrete engineering parameters with which to discuss this risk — while also reminding us not to equate a reduction in resource estimates with practical attack capability already being in place.

This Is Not a Problem Amenable to Precise Year-Based Prediction

And precisely for this reason, the arrival of quantum attacks is not something that should be understood as a precisely predictable date. For the blockchain industry, what truly matters is not “in which year will a CRQC definitely emerge,” but whether the relevant capabilities are evolving in an increasingly concerning direction.

On the one hand, a key breakthrough could significantly alter resource requirements in a short span of time. On the other hand, a seemingly imminent technical path could stall indefinitely at certain fundamental bottlenecks. This means it is very difficult to judge when practical attack capability will arrive through simple linear extrapolation — “X qubits this year, Y qubits next year.”

The more prudent framing, therefore, is not to try to bet on a specific year, but to acknowledge the genuine uncertainty involved while directing attention to the underlying signals that would actually change the risk assessment.

The Most Concerning Possibility: Advance Warning May Not Be Obvious

This also means the community should not expect a clear warning signal in the form of a “publicly demonstrated quantum attack.”

Many people tend to treat a public demonstration as the hallmark of technological maturity — as if the absence of a real-world demonstration means the threat is still a long way off. But in the context of quantum cryptanalysis, this intuition may not hold. By the time a landmark demonstration actually occurs, the underlying technical capability may already have been accumulating quietly for a considerable period, and the window for defensive action may have already narrowed significantly.

For the blockchain industry, this is precisely the most difficult aspect to manage: the most consequential changes may not unfold in a clear, gradual, publicly visible way.

How Should We Monitor Quantum Progress?

Don’t Just Watch Qubit Counts

If the previous section addressed the question of “roughly where we are now,” the follow-on question is: what should we watch in the future to more accurately assess quantum progress?

The metric most commonly cited — and most easily misunderstood — is qubit count. It is intuitive and attention-grabbing, but it is far from the only indicator of cryptanalytic capability, and arguably not the most critical one. Simply increasing physical qubit count does not automatically imply that a system is approaching practical attack capability.

What deserves more attention is whether those qubits can be effectively organized under fault-tolerant conditions, whether they can reliably support deep circuit execution, and whether they form a closed loop with the algorithm and control system. For the industry, “how many qubits” can at most indicate changes in scale — it cannot, on its own, indicate how close a real-world threat is.

Three Categories of Signals Worth Watching

For a relatively actionable framework for assessing quantum progress, three categories of signals deserve attention.

The first is hardware signals. What really matters here is not just physical qubit count, but whether stable logical qubits are beginning to appear, whether error correction is entering a scalable phase, and whether the system can operate continuously under error-corrected conditions.

The second is algorithm signals. Google’s white paper itself is a prime example. What is more worth watching for the blockchain industry is not any single number per se, but whether resource estimates of this kind continue to fall: whether the number of logical qubits is declining, whether the number of critical gate operations is declining, and whether overall spacetime volume continues to converge.

The third is system-level signals. These are often the most overlooked. Even as hardware and algorithms improve, one must also watch whether system-level capability is gradually maturing — for example, the ability to execute deep circuits stably over extended periods, the scalability of control systems, and whether multiple critical conditions are beginning to converge simultaneously. Practical attack capability ultimately depends not on any single metric, but on whether these conditions can coalesce into a closed engineering path.

Public Demonstrations Are One Reference Point, Not the Only Signal

Many people naturally look for a “landmark moment”: perhaps a research platform publicly demonstrates running the relevant algorithm on a small-scale curve, and the community treats this as the signal that risk is finally beginning to materialize.

Such a signal is certainly worth noting, but it is not appropriate as the sole basis for judgment. From a technology evolution standpoint, public demonstrations are typically a result, not the earliest indication of change. What is more important are the underlying conditions described above — whether they have been gradually falling into place.

A more realistic approach for the industry is not to wait for a dramatic moment, but to develop habits of continuous monitoring: watching whether hardware is entering a new phase, whether algorithmic resource requirements continue to compress, and whether system capability is transitioning from “scattered improvements” to “coherent formation.” Rather than asking “when will we see a demonstration,” the more important question is: Before we see a demonstration, will we have already understood the direction of technological progress?

What Should the Industry Do Now?

This Is Not “Today’s Problem” — But Preparation Must Begin Today

From an engineering standpoint, quantum computing has not yet achieved the capability to attack existing cryptocurrency systems. Significant gaps remain — in hardware scale, error control, and the fault-tolerant capability required to execute deep circuits stably over time — relative to the conditions assumed in the paper.

But this does not mean the industry can continue to defer the issue indefinitely. Compared to the past, an important change has occurred: the relevant technical pathways are becoming increasingly well-defined, and resource estimates continue to converge. For blockchain systems, what truly warrants attention is not a specific future date, but whether sufficient time and space have been reserved for future migration.

Upgrading cryptographic infrastructure is rarely a simple software swap. It involves protocols, implementations, ecosystem coordination, asset migration, and changes in user behavior — timescales measured in years, not months or quarters. In this light, it is not a problem that will erupt imminently, but it is already a problem that must enter the planning horizon without further delay.

Algorithms Will Change, but Blockchain System Design Does Not Need to Be Rebuilt from Scratch

What quantum computing directly threatens are the cryptographic assumptions underlying blockchain systems — such as elliptic-curve-based signature schemes — not the fundamental nature of the security problems that blockchain systems, as a class of secure systems, are designed to address.

This means that many security mechanisms proven effective today will not lose their value simply because quantum computing emerges. For the blockchain and digital assets industry, mechanisms ranging from key management, secure multi-party computation (MPC), hardware isolation (TEE), access control, and audit mechanisms, to the overall security architecture built around account systems, transaction approval, risk control, and governance — all address the enduring real-world problems of key exposure, single points of failure, insider risk, and operational error. These problems do not disappear as underlying cryptographic primitives change.

The right framing, therefore, is not “the quantum era requires tearing down the entire blockchain security architecture and starting over,” but rather: what needs to be upgraded is primarily the underlying cryptographic components; what needs to be preserved and strengthened is the set of design principles that blockchain systems have already established around key protection, permission layering, risk isolation, and governance control. The truly important task is not merely replacing a particular signature algorithm, but ensuring the entire system possesses the capacity to accommodate this kind of cryptographic migration.

From “Which Algorithm to Choose” to “Can We Migrate Smoothly”

Post-quantum cryptography has already entered the phase of standardization and engineering deployment. NIST’s first batch of PQC standards was officially published in 2024 [12], but significant differences remain among schemes in terms of performance, signature size, implementation complexity, and security assumptions, and engineering practices and industry adoption paths continue to evolve.

In this context, the more important question is shifting away from prematurely committing to a specific algorithm and toward whether systems have the capability to migrate smoothly.

This capability encompasses several dimensions: whether a new signature scheme can be introduced without disrupting business continuity; whether a hybrid mode can be supported during a transitional period; and whether the system can continue to adapt and remain compatible as standards and engineering practices continue to evolve.
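One way to picture a hybrid transitional mode is a wrapper that requires both a classical and a post-quantum signature to verify. The sketch below is structural only: the HMAC stand-ins at the bottom are hypothetical placeholders so the example runs, not real signature schemes such as ECDSA or ML-DSA.

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Callable

# Structural sketch of hybrid signing for the migration period.
@dataclass
class HybridSignature:
    classical: bytes
    post_quantum: bytes

def hybrid_sign(msg: bytes, sign_classical: Callable,
                sign_pq: Callable) -> HybridSignature:
    # Sign the same message under both schemes.
    return HybridSignature(sign_classical(msg), sign_pq(msg))

def hybrid_verify(msg: bytes, sig: HybridSignature,
                  verify_classical: Callable, verify_pq: Callable) -> bool:
    # Require BOTH layers: the hybrid remains secure as long as at least
    # one of the underlying schemes is unbroken.
    return (verify_classical(msg, sig.classical)
            and verify_pq(msg, sig.post_quantum))

# HMAC stand-ins (NOT real signatures) so the sketch is self-contained:
_kc, _kq = b"classical-demo-key", b"pq-demo-key"
sign_c   = lambda m: hmac.new(_kc, m, hashlib.sha256).digest()
sign_q   = lambda m: hmac.new(_kq, m, hashlib.sha3_256).digest()
verify_c = lambda m, s: hmac.compare_digest(sign_c(m), s)
verify_q = lambda m, s: hmac.compare_digest(sign_q(m), s)

tx  = b"transfer: 1 BTC"
sig = hybrid_sign(tx, sign_c, sign_q)
assert hybrid_verify(tx, sig, verify_c, verify_q)
assert not hybrid_verify(b"tampered", sig, verify_c, verify_q)
```

The design point is that the verification policy, not any particular algorithm, carries the transitional guarantee; swapping either layer for a newer scheme later does not change the wrapper’s shape.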

Looking at the long term, what the blockchain industry truly needs to build is not merely the capacity to “adopt post-quantum algorithms,” but the capacity to “adapt to continuous cryptographic change.” The former is a one-time migration; the latter is sustainable long-term system design.

Conclusion: An Important Technical Signal

From today’s engineering reality, quantum computing is still insufficient to pose a practical threat to existing cryptocurrency systems. Significant gaps remain — in hardware scale, error control, and the fault-tolerant capability required to execute deep circuits stably over time — relative to the conditions the paper assumes. In other words, a CRQC is not a technology that will “naturally materialize when the time comes”; its realization still depends on a series of engineering challenges that have not yet been fully overcome.

At the same time, this issue can no longer reasonably be treated as an abstract discussion about the distant future. In March 2026, Google explicitly set its own post-quantum migration timeline at 2029 [8]; the UK’s NCSC identified 2028, 2031, and 2035 as key migration milestones [9]; the G7 Cyber Expert Group’s roadmap for the financial sector, while not imposing regulatory deadlines, treats 2035 as a reference target for overall migration and recommends that critical systems prioritize completion by 2030–2032 [10].

It is also important to avoid over-interpretation. Looking at the mainstream publicly available assessments, even the more aggressive public positions tend to move the risk window forward to around 2030, rather than converging on a consensus that “a CRQC will definitively emerge before 2030.” The Global Risk Institute’s 2025 expert survey indicates that a CRQC within 10 years falls in the “quite possible (28%–49%)” range, and only enters the “likely (51%–70%)” range within 15 years [11].

Google’s white paper, therefore, is most significant not because it announces the arrival of quantum attacks, but because it makes this problem concrete for the first time: discussable, assessable, and requiring preparation to begin. For the blockchain and digital assets industry, 2030–2035 represents a critical window that deserves serious attention and deliberate provision for migration readiness. It may not correspond to the specific year quantum attacks actually arrive, but it may very well determine whether the industry still has the latitude to respond with composure when that time comes.

References

[1] R. Babbush et al., Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations, Google Quantum AI, 2026.

[2] IETF, Fundamental Elliptic Curve Cryptography Algorithms (RFC 6090), 2011.

[3] Secp256k1, Bitcoin Wiki.

[4] P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Journal on Computing, vol. 26, pp. 1484–1509, 1997.

[5] D. Litinski, How to compute a 256-bit elliptic curve private key with only 50 million Toffoli gates, arXiv:2306.08585, 2023.

[6] C. Chevignard et al., Reducing the number of qubits in quantum discrete logarithms on elliptic curves, Cryptology ePrint Archive, 2026.

[7] ISO/IEC, ISO/IEC 29147:2018 Vulnerability disclosure, 2018.

[8] Google, Quantum frontiers may be closer than they appear, Google Blog, 2026-03-25.

[9] UK NCSC, Timelines for migration to post-quantum cryptography, 2025-03-20.

[10] G7 Cyber Expert Group, Advancing a Coordinated Roadmap for the Transition to Post-Quantum Cryptography in the Financial Sector, 2026-01.

[11] Global Risk Institute, Quantum Threat Timeline Report 2025, 2026-03-09.

[12] NIST, Post-Quantum Cryptography. First-batch standards: FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), FIPS 205 (SLH-DSA), published August 2024.

[13] Realizing practical quantum computers based on superconductors, Nature News, 2023.

[14] A new ion-based quantum computer makes error correction simpler, MIT Technology Review, 2025.

[15] The Best Qubits for Quantum Computing Might Just Be Atoms, Quanta Magazine, 2024.

About Safeheron

Safeheron has long focused on the application of MPC and TEE technologies in institutional-grade digital asset custody. We believe the key to post-quantum migration lies not simply in replacing any specific algorithm, but in building systems capable of adapting to the continuous evolution of cryptography.

With this in mind, we will continue to monitor developments in quantum computing fundamentals and conduct research and engineering preparation around the replaceability of signature schemes, protocol compatibility in key management, and system-level migration pathways.
