Quantum Computing Benchmarks Under Scrutiny: Researchers Claim “Sleight-of-Hand” in Factorization Claims
New research suggests that current quantum factorization benchmarks may be misleading, with numbers chosen specifically for ease of factoring rather than real-world applicability.
A recent paper by Peter Gutmann and Stephan Neuhaus, titled “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog,” raises serious questions about the validity of reported quantum factorization achievements. The researchers argue that many of these benchmarks rely on “sleight-of-hand numbers” that are artificially easy to factor, rendering the results unrepresentative of actual cryptographic challenges.
According to the paper, a common technique is to select numbers whose two prime factors differ in only a few bit positions. This structural property allows the number to be factored by simple search-based methods, which the authors contend have “nothing to do with factorization” in the context of breaking strong encryption. The paper highlights that such numbers would not typically arise in real-world RSA key generation, which usually mandates a significant difference between the prime factors (e.g., |p − q| > 100 bits).
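To see why closeness of the factors matters, consider a Fermat-style search outward from the square root: when p and q are nearly equal, the product is found almost immediately on a classical machine. This is a minimal illustrative sketch, not code from the paper; the two 32-bit primes below are a hypothetical example chosen because they differ by only 12.

```python
import math

def close_factor_search(n, max_steps=10**6):
    """Fermat-style search: find a >= ceil(sqrt(n)) with a^2 - n a perfect
    square, giving n = (a - b)(a + b). Fast exactly when p and q are close."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b  # the factors p and q
        a += 1
    return None

# Two nearby 32-bit primes (hypothetical example): 2^32 - 17 and 2^32 - 5.
p, q = 4294967279, 4294967291
print(close_factor_search(p * q))  # succeeds in a handful of steps
```

The same search applied to a properly generated RSA modulus, where |p − q| is large, would run for an astronomically long time, which is the authors' point about such benchmark numbers being unrepresentative.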
Gutmann and Neuhaus also point to a second method, in which preprocessing on classical computers transforms the number to be factored into a different problem, one that is then amenable to solution via quantum experiments. This approach, they suggest, further distorts the perceived capabilities of quantum computers for factorization.
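One published style of such preprocessing, used in some factorization demonstrations on annealing-type hardware, rewrites N = p·q as minimizing the cost function C(p, q) = (N − p·q)², with classical algebra eliminating most of the unknown bits before any quantum device is involved. The hedged sketch below is not the authors' code and assumes this cost-minimization framing; it simply enumerates the tiny residual search space classically to show how small the transformed problem is.

```python
from itertools import product

def factor_by_minimisation(n, bits=4):
    """Brute-force minimise C(p, q) = (n - p*q)^2 over odd `bits`-bit
    candidates (low bit of each factor fixed to 1). Cost 0 means n = p*q."""
    best = None
    for pb in product((0, 1), repeat=bits - 1):
        for qb in product((0, 1), repeat=bits - 1):
            p = 1 + sum(b << (i + 1) for i, b in enumerate(pb))
            q = 1 + sum(b << (i + 1) for i, b in enumerate(qb))
            cost = (n - p * q) ** 2
            if best is None or cost < best[0]:
                best = (cost, p, q)
    return best  # (cost, p, q)

print(factor_by_minimisation(35))  # cost 0 at 5 * 7 = 35
```

For a 4-bit toy modulus the entire search space has only 64 candidates, which underlines the critique: once classical preprocessing has shrunk the problem this far, solving the remainder says little about factoring cryptographically sized numbers.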
The paper asserts that the largest number legitimately factored by a quantum computer to date is a mere 35. This claim casts a shadow over many publicly reported quantum factorization records, suggesting a potential overstatement of current quantum computing prowess in this critical area.

These findings align with broader skepticism regarding the timeline for practical, large-scale quantum computing. While the potential of quantum computing is undeniable, the engineering hurdles to building machines capable of factoring large RSA moduli remain substantial. The current research suggests that the path to realizing this capability may be even more complex than previously understood, with significant challenges in developing algorithms and hardware that can tackle real-world cryptographic problems without relying on artificially simplified test cases.