Adding to your comment: a similar letter was published as recently as September 2025 (https://support.apple.com/en-us/122234): "we have never built a backdoor or master key to any of our products or services and we never will."
My read of this post is that there's nuance here: by the time we see 32-bit integers being factored, the roadmap to 256-bit integers can be counted in months on ten fingers rather than being a decade out. The underlying scaling needed to go to 32 bit requires only linear progress to get to 256
>The underlying scaling needed to go to 32 bit requires only linear progress to get to 256
Nope. Firstly, for RSA you need to scale from 32 to 4096 bits. Secondly, Shor requires N^2*log(N) quantum gates, where N is the number of bits in the integer, so the scaling is superquadratic. And it's very much an open question whether QEC protocols will continue to work with the same efficiency at the required scales.
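To put rough numbers on that superquadratic claim, here's the parent's N^2*log(N) cost model plugged in as plain arithmetic (constants ignored; a toy illustration, not a real resource estimate):

```python
import math

def shor_gate_scale(bits: int) -> float:
    # Toy cost model from the comment above: ~N^2 * log(N) quantum
    # gates for an N-bit modulus. Constants dropped, illustrative only.
    return bits ** 2 * math.log2(bits)

# Relative gate counts, normalized to the 32-bit case:
print(f"32 -> 256 bits:  ~{shor_gate_scale(256) / shor_gate_scale(32):.0f}x more gates")
print(f"32 -> 4096 bits: ~{shor_gate_scale(4096) / shor_gate_scale(32):.0f}x more gates")
```

Under this model, 32 → 256 bits is roughly a 100x jump in gate count, and 32 → 4096 is roughly 40,000x, which is the "superquadratic, not linear" point in concrete terms.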
We are talking about different things. There is linear engineering progress for getting from 32 bits to 256 bits being factored is my claim.
If we want to talk RSA, the engineering journey from factoring 21 to 35 is big, because it requires creating logical qubits with error rates that we are only now seeing companies report. But what I am claiming is that the engineering journey from 32 bits tolerant enough to run a factoring algorithm to doing the same with 4096 appears linear in engineering cost.
For RSA specifically, the resource estimates have come down. I am not yet up to date on this round of papers; however, the 2024 result https://eprint.iacr.org/2024/222 had it down to n/2 + o(n) logical qubits.
>it requires creating logical qubits with error rates that we are only now seeing companies report
And yet 21 was not factored on real hardware.
>There is linear engineering progress for getting from 32 bits to 256 bits being factored is my claim.
IMO it's a very bold claim until linear progress is demonstrated between 8, 16, and 32 bits. Not in theoretical papers. On real hardware. With honest experiments using arbitrary integers.
It's easy to claim "QC will repeat Moore's law!" especially when your salary depends on it, but the practical evidence is quite lacking at the moment.
So once again, since I think I am not explaining it well: it might take a long time to go from factoring 21 to 35, and a long time from 35 to anything bigger, but from that point on the engineering has scaled up to the point that progress is very sudden. So if the canary in the coal mine is a 32-bit integer being factored, then the runway for deploying fixes is terminally short for defenders.
Beware the Ides of March: this is 1 of 2 cryptographic doom papers released this week. This Google paper with Babbush, Gidney, and Boneh is authoritative. And we also have another with Preskill and Hsin-Yuan Huang (widely cited for classical shadows, among other quantum work) among the authors: https://arxiv.org/pdf/2603.28627
"Here, by leveraging advances in high-rate quantum error-correcting codes, efficient logical instruction sets, and circuit design, we show that Shor's algorithm can be executed at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits."
I interned for the author at 18. I assumed security testing worked like this:
1. Static analysis catches nearly all bugs with near-total code coverage
2. Private tooling extends that coverage further with better static analysis and dynamic analysis, and that edge is what makes contractors valuable
3. Humans focus on design flaws and weird hardware bugs like cryptographic side-channels from electromagnetic emanations
Turns out finding all the bugs is really hard. Codebases and compiler output have exploded in complexity over 20 years, which has not helped the static analysis vision. Today's mitigations are fantastic compared to then, but just this month a second 0day chain got patched on one of the best platforms for hardware mitigations.
I think LLMs get us meaningfully closer to what I thought this work already was when I was 18 and didn't know anything.
Lots of security issues form at the boundaries between packages, zones, services, sessions, etc. Static analysis could catch this stuff but doesn't seem to, from my perspective. Bugs are often chains, and that requires a lot of creativity, planning, etc.
Consider logic errors and race conditions. It's surely not impossible for an LLM to find these, but it seems likely that you'll need to step through the program control flow in order to reveal a lot of these interactions.
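To make the "step through the control flow" point concrete, here's a toy check-then-act race with the interleaving driven by hand (the names and the generator trick are invented for illustration); the bug only becomes visible when you walk through a specific schedule:

```python
# Toy "check-then-act" race. Each yield marks a point where a second
# "thread" could be scheduled; the bug only appears under one specific
# interleaving. All names here are invented for illustration.
balance = 100

def withdraw(amount):
    global balance
    ok = balance >= amount   # check
    yield                    # <- another thread may run here
    if ok:
        balance -= amount    # act: the check may be stale by now

a, b = withdraw(100), withdraw(100)
next(a)        # A checks: sees 100 >= 100, ok
next(b)        # B checks: also sees 100 >= 100, ok
next(a, None)  # A debits: balance is now 0
next(b, None)  # B debits anyway: balance is now -100
print("final balance:", balance)  # the balance >= 0 invariant is broken
```

Each call in isolation is correct, and no single path through `withdraw` is wrong, which is exactly why flow-insensitive scanning tends to miss this class of bug.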
I feel like people consider LLMs free since there isn't as much hands-on-keyboard time. I kind of disagree, and when the cost of paying out these vulns falls, I feel like nobody is gonna wanna eat the token spend. Plenty of hackers already use AI in their workflows, and even then it is a LOT OF WORK.
A lot of software doing useful work halts pretty trivially, consuming inputs and doing bounded computation on each of them. You're not going to recurse much in click handlers or keep making larger requests to handle the current one.
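A minimal sketch of what "bounded computation per input" looks like (the handler name and cost model are made up): termination is trivial because the loop ranges over a finite, length-capped input and nothing grows the work being consumed.

```python
def handle_request(payload: bytes, max_len: int = 4096) -> int:
    # One bounded pass over a length-capped input: the loop variable
    # ranges over a finite sequence, so the handler trivially halts.
    # The hard cases are handlers that issue *larger* requests in
    # order to serve the current one.
    checksum = 0
    for b in payload[:max_len]:
        checksum = (checksum + b) % 65521  # running sum, Adler-style
    return checksum

print(handle_request(b"abc"))
```

This is the shape a termination checker accepts without help; the moment the loop bound depends on data the handler itself fetches, you need a real ranking argument.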
I was just very naive at 18 about program analysis. I haven't lost my imagination though. I was a self-taught IOI gold division competitor. I thought every problem had an algorithm. It doesn't work like that. Program analysis is collecting special snowflakes that melt in your hand. There is no end to the ways you can write a bug in C. Ghosts of Semmle, Semgrep, Coccinelle past, be humbled. LLMs saturate test coverage in a way no sane human would. I do not think they can catch all bugs, because of the state-space explosion, but they will help all programmers get better testing. At the end of the day I believe language choice can obviate security bugs, and C/C++ is not easy or simple to secure.
You've never seen the full power of static analysis, dynamic analysis, and test generation. The best examples were always siloed, academic codebases. If they were combined and matured, the results would be amazing. I wanted to do that back when I was in INFOSEC.
That doesn't even account for lightweight formal methods: SPARK Ada, the Jahob verification system with its many solvers, Design by Contract, LLMs spitting this stuff out from human descriptions, type systems like Rust's, etc. Speed-run (AI) producing those, with the unsafe stuff checked by the combo of tools I already described.
The siloed codebases I was referring to are the verification tools they produce. They're used to prevent attacks. Each tool has one or more capabilities the others lack. If combined, they'd catch many problems.
Examples: the KLEE test generator; combinatorial or path-based testing; CPAchecker; race detectors for concurrency; SIF information flow control; symbolic execution; the Why3 verifier, which commercial tools already build on.
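None of those tools are reproduced here, but the flavor of combinatorial testing is easy to sketch (the buggy `clamp` and the tiny input grid are invented for illustration; this is a toy, not KLEE or CPAchecker): exhaustively sweep a small input grid against a reference oracle.

```python
from itertools import product

def clamp(x: int, lo: int, hi: int) -> int:
    # Deliberately buggy: the upper branch returns lo instead of hi.
    if x < lo:
        return lo
    if x > hi:
        return lo  # BUG: should be hi
    return x

# Exhaustive sweep of a small input grid against a spec oracle, in the
# spirit of combinatorial testing: every (x, lo, hi) triple with
# lo <= hi is checked against the one-line reference implementation.
failures = [
    (x, lo, hi)
    for x, lo, hi in product(range(-2, 3), repeat=3)
    if lo <= hi and clamp(x, lo, hi) != min(max(x, lo), hi)
]
print(len(failures), "failing inputs, e.g.", failures[0])
```

A handful of lines and a trusted oracle find every triple the bug affects; the hard part in practice is that real code rarely comes with a one-line oracle, which is where the heavier tools above earn their keep.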
If you start with safety in mind and don't just try to bolt it on, you're in a much better place. With the kind of code you need in typical applications, you could force the vast majority of it into a shape that passes termination checks in theorem provers without much overhead, especially if you can just put the gnarly things in the standard library and validate them (with proofs, hopefully) once.
Though starting with C/C++ is a losing proposition in that regard. And I guess any kind of discipline loses to just throwing half-baked JavaScript at the wall, because deadlines don't care about bugs.
On a related note, the search space for https://www.qdayprize.org/curves seems far too small to be a meaningful contest, and the rules don't seem to address how they judge the validity of the "quantumness" when sifting such small groups.
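To put a number on "far too small": discrete logs in tiny groups fall to textbook classical methods in roughly sqrt(group size) steps. Here's a baby-step giant-step sketch in a toy prime-field group (the modulus and exponent are arbitrary picks; the contest's actual elliptic-curve groups would be the real target, but the sqrt cost argument is the same):

```python
import math

def bsgs(g: int, h: int, p: int) -> int:
    # Baby-step giant-step discrete log in (Z/pZ)*: finds x with
    # g^x = h (mod p) in about sqrt(p) work. A ~30-bit group needs
    # only ~2^15 steps classically, so solving it says nothing about
    # "quantumness".
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j
    ginv_m = pow(g, -m, p)                      # g^(-m) mod p
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * ginv_m) % p            # giant step
    raise ValueError("no discrete log found")

p = 1000003                 # ~20-bit prime, a toy stand-in group
g, secret = 2, 123457
h = pow(g, secret, p)
x = bsgs(g, h, p)
assert pow(g, x, p) == h    # recovered a valid exponent classically
print("log found in ~", math.isqrt(p), "steps")
```

At these sizes the classical attack is a few thousand dictionary lookups, which is why a contest on such groups can't distinguish a quantum solver from a laptop without extra evidence of how the answer was produced.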
There have been advances, at least for RSA: work from Håstad, Ekerå, and Gidney has brought this to O(N) qubits. On the runtime the papers are a little harder to read as they differ in notation, but O(log(N)^3) runtime is what I recall. It's possible I am wrong on the runtime and it's O(log(N)^2).