Hacker News

> With decompilation I think there's a higher risk of it missing the intention of the code.

I'm not sure, but I suspect the lack of comments and documentation might actually be an advantage for LLMs in this use case. For security and reverse-engineering work, the code's actual behavior matters far more than the developer's intention.



I think the other side of that is that mismatches between intention and implementation are exactly where you're going to find vulnerabilities. The LLM that looks at closed source code has to guess the intention to a greater degree.


This is true for a lot of things, but for low-level code you can always fall back to "the intention is to not violate memory safety".
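To sketch that fallback in Rust (the function and its inputs are hypothetical, not from the thread): even with zero knowledge of the author's intent, "don't read past the buffer" is a spec you can check mechanically.

```rust
// A decompiled-style routine whose original intent is unknown. The one
// spec that always applies is memory safety: slicing past the buffer
// would be a bug no matter what the author meant.
fn parse_len(buf: &[u8]) -> Option<&[u8]> {
    // First byte declares a payload length.
    let len = *buf.first()? as usize;
    // `get` makes the bounds check explicit: a declared length longer
    // than the buffer yields None instead of out-of-bounds data.
    buf.get(1..1 + len)
}

fn main() {
    // Well-formed input: length 2, two payload bytes.
    assert_eq!(parse_len(&[2, 10, 20]), Some(&[10u8, 20][..]));
    // Adversarial input: declared length exceeds the buffer.
    assert_eq!(parse_len(&[5, 1]), None);
    println!("ok");
}
```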


That's true, but it's certainly limiting. Still, even then, `// SAFETY:` comments seem extremely helpful. "For every `unsafe` block, determine its implied or stated safety contract, then build a suite of adversarial tests to verify or break those contracts" feels like a great way to get going.
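A minimal sketch of that audit loop (`sum_unchecked` and its contract are made up for illustration): state the contract next to the `unsafe`, then probe it from both sides.

```rust
// Hypothetical function with an explicit safety contract on its `unsafe`.
fn sum_unchecked(xs: &[u64], n: usize) -> u64 {
    let mut total = 0;
    for i in 0..n {
        // SAFETY: the caller must guarantee n <= xs.len(); otherwise
        // `get_unchecked` performs an out-of-bounds read.
        total += unsafe { *xs.get_unchecked(i) };
    }
    total
}

fn main() {
    // Verify the contract where it holds...
    assert_eq!(sum_unchecked(&[1, 2, 3], 3), 6);
    assert_eq!(sum_unchecked(&[], 0), 0);
    // ...and probe it adversarially. Calling with n > xs.len() breaks the
    // stated contract; a tool like Miri flags the out-of-bounds read that
    // an ordinary test run may silently miss:
    // sum_unchecked(&[1, 2, 3], 4); // UB: violates the SAFETY contract
    println!("ok");
}
```

Running the adversarial case under Miri (rather than a plain `cargo test`) is what turns the comment into something checkable.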


It's limiting from the point of view of a developer who wants to ensure their own code is free of all security issues. It is not limiting from the point of view of an attacker who just needs one good memory-safety vuln to win.



