> If AI is capable of performing these attacks, what would stop AI from replacing the security engineers?
Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked.
Therefore, there is still value in being the human in Cyber Security (however you're supposed to capitalise that!)
There are still protections and mitigations that targets can do, but those things require humans. The things that attackers can do require no humans in the loop.
> Therefore, there is still value in being the human in Cyber Security
Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked.
> There are still protections and mitigations that targets can do, but those things require humans.
> Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked.
I didn't claim that the human defence is the only layer. Your analogy is only valid if my claim is that it's AI attackers vs Human defenders. It's not. It's AI attackers vs AI + Human defenders.
> Which things would you point to here?
If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over; I am not in the mood to enumerate the value that a human can add to AI defence.
> If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over
I honestly find that a bizarre response in the middle of a discussion, but you do you.
Maybe someone else could humour me since you're not in the mood to expand on the point that you made? The topic of the thread was that the ability of the AI tooling is outpacing what individuals can handle. Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human?
> There are still protections and mitigations that targets can do, but those things require humans.
You claimed that there are certain protections and mitigations (i.e. defence moves) which require humans (ergo humans do these things better than AI, necessitating an AI+human team).
But you've still avoided expanding on what they might be, preferring instead to make petty remarks about my imaginative abilities.
> You claimed that there are certain protections and mitigations (i.e. defence moves) which require humans (ergo humans do these things better than AI, necessitating an AI+human team).
Right, but I did not make the claim that humans are better than AIs.
>Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked.
This was always the case? Security is asymmetric and the attacker only needs to succeed once.
They're not and they won't. I'm from Gen X and have a background in infosec. I don't agree that AI is the cause of this sudden surge in activity, or that this is even a sudden surge. This stuff was always occurring if you were paying attention. It's just making the mainstream news now.
Geopolitics is the cause of the recent uptick in activity. Many of these groups are state-sponsored or just fronts for nation-states themselves. genAI just makes it easier for people further down the chain to go after low-hanging fruit.
The most significant impact genAI is having on infosec is creating work for the people in infosec, through vibe coding and turning untested AI systems loose on internal networks. genAI just lets developers and admins shoot themselves in the foot faster. genAI is an artificial intern.
This is less true than it seems. It is pretty rare to go from vuln to simple exploit for systems that people care about. There are plenty of vulns in Chrome or whatever that were difficult to actually weaponize, because you need just the right kind of gadgets to create a sandbox escape and the vuln only lets you write to memory addresses that aren't useful for exploitation.
Stealing a bitcoin wallet by cracking its private key also requires the red team to be lucky once. Once AI security gets to the point where the probability of causing actual harm to the business is infinitesimal, it will be fine.
Existing concepts like defense in depth make it exponentially harder for an AI to build a full exploit chain. Even with a full exploit chain, one mistake will trigger a detection system which can foil your attack.
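A toy back-of-the-envelope model of that defense-in-depth point (the layer count and per-layer detection rate below are made-up numbers, purely illustrative):

```javascript
// Toy model: an attacker must get past N independent detection layers.
// If each layer catches the attempt with probability p, the chance of a
// clean end-to-end run shrinks exponentially with the number of layers.
function chainSuccess(layers, detectPerLayer) {
  return Math.pow(1 - detectPerLayer, layers);
}

console.log(chainSuccess(1, 0.5)); // 0.5
console.log(chainSuccess(5, 0.5)); // 0.03125
```

Real layers are neither independent nor equally effective, but the exponential shape is the reason stacking imperfect defenses still pays off.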
I suppose "doing well" isn't a very good metric. It's based on my feelings and experiences having traveled to 5 wealthy countries and chatting with people there. Even in third world countries, like Brazil, I didn't see people dying of opioid overdoses everywhere downtown.
Originally websites had usernames and passwords, and the username was used as a primary key (as on this website).
Using the email address directly as the username/key is a more modern trend (mid-to-late 00s). I believe this coincided with the dominance of Gmail, where people would have a forever email address. Before that, your email address would regularly change if you moved ISPs/schools/jobs, so it wasn't a good identifier.
Recently I found that several services I “signed into with Google” allow neither converting to a password nor binding to a different Google account. B2B SaaS apps, in fact.
I see this kind of sentiment on HN a lot and it's weird to me. Like, what's the issue with discussing on a hacker forum ways that Google is making Android worse for hackers? Especially considering the alternative is iOS and it's much worse in that regard.
"Exception" has a meaning. Exceptions are supposed to be used for just that, unexpected situations. Not being able to parse something is not an exception. It's a normal thing. RegEx doesn't throw an exception when there's no match. Array.indexOf doesn't throw an exception when it doesn't find something.
It's really nice to be able to go into the debugger, say "stop on all exceptions", and not be spammed because of misused exceptions.
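A minimal JavaScript illustration of the contrast (all built-in behavior, nothing hypothetical):

```javascript
// "No match" and "not found" are reported as ordinary return values:
const m = "abc".match(/\d+/);      // null, no exception
const i = ["a", "b"].indexOf("c"); // -1, no exception

// The URL constructor, by contrast, reports a parse failure by throwing,
// so a simple "is this a valid URL?" check forces a try/catch:
let valid = true;
try {
  new URL("not a url");
} catch {
  valid = false; // TypeError: the failure arrives as an exception
}
```

Run under "stop on all exceptions", the first two lines never pause the debugger; the third one does, even though an unparsable string is an expected input.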
An invalid URL in a config file is exceptional. An invalid URL typed in by a user or from an external source (e.g. the body of an API request or inside some HTML) is Tuesday.
Null checking can be fine if a failure mode is unambiguous. However, if an operation can fail for many reasons, it can be helpful to carry that information around in an object. For example with URL parsing, it might be nice to be able to know why parsing failed. Was it the lack of protocol? Bad path format? Bad domain format? Bad query format? Bad anchor format? This information could theoretically be passed back using an exception object, but this information is eliminated if null is returned.
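A sketch of that result-object idea in JavaScript (the `parseUrl` wrapper and its failure reasons are invented for illustration; only the `URL` constructor is a real API):

```javascript
// Wrap the throwing URL constructor in a function that returns a tagged
// result object instead of null, so the caller can see *why* it failed.
function parseUrl(input) {
  try {
    return { ok: true, url: new URL(input) };
  } catch {
    // A deliberately crude classification, just to show the shape:
    const reason = input.includes("://") ? "bad-format" : "missing-protocol";
    return { ok: false, reason };
  }
}

parseUrl("https://example.com/a"); // { ok: true, url: URL { ... } }
parseUrl("example.com/a");         // { ok: false, reason: "missing-protocol" }
```

Unlike returning null, the failure branch has room to carry the protocol/path/domain diagnostics the comment above asks for, without making the caller catch anything.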
Unfortunately this installs it as a user cert and only works for apps that explicitly request it. To work everywhere you need to install it as a system cert, which requires root.
Interestingly, iOS (which is generally more locked down for dev stuff like this) allows users to install certs for all apps without a jailbreak.
Yes, well, this is a story about American senators being contacted by their American constituents about an American bill that will affect how Americans interact with this app while in America. So it is a bit relevant here.
True, but any US ban will have effects beyond just the US. It's also important to remind folks that what TikTok is doing is no different from what Meta (et al.) is doing.