Obscurity can be fine but it's not security. I think of it like cover and concealment in the military. Security is cover. Something you can get behind so the bullets don't hit you. Obscurity is concealment. Harder to see, harder to find, so the enemy doesn't know where to shoot, but it's not stopping any bullets. Both have advantages and disadvantages and can complement each other depending on how they're used.
Example: there are teenage gangs going around on high-powered scooters in my city, carrying hammers and mini grinders. They pair up on a scooter, steal a bike, and disappear.
I watched them. They don't want to hang around longer than necessary. They will only approach a bike rack that is clearly visible from the road. They will only steal a bike that has unobstructed access to the road (no tricky bollards or other bikes to get around). Even though they are full of bravado, and shout obscenities and threats at me when I tell them to fuck off, they still run away (even though the one approaching the bikes is carrying a weapon while his companion stays on the scooter, ready to escape).
Anything that even mildly inconveniences these guys is enough to stop them attempting theft. The bikes they steal need to be expensive, out in the open, with direct access to the road, and with a shitty lock. And believe it or not, those tumblers line up a lot.
Throwing a blanket over a bike is probably enough to stop them from even approaching it.
That's fine as long as you are really clear in your mind about what's going on.
In IT there are a lot of people tossing a blanket over the bike and believing they're affecting the capabilities of the attacker when they're really changing the likelihood of an attack.
Imagine if every single person put a blanket over their bike. Now imagine if everyone got a chain that was 10 times stronger. Which world would you rather live in?
Honeypots, tarpits, bot motels, janky configs, visible telemetry (for example): these hinder adversaries in two ways. 1) They directly slow the adversary down and force them to navigate deliberately. 2) They increase uncertainty in uncomfortable ways; how effective this is depends on how important it is for the adversary to remain undiscovered and not "poke the bear". Together, the effect is more than additive.
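To make mechanism (1) concrete, here's a minimal tarpit sketch in Python, in the spirit of tools like endlessh. The port and delay are arbitrary choices for illustration, and it deliberately handles one victim at a time for brevity, where a real tarpit multiplexes many:

    import random
    import socket
    import time

    # Minimal tarpit sketch: hold scanners in the SSH pre-banner phase.
    # RFC 4253 lets a server send arbitrary lines before its "SSH-" banner,
    # so a client politely waiting for the real banner just keeps waiting.
    LISTEN_PORT = 2222  # arbitrary port for this sketch

    def tarpit():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(16)
        while True:
            client, addr = srv.accept()
            print(f"trapped {addr}")
            try:
                while True:
                    # Drip one junk pre-banner line, then stall.
                    client.send(b"%x\r\n" % random.getrandbits(32))
                    time.sleep(10)  # each loop costs the scanner 10 more seconds
            except OSError:
                client.close()  # scanner gave up; time it didn't spend elsewhere

    if __name__ == "__main__":
        tarpit()

Every second a scanner spends waiting for that banner is a second it isn't spending on your real SSH port, and the connection logs double as the "visible telemetry" above.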
In addition to likelihood, attacks have shape. And proper installations can force your adversaries' maneuvers to take a certain shape. I've heard this referred to as terraforming.
If you're going to "do it in the road" (a highly visible bike rack), your lock or chain works much better when it is better / stronger than the herd. If everyone has a chain which is 10x stronger, then a better grinder becomes a cost of doing business. Maybe I'd rather live in a world where I didn't use that bike rack.
This is an especially good analogy because facing a well-resourced adversary in cybersecurity is like finding out that the enemy brought artillery -- hopefully you weren't relying entirely on obscurity, because pretty soon there will be nowhere to hide.
Funny analogy, in that when the high-caliber shells start raining, most forms of cover won't make a difference. The ones that will are not something you want to stay behind on days when you're not being actively bombed. In fact, keeping you behind such protections is itself a military tactic - it lets the enemy roam freely and maneuver around you.
But the basic flaw of this analogy is that it implies you're at war, and your system is always in battle.
Agreed with your sentiment, and that was a great example.
Just like any security control, if it's your only means of security, it will not offer much risk reduction. Just like all security controls, if you want risk reduction, use multiple security controls together. Like all security controls, there is no way to eliminate risk, just reduce it as much as possible while still being able to effectively achieve your mission.
Because of this, I believe security through obscurity to be an important component in a healthy and mature risk posture.
It irks me when it's dismissed because obscurity is not security. No single security control is security on its own.
Think about leaving your bike unlocked in Times Square vs. at the top of a 7,000-meter mountain in the Himalayas.
Which unlocked (unsecured) bike is more likely to be stolen, and which, ergo, has the lower risk attached?
----
Obscurity does not help you when the thief has already found your bike, nor is obscurity very helpful for keeping your bike safe if you happen to live in Times Square.
But if you live at the top of a Himalayan peak, you can be fairly certain you're not going to have your bike stolen.
The security controls for a bike on a high mountain are not obscurity; they're the lack of oxygen (which kills), the cold (which kills), the height (which kills), and the sheer difficulty of getting there.
You could put the bike right on the side of the mountain without any obfuscation and it won't get got, because ain't no one gonna die for a bike.
It's like how we know where dead people are on Everest but we can't get them down; they serve as landmarks.
I don't think that really works, because obscurity isn't harder to see or find. I don't know the right analogy; it's like standing out in the open and being like "yeah but who would think to look here lol".
I think you're mistaking "obscurity" for "lack of obscurity". If you have a vulnerability in a completely undocumented API, that vulnerability is obscured. It's hiding in the woods, not standing in a field.
To keep with the analogy: no one is going to stand in a field when people are shooting at you. So then why do a small subset of vocal people online suggest that you just put on your bulletproof vest, and claim that hiding in the woods, regardless of the vest, is a bad idea?
You know when people are shooting at you. You don't know when, or if, people are exploring undocumented/obscure features of your system, or what they have learned about the things you were trying to hide.
Therefore, the safest assumption to make is that an adversary has already figured out all of your obscurity, because they can always do this given sufficient time and interest, at which point the only thing between them and you is your security.
That is why we design systems without obscurity and only care about security.
I agree that it's a good principle, but it's taken too far when used to justify needlessly growing your risk surface area. The principle is useful to justify security hardening. It is not useful when used to increase the odds of being attacked.
Obscurity is not worthwhile when it increases your own costs. Nevertheless, if you can add obscurity with negligible additional cost and inconvenience, then you should do it.
This isn't about what's a good idea or bad idea. Perhaps it's best to simply leave analogies behind, otherwise we'll just focus on the wrong thing.
Security through obscurity merely means that your system is atypical. It's not hidden, it's not secret, it's not hard to find, it's not hard to examine, it's not less visible, etc - there is nothing inherently different about the systems at all other than that one is more common than the other. It's just less typical.
What you're describing is a thing that is not obscured. Don't refer to things as obscured if they are not obscured. When others talk about things that are obscured, they are talking about things that are obscured, not things that are not obscured.
I'm having a hard time understanding what you mean here. If something is obscured, by definition it is less visible. Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.
You're overly focusing on the term and not the meaning. The term comes about from people choosing tools like Foxit or Opera and saying that those products are safer than their cohorts Adobe/Firefox because they are attacked less often.
This notion was termed "security through obscurity", ie: "you use the less popular option, therefore that option is safer". It has nothing to do with "obscuring" in the sense of "hiding"; that's a linguistic quirk of a colloquial term. If you were actually taking action to reduce the ability to understand a system in a way that you could meaningfully defend, it would no longer be "security through obscurity".
The argument has persisted because there are two different questions that sound the same (X is less typical than Y):
1. Is "X" safer than "Y"?
2. Is a user of "X" safer than a user of "Y"?
When looking at (1) in isolation, you can say things like "X lacks security features, therefore Y is safer" and "X is less often used, therefore X is safer", etc. This is a question about the posture of the project itself, in isolation.
(2) is about the context for users. The reality is that X, which perhaps is fundamentally less well built software, may actually have users who are attacked far less frequently.
Both are likely to favor "rarity is a poor indicator of safety", as we generally reject mitigation approaches that rely on attackers behaving in specific ways, but what's important is that these are completely different questions, and neither has to do with being obscured, but rather with being rare.
None of this is about what is "obscured" or not. If something is obscured or obfuscated, that is a technique that can be evaluated separately by its own merits (ie: how hard is deobfuscation, how easy is it to adapt to deobfuscation, etc). All of this is about whether you're evaluating (1) or (2) - and in the case of (1), which is what the criticism always has focused on, the answer is that "rarity" is not a mitigation.
> The term comes about from people choosing tools like Foxit or Opera and saying that those products are safer than their cohorts Adobe/Firefox because they are attacked less often.
Visibility is also a mental construct of what we expect to see, what we know already, and what we can map onto what we see. "Obscure" is doing a lot of work here. It doesn't necessarily mean hidden; it can mean the object's true purpose or form is hidden from some particular vantage, and only that vantage.
>If something is obscured, by definition it is less visible.
Obscurity is not the same thing as something being "obscured".
Obscurity means something is either difficult to comprehend, not well known or uncommon.
Obscured means something is hidden or concealed. When something is hidden, that means the thing is still there and there is a way to get to it. You can build automated tools around finding it.
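That "automated tools" point is the whole genre of wordlist scanners (dirb, gobuster, etc.). A toy sketch in Python, with a hypothetical host and a made-up wordlist, just to show the shape of the automation:

    import itertools
    import urllib.request

    # Sketch of how "hidden" gets automated away: wordlist scanners like
    # dirb or gobuster do exactly this at scale. Host and words are made up.
    HOST = "https://example.com"  # hypothetical target
    WORDS = ["admin", "panel", "internal", "backup", "old", "v2"]

    for depth in (1, 2):
        for parts in itertools.product(WORDS, repeat=depth):
            url = f"{HOST}/{'/'.join(parts)}"
            try:
                status = urllib.request.urlopen(url, timeout=5).status
                print(f"found: {url} -> {status}")
            except Exception:
                pass  # 404s and timeouts: the tool never gets bored

The hidden thing is still there and still reachable, so enumeration is just a matter of patience, and patience is what machines are for.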
>Being 'less typical' is a form of security because most attacks rely on some form of pattern recognition, and obscurity literally dissolves patterns into noise.
This is making the leap of faith assumption that "obscurity" is equivalent to "impossible to understand". In security you have no control over the attacker and therefore have to assume your attacker has more than enough knowledge and intelligence to perform the attack.
Since computer systems are static and unchanging without frequent patching, you can't assume that there is a cat and mouse game where the mouse is adapting its hiding strategies dynamically and managing to escape every single time.
Depends; some systems are dynamic. There is also a gray area where obscurity can be computationally infeasible to attack, even though it isn't backed by the traditional polynomial-hardness assumptions of cryptography.
As is always the case in these semantic discussions, the answer depends on your initial axioms and assumptions, which does kind of make most of these discussions pointless (but I did learn a lot from this one).
Interesting. Have you seen the movie Braveheart? That's the lead-up to the later humiliation of the king in battle; there's a movie/drama about that one too. Saw it recently; don't remember the name.
Basically the insurgents choose terrain they know well, because they live there. They choose a swamp / mire in an open field between two hills. They build fortifications. They obscure the true nature of the ground they're standing on, out in the open. They goad the king's army into finishing them then and there. They fight on foot against knights on horseback. It's a mess. They win.
Security through obscurity is basically mitigation. You reduce risk/impact, not eliminate it. There are problems - such as denial-of-wallet attacks - where you can only mitigate and can't eliminate the problem completely.
The implication of the "don't be acquired" and "don't be penetrated" is some sort of anti-air or anti-tank missile.
"killed" in this case would be equivalent to having something penetrate and hit sensitive systems. at that point it's basically just a function of what the penetrator is trying to do -- if they just want $$$ they ransomware. if they want exfil or DoS or making critical systems do naughty things that is also a kill.
> the implication of the "don't be acquired" and "don't be penetrated" is some sort of anti-air or anti-tank missile
Not necessarily - this model is also taught for army/marines-type ground combat operations, in how to effectively camouflage and how to manoeuvre.
the "don't be penetrated" is more of an equipment choice and engineering decision specific to armor and active kinetic counter-munitions systems, like anti-drone shotguns, tanks with active protection systems, chobham armor, etc.
If a munition has been fired at you, first try to not get penetrated by it at all, and if that fails, try to prevent something catastrophic, like a bolus of molten copper from an explosively formed penetrator spraying into the inside of your armored personnel carrier.
The problem with that statement is that a lot of people who wield it fail to see the advantages that come with these extra shenanigans. Let's just take pure concealment, so I'm not pushing weird arguments: in the age of AI, each time we are able to make an attacking AI misaligned, we are essentially buying time. An ongoing attack is never a one-shot event; it's an ongoing process where the attacker has to understand where it is located and what it can do. Since each element will be a resource, do not let them have it in the first place.
It's a bit of an elitist view of security that romanticizes concepts without thinking about what they can actually be used for. My personal bad experience with that was a manager who told me that having a different subdomain for the admin panel was concealment and not a security practice.
I mean, it's very easy to see how this kind of argument actually prevents you from doing something that can help, just on the basis of philosophical purity, which often just misses the point. Security is not one mechanism that will solve all your problems; it's more a lot of layers that together form a barrier. Heck, in fact I have to layer at least 4 mechanisms just on the HTTP interface to feel safe.
We sit too much on TLS thinking "that's it, security job is done" - then we get some crazy stuff like the French ANTS getting pwned with some IDOR, as IF using some hash or something - ANYTHING, please, f*ing hell - would not have helped.
> All modes of cyber security depend on some obscurity (e.g. password)
That's not what the expression means.
"Security through obscurity" has a very specific meaning — that your system's security depends on your adversary not understanding how it works. E.g. understanding RSA is a few wikipedia articles away, and that doesn't compromise its security, so RSA isn't security through obscurity.
Lucketone likely knows this and was pointing out that "obscurity" is a misleading word to use when talking about systems which all rely on obscurity, in the plain English sense of the word.
We're in a technical forum, discussing a term of art that refers to a very specific bad practice.
Lucketone's argument essentially says that the bad practice itself isn't actually a bad practice, by equivocating between the term of art and the plain-language definition.
The problem is that the term of art is confusing to technical people. See TFA. Technical people make logical leaps from "avoid security through obscurity [in the specific context of security systems which depend on obscurity and for which there are better alternatives than obscurity]" to "you should never obfuscate JavaScript" because the word is imprecise.
No, "Security through obscurity" is a valid and useful layer. A lot of weight hangs on your word “depends” though, in which case if it is the only layer then you will eventually have, uh, problems.
I’ve used it for a long long time. Like in 1999 I’d have a knock on certain ports in a certain order to unlock the ssh port.
And lots of weird stuff to stop forum spam. Which could work for weeks or months or even a year.
Port knocking isn't security through obscurity. Knowing that you have a port-knocking system in place doesn't tell me what specific sequence of knocks will open up the service I want to target. Even just a two-knock sequence gives you a key with 32 bits of entropy, which makes it trivial to block attempts at bruteforcing the key.
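For anyone who wants the arithmetic, a quick sketch (assuming each knock can land on any of the 65,536 TCP ports):

    import math

    PORTS = 65536  # each knock can land on any TCP port: 16 bits per knock

    for knocks in (1, 2, 3, 4):
        sequences = PORTS ** knocks
        bits = math.log2(sequences)
        print(f"{knocks} knock(s): {sequences} sequences, {bits:.0f} bits of entropy")

    # 2 knocks -> 2**32 sequences (32 bits), as claimed above. And unlike an
    # offline hash crack, every guess here costs a live network round-trip,
    # so even mild rate limiting makes brute force hopeless.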
Yeah, absolutely. That was precisely my point — requiring a secret (be it a password or the private part of an asymmetric key) isn't security through obscurity, and finding the sequence of knocks is equivalent to finding a password of similar complexity.
In cryptosystems there is a difference between things that can be changed and things that can't. E.g. passwords/keys are a secret that can be easily changed; algorithms, not so much.
"Security through obscurity" refers to the practice of using an hard to change "thing" as a secret, which is indeed bad practice
Security through obscurity in cryptosystems would mean defining your own crypto algorithm (or using a secretly-defined one, secret in the sense that it is unknown to the adversaries) to protect your system.
It is NOT bad in itself. It IS bad if you rely only on that. Even if you used a "secret" algorithm, you MUST protect the keys as with a public algorithm. Also, being secret means you cannot benefit from the cryptanalysis of the community, which is in practice very important. BUT... if you have a lot of cryptanalysis expertise at your disposal, then using a secret algorithm can be very effective.
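The "keys are the only secret" discipline is easy to see in code. A minimal sketch using Python's widely used cryptography package and its Fernet recipe (my choice of library for illustration): the algorithm is entirely public, and swapping the secret is one line.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Kerckhoffs's principle in miniature: the Fernet construction
    # (AES-CBC plus HMAC under the hood) is completely public. Knowing
    # the algorithm buys an attacker nothing; only the key matters.
    key = Fernet.generate_key()        # the secret: cheap to create...
    token = Fernet(key).encrypt(b"the bike is behind the blue door")

    new_key = Fernet.generate_key()    # ...and cheap to replace if leaked
    print(Fernet(key).decrypt(token))  # right secret: plaintext comes back
    # Fernet(new_key).decrypt(token)   # wrong secret: raises InvalidToken

If a key leaks, you rotate it in seconds; if your secret algorithm leaks, you're rewriting software.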
I don't know a lot about the subject, but the little I know tells me this is not the way to look at this.
Your password (plain text) is secret because only you are supposed to have it. In the digital realm, sharing the contents of the password (plain text) is akin to making a copy of it — undesirable.
Now, the algorithm that hashes the plain text for comparison with the stored hash — that can be known by anyone, and typically is.
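That split fits in a few lines of standard-library Python; a minimal sketch (the iteration count is just an illustrative work factor):

    import hashlib
    import os
    from hmac import compare_digest

    # The algorithm (PBKDF2-HMAC-SHA256) and even the salt can be public;
    # the only secret is the password itself.
    ITERATIONS = 600_000  # illustrative work factor

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("hunter2")
    print(verify("hunter2", salt, digest))  # True
    print(verify("hunter3", salt, digest))  # False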
It does seem to be a word game, because "it's not stopping any bullets" either isn't honest (it does stop bullets from hitting you when the enemy doesn't know where to shoot) or it's limited, just like obscurity is ("it may stop a few bullets, but it won't stop all, and there will be other weapons it can't stop either"). I think public-key exchange is considered security, but it still requires you to keep your private keys obscured.
Perhaps a better word would be resistance (to intrusion), which is a dimension orthogonal to visibility.
All security is security through obscurity. When it gets obscure enough we call it “public key cryptography”. Guess the 2048-bit prime number I'm thinking of and win a fabulous prize! (access to all of my data)
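For scale, a back-of-the-envelope count of that prize pool via the prime number theorem, π(n) ≈ n/ln n (strictly speaking, RSA-2048 hides two ~1024-bit primes rather than one 2048-bit prime, but the arithmetic is equally hopeless either way):

    import math

    LN2 = math.log(2)

    def log2_prime_count(bits: int) -> float:
        # Prime number theorem: pi(n) ~ n / ln(n). Primes with exactly
        # `bits` bits lie in [2**(bits-1), 2**bits); work in log space,
        # since these numbers dwarf anything a float can hold directly.
        log2_hi = bits - math.log2(bits * LN2)
        log2_lo = (bits - 1) - math.log2((bits - 1) * LN2)
        # log2(2**a - 2**b) = b + log2(2**(a - b) - 1)
        return log2_lo + math.log2(2 ** (log2_hi - log2_lo) - 1)

    # RSA-2048 actually keeps two ~1024-bit primes secret, but taking the
    # comment at its word and counting 2048-bit primes:
    print(f"about 2**{log2_prime_count(2048):.0f} candidate primes")  # ~2**2037
    print(f"about 2**{log2_prime_count(1024):.0f} at 1024 bits")      # ~2**1014

Roughly 2^2037 equally plausible guesses. Good luck with the prize.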