The nuance here is that brain-damaged appsec pentesters reported this as a vulnerability for years, and so tons of websites followed that advice and dutifully disabled the functionality.
But autocomplete has advantages: it lets users easily specify long, random, per-site passwords without ever having to remember them. And when they can't use autocomplete, a pretty large percentage of users just give up and write the password down somewhere.
In the end, I find a lot of Chrome's decisions to implement spec-breaking behavior awful in the context of having a website that works forever (looking at you, SameSite). But this behavior rarely breaks functionality and on the whole makes the web a lot more secure.
I used to support a client facing app at a bank and the appsec pentesters were a joke:
* Username and Password fields must not autocomplete
* Username and Password fields must not allow text to be pasted in to the field
* Password must be at least 8 characters with lower case, upper case, numbers, and special characters (they didn't care it had a maximum length of 8 characters)
I straight up told our project management it was actively hurting our security, and was told that the point was to fulfill a regulatory requirement to complete and resolve all issues from an independent "pentest", not to improve security.
I am currently arguing with the bargain-basement pentesters one of our clients hired. They are claiming the system we built is vulnerable because, and I quote, “any credentials sent over HTTPS are transmitted in plain text until they leave the user’s local network”. Not sure how exactly they think HTTPS works, but five minutes on Wikipedia could debunk that one.
They also flagged up that users can access JavaScript and CSS files. Not the original source files mind you, nor is directory indexing enabled or anything like that. They pointed to our compiled and minified app.js and app.css, and suggested we block access to these files as the source code to the app is “sensitive information”.
Having to tell a client another company they’ve hired are absolute clowns, without making it seem like we’re trying to save our own skin, is certainly interesting.
"Look, I'm going to be honest with you: your pentesters are morons. They're grossly incompetent and should be embarrassed. I can give you a list of qualified alternatives you might want to choose from, and not just to test the work I've done for you, but for all your other projects too. Seriously, their advice is just awful and you really need to switch."
This isn't the time to tread lightly, but to go scorched earth. This isn't an "oh, we disagree on the finer points!" debate between peers kind of situation, but a flat-out "these knuckleheads are putting you at risk and you need to know it". You want to get the point across that you're not messing around or leaving room for doubt.
Source: have had these conversations several times over the years. I normally pride myself on tact, but in my experience tact is the exact wrong approach here as it gives the client the impression that there's a wiggle room of doubt.
The key here is to make this a do-or-die conversation. Tell the customer the truth, and then tell them you’re not going to work for them any more if they keep the other morons on the payroll — you’re not going to risk your reputation and your business on being associated with that other company.
“I’m sorry if this means we can’t do business any more, but this situation has gotten so severe that I just have to tell you the unvarnished truth, and ….”
>any credentials sent over HTTPS are transmitted in plain text
Hummmm. So a couple of years back, I was working on some internal tools that passed sensitive information around and I found some interesting info.
Some bloggers INCORRECTLY thought that HTTPS didn't secure the URL. Correct fact: query parameters passed in the URL, like ?item=bla, are encrypted.
Also, some cloud providers' load balancers (e.g. AWS) let you offload HTTPS encryption/decryption, so there REALLY IS plain-text traffic on the final leg of the journey (e.g. from the LB to the server).
In the end, the biggest thing I learned is that HTTPS is hard and it sucks.
The current default for the Referer header is to send the complete referrer for same-origin requests, to send the origin for cross-origin requests, and to send nothing if going from HTTPS to HTTP.
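To make that default concrete (it corresponds to the `strict-origin-when-cross-origin` referrer policy), here's a rough sketch of the behavior just described. Simplified: real browsers also handle credentials, fragments, and explicit policy overrides.

```python
from urllib.parse import urlsplit

def default_referrer(from_url, to_url):
    """Approximate what the Referer header contains under the default
    strict-origin-when-cross-origin policy (simplified sketch)."""
    src, dst = urlsplit(from_url), urlsplit(to_url)
    if src.scheme == "https" and dst.scheme == "http":
        return None  # HTTPS -> HTTP downgrade: send nothing
    src_origin = f"{src.scheme}://{src.netloc}"
    if src_origin == f"{dst.scheme}://{dst.netloc}":
        return from_url.split("#")[0]  # same-origin: full URL, minus fragment
    return src_origin + "/"  # cross-origin, no downgrade: origin only
```

So a link from `https://a.com/page?q=1` to `https://b.com/` leaks only `https://a.com/`, not the query string.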
> Also, some cloud providers' load balancers (AWS) let you offload HTTPS encryption/decryption, so there REALLY IS plain-text traffic on the final leg of the journey
At first I thought this must have been what they meant; perhaps there was some configuration thing we got wrong.
So we asked for clarification and nope, the example given was that someone logging in from an office could have their credentials sniffed freely by anyone else on the office LAN.
I had someone complain that they could ping the public address of our load balancer.
I sent the client back a list of government and military websites that respond to ping. As an extra bonus, it turned out the pentesters' own website responded to ping.
Some hired "pentesters" found in our ASP.NET application that "Connection to the prod database is established before the user credentials have been validated." They even insisted that this comes from some ISO security guidelines.
Sheesh, this one line in their report caused around 3 hours of meetings with around 10-20 people in them... and there were a lot of lines like this.
This is the DB that contains the usernames and (hashed) passwords right? What do they expect? That you have a separate DB for authentication from everything else? What does that achieve? If you DoS the auth DB, you still DoS the application in this scenario.
They tried to sell us an external/internal auth service, similar to Keycloak, with their support. What these pentesters want to achieve is not improved security, but to sell their services as DevOps and developers. This was not what we expected from pentesting.
In the biz. What you need to do is address each issue with dispassionate detail in the response. Make no value judgements in the individual responses. Feel free to use words like “incorrect”, “false”, and my personal favorite, “logical inconsistency”. Quote specs, RFCs, platform dynamics, everything. Use diagrams, flowcharts, whatever it takes. But again, dispassionate, detached, and nonjudgmental. Then...
In the very last paragraph, as a conclusion to YOUR exercise, explain how the utter lack of competence in the subject matter displayed by the consultant has resulted in blah, blah, dollars, time, effort, all down the drain. Emphasize the harm to the organization and how it affects the trust required between different groups.
I guarantee it will get you promoted or fired. Which one depends on the organization and I expect you already know what will happen.
For the HTTPS thing, they’re suggesting client-side encryption. Which, to me, offers no real benefit while opening a window to introduce vulnerabilities if we get anything wrong.
Interestingly, I checked a few big sites, and while Google doesn’t, Facebook and Amazon both use client-side encryption. Is it just to provide some extra protection for pwned users who have trusted bad certs? I’m no security expert, and I’m struggling to think of any real benefit.
If you're stuck following their recommendations, you could try to sniff Accept headers or user agent or something to "block access" to JS/CSS but still allow the browser to load them in your page. Might risk breaking the app though.
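If you did want to humor them, the header-sniffing idea might look something like this. A sketch, not a recommendation: `Sec-Fetch-Dest` is a real browser header, but everything here is client-controlled and trivially spoofed, and it may break non-browser clients.

```python
def allow_asset(headers):
    """Serve app.js/app.css only when the request looks like a browser
    subresource fetch rather than a direct navigation to the file.
    Security theater: headers are entirely client-controlled."""
    accept = headers.get("Accept", "")
    dest = headers.get("Sec-Fetch-Dest", "")
    # A direct navigation typically sends Accept: text/html and
    # Sec-Fetch-Dest: document; a <script>/<link> load does not.
    if "text/html" in accept or dest == "document":
        return False
    return True
```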
In these cases, it makes sense to point people to NIST Special Publication 800-63B (Digital Identity Guidelines) https://pages.nist.gov/800-63-3/sp800-63b.html — their guidelines are pretty good and eliminate much of the braindead nonsense that is considered "accepted practice in the industry".
> Offer the option to display text during entry, as masked text entry is error-prone.
And under 10.2.1:
> Support copy and paste functionality in fields for entering memorized secrets, including passphrases.
(... snip ...)
> Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization.
> Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets.
> Do not require that memorized secrets be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise. (See Section 5.1.1 for additional information.)
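A password check in the spirit of those guidelines ends up almost embarrassingly short. A sketch (the breach list below is a made-up stand-in; 800-63B says to screen candidates against known-compromised values):

```python
def check_password(pw):
    """NIST SP 800-63B-style check: length bounds plus a breach screen,
    with deliberately NO composition rules and NO forced rotation."""
    BREACHED = {"password", "12345678", "qwerty123"}  # illustrative stand-in
    problems = []
    if len(pw) < 8:
        problems.append("shorter than 8 characters")
    if len(pw) > 64:
        problems.append("longer than 64 characters")  # allow at least 64
    if pw.lower() in BREACHED:
        problems.append("found in breach corpus")
    # Note what is absent: no upper/lower/digit/special requirements,
    # no expiry date. That is the point of 800-63B.
    return problems
```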
Taken to the extreme is the US Government's TreasuryDirect website, where individuals can buy savings bonds. Instead of allowing you to type your password, they render a "virtual keyboard" that you have to use your mouse to click the keys one by one.
I heard that systems like this were designed at a point in time (this may just be erroneous, and such a time never actually existed) when keyloggers were more common than RATs, so government websites would often have this requirement due to the higher probability of access from public computers (library, etc.), since that was also a time when fewer people had their own computer at home.
Hard to believe it requires a mouse. The government (everyone really but especially the government) generally would need to follow basic ADA guidelines...
This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality. They don't care about real-world security, they offer insurance to cover the losses. But many insurers are no longer paying due to the volume of incidents and the lack of sound security.
The auditors are typically 10 to 15 years behind technical security expertise.
> This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality.
If I can play devil's advocate for a moment—isn't this just how insurance necessarily works? Your car insurance company isn't going to interview your teenage son; they don't care that he's a particularly mindful individual, who never speeds because he remembers the time a close friend died in a car crash. "The policy says 17-year-olds are high risk, pay us a zillion bucks a month."
Of course, guidelines that have literally zero value still have zero value. But they have to come up with something concrete...
Bad example. They aren't going to interview your son, but _most_ will take his high GPA and certificate of completion of Driver's Education class, and give you a discount for it, which is the next best thing without spending the time to interview him.
I think the difference is that a driver's education class (in my experience, at least) involves actual hands-on driving experience. An IT certificate or security audit is a lot more abstract.
The only way to check the "Has taken a driving class and has at least 20 hours behind the wheel" is to do just that. How many different ways could you check the "Secure password requirements are enforced by users" box? How many ways could you check the "physical security to encrypted systems" box?
Totally—but I think that's actually what leads to the dumbest requirements people are complaining about. "Don't allow autofill." "Don't allow pasting passwords." "All passwords must contain at least five special characters and your first born son." Those are boxes that can only be checked one way.
I'm not quite sure where I'm going with this. Something about, maybe things are broken because they don't fit in the insurance company model, and someone needs to solve for that before anything gets better.
> The auditors are typically 10 to 15 years behind technical security expertise.
Probably not, but they are there to be paid by their customers. Does the customer have to mark a checkbox on a regulatory form? Give the customer some answer which is not blatantly false or useless, get the money, come back next year.
It's a problem, to put it mildly. There is humongous growth in this space and not enough skilled people to fill the gap. I'm lucky that my current employer is more discerning, but I frequently get reports from previous assessments that are just the results of uninterpreted automatic tooling :(
Oh man, enterprise "security" firms used by banks and other old behemoths are a cancer for users. If you want your website to actively abuse users (especially those with special needs, and pretty much anyone who doesn't fit into a made-up "average person" mold), get those people on board and listen to the dumb things they say.
I still can't believe that whole business managed to interpret 2FA for the whole EU as "you MUST use SMS for 2FA!".
The ones that puzzle me even more are the intranet websites that log you off after x minutes even though they use single sign-on, i.e. no password entered, so I'm not sure what security benefit that achieves. But they make you lose whatever you were doing in the process.
What continues to grimly amuse me is that many of these websites that also have a mobile app will basically keep you signed in forever on the mobile app. It's just the website, where most people would prefer to do their heavy lifting work on, that has the anti-usability nonsense that makes you install plugins like auto-refresh-every-10-minutes.
The problem with the security industry is that there's no way for non-experts to reliably assess an "I'm an expert, trust me!" claim from a practitioner.
I'm not really sure what the best fix is; there are many possible ones. I've seen total clowns pushing decades-old nonsense be taken seriously by competent businesses simply because they thought "hiring an expert" was enough, like they're a plumber or something.
It is no different than doctors or mechanics or lawyers. Reputation is your best guide. In security-land, there are some certifications that are fairly rigorous; some of those can serve as a distant second.
Doctors and lawyers are professions that are regulated by licensure, of which unauthorized practice comes with actual real and not made up legal consequences. Where is the similar licensure that tech security professionals are regulated by?
You may have missed the point being made. You find a good security professional the same way you find a good lawyer or doctor. Ask around for a reference for a good one. Then check their credentials (e.g., what certifications they have).
I believe there was an article on HN recently about a startup that used a "lawyer" that wasn't because they didn't check their credentials after getting a great reference. Just because there are consequences doesn't mean it doesn't happen.
I feel quite certain that I haven't, I just think the point is poorly made and I've spoken specifically to why I think that to be the case. You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out to be such a professional, quality of work or competency performing it notwithstanding.
You can absolutely suck as a pentester, but still legally hold yourself out to be one and advertise yourself as one to anyone who will hire you.
You can NOT do the same, holding yourself out as an attorney or a doctor, without very real risk of legal action if you are in fact not licensed to do either. There are bar associations and medical boards governing various aspects of their work and how it is conducted; they perform ethics and competency investigations on license holders, and can take away their license to continue working in such capacity if said investigations deem fit. No such governing or ethical board exists for infosec professionals.
That is a pretty important difference that shouldn't be ignored just to make a petty point about how easy it is to ask for a referral.
> Just because there are consequences doesn't mean it doesn't happen.
Which is only supplemental to all of this. My entire point is that it happens, and the prudent do the diligence to make sure it doesn't.
> You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out
Here is where you missed the point.
You are correct that we do not license, say, pen testers the same way we license doctors. You are incorrect in thinking that this matters.
The point is that in both cases, reputation is the best general-purpose measure of who you want. That's all.
My mentioning certs may have steered you wrong, and that was a bit of a distraction. My point there was that certs tell us something, usually not much, but are still better indicators than their self-advertising.
Let's dispense with the "right or wrong" aspect of this, because I don't think it's helpful towards moving the needle on this, and instead evaluate this as a matter of complementary perspectives.
Does reputation matter? Yes. This I will openly concede. Do I think credentials are meaningless? No.
Where we disagree is on "thinking that this matters". I still think it absolutely does, and that the analogy is a poor one. You clearly think it doesn't; that's fine, but I don't think it makes either of us more or less wrong. Perhaps that's all there is at play here: a difference of opinion in how an organization prosecutes the search for a qualified expert in security, medicine, or law. And I think it's disingenuous to frame such organizational decision making, and the risk tolerances involved in seeking professional services, in rigid and inflexible absolutes of "right way" or "wrong way", or of method A mattering while method B doesn't.
After 15 minutes, or 15 minutes of inactivity? The latter is defensible at least, in e.g. a public area where there is a risk of people leaving their desktops without locking them. I mean that's another policy issue that can be addressed (a policy that locks a system after x amount of inactivity), but as an app developer you can't know much about the system things are running on.
Sounds like what an extension could do: store the last hour of forms in localStorage. I especially hate clicking submit, getting an error, and facing an empty form again.
Ugh, a form that takes 15 minutes or more to fill out, without any feedback or other interaction, is itself a UX problem. It should at least be auto-saving.
More likely it will have a “submit” button that runs a script that blocks submission when you missed a field. And wipes out a couple of other fields (usually passwords) so that you have to re-enter those after hitting that “submit” button again.
But should all sites really be optimized for the user at a public library computer? At the expense of convenience for the large majority of users that are on a personal or work computer? Doesn’t make much sense to me.
Also the computer itself solves this problem for you in many cases, a guest profile typically deletes all browser session info when you log out.
I hate _all_ sites that do this and I actively avoid them. There are many very good reasons why I might not be able to complete a form without interruption. It's not for them to second-guess me.
And it's not just extremely annoying, it's also completely unnecessary. Just put a "trust this browser" checkbox on the sign-in page and adjust the session timeout accordingly.
I use Coface for work to check credit for potential customers. Instead of a password, they require a 6-digit pin. It can't be auto-filled or entered with the keyboard. There's an on-screen number pad that you have to click on and the numbers are scrambled - they show up in a different arrangement every time. Such a pain!
Yeah. One of my banks uses something like this. Here's how it works:
The client can only use numerical passwords. When loading the login page, their site also loads the number pad, an HTML pad containing the 10 digits. The digits are displayed as base64 images and in a random order, so it's impossible to determine which digit is which by parsing the HTML alone. In the HTML, each digit's image is associated with a random 3-letter string. This string is sent to the server instead of the plain digit.
With the number pad, the site also loads a "challenge", and this challenge is sent to the server when connecting. My guess is that this challenge is an encrypted string that indicates which digit corresponds to which 3-letter string.
I made a script that logs in to my bank account to get some information, and I was able to do it without using OCR on the number pad images, because the images never change, so their base64 strings are always the same. I was a bit disappointed when I realized it; I thought that the people who came up with such a twisted login form would have added random noise to the images, just for fun.
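The core of that trick is simple: since the image bytes never change, one manual labelling pass gives a permanent image-to-digit map, and each fresh login page only needs lookups. A sketch, with fake byte strings standing in for the real base64 image payloads:

```python
import hashlib

# Stand-ins for the real digit images scraped from the login page.
IMAGES = {d: f"<<image of digit {d}>>".encode() for d in "0123456789"}

# One manual labelling pass is enough, because the images never change.
KNOWN = {hashlib.sha256(img).hexdigest(): d for d, img in IMAGES.items()}

def token_for_digit(page_pad, digit):
    """page_pad: (random_token, image_bytes) pairs in scrambled order.
    Returns the token the server expects in place of the given digit."""
    for token, img in page_pad:
        if KNOWN.get(hashlib.sha256(img).hexdigest()) == digit:
            return token
    raise KeyError(digit)
```

Adding per-request random noise to the images would force an attacker back to OCR, which is presumably what the commenter expected.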
I think this is the manifestation of non-logical associations humans make.
When I was a kid, a teacher told me learning was supposed to be hard and unpleasant, and I believed her for a long time. Only when I started enjoying myself in spite of that did I see she was wrong, and I started doing well in school and (more importantly) pursuing my own interests.
There's a similar thing with security - people assume good security must be painful, so making it painful becomes a goal. Sometimes this is sincere, sometimes (TSA) intentional theater. But either way, the result is intentional hostility to the people who use the system.
I'd bet money they have a one-sentence answer for why it does each of those things ("order is scrambled to prevent shoulder-surfing"), but have done zero testing to determine whether those theories are correct.
I always associated these with key logging prevention. What drives me nuts however, is websites/apps that allow me to type my password but not paste it. Like they want to force that a keylogger can grab it?
Another favorite of mine is password composition rules, which do nothing but reduce security and are everywhere :(
> I still can't believe that whole business managed to interpret 2FA for the whole EU as "you MUST use SMS for 2FA!".
Weeeeeelll...
I'm familiar with two (2) common kinds of "2FA" implementations. TOTP and SMS.
Of those two, only SMS is actually a second factor, albeit not a particularly secure one. TOTP is fundamentally a password, and two passwords are no different than one password.
I see this view a lot. It's wrong. TOTP is fundamentally different to a password, as the stored "password" (by which I presume you mean the key) is never transmitted anywhere.
TOTP in fact has one property that makes it potentially* the most secure of all 2FA methods: it can be used airgapped. As the credential you type into the 2FA form is not the saved secret.
* I say "potentially" because the relative inconvenience + human factors conspire to make it less secure than e.g. U2F in most cases. But assuming hypothetical perfect conditions, there would be nothing more secure than TOTP for 2FA.
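For the curious, the whole of TOTP (RFC 6238) fits in a few lines of stdlib code, which makes the air-gap property concrete: only the short-lived derived code ever crosses the wire, never the shared secret.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC the current 30-second interval count with the
    shared secret; transmit only the truncated, short-lived result."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

(The RFC's test secret is the ASCII string "12345678901234567890", i.e. "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ" in base32; at t=59 seconds the 6-digit code is 287082, matching the published test vectors.)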
Digest auth can be air gapped but the time aspect of TOTP still makes digest comparatively less secure (plus digest isn't typically even done separately to the primary client device, nevermind airgapping, whereas TOTP is at least most commonly used via an entirely separate device).
> You’d need to type a nonce into the dongle, then type the result into your computer.
That would be a cool augmentation of digest auth, but afaik is hypothetical currently (at least as far as common use goes). I can use TOTP airgapped right now.
> in practice, the server has to have non-air-gappped access to a TOTP generator
This is a fair point, but requiring full server compromise is still a nice step up from being mitm-able.
> so it’s not really air gapped at all
That seems like a rather extreme conclusion to draw. Client-side only air gapping is still airgapping, the fact it doesn't extend to protection from server compromise doesn't completely invalidate the benefits.
I guess you can argue the definition of the word "password"; language is fluid, especially English.
I would say SRP is strictly a misnomer (though it's a useful conflation). Generally speaking password is a value provided for authentication (if it's no longer being "provided", as in SRP, it's something different... but I understand using a familiar word for that something different is helpful when communicating).
Either way, in saying TOTP was "just a password", the point you were trying to make was that TOTP is "no different than and therefore no better than a 2nd traditional password". The fact it's not transmitted makes it very different to, and better than, a traditional password. So whatever you want to define the definition as, the point stands.
> and no properties that passwords don't have
It has 1 property that passwords don't have: it is not transmitted!
There are authentication mechanisms that rely on passwords but work by not transmitting the password too. One example is kerberos.
TOTP is a password. The fact that it is a password doesn't matter though since it is something you have (and can't know) which augments the something you know. This satisfies the intent of MFA.
It kills me that most enterprise environments use Kerberos via Active Directory, LDAP, or NIS. So, your workstation probably has Kerberos tickets sitting on it, which would allow very light weight 2-way authentication and encryption of internal flows.
TLS client certificates and TLS-everywhere would be another good option, but it's particularly frustrating that the Kerberos TGTs are already on the client machines. The key management part is already solved in the Kerberos case.
Kerberos is even potentially resistant to quantum cracking. (Grover's quantum search algorithm effectively halves the key size of ideal symmetric ciphers, so you'd want 256-bit keys.) Forward secrecy is an issue, but there are proposals to incorporate DH key exchange in the pre-auth to give imperfect forward secrecy. A post-quantum key agreement protocol like RLWE would be fairly straightforward to incorporate, with standardization being the main hurdle.
I agree Kerberos is somewhat under-used, but man isn't it half a pain to set up integrations with...
Part of the problem is that it's "enterprise" tech, which means all sorts of "enterprise" middleware claims to support it with some half-assed concoction that worked on the presales demo environment once, back in 2001, and nobody else has touched since. And it's also old and pretty obscure, with documentation lost to the fog of time, and very few people who remember how it was supposed to work - a bit like MS DCOM...
> Aside from the fact that I never transmit the actual password.
You realize that, out of the many comments I've made in this tree, the one you responded to was the one that said
> Are you familiar with SRP?
There are more ways of compromising someone's information than capturing it in transit. If you give me your phone, I can read your TOTP seeds straight out of Google Authenticator.
Yeah, TOTP is a password. Hell, it's in the name. One property it has that differs from classic passwords is the authentication factor: for TOTP, it changes from something you know to something you have. However, lots of passwords are now randomly generated and are no longer "something you know" either.
The "Password" named in "Time-based One Time Password" is the temporary generated value you transmit. It's not what's stored on the TOTP device, so in the context of this discussion, that temp value isn't what the gp was referring to.
The issue was that it was ONLY SMS - they immediately deprecated private certificates, 2FA "calculators" and other 2FA schemes.
After the security backlash they now backpedaled and implemented 2FA with ONLY apps. Apps that ONLY work on iOS and Google Android. I had endless calls from family where they couldn't access their banks anymore because they had a Huawei phone or a dumb phone. Banks are citing "security" as explanation why they can't use smartcards, hardware tokens or even bring apps to desktop computers or phones without Google services.
The funny part is - ALL banks did this at once. Why? Because the security consultants had "must have app" and "must check Google Safety net" on their check lists.
What country are you talking about? In regards to the EU 2FA thingy, I'm starting to see a pattern: in countries that had established online banking standards with 2FA, nothing changed. But countries without went ballistic: SMS- or app-only 2FA on every login and on every transaction. Yeah, I can see that this is annoying.
While for me, with my German banks, I still access them using the FinTS protocol with banking software of my choosing. For transactions above 20€* I need a TAN from my chipTAN/Sm@rt-TAN device (which shows you the transaction details). Optionally, I could choose an app. SMS was phased out years ago (by my banks; others perhaps still have it).
(*and only 3 such transactions a day, I believe. You can deactivate that so that you get asked for a TAN every time.)
The benefit of apps and SMS over hardware tokens, TOTP, smartcards, etc. is having an out-of-band communications channel, not merely a second factor. This is crucial for dealing with malware that can change the transactions a user is entering on a banking site, which is literally impossible for them to notice from the browser alone. With apps/SMS, they can be informed of the transaction details as part of the verification process, on a secondary communications channel that hopefully is not affected by the malware.
The chipTAN/Sm@rt-TAN device shows you the transaction details before showing you the TAN. These devices receive their information visually, either via a blinking code or via a colored QR code, so they are air-gapped.
It's a minor inconvenience for someone who is organised or used to storing secrets securely, but a complete nightmare (including a security nightmare) for your average Joe.
Thanks EU, thanks governments for your precious regulations that keep us safe.
I wonder how many similar stories there are in fields I'm not an expert of.
The thing is - I read both EU and local regulations and they don't demand any certain approach to security. Nothing is stopping banks from providing a better experience except dire warnings and prescriptivism of security consultancies.
I talked with fintech founders and they mostly say "sure, we could give better user experience and then have a fight on our hands with auditors because we didn't fill out all the checkboxes from the reputable security consultancy that 'interprets' the requirements"
TOTP is no more a password than whatever one-time code you'd get by SMS. In fact, TOTP is arguably more secure, since it isn't nearly as vulnerable to hijacking as SMS is.
A password is information, something that can be freely duplicated.
The idea of "something you have" is that the thing can't be duplicated. As soon as it can, it's no longer "something you have". Any number of people might have it. A person who has it might not be you.
SMS hijacking, for example, converts your phone-based authentication to a password, where the password is your phone number. (Since an attacker who knows that number can pass the test.)
I would argue that since the TOTP secret is never in my head, it is not a password.
SMS hijacking doesn't "convert" anything, any more than someone with a telephoto lens "converts" an old-style hardware token to a password. (Yes, I know the P in OTP is password, and it's called that because it's entered by the user. It's not a password in terms of a factor you "know", because it's time-limited.)
These are also fluid ideas that are used to describe roughly different failure modes for different types of authentication:
Passwords are thought of as things the user can disclose.
Totp and other "second factors" are thought of as things that must be stolen, or if disclosed have a very short viability time.
Biometrics are things that can't be disclosed, but can be lost, and (when properly implemented) not stolen.
You're trying to argue that these categories of authentication factors have hard lines and definitions when they're fluid categories being used to think about failure modes of a method. Each specific authentication method has its own strengths and weaknesses.
Also, SMS hijacks require a lot more than simply "knowing" a phone number. While SIM cloning and SS7 attacks are known and very possible, they're still fairly complex. You can also social-engineer tech support at phone companies into activating your SIM for an account, but that is also significantly more difficult than simply "knowing" a phone number, and it's also a failure of the authentication the phone carrier is using.
> the press has helpfully published a photograph of the keys, so you can make your own, even if you didn’t win the eBay auction.
with this official statement from the government of New York:
> “If you’re selling it, it’s in your possession for an unlawful reason,” said City Councilmember Elizabeth Crowley, chairwoman of the Fire and Criminal Justice committee.
Saying "you're not supposed to have this" won't stop people from having it. These keys are regulated as if they are "something you have", but the facts are otherwise.
> Totp and other "second factors" are thought of as things that must be stolen, or if disclosed have a very short viability time.
TOTP gets set up in the first place when the website discloses your seed to you. It's not something that can't be disclosed. Seeds get disclosed all the time; workflows are built around it.
> Biometric are things that can't be disclosed
Huh?? Biometrics are things that it's impossible to avoid disclosing. If you're ever in a police station, they are free to sample your DNA. You shed it all over the place. If you ever handle something, you just disclosed your fingerprints. If there are any pictures of you out there, your face is public information.
> sms hijacks require a lot more than simply "knowing" a phone number.
I didn't claim otherwise. The intent of my sentence above is to say that a context which involves a working hijack attack converts an SMS challenge from a second factor into a password. If your attack is working, knowing the phone number is sufficient to authenticate as the victim.
Yes, it starts its life as a password. After that, it is never communicated ever again, and therefore, after the initial exchange, it's something you have.
It seems to me you are ascribing properties to "something you have" that aren't warranted. The "something you have" needs to prove you were party to the initial exchange, not necessarily that you were the only one present -- that's why we use two factors, and not only TOTP.
> The "something you have" needs to prove you were party to the initial exchange
This is not something that can be proven at all. Accordingly, proving it is not a goal. Anything that can be had can also be transferred. Your delegated agent's login attempt is just as valid as yours is.
They will be able to trick you into typing it on the wrong site (more likely, wrong terminal) if they’ve compromised your machine. They just need to wait for you to log in.
Similarly, they can grab the shared secret from the server.
It’s marginally better than a password manager (though some of those support TOTP now), since they can’t pull all your credentials by keylogging your master password.
(And yes, in theory you could remember the seed and have a custom TOTP client that lets you enter it in. But unless you actually do this, it's a theoretical argument only.)
No need to be sarcastic. He is absolutely right. The seed is all you need in the case of the common TOTP algorithm. There is no connection to the device.
In fact, in Google Authenticator you can even conveniently export all active TOTP entries to another Google Authenticator without any connection between the apps or anything else whatsoever.
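To make the "seed is all you need" point concrete, here is a minimal RFC 6238 sketch in Python (stdlib only, SHA-1 and 30-second steps as in the common case): any device holding the base32 seed and a roughly correct clock produces the same codes, with no binding to hardware whatsoever.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(seed_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: the code is derived entirely from the shared
    seed plus the current time -- nothing ties it to a device."""
    key = base64.b32decode(seed_b32.upper())
    counter = int(time.time() if t is None else t) // step
    # HOTP over the big-endian time counter (RFC 4226)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Two copies of this function with the same seed (e.g. two Authenticator installs after an export) stay in lockstep forever, which is exactly why the exported seed is as sensitive as a password.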
I don't even know if it was security consultants who ever recommended that. It's the same thing with disabling pasting into password fields. A lot of websites used to do that, many probably still do, but I have never seen a security team, no matter how braindead, recommend that nonsense. Rather, it's well-intentioned but stupid project managers following industry worst practices. You can't get in trouble for doing what everybody else is doing, no matter how terrible, I guess.
If you're on *nix, I've found that middle-click will usually work even if "CTRL-V" or right-click->paste is disabled. Something about the handling of Primary Selection vs. Clipboard in X11.
Ditto for credit card number entries. I use Dashlane to copy my CC info out of, and if that doesn't work, there is a good chance I'm not buying on your site. Maddening and pointless.
I agree this is probably product managers, but may also be engineers who have strongly held "security" opinions and nobody to check them.
This really used to be an issue when some JavaScript code constantly checked those inputs for new data, in hopes of finding something interesting, like personal info that shouldn't be there.
But I fully agree with the disable-paste stuff. Very few (web-related) things get as annoying as that.
> The nuance here is that brain-damaged appsec pentesters reported this as a vulnerability for years
as a low-risk privacy defect, yes, because things like bank account and routing numbers would be stored in autofill for certain banking sites that don't require authn/authz to initiate a transfer.
(I can think of a handful of platforms frequently used for common services like paying HOA fees which are currently vulnerable to this, meaning another user sharing the machine can simply hit ⬇ on the keyboard in form fields on a page that doesn't require authn/authz to initiate an external transfer in order to capture any stored banking details that were previously entered into the form.)
Source: I was one of those brain-damaged appsec pentesters.
Maybe it's just me but I can't trust Chrome with my passwords anyway. It seems like every update they wipe out the store. So I only use Chrome for GSuite (or whatever they call it now). And, of course, I have to use a pw I can remember.
My biggest security vuln is Google. And I've seen too many new account usernames out there like forgotlastpasspw to use an external manager.
>i find a lot of chrome's decision to implement spec-breaking behavior awful
I recall working with some folks who supported load balancers when the Chrome team decided some behavior seemed 'unnecessary', shipped an update, and ... it broke load balancing.
Not really a glaring disadvantage. If someone has physical access to your unlocked computer and wants to do bad stuff to you, you are going to have a very bad day.
Consider that Chrome automatically, by default, replicates all your passwords to all devices on which you are signed in.
Thankyouverymuch. I am gonna keep using my password book.
There is no sure way, as a private person who is not an expert in security, to secure your browser. But there are ways to limit the damage that can be done. Maybe just don't make it too convenient by having a database of all your passwords on all your devices?
Yes, but let's be fair, it's a galaxy better than writing it on a post-it or password booklet, and still way better than using a memorable passphrase which will get reused and then leaked.
Besides, you can encrypt the local storage with a master password (and if you accept online as a requirement, you could even add 2FA to that).
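For what it's worth, the "master password" protection mentioned above typically works by running the password through a slow key-derivation function and using the result as the encryption key for the local store. A minimal sketch of that derivation step, assuming PBKDF2-HMAC-SHA256 (the function name and parameters here are illustrative, not any specific browser's actual values):

```python
import hashlib
import os


def derive_store_key(master_password, salt, iterations=600_000):
    """Derive a 32-byte encryption key from a master password.

    The high iteration count deliberately slows down offline
    guessing against a stolen encrypted store; the random salt
    prevents precomputed (rainbow-table) attacks.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations
    )


# A fresh random salt is generated once and stored alongside the
# encrypted database (it does not need to be secret).
salt = os.urandom(16)
key = derive_store_key("correct horse battery staple", salt)
```

The derived key would then feed an authenticated cipher (e.g. AES-GCM) over the password database; the point is that the master password itself is never stored anywhere.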
A (well handled) physical password booklet is much more secure for the average home user, who is unlikely to ever be individually targeted by a third party attacker, let alone to the level of the attacker physically breaking into their home. My parents being victims of a zero-day vulnerability or installing a malicious application by mistake are much more realistic scenarios than their house being broken into and their password booklet being stolen by a thief who is meticulous and observant enough to take it and know how to make use of it.
Not only that, I would argue that a physical booklet is not only more secure but also safer. Nothing short of a house fire will destroy the booklet, and however much I like to rave about old-school ThinkPad durability, I don't think my locally stored encrypted database would survive that either.
A password booklet works well at home, but it's obviously much less secure if you wanted to sign in to a service while in public on your phone, for example. One of the major benefits of a password manager is that your passwords are present, encrypted, on all of the devices you need them on. Most people don't only need passwords at home, so the odds of theft or loss of the password book are much higher than your example makes it out to be. If we're talking about an average user, the solution of only signing into services at home isn't really an option.
You are correct that the access security of a booklet is almost certainly better than that of a password manager. The issue with the booklet is that humans do not like transcribing long strings between computer and paper so (at least in my experience) people who use the booklet method tend to eschew longer passwords, they tend to avoid creating new passwords when they can re-use an old one, and they don’t change the passwords very often (if at all). Also in the event that the booklet is ever lost or stolen (which is made significantly more likely by the fact that you must carry it around with you everywhere in this age of the pocket computer), you are suddenly in a very bad place.
>Yes, but let's be fair, it's a galaxy better than writing it on a post-it
the modern security hazard is not someone reading your post-it that is sitting on your desk, it is someone remotely getting access to some part of your computer or some service you own that can tell you what the password is.
The post-it note in our world is more secure than lots of things that have replaced it.
on edit: I see Mordisquitos said it better than I.
there are a lot of war zones in this world though. given that and the number of Third World countries with high levels of crime and poor public security, I suspect that a significant percentage of the world's technology-using population might have better digital security than physical security
>but let's be fair, it's a galaxy better than writing it on a post-it or password booklet
Is it? If someone is physically in your home you are in greater trouble anyways and even then they likely aren't going to be grabbing a notebook. Just keep it somewhere nearby but hidden (notebook in a drawer on the desk).
When you connect to a website over SSL, your sensitive data is transmitted in a reversible (encrypted, not hashed) form as well.
I believe most browsers will use the system keyring (which is usually encrypted based on your login password or a TPM) if present, or use a master password to encrypt them at rest.
Most websites are data sinks for anything that can be taken. No reason, IMHO, the login page shouldn't always send a hash over SSL (which is then hashed again server-side to verify it).
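A rough sketch of the scheme as described, with entirely hypothetical function names: the browser would transmit H(password) instead of the raw password, and the server would store and check a salted hash of that value. Note the usual objection to this approach: the transmitted client-side hash itself becomes a password-equivalent, so interception still yields a working credential.

```python
import hashlib
import hmac
import os


def client_side(password):
    # What the browser would send instead of the raw password.
    return hashlib.sha256(password.encode()).hexdigest()


def server_store(client_hash, salt=None):
    # The server never sees the raw password; it salts and
    # re-hashes what the client sent, and stores only that.
    salt = salt or os.urandom(16)
    stored = hashlib.sha256(salt + client_hash.encode()).hexdigest()
    return salt, stored


def server_verify(client_hash, salt, stored):
    candidate = hashlib.sha256(salt + client_hash.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored)
```

(A real deployment would also use a slow hash like bcrypt/argon2 server-side; fast SHA-256 here is just to keep the sketch stdlib-only.)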
I'm not sure what you mean by hash, but I think you're trying to describe mutual authentication, where the service also authenticates itself to the user. Look up things like PAKE, SRP, and TLS client certificates for more information.