I used to support a client facing app at a bank and the appsec pentesters were a joke:
* Username and Password fields must not autocomplete
* Username and Password fields must not allow text to be pasted into the field
* Password must be at least 8 characters with lower case, upper case, numbers, and special characters (they didn't care it had a maximum length of 8 characters)
I straight up told our project management it was actively hurting our security, and was told that the point was to fulfill a regulatory requirement to complete and resolve all issues from an independent "pentest", not to improve security.
I am currently arguing with the bargain-basement pentesters one of our clients hired. They are claiming the system we built is vulnerable because, and I quote, “any credentials sent over HTTPS are transmitted in plain text until they leave the user’s local network”. Not sure how exactly they think HTTPS works, but five minutes on Wikipedia could debunk that one.
They also flagged up that users can access JavaScript and CSS files. Not the original source files mind you, nor is directory indexing enabled or anything like that. They pointed to our compiled and minified app.js and app.css, and suggested we block access to these files as the source code to the app is “sensitive information”.
Having to tell a client another company they’ve hired are absolute clowns, without making it seem like we’re trying to save our own skin, is certainly interesting.
"Look, I'm going to be honest with you: your pentesters are morons. They're grossly incompetent and should be embarrassed. I can give you a list of qualified alternatives you might want to choose from, and not just to test the work I've done for you, but for all your other projects too. Seriously, their advice is just awful and you really need to switch."
This isn't the time to tread lightly, but to go scorched earth. This isn't an "oh, we disagree on the finer points!" debate between peers kind of situation, but a flat-out "these knuckleheads are putting you at risk and you need to know it". You want to get the point across that you're not messing around or leaving room for doubt.
Source: have had these conversations several times over the years. I normally pride myself on tact, but in my experience tact is the exact wrong approach here, as it gives the client the impression that there's wiggle room for doubt.
The key here is to make this a do-or-die conversation. Tell the customer the truth, and then tell them you’re not going to work for them any more if they keep the other morons on the payroll — you’re not going to risk your reputation and your business on being associated with that other company.
“I’m sorry if this means we can’t do business any more, but this situation has gotten so severe, that I just have to tell you the unvarnished truth, and ….”
>any credentials sent over HTTPS are transmitted in plain text
Hummmm. So a couple of years back, I was working on some internal tools that passed sensitive information around and I found some interesting info.
Some bloggers INCORRECTLY claimed that HTTPS didn't secure URL parameters. Correct fact: parameters passed in the URL like ?item=bla are encrypted; only the hostname is visible on the wire, not the path or query string.
Also, some cloud providers' load balancers (e.g. AWS) allow you to offload HTTPS encryption/decryption - so there REALLY IS plain text traffic on the final leg of the journey (e.g. from the LB to the server)
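That offload pattern is easy to see in a terminating reverse proxy. A minimal sketch, assuming nginx in front of an app server (the hostname, paths, and upstream address are made up for illustration):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/tls/app.crt;
    ssl_certificate_key /etc/nginx/tls/app.key;

    location / {
        # TLS terminates at this proxy; the hop below is plain HTTP.
        proxy_pass http://10.0.1.20:8080;
        proxy_set_header Host $host;
    }
}
```

Whether that unencrypted last hop matters depends entirely on who can observe that network segment, which is the nuance the "plain text on the local network" claim gets backwards.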
In the end, the biggest thing I learned is that HTTPS is hard and it sucks.
The current default for the Referer header is to send the complete referrer for same-origin requests, to send the origin for cross-origin requests, and to send nothing if going from HTTPS to HTTP.
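That default behavior corresponds to the `strict-origin-when-cross-origin` policy. If you want to override it, it can be set explicitly as a response header (the header name and values below are from the Referrer-Policy specification):

```http
Referrer-Policy: strict-origin-when-cross-origin
```

Or `Referrer-Policy: no-referrer` to suppress the header entirely, which is the safe choice if URLs on your site can contain anything sensitive.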
> Also, some cloud providers' load balancers (AWS) allow you to offload HTTPS encryption/decryption - so there REALLY IS plain text stuff in the final leg of the journey
At first I thought this must have been what they meant; perhaps there was some configuration thing we got wrong.
So we asked for clarification and nope, the example given was that someone logging in from an office could have their credentials sniffed freely by anyone else on the office LAN.
I had someone complain that they could ping the public address of our load balancer.
I sent the client back a list of government and military websites that respond to ping. As an extra bonus, it turned out the pentesters' own website responded to ping.
Some hired "pentesters" found in our ASP.NET application that "Connection to the prod database is established before the user credentials have been validated." They even insisted that this comes from some ISO security guideline.
Sheesh, this one line in their report caused around 3 hours of meetings with around 10-20 people in them... and there were a lot of lines like this.
This is the DB that contains the usernames and (hashed) passwords right? What do they expect? That you have a separate DB for authentication from everything else? What does that achieve? If you DoS the auth DB, you still DoS the application in this scenario.
They tried to sell us an external/internal auth service, similar to Keycloak, with their support. What these pentesters want to achieve is not improved security, but to sell their own services as DevOps and developers. This was not what we expected from a pentest.
In the biz. What you need to do is address each issue with dispassionate detail in the response. Make no value judgements in the individual responses. Feel free to use words like “incorrect”, “false”, and my personal favorite, “logical inconsistency”. Quote specs, RFCs, platform dynamics, everything. Use diagrams, flowcharts, whatever it takes. But again, dispassionate, detached, and nonjudgmental. Then...
In the very last paragraph, as a conclusion to YOUR exercise, explain how the utter lack of competence in the subject matter displayed by the consultant has resulted in blah, blah, dollars, time, effort, all down the drain. Emphasize the harm to the organization and how it affects the trust required between different groups.
I guarantee it will get you promoted or fired. Which one depends on the organization and I expect you already know what will happen.
For the HTTPS thing, they're suggesting client-side encryption, which, to me, seems to combine no real benefit with an open window for introducing vulnerabilities if we get anything wrong.
Interestingly, I checked a few big sites, and while Google doesn’t, Facebook and Amazon both use client-side encryption. Is it just to provide some extra protection for pwned users who have trusted bad certs? I’m no security expert, and I’m struggling to think of any real benefit.
If you're stuck following their recommendations, you could try to sniff Accept headers or user agent or something to "block access" to JS/CSS but still allow the browser to load them in your page. Might risk breaking the app though.
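One way to sketch that header-sniffing idea: modern browsers attach Fetch Metadata headers, so a subresource load and a direct navigation look different. This is a hypothetical heuristic, not a real security control; the header names are real, but any client can spoof them:

```python
# Heuristic sketch: allow app.js/app.css when loaded as a page
# subresource, block when fetched directly (address bar, curl).

def should_block_asset(path: str, headers: dict) -> bool:
    """Return True if a request for a static asset should be blocked."""
    if not (path.endswith(".js") or path.endswith(".css")):
        return False  # only gate static assets
    # Browsers send Sec-Fetch-Dest: "script" / "style" for subresource
    # loads, and "document" for a direct navigation.
    dest = headers.get("Sec-Fetch-Dest")
    if dest is not None:
        return dest == "document"
    # Fallback: a direct navigation typically advertises text/html first.
    accept = headers.get("Accept", "")
    return accept.startswith("text/html")

# A <script src="app.js"> load is allowed...
assert not should_block_asset("/app.js", {"Sec-Fetch-Dest": "script"})
# ...while typing the URL into the address bar is blocked.
assert should_block_asset("/app.js", {"Sec-Fetch-Dest": "document"})
```

It satisfies the checkbox while the browser still loads the files normally, but as noted above, older browsers or proxies that strip headers could break the app.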
In these cases, it makes sense to point people to NIST Special Publication 800-63B (Digital Identity Guidelines) https://pages.nist.gov/800-63-3/sp800-63b.html — their guidelines are pretty good and eliminate much of the braindead nonsense that is considered "accepted practice in the industry".
> Offer the option to display text during entry, as masked text entry is error-prone.
And under 10.2.1:
> Support copy and paste functionality in fields for entering memorized secrets, including passphrases.
(... snip ...)
> Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization.
> Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets.
> Do not require that memorized secrets be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise. (See Section 5.1.1 for additional information).
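The quoted guidance boils down to remarkably little code. A minimal sketch (800-63B also recommends checking candidates against breached-password lists, which is omitted here; the 8/64 bounds come from the guideline's minimums):

```python
# NIST SP 800-63B-style acceptance for memorized secrets:
# a minimum length, support for long passphrases with any
# characters (including spaces), and no composition rules.

def acceptable_memorized_secret(password: str) -> bool:
    # At least 8 characters; support at least 64 (a system may
    # of course choose to allow even more).
    return 8 <= len(password) <= 64

assert acceptable_memorized_secret("correct horse battery staple")
assert not acceptable_memorized_secret("Tr0ub4&")  # 7 chars: too short
assert acceptable_memorized_secret("alllowercasebutlong")  # no composition rules
```

Compare that with the bank's rules at the top of the thread, which fail nearly every clause of the guideline at once.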
Taken to the extreme is the US Government's TreasuryDirect website, where individuals can buy savings bonds. Instead of allowing you to type your password, they render a "virtual keyboard" that you have to use your mouse to click the keys one by one.
I heard that systems like this were designed at a point in time (this may just be erroneous, and such a time never actually existed) when keyloggers were more common than RATs. Government websites would often have this requirement due to the higher probability of access from public computers (library, etc.), since that was also a time when fewer people had a computer of their own at home.
Hard to believe it requires a mouse. The government (everyone really, but especially the government) generally needs to follow basic ADA accessibility guidelines...
This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality. They don't care about real-world security, they offer insurance to cover the losses. But many insurers are no longer paying due to the volume of incidents and the lack of sound security.
The auditors are typically 10 to 15 years behind technical security expertise.
> This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality.
If I can play devil's advocate for a moment—isn't this just how insurance necessarily works? Your car insurance company isn't going to interview your teenage son; they don't care that he's a particularly mindful individual, who never speeds because he remembers the time a close friend died in a car crash. "The policy says 17-year-olds are high risk, pay us a zillion bucks a month."
Of course, guidelines that have literally zero value still have zero value. But they have to come up with something concrete...
Bad example. They aren't going to interview your son, but _most_ will take his high GPA and certificate of completion of Driver's Education class, and give you a discount for it, which is the next best thing without spending the time to interview him.
I think the difference is that a driver's education class (in my experience, at least) involves actual hands-on driving experience. An IT certificate or security audit is a lot more abstract.
The only way to check the "Has taken a driving class and has at least 20 hours behind the wheel" is to do just that. How many different ways could you check the "Secure password requirements are enforced by users" box? How many ways could you check the "physical security to encrypted systems" box?
Totally—but I think that's actually what leads to the dumbest requirements people are complaining about. "Don't allow autofill." "Don't allow pasting passwords." "All passwords must contain at least five special characters and your first born son." Those are boxes that can only be checked one way.
I'm not quite sure where I'm going with this. Something about, maybe things are broken because they don't fit in the insurance company model, and someone needs to solve for that before anything gets better.
> The auditors are typically 10 to 15 years behind technical security expertise.
Probably not, but they are there to be paid by their customers. Does the customer have to mark a checkbox on a regulatory form? Give the customer some answer which is not blatantly false or useless, get the money, come back next year.
It's a problem, to put it mildly. There is humongous growth in this space and not enough skilled people to fill the gap. I'm lucky that my current employer is more discerning, but I frequently get reports from previous assessments that are just the uninterpreted results of automatic tooling :(