I'm glad to see this getting roasted in the comments, as it's a really good example of how companies put out self-serving pseudo-statistical nonsense in an effort to promote themselves.
There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of the interviewer. It makes no effort to show where these engineers are going, what the interview process they're completing looks like, what impact demographics had on this, etc.
I find this stuff repugnant. It perpetuates the myth that there's something really special about Silicon Valley engineers, while making only lazy and perfunctory efforts to examine any alternative explanations than "this is where the rockstar ninja coders work." Shameful.
"There's no effort to quantify what "technical" or "communication" skills are - these are left to the interpretation of the interviewer."
And yet, these scores are measuring something, and averaged across tens of thousands of technical interviews, you have enough statistical power to wash out the particularities of individual interviewers.
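The statistical intuition in this comment can be sketched with a toy simulation. This is a minimal illustration under an assumption the article does not establish: that each score decomposes into the candidate's true skill plus *independent* per-interviewer noise. The scale and noise model below are made up for the sketch.

```python
import random

random.seed(0)

# Toy model: score = candidate's true skill + idiosyncratic interviewer
# noise. Both the 1-4 scale and the Gaussian noise are assumptions.
def interview_score(true_skill, noise_sd=1.0):
    return true_skill + random.gauss(0, noise_sd)

true_skill = 3.2  # hypothetical "true" ability on a 1-4 scale

# A single interview is noisy...
single = interview_score(true_skill)

# ...but the mean of many interviews converges on the true skill,
# because independent noise cancels (standard error ~ sd / sqrt(n)).
n = 10_000
mean_score = sum(interview_score(true_skill) for _ in range(n)) / n

print(f"one interview: {single:.2f}")
print(f"mean of {n} scores: {mean_score:.2f}")
```

The caveat, which the reply below raises, is that averaging only removes *independent* noise; a systematic bias shared by all interviewers does not wash out, no matter how many interviews you aggregate.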
I'm sorry you find the results repugnant, but the results are what they are. And the article did have a large section on limitations of their analysis.
It's not the results I find repugnant, it's the assumption that the results have any real world validity. The _something_ they're measuring is as likely to be demographic biases as it is technical or communication skills.
> it's the assumption that the results have any real world validity
Of course they have real-world validity. They have real-world validity regarding how well people are likely to do in tech interviews.
You can feel free to say that the way the tech industry does interviews is bad, or biased, or has all sorts of problems. But vague measurements such as "technical" or "communication" skills are a pretty accurate reflection of how tech interviews are actually done in the real world.
All the moral outrage everyone is having about this seems to have nothing to do with the accuracy of the report, which is about measuring tech interview performance, and everything to do with the tech interview process in the industry in general.
Over the last 22 years I've interviewed hundreds of people, and hired and directly managed probably close to 100, in widely varying environments: a 25k-employee state university, startups of various shapes and sizes, and the hottest SV IPO of 2020.
Interview practices are the way they are to raise the hiring floor at FAANG companies, which have high internal complexity and high salaries, and therefore a very fat hiring funnel with a high risk of hires turning into dead weight in "rest and vest" mode once they get inside.
However, humans are smart and interviewers are lazy, so inevitably the people who optimize for this process start beating out more able engineers who don't have the time or inclination to jump through these hoops. In my experience, the proportion of really good software engineers is roughly equivalent across all companies with baseline-competent technical leadership. FAANG does have a lot of the outliers on the high end, but it also has a lot of folks who can't tie their own shoes without the world-class tooling, infra support, and technical design guidance those companies surround them with.
All I can say is that in my experience the average programmer at a company with a highly selective FAANG-style interview process is far sharper than the average programmer in the industry as a whole. Additionally, managers at more selective companies tend to be less parochial and less micro-managing.
The process isn't perfect, and it has some type I errors and a lot of type II errors, but it's a lot better than just throwing darts at a stack of resumes.
We're not in disagreement, note I explicitly did not say "the industry as a whole", and I added the qualifier "competent technical leadership". There's a lot packed into those three words, and without it you'll steadily bleed your best talent.
Micro-managing, non-technical leadership is the failure mode you're pointing out, and it's definitely the worst of all worlds, far worse than any failure mode at a FAANG. But on the other hand there is also parochial leadership that knows what it doesn't know and how to trust talent. Those environments can actually be fine for technical people. Granted, they won't necessarily get exposed to the exchange of ideas and mentorship available at FAANG, but that's not a deal breaker in the modern internet age, and autodidacticism has its place in furthering the state of the art by side-stepping social convergence to "best practices".
And on the flip side, I agree FAANG people are "sharper than average", but there are also headwinds to retaining the best talent. One is that you have to tolerate moving slowly, jumping through hoops, and generally dealing with a whole class of friction that many high-performing engineers consider bullshit. Some will suck it up to get the fat comp packages, but there is now an entire generation of under-35 engineers whose comp expectations were set by a decade-plus bull run in tech stocks, which I suspect is unlikely to repeat over the next decade. There's also the appeal of working on classes of large problems only available at the biggest tech companies, but the genuinely interesting work is far scarcer than the number of engineers. The majority are just dealing with incidental complexity and the requirements of scale itself, which can definitely occupy the mind but may leave an itch for more tangible impact.
Finally, I will say there's a middle ground between FAANG-style interviews and "throwing darts at a stack of resumes". If you are a small to mid-size company without the brand appeal and top-of-funnel recruiting volume of a FAANG, then you are absolutely shooting yourself in the foot by cargo-culting the FAANG approach. You know what the alternative is? Have qualified people do traditional interviews, going deep enough to get a gut feeling for their technical competence. Of course you'll get some Type I errors this way, so you then have to actually pay attention to what people are doing once they start working. If they can't ramp up and be productive in a reasonable amount of time, you have to let them go (or at least pivot them into a position where they don't do damage). Big companies can't do that, because there's enough chaos, lazy-manager inertia, and HR legal fear that bad hires who stick around become a material risk. In summary, the FAANG approach solves for specific circumstances that most companies don't have, and it leaves a lot of talent on the table, which is an arbitrage opportunity for companies willing to do the hard work of thinking about their recruiting strategy from first principles.
It doesn't necessarily say anything other than "where you work now has some predictive value for how well you'll be perceived by interviewers", in which case these aren't "top performers", they're "people with jobs", and it's just regurgitating the truism that it's easier to get a job when you have a job.
I'm not buying this as a moral outrage question, I'm wondering if this is adding anything meaningful for us to look at or if it's just a surface-level puff piece masquerading as an analysis.
Except that interviewing.io interviews are specifically designed to be anonymous. This is even stated on their website. The interviewers do not see a resume or job history. As a candidate who's done a couple of interviews through them, I can confirm the interviewers never asked anything about my background either. I don't recall even uploading a resume.
Yeah, but there's in-group jargon and technical approaches that FAANG and FAANG-adjacent engineers will pick up on. Just because you're anonymous doesn't mean you aren't unconsciously signaling your background to your interviewer.
Do you honestly believe this jargon is so unique to FAANG and "FAANG-adjacent" (that's a new one) SDEs that it can be used to pick them out, yet so secret it's not yet become industry-wide jargon?
Are you claiming they have no validity at all? Like if you built 2 teams: one team with candidates that all got 0% on the test and another team with candidates that all got 100%, you'd expect no difference at all in their real-world performance on a difficult problem?
If you're claiming something weaker than that, can you state it more precisely?
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Does this myth still have much traction? If anything, my general regard for engineers in the Bay Area has steadily declined in the last few years. There are so many really worthless folks who have only figured out how to look like they have a clue, but go any deeper and they flail. I know I'm painting with a broad brush, and that's not fair, but most of the great engineers I work with are at various other places around the country, not California.
Personally, as a (now relocated) Bay Area native, I agree. Overall, I think there's still a lot of prestige attached to large Silicon Valley firms, even if some of the gloss has justifiably started to fade.
This article certainly assumes that myth is still in place.
Hey, it's not our fault! Where else are posers gonna flock to to pose? I swear, even with all the remote work, more than 95%* of the people that worked here 5 years ago still do. We live here.
I realize I wasn't being very nice, and I apologize for that. I'm quite sure there are plenty of very smart people who live in the bay area and do great work. With such a high concentration of engineers it makes sense plenty will suck, too.
My experience is biased, of course. My company has offices all over the country and a couple years ago opened a new office in SF, and I'm 93.4% sure we don't exactly pay FAANG-competitive salaries there, which affects the quality of who we can recruit there. It's kinda like how we hire engineers in Hyderabad for 1/5 the US rate and then wonder why we more often than not get substandard performance.
I know a ton of engineers. Of them all, those working at FAANG are profoundly less skilled than the others. It's impossible to miss, it's so obvious. Maybe my social network is an outlier, but I really, really doubt it.
Well, I'm not a FAANG-er and I didn't downvote, but it seems weird that the ones at FAANG would be so obviously less skilled than the others. It would be believable that they were the same skill level, but otherwise it seems weird.
Unless of course the weighting of his groups is off somehow, which should have been noted.
Actually it doesn't seem like the statement was very informative.
I mean, it was a statement of fact; you can believe it or not. The engineers I know who went into FAANG (half a dozen different people) are literally the worst engineers I've worked with.
Author here. Yes, the skills are left to the interpretation of the interviewer, but most of our interviewers are senior engineers at FAANG. We've done quite a bit of work internally to make sure our interviewers are well calibrated, and we maintain a living calibration score for each one (calibration is based on how the interviewees they interview end up doing in real interviews).
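The author doesn't spell out how the calibration score is computed. Purely as an illustration, one simple version would measure how often an interviewer's pass/fail verdicts agree with candidates' subsequent real-interview outcomes. Everything below (the function name, data layout, and agreement metric) is a hypothetical sketch, not interviewing.io's actual method.

```python
# Hypothetical sketch of an interviewer calibration score: the fraction
# of the interviewer's verdicts that matched the candidate's eventual
# real-interview outcome. Not interviewing.io's actual formula.
def calibration_score(verdicts, outcomes):
    """verdicts: list of bools, True = interviewer predicted a pass.
    outcomes: list of bools, True = candidate passed a comparable real interview.
    Returns the agreement rate, or None if there's no data yet."""
    if not verdicts:
        return None
    hits = sum(v == o for v, o in zip(verdicts, outcomes))
    return hits / len(verdicts)

# An interviewer whose calls matched real outcomes 3 times out of 4:
score = calibration_score([True, True, False, True],
                          [True, False, False, True])
print(score)  # 0.75
```

A real "living" version would presumably update as new outcome data arrives and might weight recent interviews more heavily, but the thread gives no details on that.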
The interviews in question are a mix of algorithmic interviews and systems design interviews.
Also, if I ever use "rockstar" or "ninja" in my posts, I hope someone finds me on the street and punches me in the face. I'd deserve it.
FAANG interviewer here. I've conducted many hundreds of interviews for multiple FAANG companies. The totality of my training in how to interview is about 4 hours (when I combine the training of each company). I have zero confidence in the usefulness of calibrating my answers and expect that neither I nor anyone else who does this would reliably score the same person with the same score most of the time, outside of a small percentage of outlier candidates.
Just to argue against myself: I do think that if enough interviewers interview a candidate, the aggregate score will give you a pretty good sense of how that candidate is likely to do on other interviews. I have also found that candidates who do well on these interviews tend to make good employees, though that's based only on the fact that everybody I work with has passed one of these interviews and they've mostly been pretty good employees. I suspect a lot of people who fail these interviews would also make pretty good employees, but I'll never know.
I've been in interview rounds where if ONE interviewer doesn't like how you did on the stupid coding test, you're out. So what you are saying is pretty much BS.
I don't think I follow you. Your company has a hiring process where it rejects all candidates who fail any interviews. How does the existence of such a system invalidate anything I said?
>I suspect a lot of people who fail these interviews would also make pretty good employees, but I'll never know.
Why don't you take a leap of faith once in a while on someone who hasn't done well? Especially if you ever interview interns and have a bunch on that team, that's a near zero-cost gamble for a large corp.
Well, the answer to "why" is "they don't put me in charge of the hiring process."
Related, interns are a fantastic way to hire because you get WAY better information about them. Instead of an hour or two of riddles, you've got a three month work history which a couple of trusted employees have witnessed. That's WAY better signal than any whiteboard interview problem could possibly get you.
also, what you do with interns could be done with any developer you hire. you just hire them for a few months and pay them. if you like their work, keep them; otherwise let them go. no need for coding interviews.
Is there any statistical reason I should assume an interviewer at FAANG has some kind of special insight into what good technical and communication skills are? Were any attempts made to adjust for demographic biases? Is it possible, for instance, that Dropbox engineers are disproportionately taller white dudes with nice hair? I know that sounds a bit unserious, but all of those factors would increase the likelihood of a higher score.
I appreciate that you never used "rockstar" or "ninja". That dig was a bit unfair of me.
no, we didn't adjust for demographic biases because, look, we're an anonymous platform. we periodically survey our users to get demographic data, but it's not something we ask in the app because we've never been able to resolve "tell me your race & gender" with "hey we're anonymous, and you'll be judged on your performance and nothing else".
last thing i want is to perpetuate stereotype threat inadvertently. it's possible to do this right, i think, but we haven't gotten there yet.
The point about stereotype threat is valid, although it also means that we're unable to get any insight into a major axis of interview performance.
I think what we have here is an attempt to imply, consciously or not, a causal link between interview success and previous/current employment. But without drilling into the other factors underlying their success, we get a lot of noise and not enough signal. Couple that with the continued mythologizing of FAANG greatness, and you get an article that perpetuates two of the more toxic notions in tech: FAANG is the top of a pyramid and talent is concentrated in a handful of companies. Neither are true, and neither are probably your intent, but that's how this reads.
> "but most of our interviewers are senior engineers at FAANG"
Wait, so the result is that "interviewers from FAANG companies rate highly interviewees from FAANG (and FAANG-adjacent) companies"?
Or maybe more causally: "those who have already passed FAANG-style interviews are more likely to pass interviews conducted by FAANG people"?
I appreciate the mission here, but if the idea is to give people a fair shot even when they don't have a FAANG pedigree, building FAANG interview styles into the system seems counter to your stated goals. If anything, these results are concerning; you can interpret the findings in (at least) two ways:
- these companies hire or produce superior engineers, the results you got are indicative of a broader higher caliber of engineer in those companies.
- the interviewing exercise is optimizing for "people who can pass/have already passed FAANG-style interviews", which rolls in all of the myriad biases of FAANG hiring and perpetuates them.
> "It's a well known fact that SV titles hold next to no weight."
Except at FAANG; the rewards are so great that the competition is fierce. Whether they gained their levels through engineering ability or savvy politicking, you can be sure they are adept in at least one of the two.
While there may not be any reliable measure of communication skills across the industry, the data is based on scores given by a large number of people, which by definition means it's accurate.
Think about it carefully - if people rate someone as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and its normalization, but it's basically a tautology.
"Think about it carefully - if people rate someone as being good at communication, then there is no reason to quantify it any other way. There are some obvious flaws here, like the quantity of data and its normalization, but it's basically a tautology."
So if a lot of people rate you as a great leader, you are a great leader? Even if you lie to their faces and deliver terrible results? Objective reality doesn't matter?
So the greatest leader in the world is in North Korea?
Communication is a clearly measurable skill. Just because a lot of people have been sold a lie, that doesn't make it true
> It perpetuates the myth that there's something really special about Silicon Valley engineers
Why are you sure it's a myth? My prior would be to believe that engineers at the most exclusive companies with the highest hiring bars that pay 3-5 times more than average would be better programmers. The article is just one data point confirming what intuitively should be true.
If Silicon Valley engineers are no better than anywhere else then someone should notify the execs at FAANG, I'm sure they'd be interested to know they are dramatically overpaying for talent.
I don't understand what is so controversial about this. SV companies recruit the best talent from around the world, and it's where the best talent wants to work. Similarly, the best financial talent is in NYC and London, the best actors are in Hollywood, etc.
There are capable people who don't want to live where a 1,000 sq ft home costs $1M. Never mind all the other problems with the region. Selection bias doesn't prove anything about the people you aren't selecting for.
> It perpetuates the myth that there's something really special about Silicon Valley engineers
In my [disillusioned] experience, this holds true: Silicon Valley engineers are very good at building throwaway MVPs that they won't have to maintain for more than 3 years.
I've been very disillusioned by the quality of the software written by Silicon Valley companies, but in hindsight it makes sense: "Move fast and break things" development culture resonates with the "raise VC money every 18 months" business culture, which looks for an exit in 5 years tops. There is no incentive on the dev side or the business side to develop really good software.
While I agree that this particular exercise is riddled with problems, I simply cannot imagine Hacker News rolling over and accepting evidence-based answers to questions of this nature, regardless of where the data came from or what the methodology was.