I wondered this too as an amateur. But how about this:
Normally the emergence of a contradiction in the course of a proof serves as a red flag that something's wrong. When proving by contradiction, you're operating without the safety of these warning signs as you progress through the proof. If you reach a contradiction, you have to question whether or not you got there suspiciously easily.
If you're aiming for something more specific than mere falsehood -- "not X", say, as in the contrapositive approach -- you might be less likely to get there incorrectly. If you reach a contradiction along the way, you don't go on to say "and I'm done, since a contradiction implies everything, including not-P" (even though, technically speaking, this would be perfectly valid); instead you question where you went wrong in the proof, because getting there was too easy.
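The "contradiction implies everything" point can be made concrete in a proof assistant. Here's a minimal Lean 4 sketch (the proposition names P and Q are just illustrative): once you hold both P and not-P, you can conclude any Q whatsoever, which is exactly why reaching a contradiction tells you so little about where things went wrong.

```lean
-- Ex falso quodlibet: from a contradiction (P and ¬P together),
-- any proposition Q follows -- including "not-P", which is why
-- "I hit a contradiction, therefore not-P" is technically valid
-- but carries no information about *which* step was the culprit.
example (P Q : Prop) (h : P) (hn : ¬P) : Q :=
  absurd h hn
```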
But yes, this certainly feels like more of a fuzzy psychological argument, doesn't it? I don't know how (or whether) you could formalise a notion of "how susceptible is this proof tactic to accidents".
If we were discussing a machine-checked proof, then this would basically not be a problem. The correctness of an accepted proof would have little to nothing to do with its length.
Unfortunately, any proof about NP would be very gnarly and likely a mammoth task to formalize into a machine-checkable form, so it probably won't be done for a long time, if ever -- the proof-checkers would be other human mathematicians. And humans are empirically unreliable: invalid proofs have been accepted for years, decades, and even millennia (Bertrand Russell, IIRC, found a number of gaps and hidden assumptions in Euclid).
If you were looking through a program someone had written, wouldn't you assume a certain bug rate per KLOC? And wouldn't you assume a certain rate, per KLOC, of bugs you won't catch just by reading?
Now imagine you are looking at thousands of lines of the most fragile program ever written...
From a philosophical standpoint, I am reminded of Quine's famous attack on Popperian falsificationism: when we observe that Uranus is not on the orbit Newton would predict, we could reject Newton's laws of motion, or we could postulate some additional unobserved physical fact, like the existence of an eighth planet, Neptune. Our observation has only forced us to reject the consolidated theory of Newton-plus-seven-planets; it doesn't tell us which of the two to spare. Similarly, when our potential proof finally terminates in a contradiction, all it tells us is that somewhere we went wrong; it doesn't tell us which axiom or theorem we ought to throw away as false.
I believe it is David Hilbert you are thinking of, not Bertrand Russell. He found that Euclid's traditional 5 axioms needed to be 20. See http://en.wikipedia.org/wiki/Hilbert%27s_axioms for more.
Speaking as an almost not-amateur (finishing my PhD this year, and working on getting a faculty job), you're both right.
Logically, there's nothing wrong with proof by contradiction. As long as the steps between "Assume P is true" and "therefore FALSE" are valid, you have a valid proof that "P is false".
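That shape -- assume P, derive FALSE, conclude not-P -- is directly expressible in a proof assistant. A minimal Lean 4 sketch (the hypothesis name "steps" stands in for whatever valid chain of reasoning got you to the contradiction):

```lean
-- Proof by contradiction in its basic form: if assuming P lets you
-- derive False, then ¬P holds. (In Lean, ¬P is by definition
-- P → False, so the "proof" is just applying the derivation.)
example (P : Prop) (steps : P → False) : ¬P :=
  fun hP => steps hP
```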
The issue with proofs by contradiction is psychological, but it's important not to ignore such things when you are doing math. Proofs are written for people to read, and people make mistakes in reading, writing, and constructing proofs. The longer and more complex a proof is, the more likely it is to contain mistakes. I can't speak for all fields, but in my field (statistics) proofs by contradiction are acceptable but out of style, and constructive proofs are generally considered more informative.