And when we moved away from C for a lot of really important things, memory issues and buffer overruns were simply replaced by other trivial mistakes. In the late 90s, buffer overruns were the joke bug/security issue. In the late 00s, SQL injection (and similar injection attacks), XSS, and other parser-confusion tricks became the joke bug/security issue. Using Python or Ruby didn't magically fix these, and there are still fairly regular problems with the libraries that enforce input validation and sanity checking. Heck, those libraries still get big holes - there were a few Active Record issues not that long ago where the tool meant to sanitize data actually opened a hole! (Not to pick on any tool/framework - that one was just well publicized.)
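To make the injection point concrete, here's a minimal Python sketch (not taken from any of the frameworks mentioned, just an illustration using the stdlib sqlite3 module) of why "the library will sanitize it" is a risky thing to assume. Building SQL by string interpolation leaves the escaping to you; parameterized queries pass the value out-of-band so the driver handles it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

# Attacker-controlled input: the quote breaks out of the string literal.
malicious = "nobody' OR '1'='1"

# Unsafe: string interpolation turns the WHERE clause into
#   name = 'nobody' OR '1'='1'  -- which matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: the ? placeholder sends the value as data, not SQL,
# so the literal string "nobody' OR '1'='1" matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # all rows leak
print(safe)    # no rows match
```

The trap the comment describes is exactly the gap between these two lines: the moment you trust a wrapper to do the first form safely for you, an edge case it doesn't handle becomes your hole.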
I think the main point speaks more to the old saw "If you make something idiot proof, someone will just make a better idiot". Tools that automatically "fix old problems" are generally complicated and imperfect, and it becomes easy to accidentally trust the system to do the right thing on an edge case it doesn't handle. Automating as many cases as possible is good, but having a full system for in-depth knowledge preservation and issue capture is even better. The two things complement each other well.
Going back to the OP's original example/analogy - planes these days are full of automated systems, higher-reliability parts, better interfaces, and all sorts of other improvements and failsafes, so some errors from the past just can't happen. That doesn't mean crews have stopped running checklists and other manual checks to catch the new issues, nor have they abandoned the old basic checks, even for failures that "can't happen" anymore.
tl;dr - better tools are good, but they don't fix everything
Gerald Weinberg put it this way: when you solve the most important problem, you've promoted the second most important problem into first place.
That doesn't change that problem #1 was worthy to solve. The middle road is that we should neither rest on our laurels nor give up because of the impossibility of the perfect world.