There is an easy way to improve your detection of redirects and parked pages: make two requests, one to the real URL and one to a URL that is intentionally broken (example.com/i-am-a-link and example.com/fklsdfasdifo, for example), then run a difference heuristic on the resulting content. This won't catch all of them, particularly if you use a really naive heuristic that can't deal with e.g. ads changing, but it's a heck of a lot quicker than comparing manually.
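A minimal sketch of that trick, assuming Python with the requests library and a crude difflib similarity threshold (the 0.9 cutoff and the gibberish-path scheme are illustrative choices, not part of the original suggestion):

    import difflib
    import uuid
    import requests

    def looks_soft_dead(url, timeout=10, threshold=0.9):
        """True if `url` probably serves the same page a gibberish URL gets."""
        # Build an intentionally broken sibling URL on the same host.
        gibberish = url.rstrip("/") + "/" + uuid.uuid4().hex

        real = requests.get(url, timeout=timeout, allow_redirects=True)
        fake = requests.get(gibberish, timeout=timeout, allow_redirects=True)

        # Near-identical bodies suggest a parked domain, a catch-all redirect,
        # or a "soft 404" page rather than real content.
        similarity = difflib.SequenceMatcher(None, real.text, fake.text).ratio()
        return similarity >= threshold

Stripping markup or ignoring known ad slots before comparing would cut down on the false negatives from rotating ads mentioned above.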
If you see a lot of them, go to Webmaster Tools; if you see them there too, it's not some kind of test but some other reason, most likely their shitty JS parsing, which treats anything containing a / as a relative URL...
I had a project/startup working on link rot for a while. It would not just tell you a link was 404ing, but also recognize when the content of the page had changed significantly and let you know. The fun part was automatically recommending a good replacement page from elsewhere on the site, from a nearby page, or from the Internet Archive.
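For the Internet Archive part, one obvious source of candidates is the Wayback Machine's public availability API; the sketch below reflects my understanding of that endpoint and is not how the actual project worked:

    import requests

    def wayback_candidate(dead_url, timestamp="20120101"):
        """Return the closest archived snapshot URL for `dead_url`, or None."""
        resp = requests.get(
            "https://archive.org/wayback/available",
            params={"url": dead_url, "timestamp": timestamp},
            timeout=10,
        )
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            return closest["url"]
        return None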
Based on a quick test, it seemed it would take a site owner about 5-10 minutes per link to find a good replacement once they knew it was broken. That's fine for a personal portfolio with 50 links, but for a site like Boing Boing, getting all the broken links working again looked like full-time work for a year.
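As a rough sanity check on that claim, using the midpoint of the 5-10 minute estimate and a ~2,000 hour work year (both my assumptions, not the project's figures):

    minutes_per_link = 7.5          # midpoint of the 5-10 minute estimate
    work_year_minutes = 2000 * 60   # ~2,000 hour work year
    print(work_year_minutes / minutes_per_link)  # ~16,000 links per person-year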
I'm curious whether you have many outbound links in your 'scalable content'. Do you spend much time maintaining them?
I just want to point out that this won't really work if the site redirects broken links to a search page. The naturally broken link will come back with all kinds of search results, while the deliberately broken one will come back with no results, so the two responses differ and the heuristic concludes the original link is still fine.
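One partial workaround, assuming the same Python/requests setup as above (keying on the final redirect path is my own guess at a fix, not something the thread proposes): compare where the two requests end up rather than just what they return.

    from urllib.parse import urlparse
    import requests

    def redirected_to_same_endpoint(real_url, gibberish_url, timeout=10):
        """True if both URLs are redirected to the same path (query ignored)."""
        real = requests.get(real_url, timeout=timeout, allow_redirects=True)
        fake = requests.get(gibberish_url, timeout=timeout, allow_redirects=True)

        # If the site bounces both the rotted link and the nonsense link to,
        # say, /search?q=..., the final paths match even though the bodies
        # (results vs. "no results") differ.
        redirected = bool(real.history or fake.history)
        return redirected and urlparse(real.url).path == urlparse(fake.url).path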