upvoted for "emacs still takes ages to boot" -- hit me right where it counts.
I wonder though, is reckless advancement better than taking time to do it right? A lot of people are finding success with the fail-and-iterate model -- isn't that essentially what we've identified as a problem now?
Incremental improvements are still improvements, but one runs the risk of settling at a local maximum. When a significant shift in the landscape arrives (say, networking, SMP, or GPGPU -- changes on a scale we haven't seen since the era when companies wrote their own minimal operating systems for each new machine), we build on what we already have rather than breaking it down and factoring the new capabilities into our way of thinking. We know it's a good idea to do so, but such ground-up rethinking tends to be limited to academic or research-oriented projects rather than widespread commercial pursuits.
There's a good reason for this: backwards compatibility -- in terms of concepts, education, maintaining a skilled workforce, and retaining the value of prior investments in hardware and software.