
There is a difference though: when Firefox was launched, it was still possible to develop a browser from scratch (as Opera was doing back then), so there was lively competition. Nowadays, with WHATWG HTML's feature creep (the HTML "living standard" spec alone weighing in at 1250 pages and 13.3 MB as a PDF, plus tens of CSS specs scattered all over the place, and JavaScript a moving target), that's infeasible even on a nation-state budget.


(I worked at Opera a decade ago.)

Opera wasn't a browser from scratch, though: Presto was older than KHTML (and WebKit, etc.).

And really, starting from scratch ten years ago you'd just run into Presto's big failing: site compatibility! Websites rely on all kinds of asinine edge-case behaviour, and if you don't match the majority behaviour, users will leave your browser for your competitors (and site compatibility was, along with crash bugs, always one of the top two reasons for users to switch away from Opera).

In many ways, the vastly larger HTML and CSS specs are a massive boon for minority browsers: when I started at Opera, a large proportion of the Presto team were QA staff who in reality spent almost all their time reducing site-compatibility bugs and reverse-engineering other browsers. HTML5 and CSS 2.1 made that work largely go away: there was enough movement (including from the larger browsers) to converge on documented behaviour that reverse-engineering other browsers ceased to consume large amounts of resources on all browser teams.

What killed Presto was a variety of things, and the growth of the platform was only a small part of that.

And as mentioned in other sibling comments, all major browsers have rewritten major components on various occasions.


Major browser dev teams rewriting components over year-long periods of carefully planned integration points (with Mozilla even introducing a new programming language along the way) hardly tells us anything about the viability of developing a browser from scratch. Given the powers-that-be in so-called "web standards", by the time you've got anything to show, odds are it'll be obsolete.

What would be helpful is if the whole web stack could be organized into profiles (e.g. things working without JavaScript, without CSS "4" features, etc.), but WHATWG dismissed the idea of HTML/CSS versions or device profiles proposed by MS (hell, they can't even be bothered to version their "living standard" specs). And W3C could start to give formal semantics for CSS (which is kind of "implementing" and verifying the spec) rather than prose about dozens of layout models in an untyped, ad-hoc syntax. That is the role of standards bodies: not to lead us down a road of no return for the benefit of very, very few people.


I keep seeing this come up, that it's practically impossible to write a brand new browser. This got me thinking, what would it take to make a better browser?

A browser is something that 1 out of every 2 people on Earth [1] uses frequently. That's a lot of people! All developers in the world use a browser. Lots of them really believe in open software. Some are 10x developers. A certain percentage are literal geniuses. Exactly one is the smartest developer alive today. I get that it's hard, but is it "smart people mobilized at global scale" hard?

Starting from scratch today, developers would have better tools, modern languages, and the benefit of hindsight about what worked and what didn't in previous browsers. Wouldn't that make it somewhat easier?

Say the average person loads 10 websites per day, and a less optimized browser requires 100ms more to load each page. That's 415 million hours wasted per year. Say the average person makes $4 an hour, that's 1.6 billion dollars wasted!
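
To sanity-check that arithmetic (the ~4.1 billion user count below is my assumption, roughly half the world population per [1]):

    # back-of-the-envelope check of the figures above
    users = 4.1e9                              # assumed: ~half the world population
    extra_seconds_per_day = 10 * 0.1           # 10 pages/day x 100 ms each
    hours_per_year = users * extra_seconds_per_day * 365 / 3600
    print(f"{hours_per_year / 1e6:.0f} million hours wasted")     # ~416 million
    print(f"${hours_per_year * 4 / 1e9:.1f} billion at $4/hour")  # ~$1.7 billion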

If every browser user donated 50 cents, that would be 2 billion dollars. Would that be enough? There are 50 people worth more than 17 billion [2], would one of them bankrolling a new browser be enough? What would it take?

[1] https://en.wikipedia.org/wiki/Global_Internet_usage [2] https://www.businessinsider.com/richest-people-world-billion...


I welcome your attitude; bring it on and I'll be more than supportive of it. But still, I maintain that you can't keep up with the likes of WHATWG and W3C churning out specs when they're subverted by, and financially dependent on, Google.

In other words, we're toast ;( But hey, that might be exactly the kind of situation that motivates developers after all.

With my project [1], I'm attempting something less ambitious: I'm trying to re-establish SGML as an authoring format (HTML is based on SGML, and SGML is the only standard that can tackle HTML). The aim is to at least bring back a rational authoring and long-term storage format for content that matters, content you'd still like to be able to read in a couple of decades, without an ad company, or a failed, over-complicated all-in-one document-and-app format of the 2010s, getting in your way.

[1]: http://sgmljs.net/blog/blog1701.html


IMO writing a better browser is not that hard. The trick is to just focus on reader view.

When you can click the reader button, it makes every website better. Reader view defeats modal dialogs and dickbars. Reader view renders faster than AMP, because it skips web fonts. Reader view always scrolls and zooms without jank.
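
As a rough sketch of how little code the core of that idea needs, here's the readability-lxml package (a Python cousin of the Arc90 heuristics that Firefox's Reader View also descends from) applied to a saved page; the file name is just an example:

    # minimal "reader view": strip a saved page down to its article content
    # (pip install readability-lxml; page.html is any saved web page)
    from readability import Document

    html = open("page.html", encoding="utf-8").read()
    doc = Document(html)
    print(doc.title())    # extracted article title
    print(doc.summary())  # cleaned article HTML: no modals, no dickbars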

Build a better browser by embracing the web as it was meant to be: the best document publishing platform. Let the other guys build the world's worst application platform with clippy and toast and the rest.


> What would it take?

All the developers in the world combined cannot solve a legal problem. You can't implement technologies like Widevine without a license, and if they simply won't give you one [0], you're dead in the water.

[0]: https://blog.samuelmaddock.com/posts/google-widevine-blocked...


What would it take to reverse engineer Widevine?


I don't know much about the law, but wouldn't reverse engineering Widevine be illegal, since it is itself DRM?


Firefox has rewritten its entire rendering engine, and its entire CSS engine. Writing a browser is incredibly hard, but not infeasible.


While writing a rendering engine isn't easy, it's extremely easy compared to the HTML part. Even Mozilla isn't in a hurry to rewrite all that code.


Depends what you mean by the "HTML part". The HTML parser, which is the only HTML-specific part, was replaced in Firefox 4 with one implementing the algorithm defined in the WHATWG HTML spec.

There's definitely less motivation to rewrite the DOM code, in large part because there's a lot less benefit to be gained from rewriting large parts of it (versus layout or style, where there's much more entanglement across the codebase).


HTML parsing is not that hard compared to CSS/layout/fonts (or even just figuring out layout), a competitive JavaScript engine, and the myriad APIs and site-compatibility problems the OP talked about.

My HTML parser uses SGML, which is more generic, as it takes the HTML grammar (a DTD) as a parameter and computes state machine tables etc. dynamically from it. That's a bit harder, but still very much doable.
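
To give a flavour of what grammar-driven tag inference means, here's a toy sketch (not the actual sgmljs.net code; the table is a hand-written stand-in for what gets computed from the DTD's end-tag omission rules):

    # toy model of SGML end-tag inference driven by a per-element table
    IMPLIED_END = {                # hypothetical, hand-derived from an HTML DTD
        "p":  {"p", "li", "ul", "ol", "table"},  # these start tags close an open <p>
        "li": {"li"},                            # a new <li> closes an open <li>
    }

    def infer_end_tags(start_tags):
        """Insert implied end tags into a stream of start tags."""
        stack, out = [], []
        for tag in start_tags:
            while stack and tag in IMPLIED_END.get(stack[-1], ()):
                out.append("/" + stack.pop())    # emit the inferred end tag
            stack.append(tag)
            out.append(tag)
        while stack:
            out.append("/" + stack.pop())        # close whatever is still open
        return out

    print(infer_end_tags(["ul", "li", "p", "li"]))
    # ['ul', 'li', 'p', '/p', '/li', 'li', '/li', '/ul']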


Does that HTML parser follow all the HTML5 parsing/error-handling rules, so that it conforms to the spec's behavior for random tag soup full of broken markup? Or are you assuming "clean" HTML?
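
(For concreteness, the spec's recovery rules are fully deterministic even for misnested formatting elements; a quick check with the html5lib package, which implements the spec's algorithm:)

    # what the spec-mandated error recovery does to misnested tags
    # (pip install html5lib)
    import html5lib
    from xml.etree import ElementTree as ET

    tree = html5lib.parse("<p><b>x <i>y</b> z", namespaceHTMLElements=False)
    print(ET.tostring(tree, encoding="unicode"))
    # roughly: <html><head/><body><p><b>x <i>y</i></b><i> z</i></p></body></html>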


No, it follows the normative description of HTML as specified in chapter 4 of the HTML spec. The redundant procedural spec for parsing HTML is strictly aimed at browser implementers, and in particular at reaching the same behaviour across browsers in the presence of errors. Note that the covered fragment still contains the rich tag omission/inference rules for HTML and other minute details, based on formal SGML techniques, though.


Feature creep seems to be a common failure of many standards; Bluetooth and USB-C, to name some others.

The world is missing a standard; a good and lean one is created; multiple vendors implement it; there's an initial year of minor incompatibilities, but otherwise all is gold and glory, and everybody loves the standard. The standard becomes immensely popular, so everything supports the standard and, worse, the standard starts supporting everything, because every vendor has just this one small extension they want to add. After thousands of such extensions, suddenly the standard isn't so lean anymore. Now the spec isn't a single RFC that can be read over lunch; it's a whole collection of documents with thousands of pages, appendices, mandatory extensions, and compatibility tests. You need a few people employed just to keep the documentation organized. Vendor implementations start becoming incompatible because of the sheer complexity. Only a few big players manage to stay afloat, and they probably like it that way: it raises the barrier to entry and they get to keep their position.


I do not think it is impossible; Firefox alone has rewritten several of its major components at least once.

I see it more as a self-fulfilling prophecy and a constant stream of FUD from naysayers whenever such a thing is merely suggested (popular topics being that it will never be finished, that it won't be secure, that if Mozilla needs $500m/year mere humans will never manage it, etc.).

I think a lot of people nowadays forget that almost every single piece of open source tech that has existed since the '90s or early 2000s was started by naive young programmers trying to do something (have you seen KHTML's source code in KDE 1?) without assholes telling them they couldn't do it. Well, OK, they had some, but nowadays the naysayers are WAY more numerous, and in the past they mostly came from (what was seen as) "evil corporations", so they were more easily dismissed. Today most people in open source (both users and programmers) dismiss most things that don't have some big commercial entity behind them.


This seems like a really strange position to fight over. The OP is mainly complaining about the constant flux of the spec. At the time when KHTML was being implemented, there weren't new features being released every week, in every aspect of the browser, like we have now.

As a single developer it is impossible to implement a browser that is compatible with today's websites.


> As a single developer it is impossible to implement a browser that is compatible with today's websites.

Then don't make it "compatible with today's websites".

In fact, that should probably be the goal. That is, what should or could "tomorrow's internet" look like?

Think of the "time lag" between "The Mother of All Demos" and its actual commercial realization: arguably the Mac, though some might say the Apple Lisa, others the Xerox Star, and still others could pick their own point in the timeline. But for the general consumer, that is, "wide adoption", it was the Mac in 1984.

That's a lag of almost 15 years, but one guy managed to see that future and, with some help, pulled it into the past. (If you've never watched the demo and put yourself in the shoes of that time, you can't easily understand just what it took for it to occur; it's honestly awe-inspiring to me from a historical standpoint. I'm sure there were people in the audience who didn't understand they were seeing the future.)

Try to do that, is what I'd propose.

And some people are. Where I believe that future lives is in the idea of the "distributed web", which honestly is what the internet should have been all along, but apparently we're going to have to drag it back there. Part of the reason it didn't go that route was "dial-up access": the end nodes weren't looked at as "peers", when they should have been, just as "ephemeral and temporary" peers instead of "always-on" ones. But they were sold differently, and most people weren't made aware that they could be (and should be) peers; rather, they were relegated to second-class "clients" and "consumers".

Now many people have the available bandwidth to be closer to real peers, run servers, etc., but are instead limited in a variety of ways (most notably by draconian TOS language that, while in many cases "ignored", can easily be dragged out to deny service if and when an ISP feels like it).

I'm not sure the distributed web is the full answer (the full answer would include mesh networks, but there are logistical issues with those, especially in the United States, that currently prevent them from growing beyond, at most, "city level"). But it's a start, I think.


> At the time when KHTML was being implemented, there weren't new features being released every week, in every aspect of the browser, like we have now.

KHTML was being implemented in 1999. That was an extremely fast moving and chaotic time in the development of the web! Browsers were shipping new features left and right, the specs didn't describe at all what browsers really did, and if you fell behind people would quickly switch to other browsers.

Even by 1999 you wouldn't have been able to make a competitive browser on your own, and especially not keep up with the rate of change.

(In the early 2000s, after Microsoft "won the first browser war" and disbanded the IE group, everything slowed way down, though.)


Not as a single developer, but I'm certain a team of developers can do it, even without some big corporation behind them.


> when Firefox was launched, it was still possible to develop a browser from scratch

Firefox began as a version of the Netscape Suite stripped down to just the browser. The Gecko rendering engine long predates Firefox. No one has launched a full browser "from scratch" in nearly twenty years.


Today is actually the last opportunity, thanks to polyfills for IE11: a new engine would only need to reach roughly IE11's feature level, with polyfills covering much of the rest. It would also have to be a modular architecture built from several independent libraries; once one library is done, others can use it and help maintain it. But it's still very difficult.



