Killring

Rick Dillon's Weblog

Building an Ecosystem of Secure Browsers

Rendering a modern web page is a hugely complex task, and users expect it to be accomplished in milliseconds. It requires comprehensive support for encryption, parallel network connections, rendering of text, images, and video in accordance with CSS, as well as just-in-time compilation and execution of JavaScript. With all this complexity come the enormous security concerns of downloading and processing assets from remote, often untrusted, servers.

Ideally, the web would be an ecosystem built on standards, allowing anyone to build their own browser and read content on the web. In practice, the task of building (and maintaining!) a web browser is so difficult that only a few organizations can manage it: Microsoft, Mozilla, and a consortium of companies that rally around WebKit/Blink (Opera, Vivaldi, Chrome/Chromium, Safari). This means that, compared with other categories of everyday software (calendars, text editors, image editors, and even spreadsheets), there are relatively few browsers to choose from, simply due to the overwhelming cost of building and maintaining such a hugely complex piece of software.

One way to increase diversity in the browser ecosystem is to target a subset of a standard browser’s functionality. By reusing standard cryptography, networking, and text rendering libraries, one can build a text-based browser that handles only basic CSS directives and ignores JavaScript entirely. This allows web content to be embedded quickly and efficiently in other contexts (command lines, text editors, diffing engines) and, because it focuses on text, trivially composed with other text-processing functions.
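The core of such a browser can be surprisingly small. As a minimal sketch of the idea (using Python's standard-library html.parser; the class, tag sets, and method names here are illustrative assumptions, not taken from any of the browsers mentioned), one can walk the HTML, drop script and style content entirely, and keep only the text, inserting line breaks at block-level elements:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Toy text-browser core: keep text, drop scripts and styles."""
    SKIP = {"script", "style"}                  # content to ignore entirely
    BLOCK = {"p", "div", "h1", "h2", "h3", "li", "br", "tr"}  # force a break

    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip_depth = 0  # nesting depth inside skipped elements

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag in self.BLOCK:
            self.parts.append("\n")

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.parts.append(data)

    def text(self):
        # Collapse runs of whitespace, as a renderer would.
        return " ".join("".join(self.parts).split())

if __name__ == "__main__":
    p = TextExtractor()
    p.feed("<h1>Title</h1><script>alert(1)</script><p>Body text.</p>")
    print(p.text())  # the script's contents never appear
```

A real text browser layers line-wrapping, link numbering, and basic CSS handling on top of exactly this kind of pass, but the essential move (treating the page as a document rather than an application) is already visible here.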

Such a text-based browser probably won’t be suitable for general web browsing, but can be very effective for large subsets of browsing activity. Reading the news, searching the web, reading reference materials, and doing research are all a good fit for text-based browsing, and text-based browsers can seamlessly integrate with existing workflows more readily than their graphical counterparts.

By lowering the barriers to entry and maintenance, targeting a subset of web standards allows more browsers to be created that flesh out the browser ecosystem, which in turn embeds the web as a first-class citizen in our computing environment, rather than relegating it to the Chrome/Firefox/Safari sandbox.

Use cases vary, naturally. I’ve found that eww is very effective for searching the web, reading Ruby, Python, and Postgres documentation, as well as Wikipedia, Hacker News and most news sites (especially with eww-readable, which was added in Emacs 25). Blogging with it is particularly effective because of the fast context switching, unified killring and rapid search through web pages via helm-swoop.

Not everyone uses Emacs, though, and there are of course excellent text browsers for the command line, like w3m (which can render pages interactively or dump them in a single pass), elinks, and even links.

These browsers all view the web as a set of documents and elide JavaScript entirely, resulting in less cruft, fewer advertisements, smaller payloads, and higher speed, all without the complexity and attack surface of much larger browsers. In short, you get simplicity, speed, security, and composability in exchange for compatibility with the many ‘web apps’ that simply don’t support a text-based interface at all.

Most sites don’t design with text interfaces in mind, but they should. Hacker News is a fantastic example of how sites should look in text browsers.