Killring Rick Dillon's Weblog

Using Profiles in Firefox

Since Google released Chrome, its minimalistic, speed-oriented approach has attracted millions of users, neophytes and professionals alike. It's a well-designed browser with lots to love. But I still use Firefox, and many folks I work with don't understand why. One of Firefox's killer features is profiles. Many users are aware that Firefox supports profiles, but don't make use of them in their everyday browsing, even though there are several situations where they can prove quite useful. <!--more-->

Working with Profiles

Creating a new profile is a simple matter. If you're invoking Firefox from the command line you can use the -p option to select a profile. For example, if I wanted to select the Personal profile, I would do so with

firefox -p Personal

If a profile named Personal didn't exist yet, Firefox would bring up the profile management dialog, which would allow me to create it.

One of the common problems with profiles is that, by default, Firefox tries to attach to a running session of itself when it is invoked. That makes it tricky to run multiple profiles simultaneously. You can avoid this behavior with the --no-remote option, which prevents the new instance of Firefox from connecting to an already running process. So, the full command line to bring up Firefox with a given profile is:

firefox -p <profile-name> --no-remote
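A small wrapper function makes launching any profile a one-liner. This is just a sketch; the DRY_RUN guard is my own addition so you can preview the command without actually launching a browser, and the profile name is only an example:

```shell
#!/usr/bin/env bash
# Launch a named Firefox profile as an independent process.
# --no-remote keeps it from attaching to an already running Firefox.
launch_profile() {
  local profile="$1"
  if [ -n "${DRY_RUN:-}" ]; then
    # Preview mode: print the command instead of running it.
    echo "firefox -p $profile --no-remote"
  else
    firefox -p "$profile" --no-remote &
  fi
}

DRY_RUN=1 launch_profile Personal   # prints: firefox -p Personal --no-remote
```

With this in place, launch_profile Work and launch_profile Personal can run side by side, each with its own cookies, history and add-ons.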

Maintaining Multiple Workspaces

So, what are profiles useful for? One situation in which I find them useful is at work, where I often need to log into the same site with two different accounts. One excellent use of "Private Browsing" or "Incognito" modes is to provide a quick way to create empty profiles on the fly that are thrown away when closed. This is great if someone is going to borrow your computer to log into GMail, for example, but if you are going to maintain multiple personas on a particular site, Firefox's profiles are often better. In my case, I like to keep my personal GMail account open in one profile and my Google Apps account for work open in another.

But logging into the same site with different credentials is only one benefit of this approach. You also get distinct sets of bookmarks, history and add-ons. So, in my work profile, my bookmarks and history all relate to things I do for work. All my GitHub projects for work are bookmarked and autocomplete perfectly, and my work profile's GitHub history isn't polluted with projects I check out for side projects.

Likewise, I use certain add-ons in my personal profile that I don't use in my work profile, like my TT-RSS add-on and LessChrome HD, which hides the address bar and bookmarks bar until I need them. Conversely, Firebug and other development-related add-ons reside only in my work profile.

Create 'Apps' From Websites

Sometimes there are sites I use so frequently that they become apps in their own right. This idea has been explored extensively by all major browsers, and Firefox has made several attempts at streamlining the process of sandboxing sites into applications, all of which are now defunct as far as I know. Nevertheless, there are some sites that I treat like applications and like to launch and shut down independently of all my other tabs. Often, I like to run these sites fullscreen, or at least turn off the menu bar, tab bar and address bar to maximize screen usage. By sandboxing these pages into a separate Firefox profile, each 'app' can have its own Firefox UI settings and add-ons, and can be launched independently of whatever other browsing sessions I have going.

My primary use case for this is my RSS reader (a private instance of TT-RSS), which runs pseudo-fullscreen and has its homepage set appropriately. To launch it (I use Linux), I simply wrote a shell script and put it in my path:

$ cat ttrss

#!/usr/bin/env bash
firefox -p ttrss --no-remote

This script could be attached to a desktop icon, menu item or other shortcut, but I launch it using Synapse. In the absence of Prism and Chromeless (both of which are still around, but unmaintained), I find that Firefox profiles are an effective replacement.
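For the desktop-icon route, a minimal .desktop entry works on most Linux desktops. This is a sketch; the Name and Comment values are just examples, and it assumes the ttrss profile from the script above:

```ini
[Desktop Entry]
Type=Application
Name=TT-RSS
Comment=TT-RSS in its own Firefox profile
Exec=firefox -p ttrss --no-remote
Terminal=false
Categories=Network;
```

Dropped into ~/.local/share/applications, this makes the 'app' launchable from the desktop environment's menu or launcher like any other program.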

Google Supports The Web (Not The Internet)

For years I've held up Google as the prime example of a corporation throwing its full weight behind the open internet. But it gives me pause when people start saying that Google is "rapidly turning its back on every single open standard".

It was only back in March, when Google announced that it was closing Google Reader, that it became clear to me: Google supports the open web, but decidedly not the open internet. "Web" and "internet" are not interchangeable in this context. The internet is the network of networks that carries traffic for many services; the web is just one of those services, characterized by its use of HTTP(S). The web is consumed using a web browser, but the internet can be consumed by many kinds of clients. My fundamental misunderstanding of Google's position all these years hinged on my conflation of the web and the internet when thinking about Google's strategy. There are a few examples of the ways in which Google has chosen not to support open internet protocols.

Suppressed Standards

Google's motivations become clear when you look at the technologies it has sought to suppress over the years. Google was instrumental in killing off native email clients by investing heavily in the development of GMail's web interface. Before GMail (and yes, even today), most other webmail interfaces were considered a weak alternative to a native client. After GMail was revealed in 2004, usage of native clients dropped dramatically. This was a major coup: Google managed to trade internet protocols like POP, IMAP and SMTP for HTTP(S). To be sure, GMail still supports the other protocols, but the product is geared to push users toward the web-based alternatives.

Also on the messaging front, Google announced just yesterday it had made 'the difficult decision to drop the very "open" XMPP standard that it helped pioneer'. XMPP is what helped Google play in a larger ecosystem of messaging clients from other providers. Now, instead of reading your messages using a Jabber-based client, you'll read your messages over HTTP(S).

Usenet (newsgroups) is another example of a federated 'open internet' protocol that has been around for decades that Google does not support. Google chose to bootstrap their web-based Google Groups product with data from Usenet, but unlike Usenet proper, there is no way to connect to Google Groups with third party software to read content there. In the new Google Groups interface, you can't find links to RSS feeds for the groups like you could in the old interface. The result? Users are driven to use HTTP(S)-based protocols instead of NNTP (the Usenet protocol).

Google has also backed away from open web technologies that operate over HTTP.

The biggest example is its fading support for RSS; closing Google Reader and ending support for RSS in Google Chrome are the clearest cases. RSS is a standard that allows third-party software to interact with data from across the web seamlessly. It's very useful to users, but not all that useful to Google.

This mentality has permeated Google's offerings. In creating their own social network, Google disabled all support for RSS. RSS would have allowed Google Plus users to see updates from the network in third-party clients. That provides a lot of value for the user, since updates could then be filtered, analyzed and aggregated. Google went a step further, however, and decided not to support any write operations in its API for Google Plus. To the end user, this means that there is only one way to post content to Google Plus: through the web interface. Google Plus is essentially a "network" that supports only one client and one "server" implementation.


I know little about how these policies came to be, so discussing motivation is an exercise in speculation.

Google makes money from advertising, and the efficiency of advertising is coupled to how well you target potential customers. At the turn of the century, the best way to do that was to look at search terms and display relevant ads. As computing power has increased, Google has pioneered "big data" techniques that allow for much more sophisticated targeting.

Building Profiles

Facebook has shown that people will happily post lots of information about themselves online. Facebook also discovered a very simple abstraction for profiling users: the "Like" button. The genius of the "Like" button is that it is very low friction - it only takes a second for a user to register his or her opinion. It is also perfect for computers to analyze: it is simply the toggling of a bit (in concept, anyway). This makes much better profiling possible, and brings to the social web Amazon's "Users who purchased this item also purchased..." functionality.

So, where Amazon had browsing and purchasing history and Facebook had a history of "Likes", Google had a history of search terms. For a company in the advertising business, Google realized they needed to insert themselves into more user actions so they could build better profiles to target ads more effectively.

That meant building Google Wallet and Google Plus (along with the "+1" button) to get access to users' shopping history and social interactions. It also meant building Android and encouraging users to give Google data about where they live and work.

If Google opened up the write API to Google Plus, it would open the door to automated systems posting to the network, reducing the signal-to-noise ratio of the data Google uses to build profiles. It would also allow clients to be built that de-emphasized the "+1" button that is so important to collecting the data Google needs to compete with Facebook.

If Google supported XMPP or RSS, it would drive users away from Google's own web applications that gather data about what times people visit the site, and how long they look at particular items. Google Talk drives people to open GMail or Google Plus, and the big red button encourages people on GMail to open Google Plus. Google Plus is designed to gather data about user interaction on the network: who a user follows and chats with, what stories a user comments on or clicks "+1" on.

If users can access all the updates with third-party software, none of that can happen.

Delivering Advertisements

Collecting information useful to profile users is only one half of the equation. The other half is displaying advertisements to the user. Delivering advertisements via RSS is possible, but awkward. Delivering effective advertisements is very difficult over XMPP or NNTP or SMTP. In other words, every time a user opts to use a third party client connected to Google's services over an open protocol that is not HTTP, Google not only has trouble building a profile of that user, but Google also has trouble showing them advertisements.

Does the Internet Have a Future?

Some would argue that Google is the internet's biggest corporate champion (I would!). Despite my tone thus far, I believe that Google is still an ally in the battle to keep the internet open. But there are other allies as well. The biggest tool we have for maintaining an open internet is the open source software stack. It allows users to pay a small amount of money to host their own sites on services like DreamHost or EC2 and deploy open alternatives to corporate offerings. When Google Reader closed, I picked up TT-RSS and I've never been happier reading RSS. I'm looking to deploy RoundCube to my DreamHost account as an alternative to GMail. There are open alternatives to social networks as well, like those built on WordPress and RSS, though none have gained much traction yet.

On the client side, Firefox remains the best browser in many respects, including extensibility. It is important that, in the realm of the web, we don't develop a browser monoculture, despite the support of at least one Google engineer for that outcome. Standards that have only one implementation are subject to change much more readily than open standards implemented by a variety of parties, and one of the founding tenets of the web is open, standards-based access.

Even without Google's unequivocal support, the open internet is alive and thriving. To keep it that way, users need to understand that they give up a measure of choice and privacy when they use centralized, corporate products that force them to use only the web. Sometimes that is a sound choice, but perhaps it isn't always the best one.

Using Dreamhost DreamObjects with S3 Tools

S3 has become so prevalent as a data storage service that its API is now something of a de facto standard for data storage. While Amazon promotes Cloud Drive as the consumer-focused product based on S3, S3 itself is quite accessible to end users through various clients. Many of those clients are proprietary, but there are two that I use that are free and open source.

The first is part of the s3tools suite and is called s3cmd. Given your access key and secret key, s3cmd gives you full access to the S3 service, allowing creation, modification and deletion of S3 buckets. It even supports syncing entire directories to S3 in one command.

The other tool, duplicity, is frequently used as the basis for encrypted, incremental backup. One of its features is that it supports S3 URLs as the destination of the backup. A single duplicity invocation will compress, encrypt and back up any directory you specify to S3.

In addition to supporting S3, a bit of tuning enables these tools to support DreamHost's DreamObjects service as well, which provides an S3-compatible interface, but is backed by the open-source Ceph object store/block storage/file system, rather than Amazon's proprietary store.

In the case of s3cmd, simply create a tweaked .s3cfg file by:

  1. Adding your DreamObjects access_key and secret_key to the file in the appropriate spots, and
  2. Replacing instances of the Amazon S3 host with the DreamObjects host.

You can then invoke s3cmd as usual (assuming your .s3cfg is located in your home directory).
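As a sketch, the relevant parts of the tweaked config look like this. The host name is a placeholder (use the endpoint DreamHost documents for DreamObjects), and host_base/host_bucket are the s3cmd settings that control which service it talks to:

```ini
# DreamObjects-flavored s3cmd config (placeholder values).
[default]
access_key = YOUR_DREAMOBJECTS_ACCESS_KEY
secret_key = YOUR_DREAMOBJECTS_SECRET_KEY
host_base = <dreamobjects-host>
host_bucket = %(bucket)s.<dreamobjects-host>
```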

If you would like to use both S3 and DreamObjects, you can rename your .s3cfg setup for DreamObjects to something like .dhcfg and then alias dh:

alias dh='s3cmd -c ~/.dhcfg'

So invoking s3cmd as usual accesses your S3 account, and using dh accesses your DreamObjects account.
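The setup can be sketched in a script like so (assuming s3cmd is installed and ~/.dhcfg is configured); the shopt line is only needed in non-interactive scripts, since interactive shells expand aliases by default:

```shell
# Make aliases work in a non-interactive bash script.
shopt -s expand_aliases

# dh talks to DreamObjects via its own config; bare s3cmd keeps using ~/.s3cfg.
alias dh='s3cmd -c ~/.dhcfg'

# Typical usage (requires s3cmd and a configured ~/.dhcfg):
#   dh ls                          # list DreamObjects buckets
#   dh sync ./photos s3://bucket/  # sync a directory to DreamObjects
#   s3cmd ls                       # still lists your Amazon S3 buckets
```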

In the case of duplicity, the customization is even easier. Assuming your DreamHost access key and secret key are in the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, respectively, you can invoke duplicity like so:

duplicity --allow-source-mismatch /path/to/stuff/to/backup s3://<bucket-name>/backup-directory
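The environment setup can be sketched like this; the key values are placeholders, and the restore invocation is standard duplicity usage rather than anything DreamObjects-specific:

```shell
# Placeholder credentials for DreamObjects (duplicity reads these from the env).
export AWS_ACCESS_KEY_ID="<your-dreamobjects-access-key>"
export AWS_SECRET_ACCESS_KEY="<your-dreamobjects-secret-key>"

# Back up (as above), then restore to a fresh directory:
#   duplicity --allow-source-mismatch ~/documents s3://<bucket-name>/backup-directory
#   duplicity restore s3://<bucket-name>/backup-directory ~/restored-documents
```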

Building a Library

Since childhood, I have been drawn to the idea of amassing a collection of literary cultural artifacts that I can share and discuss with my family, and pass down to them to expand. This was quite ambitious in the days when entire rooms of your house had to be devoted to storing books. With the advent of digital books, though, I can not only store as many as I could ever read and carry them with me almost anywhere, but also store them for all time, without fear of the bindings breaking down or the pages yellowing with age in the sun. We live in a time of unprecedented global access to literature via the internet.

Of course, we face different challenges today in building a library. Some of these are new problems that come with the rapid pace of technological development. It turns out that there are some inherent difficulties in storing data in enduring formats that will be readable in the next few decades. Plain text seems to be a good choice, and I expect HTML and its derivative formats (like epub) will age well.

In addition to these new technical challenges, we also face challenges imposed by publishers, many of whom have historically resisted the instant global access to data the internet provides, given the obvious implications for business models based on scarcity. The most direct manifestation of this resistance is DRM, which encrypts the data they sell you so that it is only readable on your devices. Most every major online bookstore uses encryption to make it quite difficult to read the content you buy with devices or programs not authorized by that store. Some, like the Kindle Fire, take this a step further and make it quite difficult to read anything but the encrypted content they sell you.

But by and large, if you're interested in reading freely available classic books you can do so on most any device you choose.

In this spirit, I thought it was time that I seriously consider reading The Iliad and The Odyssey, so I looked for versions on Project Gutenberg, which is my first stop when looking for classic literature online. After downloading and viewing versions available there, I was disappointed with some of the formatting, and thought I might be able to find better versions elsewhere.

After only a little bit of searching, I ran across the library at the University of Adelaide, which offers a large collection of free classic ebooks. As I clicked through to the page containing the works of Homer, I found this in the short summary of his works:

The poems appear to go back to at least the eighth century BCE, and were first written down at the command of the Athenian ruler Pisistratus, who feared they were being forgotten. He made a law: any singer or bard who came to Athens had to recite all they knew of Homer for the Athenian scribes, who recorded each version and collated them into what we now call the Iliad and Odyssey.

This small historical note presented a sobering juxtaposition for me. A tyrant who lived almost 3,000 years ago in Athens concerned himself with preserving culture and knowledge so it would endure for millennia, even as I live in a time when the role of government has primarily become passing and enforcing laws that make it ever more difficult to build a library that can endure through the generations. The industry responsible for producing literature today has made it both technically and legally difficult to build a library that will last even a decade, because the encryption it imposes is highly specific to the devices and companies of the moment. Millions of customers will find they paid for a few years of access to their books, rather than for a copy they could pass on to a family member in a few decades.

Nevertheless, I think we will ultimately make progress. New business models will emerge, and just as is the case with music, we will shortly find ourselves able to purchase books in unencrypted, open formats readable across hundreds of devices. Some publishers, like Tor Books (announcement) and O'Reilly (policy), have already made this switch and seem to be able to make money from people, like me, interested in reading, learning and building a library to pass on to their children.

The Retro Gaming Fad

As 2012 draws to a close, I think it's worth taking a look at retro gaming. With virtual reality on the horizon and globally shared MMOs well into their teens, it's popular to be playing and making "retro" games. What I've noticed, though, is that this is only true so long as the games aren't too retro.

Real Retro

Rereleases have given really good old titles new life, which is a good thing for everyone, as I see it. As Jeff Vogel of Spiderweb Software said in his 2009 blog post The Joy of Rereleasing Old Games:

One of the most frustrating things for me about video games as an art is that individual titles die out. The older a game gets, the better the chance it will stop working on new machines. ... The machines that will run it grow ever older and dustier. I think this is HUGELY wasteful.

Jeff makes his living writing fairly retro games, and his games probably deserve a post of their own. He's been making retro games for so long that his original retro games really are retro now, and he has to rerelease them to keep things up to date. I love playing his games, just as I love playing through the classic titles I've purchased as rereleases. There are several reasons these games are appealing:

  • Time filters out most of the low-quality games, leaving the best to survive
  • Old games tend to work well under Wine, which I use to game under GNU/Linux
  • My laptop is a couple of years old and isn't high end, but it plays these games really well
  • Great titles can cost only a few dollars, giving you excellent return on investment

I never got to play through Baldur's Gate (one or two!) or Icewind Dale. Now I can. This is real retro gaming at its best. I also have Avadon: The Black Fortress, Avernum 6, Avernum: Escape From the Pit and Geneforge 5 (all from Spiderweb) on my Linux laptop in varying stages of play. Jeff doesn't make games for Linux very often, but they do seem to work great under Wine. But I digress. These are mostly real retro games from years ago that have been made available for play on modern machines.

Synthetic Retro

But there's another breed of retro gaming going on as well. New games are being made to look like games from 15 years ago, and they're making developers a lot of money. And the graphics really are pretty retro. Take a look at Minecraft, released in 2009. Very little effort was made to texture the game convincingly. Instead, development effort went into gameplay, and Notch, the lone developer who created the initial version, made millions and started his own game company.

But Minecraft's paltry graphics weren't due to a lack of technology; they were due to a lack of development resources. Dragon Age: Origins also came out in 2009, but had hundreds of people behind its development, and the differences are stark.

And that perhaps highlights the core of the issue: gamers like seeing games that don't necessarily fit the mold cast by the huge game publishers. That opens the door for indie developers to create cheaper games that satisfy a niche market seeking a particular style of gameplay. Minecraft, at its core, is an alternate world where gamers can build freely. There are other elements built into that base (like "zombie survival horror", go figure), but there's little chance a major game studio would ever have funded its development. This is one of the reasons the Kickstarter and IndieGoGo funding models are so popular now: there's a lot of room for indie developers to make money by catering to niche markets within the gaming population.

Even before Kickstarter and IndieGoGo, there were companies doing this successfully, like Spiderweb Software and Introversion. Now that the funding model has become more popular, however, new games created in the "retro" style are thriving, because they can experiment with the medium of games in novel ways, ironically taking risks that the huge developers and publishers never would. The Humble Indie Bundle has cashed in on the popularity surrounding these indie games in the last few years, and has just released its seventh bundle this month.

But the phenomenon isn't limited to only the graphical aspects of the games. Retro gameplay styles are coming back into vogue as well, with titles like Faster Than Light appearing in Ars Technica's Top 20 Games of 2012. FTL bills itself as a roguelike, not only sporting retro graphics, but also old-school hardcore gameplay, including randomly generated worlds and permadeath. There are other popular, more overt, roguelikes being made as well, such as Dungeons of Dredmor, which is a slightly modernized take on the classic Rogue games. These games all offer a sort of Zen gameplay where failure is all but inevitable, and you learn to simply begin again, hopefully wiser the next time through.

Too Much of a Good Thing

With all the attention on making, selling, bundling, reviewing and playing retro games, I sort of hoped that I could drum up some interest in really retro games. These games were originally designed to be played on a terminal, and include (among others) interactive fiction and roguelikes. I had a discussion last year at a party and brought up what an interesting art form interactive fiction was, since it gave one developer the power to create worlds of amazing depth and complexity. I ended up gracefully exiting the conversation after a few minutes of arguing that no, interactive fiction games were not the equivalent of choose-your-own-adventure books.

There are dozens of old-school roguelikes, and I've tried many of them. My favorite remains Angband, because it offers a depth of gameplay unparalleled in many modern games, is light on resources, and is heavy on strategy, yet manages not to trip my "this game is impossible" sensor. Being turn-based, it not only allows but requires that the player stop and think about how best to survive. It has many of the elements everyone loves about, say, Day Z (a wildly popular FPS mod that features permadeath) or Faster Than Light, but is playable just about anywhere, including phones and tablets, and is great for mobile gaming because it can be paused safely at any time.

But really old-school text-based gaming seems to be a bridge too far for most gamers today. Even those who appreciate the quirky graphics of Minecraft and FTL, or who trumpet gameplay over graphics as they play Gurk, can't get into a game with nothing but ASCII for graphics.

Nevertheless, as we close out 2012, I'm pleased that so many elements of retro gaming have suffused the popular titles today. We're getting a chance to really explore video games as an art form due to the efforts of thousands of indie game developers all over the world, and it looks like the retro game fad is here to stay, well into 2013.