17 Feb 2014
In 2011, Canonical made Unity the default desktop environment for its
market-leading distro Ubuntu. Unity has been in development since
2009, but remains the least sophisticated desktop environment
available for Linux, and not only fails to innovate in any meaningful
way, but represents a regression in the quality of software on Linux
with respect to stability and configurability. As a result of
Canonical's insistence on using Unity (which was developed in-house at
Canonical), entire Ubuntu spinoffs have been created with a goal of
allowing users to easily avoid using Unity. Distros such as Kubuntu,
Lubuntu, Xubuntu differ from Ubuntu only as much as necessary to
provide a different default desktop experience from that provided by
stock Ubuntu. Even the more distantly-related Linux Mint has taken it
upon itself to move away from Unity, creating not one, but two
alternative desktop environments, MATE and Cinnamon, based on Gnome 2
and Gnome 3, respectively. This has not deterred Canonical in its
mission to push Unity as the de facto desktop interface in an effort
to unify the user interface for Linux across desktops, laptops,
netbooks, tablets, and phones.
I was reading a thread over on Hacker News in which Canonical was
getting praise for not actively fighting the community's decision
to switch from Upstart to systemd. In this discussion, past Canonical
projects that bucked the community were discussed, including Unity and
Mir. One comment read "[Unity] is a breath of fresh air compared to
most alternatives on linux."
That has not been my experience with Unity, and I commented as
such, but was immediately questioned as perhaps being part of a
community of "power users" that "never really used Unity". Au contraire.
Many of Unity's shortcomings stem from Canonical's ongoing proclivity
to attempt to reinvent common desktop interactions, regardless of the
cost it imposes on Ubuntu's least experienced users. Power users can
simply change environments, but new users are stuck with Unity's
limitations until they gain the expertise to switch away from it.
Caveat: I'm Using Unity 5.x
It's worth noting that the machine I've most recently used Unity on is
using the latest LTS release of Ubuntu, 12.04, which, as of this
writing, is still recommended on Ubuntu's site as the latest stable
release. Nevertheless, I realize that there's a good chance Unity 6
and Unity 7 have introduced improvements, and that not all features
have been backported to the 12.04 distro, so some of my comments may
be somewhat dated. That said, they do reflect the current state of a
fully-patched 12.04 system. With that caveat out of the way, let us begin.
In writing this post, I fired up a Unity session on my Ubuntu box and
used it for an hour to refresh my memory on exact details of Unity's
behavior that disappointed me. In the first thirty minutes of usage,
Unity crashed twice during execution of routine operations (opening
the launcher to launch a program in both cases, actually). So Unity's
stability leaves something to be desired, even in 2014. I just moved
from Cinnamon to KDE4 a couple of weeks ago, and in that time, KDE
hasn't crashed even once. In months of Cinnamon usage on three
machines prior to that, I experienced only one crash. Having core
elements of your user experience crash regularly is undesirable, to be sure.
Making Easy Things Difficult
One mistake Canonical continually makes is releasing beta software to
its user base, and making that software the default. Unity is perhaps
the canonical example of this (pardon the wordplay). The first commit
to Unity was made in October 2009, and it was made the default
environment in Ubuntu in the 11.04 release, after about 18 months of
development. Not surprisingly, the lack of maturity in the codebase showed.
How Do I Add a Program to the Dash?
In most desktop environments, it's a common and simple operation to
create a menu item for an application installed outside of the usual
package management mechanisms. In KDE, for example, simply
right-clicking on the menu icon and selecting "Edit Applications"
brings up an interface to add, remove and edit applications. It's a
common operation for many users.
Despite the utility of modifying entries in the Dash and Launcher,
Unity makes it difficult, simply by virtue of the fact that the
functionality is not included at all. Users that wish to change an
icon, description or simply add an executable that is not already
present have two options:
- Open a text editor and navigate to a hidden directory, creating a
.desktop file in a very particular format to make a new
program appear to Unity (a sketch of this appears after the list).
- Install third-party applications like gnome-panel and
alacarte to allow programs to be added to the Unity
Launcher and Dash.
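To illustrate the first option, here's a minimal sketch of what creating such an
entry involves. The "My Editor" application and its paths are hypothetical, but
the per-user applications directory and the Desktop Entry keys are the standard
freedesktop.org ones:

# Hypothetical example: "myeditor" and its paths are made up;
# ~/.local/share/applications and the [Desktop Entry] keys are standard.
import os

entry = """[Desktop Entry]
Type=Application
Name=My Editor
Exec=/opt/myeditor/bin/myeditor %f
Icon=/opt/myeditor/share/icon.png
Categories=Utility;TextEditor;
"""

app_dir = os.path.expanduser("~/.local/share/applications")
os.makedirs(app_dir, exist_ok=True)
with open(os.path.join(app_dir, "myeditor.desktop"), "w") as f:
    f.write(entry)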
The fact that there is an extensive wiki page describing a series of
complex contortions a user must go through to access such basic
functionality is inexcusable. I don't mind steep learning curves, but
a product that makes simple actions time-consuming and complex doesn't
respect my time.
How Can I Resize the Launcher?
This is more of an illustrative point -- Unity absolutely allows users
to resize the launcher, even from within the GUI. But how? Can a
user just right click on the launcher and select the "Resize" option?
Or perhaps move the mouse to the edge of the Launcher and drag to
resize it? In fact, neither option works -- the setting is buried
inside of the settings application, under "Appearance". There, in a
panel that allows you to change the background for the desktop, there
is a slider that allows you to choose a launcher size between 32 and
64 pixels. That was only added in 2012, actually. Before that, it
was a hidden option, made available through custom configuration
editing or the use of the MyUnity tool.
Using KDE 4 as a counterpoint again, simply right clicking on any
panel on the screen pulls up a menu that allows the user to choose
"Panel Settings". From the settings interface (which is actually
attached to the panel being modified), panel position, width,
alignment, included widgets and auto-hide behavior are all easily
accessible. Compared to Unity, it is a triumph of design. For what
it's worth, similar functionality is available in Gnome 2, KDE 3,
Cinnamon, MATE, LXDE and XFCE.
In fact, Unity does so poorly at affording obvious settings like these
that a whole ecosystem of tools has grown up around Unity in a
community-wide effort to add back all the basic features users expect
from their desktop environments, like resizing panels, adjusting
transparency, tuning auto-hide behavior and many others. It's worth
noting that features like this have been in Linux desktop environments
since the late 1990s, and having them missing in 2014 is simply unacceptable.
What's With Reordering Programs in the Launcher?
If you want to change the order that programs appear in the launcher,
you might think you could simply click on the icon of the program you
want to move and drag it to the desired position. Instead of moving
the program as intended, users will instead find that the entire set
of programs in the launcher is pulled in the direction the user drags,
only to snap back to the original position when the user releases the
mouse button. To actually rearrange the ordering of the icons, users
must first drag the icon out of the launcher (toward the center of the
desktop) and then back into the launcher in the desired position.
Making Advanced Features Impossible
OK, perhaps not impossible; just about anything is possible if
you're willing to write plugins or extensions. Unity relies on this
fact extensively, pushing basic functionality out of Unity and into
various plugins written by the community, each varying in terms of
quality, performance and maintenance. The result is a shoddy, ad
hoc ecosystem of spotty software each user is responsible for
cobbling together on a per-machine basis.
Hey, Mind if I Move the Launcher?
The Launcher panel is placed on the left side of the screen. That's
great for widescreen users, but it might be desirable to move it for
some users, either to the right side or even to the top or bottom. It
turns out that moving the launcher is impossible using the stock
software (one might imagine editing a configuration file somewhere).
Instead, moving the location of the Launcher requires an
unofficial Compiz plugin, not because of limited developer
resources, but rather by design. Here's
Mark Shuttleworth himself on this exact issue:
I think the report actually meant that the launcher should be movable
to other edges of the screen. I'm afraid that won't work with our
broader design goals, so we won't implement that.
Shuttleworth maintains this sort of Apple-esque attitude toward
dictating how users should use their computers. But it doesn't work
nearly as well in the Linux ecosystem as it does among Apple's users
that have come to expect a strictly controlled experience, from the
hardware all the way through the OS to the software (via app stores)
and into their cloud offerings and content store. Linux poses a
significantly different value proposition, and targets a different audience.
What's My CPU Doing?
Every desktop environment on Linux supports adding some kind of system
monitoring applet that sits next to the system tray and task switcher.
Unity managed not only to launch without one, but also lacks most
other applets common to other desktop environments. Common
functionality that allows users to customize their environment is
completely absent in Unity, with right-clicks yielding identical
results to left-clicks on almost every visible UI element.
So, what does it take to get a simple CPU graph next to the system tray?
As it turns out, users must install another custom piece of software
provided out-of-band from the main Unity development pipeline. Not
only does a user have to install the software manually, it's not even
included in the default repositories. Instead, the extension is
only available from a PPA (which must be added manually).
The situation with the CPU monitor is hardly unique. It turns out
that modifying practically anything about the top panel in Unity is
extraordinarily difficult, requiring research, custom hacks or entire
add-ons to obtain features built-in to KDE, Gnome, LXDE and XFCE.
Simple actions like adding another panel at the bottom of the screen,
adjusting auto-hide behavior, tuning transparency, and changing the
order of the icons on the right side are not possible in the default configuration.
In short, customizing Unity in common ways almost uniformly results in
a project. The sad part is that there are a lot of pieces of software
that afford the user a lot of customization and power, but it comes at
the cost of the learning curve. Unity actually is less powerful than
other desktop environments while simultaneously being harder to use.
It's the worst of both worlds.
The sad part is I can imagine how this all came to pass. The design
meetings for Unity must have been tough. A product manager was
charged with creating a unified interface suitable for both desktop
and touch-based devices. By that point, Unity had already lost, simply
because of the design constraints imposed by such a goal. Consider:
- Right-click had to be removed from most elements, since
touch-based devices wouldn't be able to easily access the functionality.
- The launcher had to remain on the left side of the screen,
probably because different interaction expectations were designed
for the right side. You can see this clearly in their designs for tablets and phones.
- Since the launcher could be packed full of the "favorite"
programs, it might overflow. But because it had to support touch,
it couldn't be shrunk to accommodate the program icons (as seen in
Apple's OS X). So instead, it had to allow the user to scroll it,
removing the ability to easily rearrange programs via dragging,
resulting in the contorted "drag out of the launcher and back in"
maneuver described above.
- Common panel widgets, like CPU monitoring, were put on the back
burner, if considered at all, since smaller devices don't have the
space to include them, or the battery life to constantly update
them. (See Dan Sandler's comment on Android battery life for more on this.)
These are just examples, but the point is that by trying to present a
unified experience across all devices, Unity seriously compromises
quality as well. Touch devices don't feel quite right (why is there a
program launcher on the left side?), and desktops get a dumbed-down
version of a desktop (no right-click, for example).
So, What's Better?
The good news is that if you're looking for something better than
Unity, you don't have to look far: just about everything is better.
MATE (a fork of Gnome 2) is quite serviceable and I used it happily
for nine months. Cinnamon (a reskinning of Gnome 3) is equally usable
and, while light on features, perfectly serviceable. XFCE is my go-to
environment on my Linux gaming box.
I'd have to say the gold standard today, though, is really KDE 4. It
took years, but that team has taken all the lessons learned from KDE 3
and created a superbly powerful, well-designed and sleek desktop
environment. So, if you've got the Unity blues, KDE is just a command away:
sudo apt-get install kde-full
11 Feb 2014
Back in 2012, when I joined the startup scene in San Francisco, I was
surprised to learn that so many took Klout seriously. They tracked
their Klout ratings over time, compared them with others', and even had
playful competitions to see who could increase their Klout score the
most over a couple of months.
When I first learned of Klout shortly after it came out, I didn't
think too much about it. It basically seemed like a one-number metric
to determine your influence online. As the years have passed since
Klout was launched, I see it more as an example of how a deeply
flawed model of the internet has been popularized.
In the early- and mid-1990s, the internet was a loose federation of
institutions, mostly in the .gov and .edu spaces. Email was still
highly decentralized, since webmail didn't exist yet. If you were 'on
the internet', it was likely through your university or research
institution, and they might offer you 'web space' where you could
publish some HTML files that constituted your website (much like the
site you're reading right now). It was a beautifully organic system,
but was still nascent, reserved mostly for the technical elite.
Fast forward to today, and we find that almost all of the power on the
internet has been consolidated. Home pages have been replaced by
social network profiles on corporate-controlled sites like Facebook
and Twitter. Rather than providing space and bandwidth, these
companies are 'identity brokers', a much broader and more lucrative role
than a simple web host. I've never heard that term used before, but
it seems apt.
But with this inevitable commercialization of identity online comes
second-tier services that analyze the resulting structure. Where
Google sought to analyze the inter-connectivity of the sites on the
internet, Facebook sought to rebuild the internet from the inside out,
replacing home pages and web hosts with profile pages that Facebook
could track metrics on, like popularity. As competition grew and
other niche networks joined the fray, an opportunity arose for
companies like Klout to act as a sort of meta-analyzer, analyzing
identity and profiles across social networks, delivering a satisfying
number letting you know your worth online.
As I was sitting at lunch earlier this week, a coworker asked me "Hey,
what do you think of Klout?" I paused only for a moment before
replying "Honestly? I think it's bullshit."
There are a couple of problems with Klout, one flowing from the other.
Klout is built on the idea that a person's influence online is
governed by the profiles they keep in large, corporate-controlled
networks. If a person doesn't buy into the notion that social
interaction online should be mediated by these identity brokers, Klout
simply ignores them. It's not the worst assumption to make, however,
in an age where The Pope has a Twitter account.
The second problem, an inevitable outgrowth of the assumption that
social networks are the source of influence, is that Klout completely
ignores other sources of influence. If someone has a blog with 500k
subscribers via RSS, it is invisible to Klout. If someone is a top
commenter on Slashdot, a top poster on Reddit, in the top 1% of
StackExchange users for C++, or has 10k followers on YouTube, they
might be completely invisible to Klout, and at the very least, all
those contributions online won't contribute to Klout's notion of their influence.
And that's the crux of the issue. My identity online is a handle I've
spent years building across dozens (probably hundreds, to be honest)
of sites. While it's in Facebook's and Google's and Twitter's best
interest for my identity to be tied to their service, I believe that
identity and influence cannot be so easily corralled.
And that's the real problem with Klout: it reinforces this notion that
the identity brokers dictate reality. When I joined the startup, a
senior employee told me "You don't exist online!" I was confused,
since I have at least two different blogs, and have fairly active
accounts with G+, Youtube, Reddit, Twitter, StackExchange, Slashdot
and GitHub. When I asked her what she meant, she simply said "You
have no Facebook profile!" Even in my case, where I maintain multiple
profiles on sites controlled by identity brokers, it still wasn't
enough; they have to be the right identity brokers.
05 Jan 2014
Back in February 2005, SHA-1 was broken. The core of
what "broken" means in this context is described very well by
Bruce Schneier in his post announcing the attack:
If you hashed 2^80 random messages, you'd find one pair
that hashed to the same value. That's the "brute force" way of
finding collisions, and it depends solely on the length of the hash
value. "Breaking" the hash function means being able to find
collisions faster than that. And that's what the Chinese did. They
can find collisions in SHA-1 in 2^69 calculations, about
2,000 times faster than brute force.
This was a major concern to me. It turns out that it's best to avoid
SHA-1 in a variety of contexts,
including cryptographic keys. Nevertheless, a bunch of
systems that rely on cryptographic security use SHA-1, including
checksums for Git objects and OpenPGP key fingerprints.
Even prior to the 2005 attack, Schneier pointed out that "It's time
for us all to migrate away from SHA-1." It was on this basis that I
almost switched to using SHA224 in my latest software. Almost.
To understand why I stuck with SHA-1, let's take a look at how
cryptographic hashes can be attacked.
The attack documented in 2005, like most hash attacks, was a
collision attack. Schneier didn't use that phrase when describing
it, but that's what he describes when he talks about finding a pair of
inputs that hash to the same value.
Collision attacks are the most common kind of attack against
cryptographic hashes because they don't restrict the set of inputs over
which the search for such a pair might be conducted. The reason
collision attacks are relatively easy is that any pair of inputs
that hash to the same value is acceptable; no restrictions are placed
on what those inputs are, or what the value of the hash is.
Aside: The Birthday Problem
A related concept is the birthday attack, which is related to the
birthday problem. The birthday problem is simple: how many people can
gather at a party before it becomes probable that two share the same birthday?
You can find the number fairly easily by calculating the likelihood
that, as the size of the party grows, two people won't share the
same birthday. Your first guest can have any birthday at all without
fear of sharing that birthday with another guest. But your second
guest can have any day except the day of the first guest's birthday,
leaving 364/365 possibilities. The third guest only has 363/365
possibilities, and so on. In imperative Python 3 code, it looks
something like this:
# The number of guests
guestcount = 1
# The chance of all guests having unique birthdays
uniquechance = 1
# Keep adding guests until the chance of them all being
# unique is less than 50%
while uniquechance > 0.5:
    uniquechance *= (365 - guestcount) / 365
    guestcount += 1
print(guestcount)  # prints 23
This code prints 23, meaning that once you have 23 guests at your
party, it's more than 50% likely that two guests will share a
birthday. The number is surprisingly small because we haven't
specified any day on which the collision must occur, in the same way
that the 2005 attack on SHA-1 doesn't restrict collisions to any
particular value. The same math explains the 2^80 figure in
Schneier's quote: SHA-1 produces a 160-bit digest, so a brute-force
birthday search is expected to find a collision after roughly
2^(160/2) = 2^80 hashes.
Unlike collision attacks (and the birthday attack), for many practical
applications of hashing, the value of the input and/or the hash
matters. When the input to the hash function constrains the search
for a collision, the hash must be broken using a preimage attack.
There are two types of preimage attack:
- (Preimage Attack) Given a hash, generate an input that hashes to that value.
- (Second Preimage Attack) Given an input, generate a second input
that hashes to the same value as the given input.
A preimage attack only requires that the attacker find a single input
that hashes to a given hash. Even so, it is a substantially harder
attack to mount than a collision attack.
A second preimage attack adds an additional constraint: given an input
and its hash, find a second input that hashes to the same value.
Because there are more constraints on the solution, this attack is
even more difficult.
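To make the difference concrete, here's a toy illustration that truncates SHA-1
to 24 bits so both searches finish in a reasonable amount of time; the expected
work is roughly 2^12 hashes for the collision search versus roughly 2^24 for the
preimage search:

# Toy illustration: a deliberately tiny 24-bit "hash" made by truncating
# SHA-1, so the gap between collision and preimage search is visible.
import hashlib
from itertools import count

def tiny_hash(data):
    return hashlib.sha1(data).digest()[:3]  # first 3 bytes = 24 bits

# Collision: any two inputs with the same digest (~2^12 tries expected).
seen = {}
for i in count():
    digest = tiny_hash(str(i).encode())
    if digest in seen:
        print("collision after", i + 1, "hashes:", seen[digest], "and", i)
        break
    seen[digest] = i

# Preimage: an input matching one fixed target digest (~2^24 tries expected;
# this loop can take a minute or so).
target = tiny_hash(b"a fixed message")
for i in count():
    if tiny_hash(str(i).encode()) == target:
        print("preimage after", i + 1, "hashes:", i)
        break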
So how broken is SHA-1?
So why did I stick with SHA-1? Well, the software I'm working on is
concerned with identifying public keys using a digest of the key
itself. When used in this context, such a digest is called a fingerprint.
OpenPGP uses a SHA-1-based hash to generate the fingerprint for a
public key. One of the worst-case scenarios is that a user, given the
fingerprint for a key, attempts to retrieve it from a remote source and
receives the wrong key, but with a fingerprint that matches, allowing
an attacker to read encrypted messages intended for someone else.
Since the attacker is given both a public key and its fingerprint and
needs to generate another public key that has the same fingerprint,
such an attack is a second preimage attack.
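For reference, the v4 fingerprint construction is easy to state (RFC 4880,
section 12.2): SHA-1 over the octet 0x99, a two-octet length, and the
serialized public-key packet body. A rough sketch, assuming key_packet_body
has already been serialized elsewhere:

# Sketch of the OpenPGP v4 fingerprint (RFC 4880, section 12.2).
# `key_packet_body` is assumed to hold the serialized public-key packet body.
import hashlib
import struct

def v4_fingerprint(key_packet_body):
    data = b"\x99" + struct.pack(">H", len(key_packet_body)) + key_packet_body
    return hashlib.sha1(data).hexdigest().upper()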
So, while SHA-1 is technically "broken" because of the 2005 collision
attack that reduced the search space by a factor of 2000, there are
still no known feasible preimage attacks (or second preimage attacks)
on SHA-1 that weaken it when it is used for key fingerprinting. I was
particularly hesitant to move away from SHA-1 for key fingerprinting
using OpenPGP because it is part of the standard, and I don't have any
interest in rewriting well-established cryptographic routines. I'd
rather reuse existing standards with an eye toward making them
accessible in new contexts. So, for now, SHA-1 it is.
04 Jan 2014
BusinessInsider posted an article a couple of days ago entitled
GOOGLE'S DIRTY SECRET: Android Phones Are Basically Used As
Dumbphones. I'll ignore the linkbait title and just address the
content (though I won't be linking to the article). There is really
one fact that forms the gist of the article. Here it is, in my words:
As much as 80% of the smartphone market is Android-based, but roughly
80% of the purchasing on smartphones is from iOS devices.
BusinessInsider simply says "It doesn't make any sense.", followed
shortly with "What the heck is wrong with Android users?"
The piece goes on to compare the smartphone market with an island, 20%
of which is occupied by Apple's "gleaming steel and glass tower",
while the 80% Android controls is "undeveloped countryside".
The article concludes with this gem:
But in the short-run, it seems like the users on the majority of the
island aren't interested in modern life.
Stark Pricing Differences
I don't have an MBA, but BusinessInsider's analysis is lacking. In
mid-November, IDC issued a press release regarding smartphone
markets. The release was picked up by a number of trendy tech blogs,
including Mashable and ReadWrite, as well as reputable
analysis sites like Statista. The IDC report said a number of
obvious and reasonable things that BusinessInsider failed to point
out. As far as spending discrepancies, one need look no further than
pricing to account for the difference. Android is just cheaper than
every other alternative, while iOS is much more expensive than every
other alternative. And the difference is stark. In Q3 2013, the
average selling price (ASP) for an iOS smartphone was $635, while the
ASP for an Android smartphone during the same period was $268 (a mere
42% of iOS's price). In China, where Android is even more prevalent,
the ASP for an Android phone is only $233. In short, Android
devices are simply targeting a different demographic than iOS devices.
This is nothing new. Apple has always targeted the wealthy.
Back in 2008, Apple had 8.7% of the desktop market share.
At the same time, Apple controlled 91% of the market for desktops
priced over $1000. That outlines Apple's strategy quite well: Apple
targets demographics that have a lot of money. It allows Apple to
spend a lot of money on design, build luxury devices, and market them
heavily. Apple rarely has a majority market share, but that doesn't
matter, because there's plenty of money to be made on the wealthy.
Smartphone Market Growth
The biggest factor that BusinessInsider failed to mention was the
growth of the smartphone market. The IDC report had this to say:
Android and Windows Phone continued to make significant strides in
the third quarter. Despite their differences in market share, they
both have one important factor behind their success: price," said
Ramon Llamas, Research Manager with IDC's Mobile Phone team. "Both
platforms have a selection of devices available at prices low enough
to be affordable to the mass market, and it is the mass market that is
driving the entire market forward.
ReadWrite's coverage of the IDC report expands on this point. Global
smartphone shipments grew 39% YoY 2012-2013, but you'd be hard pressed
to notice that growth in the U.S., which is a mature market with high
smartphone market penetration. For that reason, the U.S. isn't really
representative of the global smartphone market. As ReadWrite points
Yet between the relative affluence of the U.S., the mass of marketing
dollars spent to get American consumers to buy gadgets and the carrier
subsidy model that makes it easier to afford a new gadget, the U.S. is
a misnomer in the world economy.
I'm not sure 'misnomer' means what they think it means, but the point
is sound: the smartphone market is growing dramatically, and that
growth isn't coming from the U.S. As ReadWrite points out:
In the rest of the world, smartphone adoption is very clearly tied to
price. The cheaper smartphones are, the more likely that consumers in
emerging markets will be to purchase them.
It's in those price-driven markets overseas that Android is gaining so
much marketshare. The smartphone market can't grow if you're selling
$600 smartphones in areas with an average annual
income of $3000.
Shock: The Wealthy Can Spend More
BusinessInsider is rightfully focused on the worth of various market
segments. Nevertheless, I'd assert that saying people in developing
countries "aren't interested in modern life" because they don't live
in a "gleaming steel and glass tower" demonstrates poor taste.
It's not hard to look at the data and see why iOS users spend more
money: they have more money to spend. This is a major boon if you're
writing apps that cost money or operate under a freemium model where
revenue is driven by in-app purchases. For companies driven by
advertising, however, the playing field is more even. Correctly
targeted advertisements, even to people in lower income demographics,
can be quite valuable.
That's why Google's strategy with Android is significantly different
than Apple's strategy with iOS. Apple makes money keeping users on a
hardware upgrade treadmill. The fact that they can make money selling
developers' apps in a closed ecosystem while doing it is gravy.
In contrast, Google has bet on the web as an advertising medium. They
are most concerned with maintaining the ability to reach every user on
every platform with advertisements, which often involves
promoting web standards. But they are also concerned
with growing the overall market, which includes bringing smartphones
to developing countries and increasing global broadband penetration.
That's why Android is based on open-source, Google is
providing wireless internet in Africa, and Google is
focusing on providing ultra-high speed internet even
in mature markets. When the internet grows, so does Google.
Some might consider that a 'dirty secret', but to me, it's just common sense.
31 Dec 2013
Pebble is notable for being the smartwatch that started the trend.
Lots of news outlets are happy to report that the trend will only grow
in 2014, and the new smart watches will be better in every way. It's
fairly certain that many major technology companies, including Google,
Samsung, Apple and Qualcomm are already working on the next generation
of smart watch-like devices. Many folks think the era of the watch is
over, and wonder what the appeal of a smartwatch is, while others
think the idea is nice, but wonder how anyone could find something as
simple as the Pebble useful enough to bother with.
Back in October, Slashdot posted the latest in a series of questions
over the last few months posing exactly that question.
I'd like to hear from more people with smart watches who are happy
with them, to better understand the appeal.
Hopefully, I can help answer it.
Cell phones used to be just that, phones. When smartphones were
released, certain demographics (like myself) realized that 'phones'
could be less about talking to others, and more about having a
programmable, always-connected, touch-centric computer in your pocket.
With technologies like instant messaging, SMS and push notifications
for email, smartphones are wearable computers that receive,
filter and alert based on realtime information gathered from
around the globe. In short: the first step to becoming a cyborg.
Social standards evolve too, but never quite as quickly as technology.
On my phone, for example, I get notifications for email from two
personal accounts as well as a business account, two instant messaging
accounts, text messages, and breaking news alerts. I throttle
notifications for each of these by time of day and day of week, but
the problem is clear. As the volume of data flowing through my phone
increases, so does the cost of accessing it. In short, every time a
notification comes in, I have two choices.
I can pull my phone from my pocket, unlock it, pull down the
notifications drawer and check to see what came in, perhaps tapping
through to the corresponding application for more detail. This is
cumbersome and error prone for a number of reasons. If I'm in public
(say, on a train), it means I'm constantly pulling out my phone and
unlocking it. It's not only sometimes awkward, but also presents
something of a security risk, since it reveals my screen lock code to
anyone who happens to glance in my direction. In a more private
setting, it might just be annoying. If I'm in a conversation, or
perhaps a meeting, pulling out my phone to unlock it and check it is
often rude, and really does pull my attention away from something I
should probably be paying attention to.
My other option is to silence or ignore alerts that come in. This is
what most people do. You check the phone when you check it, and if
notifications come in, they can wait. This approach is reasonable,
but also somewhat retro; it contradicts the flow of technology. I
carry multi-gigahertz devices in my pocket, and I'd like to use that
power to collect information from the internet, filter it, and feed it
directly into my brain as it becomes relevant. That's sort of the
ideal. Simply deciding to only check email every 30 minutes provides
a solution to information overload, but it's not much better than the
late-1990s solution of polling for email in Eudora. I think we can do better.
Updates on Your Face
One solution that has everyone excited is the idea of putting
information directly on your face. Specifically, Google Glass. I
think Google Glass is a compelling idea, but is probably 10 years too
early. The current implementation only amplifies all of the social
issues of smart phones, and doesn't provide enough enhanced benefit to
justify the price. While the camera functionality is interesting, the
main feature of Glass is the ability to get information pushed into
your field of vision in a timely fashion.
Smart watches are a cheaper, more socially acceptable way to do just that.
Does that mean the Google Glass idea is fatally flawed? Hardly. The
idea is actually great. Ideally, you'd see it take the form of a
contact lens, or even direct optical nerve stimulation. Those
approaches are at least a decade off, however, and a watch is probably
a better approach today than a head-mounted display, at least for the
average consumer. It's less invasive, easier technologically,
cheaper, and socially well-understood, since it reuses a technology
that's been around for about 100 years: the wristwatch.
The Security Problem
I mentioned this earlier, but it bears more discussion. Since so much
data is flowing to and from my smartphone, and since it is almost
always with me, it presents a particularly large security risk.
Losing my phone or having it stolen would be a Big Deal.
The security problem is an issue with any device, usually proportional to:
1. How connected that device is to all your online accounts, and
2. How much data it stores on it, and
3. How often you have it with you.
According to these criteria, laptops and smartphones present the
largest security risk, and smart watches appear to be poised to join
them. How can we mitigate this?
The approach Pebble adopts is to attack (1) and (2). The whole point
of a smartwatch is to always be with you, right there on your wrist.
So (3) is hard to address. But if the smart watch is only connected
to your phone, then it need carry no special account credentials. If
it is stolen, your accounts aren't compromised.
The other problem is that all the updates that are pushed to the watch
could be stored there, readily accessible. Call logs, chats, and
email all flow through the device. Pebble made the bold decision to
store nothing; once an alert is shown and dismissed, there is no
mechanism to retrieve it.
The upshot of Pebble's approach is that if you lose the Pebble, or it
is stolen, security is simply a non-issue.
The temptation for a watch to do so much more is there, of course, and
we'll discuss that in a bit.
Updates on Your Wrist
If you can address the security concerns that go along with having a
proliferation of mobile devices, perhaps the appeal of a smartwatch
becomes clearer: instantly accessible alerts. But how
valuable are such alerts? They matter enough to me that they changed
the way I think about alerts.
One key aspect of the Pebble is that it is silent. Always. Because
it is strapped to my wrist, I always leave notifications 'on', that
is, the watch vibrates when a new notification comes in from one of
the apps that I've whitelisted. At that point, I can make a decision
about whether or not I want to look at the alert. The message will
remain on the Pebble's screen for a while. Maybe 60 seconds. So even
if that particular moment isn't ideal, I can glance at my watch
anytime in the next 60 seconds and find out what the alert was.
Since checking is so low friction, I get updates in a much more timely
fashion. Whether I'm at dinner, in a meeting, on a train or walking
down the sidewalk, my phone stays in my pocket, but I can triage
incoming messages in around one second, often without stopping
whatever I'm doing. The end result is that I have my phone out much
less than most folks walking down the street, in meetings, or on the train.
One of the upshots of these low-friction updates is that I'm more
tolerant of them, because they don't cost me much. Suddenly, apps
like the New York Times and Breaking News were hugely useful, allowing
my Pebble to intercept top headlines in real time and push them to my
wrist. Whether it's the latest information about the government
shutdown, the NSA surveillance, a major acquisition or a product
release, I know about it without having to check news sources.
The Pebble also simplifies management of my phone's state. Since I
get notified of everything via Pebble, my phone is in a perpetual
silent-no-vibrate state, so I don't have to worry about silencing it
during various events in my day, like meetings. If I want to tune out
the net, I just take off my watch and put it on my desk, and suddenly
I'm disconnected. When I want to reconnect, I put my watch back on,
and I'm back in the game. The entire system is incredibly simple and effective.
Less is More (Battery Life)
One concern with yet-another-device is battery life. Another thing to
charge and/or sync is a big tax, so it's important that each
additional device be simple to maintain.
The Pebble requires no syncing, and does remarkably well on battery
life, especially when compared with other devices in its class.
Google Glass has a notoriously short battery life that tops out around
12 hours. The Galaxy Gear, Samsung's smartwatch offering, is reported
to have similar battery limitations, with numbers ranging from 10 to
24 hours. The Pebble, in contrast, lasts anywhere
from 6 to 8 days. Not having another thing to worry about charging
each night in addition to my phone and tablet makes the offering much more attractive.
But how does Pebble achieve such long life? The team building Pebble
was intensely focused on delivering just one feature: updates on your
wrist. They had the discipline to say "no" to dozens of features that
would dilute the utility of the device. While porting Android or iOS
to a watch might seem like an obvious move for Google, Samsung and
Apple, the Pebble team used a much simpler embedded operating system.
Rather than soup up the watch with dozens of sensors, they gave it
just three buttons and an accelerometer. No altimeter. No compass.
No wifi. No NFC. An argument could be made for each of these
features, but the right answer is to just deliver messages reliably to
your wrist. If you want all those other things, you've got a phone.
Enlightenment Through Minimalism
I backed the Pebble team on Kickstarter because I was impressed with
both their experience and dedication in the development of viable
prototypes, and their intense focus on
delivering the core aspects of the smartwatch vision they'd refined
over years of testing. They had thought about all the problems with a
smartwatch, and rather than trying to stick every bell and whistle on
the device to add another bullet point to the feature list, they
developed a vision for what a smartwatch should be.
The discipline the team demonstrated in sticking to the vision is
admirable. And the results equally so. So when you ask yourself why
you'd get a Pebble instead of some other smartwatch, look at all the
bullets that aren't on the Pebble's feature list. That's how you
can tell they did it right.