Cards, code and wearables

Google has announced 'Android Wear', a new extension of Android to power smart watches (it also released some teaser renders of Motorola smart watches that are due this summer). The Wear concept is that smart watches are remote touch displays for an Android smartphone. They show the time, accept touch and voice input, display the Google Now feed and surface all the notifications that apps on your phone produce. Developers have options (which will be enhanced in future) to customise how their phone apps' notifications behave on the watch. But they don't get native code at all - the developer isn't running code on the watch. The device is an extension of the phone's Android OS itself, not an extension of your app.
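To make the developer's side concrete, here is a minimal sketch using the Android support library's NotificationCompat.WearableExtender (ReplyActivity and the icon resources are placeholders for an app's own code): the app decorates the notification it already posts on the phone, and the system bridges it to the watch. No code is installed on the watch at all.

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class WearNotificationExample {

    // Posts a normal phone notification, extended with an action that only
    // appears on a paired watch. Nothing here runs on the watch: the system
    // bridges the notification over on its own.
    // (ReplyActivity and the R.drawable.* icons are placeholders for the
    // app's own activity and resources.)
    public static void notifyWithWearAction(Context context) {
        PendingIntent replyIntent = PendingIntent.getActivity(
                context, 0, new Intent(context, ReplyActivity.class), 0);

        NotificationCompat.Action watchAction =
                new NotificationCompat.Action.Builder(
                        R.drawable.ic_reply, "Reply", replyIntent).build();

        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_message)
                .setContentTitle("New message")
                .setContentText("Lunch at 1?")
                // Everything watch-specific hangs off the extender; the phone
                // notification itself is unchanged.
                .extend(new NotificationCompat.WearableExtender()
                        .addAction(watchAction))
                .build();

        NotificationManagerCompat.from(context).notify(1, notification);
    }
}
```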

In effect, the watch is a device for using Google Now and cards that apps on the phone send to it. 

Now contrast this with the rumours of a new Apple 'Healthbook' app. I hate speculating upon Apple rumours, because they could come true next week, next year or never, but they provoke an interesting idea. 

Suppose, for the sake of argument, that Apple does indeed plan a health app that's card-based, somewhat like Passbook. What would happen when you buy and turn on a blood pressure monitor that's certified for 'Healthbook'? Well, one would expect Apple to use the Bluetooth LE auto-discovery that's already in iOS 7 to detect it automatically and tell you. And then suppose it offers to install the Healthbook card to manage it (either from iTunes or from the device itself) - an HTML/JavaScript package that runs in the Healthbook sandbox in some way. Suppose it does the same for any sensor you might buy? Then Apple has created a zero-setup platform for personal health devices. No apps, no native code, no app store, no configuration at all.
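The plumbing that makes 'zero setup' plausible is that Bluetooth LE health devices advertise standard, SIG-assigned services - a certified blood pressure cuff announces the Blood Pressure service (0x1810) no matter who made it. Apple's side of this is pure speculation, so purely as a hypothetical sketch, here is the equivalent discovery step written against Android's public BLE API (the iOS version would use CoreBluetooth): the OS scans for the standard service and can recognise the device class before any vendor app is installed.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import java.util.UUID;

public class SensorDiscovery {

    // The Bluetooth SIG's standard Blood Pressure service: every certified
    // cuff advertises this same UUID, regardless of manufacturer.
    private static final UUID BLOOD_PRESSURE_SERVICE =
            UUID.fromString("00001810-0000-1000-8000-00805f9b34fb");

    public static void scanForCuffs() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        adapter.startLeScan(
                new UUID[]{BLOOD_PRESSURE_SERVICE},
                new BluetoothAdapter.LeScanCallback() {
                    @Override
                    public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
                        // The OS now knows 'a blood pressure monitor is nearby'
                        // and could offer to fetch the matching card for it -
                        // no vendor app, no pairing wizard, no configuration.
                    }
                });
    }
}
```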

This would be one answer to why Apple's recent hires of 'wearables experts' sound a bit like a team for a hospital device that measures various quite technical things, rather than a team for a watch - because Apple plans to enable such devices, not try to pack every single one into its own device. That is, the straightforward sensors should live in the phone (like the pedometer that's already in the iPhone 5S) and the complex and demanding ones should be enabled by an Apple platform, not become part of an Apple device.

Today you can manage a bunch of health sensors with a bunch of apps, but that seems less... obvious, to use an 'Ive-ism'. If I have a wearable sleep sensor, a pedometer in my phone and a WiFi scale (without even getting into glucose meters and more specialised things), should that be three apps that I install separately and open separately? If I buy a small computer I wear on my wrist, should it run apps (especially given that, with the current state of technology, it'll need to use your phone to go online anyway)? If you have multiple devices, where should the code live, and how do you shape the user flow based on what makes sense rather than on where you put the code? Does a sensor need a screen? Does a screen or a sensor need to be smart? Is the right UI something totally custom that's installed from a store, or something more standardised?

This question of where the code lives also, of course, applies to TVs and to cars as much as to wearables. With Apple TV, Chromecast and CarPlay, Apple and Google are saying that though everything is becoming a computer, the 'smart' part should be concentrated in the smartphone or tablet - something that's easy to update, that's replaced every couple of years and that has a rich touch interface - and everything else should be a dumb sensor or dumb glass, or both. And so the apps should only be in one place, and whether that thing should be an 'app' in a strictly technical sense is also up for debate.

During the 'apps versus HTML' argument of a year or two ago, someone said that the issue is not what coding language you use but how you get an icon onto the user's home screen - and whether indeed they want your icon on their home screen. The conversation more or less crystallised around the position that apps are for the head of the tail and the web is for the rest. But Android Wear is not the web or an app. Neither is Google Now, and neither is the Healthbook I just described.

Now, suppose you hesitate outside a restaurant and look at your phone, and iBeacon has already activated a Yelp review card on your phone or watch, or Google Now has put a scraped review up, or Facebook tells you 10 of your friends liked it? Is that the web? Or apps? How do you do SEO for that? What's the acquisition channel? Some of that might be HTML, but you'll never see a URL. 

It seems to me that the key question this year is this: now that the platform war is over and Apple and Google have won, what happens on top of those platforms? How do Apple and Google, but also a bunch of other companies, drive interaction models forward? I've said quite often that on mobile the internet is in a pre-PageRank phase, lacking the 'one good' discovery mechanism that the desktop web had, but it's also in a pre-Netscape phase, lacking a dominant interaction model in the way that the web dominated the desktop internet for the last 20 years. Of course that doesn't mean there'll be one, but right now everything is wide open.

This thought, incidentally, is one of the things that prompted this tweet. 
