Creation and consumption

There's a pretty common argument in tech that though of course there are billions more smartphones than PCs, and there will be many more still, smartphones are not really the next computing platform, just a computing platform, because smartphones (and the tablets that derive from them) are only used for consumption where PCs are used for creation. You might look at your smartphone a lot, but once you need to create, you'll go back to a PC. 

There are two pretty basic problems with this line of thinking. First, the idea that you cannot create on a smartphone or tablet assumes both that the software on the new device doesn't change and that the nature of the work won't change. Neither is a good assumption. You begin by making the new tool fit the old way of working, but then the tool changes how you work. More importantly, though, I think the whole idea that people create on PCs today, with today's tools and tasks, is flawed - and so, I think, is the idea that people aren't already creating on mobile. It's the other way around. People don't create on PCs - they create on mobile. 

There are around 1.5bn PCs on earth today (using the term 'PC' in the broad sense, covering Wintel, Mac and Linux). Maybe as many as 100m PCs are being used for some kind of embedded product: elevators, points of sale, ATMs, machine tools, security systems etc. Setting those aside, the rest are split roughly evenly between corporate and consumer, and many of these (especially the consumer ones) are shared, such that something over 3bn people use a PC in some way. But what are all those PCs being used for?

It's pretty clear that only a small proportion are actually being used for professional applications. Perhaps 50m people are using everything from Adobe to Autodesk to software development tools; adding in Office users is more complex, since there are notionally a billion installed copies, but ‘power’ Office users probably number a further 25-50m. So, there are perhaps 100m people who today engage in some form of complex creation using what one might call 'sophisticated professional software' on a windows + mouse + keyboard-based personal computer. (I’ve outlined my workings and sources for this at the bottom). 

If less than 10% of PCs are actually doing professional, precise, complex creation, what are the other 90% being used for, if not creation?

Well, they do email, and the web. Some of the consumer ones also play games - there are over 125m 90-day active Steam accounts (which would be under 20% of consumer PCs - one could look at this as an analogue of the professional creation app users, except that there's probably a substantial overlap between the two sets). They do Facebook and buy groceries. The corporate ones perhaps do accounts payable and customer support, and SAP or Salesforce or SuccessFactors or dozens of other vertical business process applications. Many of those applications will still be around in a decade or two (if they’ve not been replaced by machine learning) - they might move to SaaS web apps if they're not there already, and might be accessed on Chromebooks or Android tablets or iPads or just on $250 Windows boxes, but it doesn’t really matter. They don't need a (user-accessible) file system and they don't need a 'precision pointer', a complex multi-window interface and all the other things that separate ‘real computers’ from the new generation, any more than email or a web browser does. Quite a lot of them just need a Gmail box. They probably need a biggish screen and perhaps a keyboard, but that’s not what makes a ‘PC’. 

Conversely, what is being done on ‘phones’ - or rather, on these small touch-screen computers that we all carry around with us? We write - people have been writing more on phones than on PCs since the days of SMS - and we share, take pictures, create videos, play games and talk to our friends. That is, we do most of the things that those 90% of PCs are used for, but we also do everything that you can do with a touch screen and an internet-connected image sensor, and GPS, and all the other things a PC doesn't have, plus everything you can do with the billions of app downloads.  

The big difference on mobile is that now people know how to do this. In my first term at Cambridge, in 1995, I explained to a future president of the Union that though he had been told to ‘download Netscape’, clicking repeatedly on the ‘download’ graphic on Netscape’s site had merely put 15 copies of the installer file onto his desktop, and he would also have to 'double-click' on one of them. That was pretty typical - installing software by yourself that added capabilities to your computer, or edited video (something every ten-year-old now does all day), was something for experts. More recently, I’ve seen data suggesting that a large proportion of people who owned digital cameras never loaded the pictures onto a computer (even if they owned one). They looked at the pictures on the camera screen, or got them printed at a kiosk - but they didn't print them until the card was full, as they often thought that you couldn’t add more pictures to the card after you’d ‘developed’ it in this way. My father-in-law prints things out by taking a photo of the computer screen and then taking his camera to the kiosk in the supermarket. This piece from NNG last year provides some handy quantification of what computer literacy really looks like. These kinds of questions start to go away with mobile.

It seems to me that when people talk about what you ‘can’t’ do on a device, there are actually two different meanings of ‘can’t’ in computing. There is ‘can’t’ as meaning the feature doesn’t exist, and there is ‘can’t’ as meaning you don’t know how to do it. If you don’t know how to do it, the feature might as well not be there. So, there is what an expert can’t do on a smartphone or tablet that they could do on a PC. But then there are all of the things that a normal person (the other 90% or 95%) can’t do on a PC but can do on a smartphone, because the step change in user interface abstraction and simplicity means that they know how to do it on a phone and didn’t know how to do it on a PC. That is, the step change in user interface models that comes with the shift from Windows and Mac to iOS and Android is really a shift in the accessibility of capability. A small proportion of people might temporarily go from can to can’t, but vastly more go from can’t to can. 

Meanwhile, where there are 1.5bn PCs, many of them shared, there are today around 3bn smartphones, and this will rise to 5bn or more in the next few years, out of 5.5bn people on Earth aged over 14. There is a meaningful grey area around how much some of these people can pay for connectivity, or even to charge their phones, but the price and distribution of smartphones mean that billions more people will use smartphones for something than ever used a PC for anything at all. 

So, 100m or so people are doing things on PCs now that can't be done on tablets or smartphones. Some portion of those tasks will change and become possible on mobile, and some portion of them will remain restricted to PCs for a long time. But there are another 3bn people who were using PCs (mostly sharing them) without doing any of those things, and who are now doing on mobile almost all of the stuff that they actually did do on PCs, plus a lot more. And there are another 2bn or so people whose first computer of any kind is or will be a smartphone. 'Creation on PC, consumption on mobile' seems like a singularly bad way to describe this: vastly more is being created on mobile now, by vastly more people, than was ever created on PCs. 


Notes

  • Adobe reported 4.25m Creative Cloud subscribers a year ago, and has a roughly equivalent base of legacy non-subscribing users

  • Autodesk has 2.5m subscriptions and 2.6m non-subscribers

  • Matlab has 'millions' of users

  • In 2014 Apple said it had "more than 1m installs of Final Cut Pro"

  • Stack Overflow estimates 16m ‘English-consuming’ developers and 40m people who look at code, but, on the other hand, the US Department of Labor counts about 1.1m developers in the USA, which suggests that Stack Overflow's estimate is very much an upper limit

  • Apple has 13m registered developers

Some of these numbers are a little old or fuzzy, while some user bases will overlap and some won't, and obviously people use other applications that aren't on this list. But taken together, they suggest that the number of people who use professional, complex PC-based applications to do 'real work' is something around 50m. Meanwhile, Apple said recently that around 15m Macs are used at least weekly for 'high-performance' applications (software development, video editing, 3D graphics etc) and 30m are used occasionally for those tasks, which tends to confirm this number. (Note that though Macs have a very small share of the overall PC market, they have a much higher share in creative industries and software development.)
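
As a very rough illustration of how those bullets add up, here is a minimal back-of-envelope sketch in Python. The individual figures come from the notes above; the Matlab number, the overlap discount and the allowance for applications not on the list are purely illustrative assumptions, not measured data:

```python
# Back-of-envelope tally of 'professional creation software' users, in millions,
# using the (old, fuzzy) estimates from the notes above. The Matlab figure,
# the overlap discount and the 'other apps' allowance are illustrative assumptions.

estimates_m = {
    "Adobe Creative Cloud subscribers": 4.25,
    "Adobe legacy non-subscribing users": 4.25,   # 'roughly equivalent base'
    "Autodesk subscriptions": 2.5,
    "Autodesk non-subscribers": 2.6,
    "Matlab users": 3.0,                          # assumption: 'millions' ~= 3m
    "Final Cut Pro installs (2014)": 1.0,
    "Developers (Stack Overflow upper limit)": 16.0,
    "Apple registered developers": 13.0,
}

raw_total = sum(estimates_m.values())             # ~47m before any adjustment

# Some of these bases clearly overlap (Apple developers are also counted by
# Stack Overflow, many Adobe users also use Autodesk or Final Cut, etc.),
# while plenty of professional applications aren't on the list at all.
overlap_discount = 0.3        # assumption
other_apps_m = 15.0           # assumption

adjusted_total = raw_total * (1 - overlap_discount) + other_apps_m
print(f"raw: ~{raw_total:.0f}m, adjusted: ~{adjusted_total:.0f}m")
# Either way this lands in the same ballpark as the ~50m used in the text.
```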

The outlier is Office, for which Microsoft has reported an installed base of around 1bn. However, as we all know, OEM bundles and corporate deals mean that Office is on an awful lot of computers on which it isn't really used (when a university with 20k people or a company with 250k employees takes out a license, those people are all counted as Office users), and, more importantly, a huge number of Office users actually use it in pretty limited ways, which is why things like Google Docs exist. Microsoft doesn't release the telemetry that would tell us how many people could be called power users, but we can take a proxy by looking at some of the professions that imply power use:

  • There are 660k CPAs in the USA and perhaps 150k ACAs in the UK.

  • 100k MBAs are awarded in the USA each year, so several million people in the USA have an MBA, and at most several times that globally (estimating this is, of course, a good MBA interview question in itself)

  • 16k people passed the CFA Level III last year

  • There are only 1.3m lawyers in the USA (in practice, a US law degree is often a disguised MBA) and 120k solicitors in the UK

This gets us to millions, or low tens of millions at most, of people who actually know how to make a chart in Excel without touching the mouse, or who understand how track changes works.
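
Again as a sketch rather than a measurement, the same back-of-envelope arithmetic for those proxies looks something like the following; the MBA stock and the global multipliers are my assumptions layered on top of the figures above:

```python
# Proxy tally of Office 'power users', in millions, using the professions
# listed above. The MBA stock and the global multipliers are assumptions.

us_uk_proxies_m = {
    "US CPAs": 0.66,
    "UK ACAs": 0.15,
    "US MBA holders": 3.0,               # ~100k awarded per year over decades
    "CFA Level III passes (one year)": 0.016,
    "US lawyers": 1.3,
    "UK solicitors": 0.12,
}

us_uk_total = sum(us_uk_proxies_m.values())    # ~5m in the US and UK alone

# Assumption: the rest of the world adds perhaps 2-4x the US/UK figure.
global_low = us_uk_total * 3
global_high = us_uk_total * 5
print(f"US/UK: ~{us_uk_total:.1f}m; global: roughly {global_low:.0f}-{global_high:.0f}m")
# i.e. millions, or low tens of millions at most.
```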

Meanwhile, there are all sorts of PCs being used for embedded applications of various kinds. Some of these are very visible but actually quite small as a percentage of the total PC base: PCs are inside pretty much every ATM or elevator, but there are only about 5m ATMs on earth and perhaps 10m elevators (there are 1m in the USA). Points of sale are a much bigger use case: 45m merchants accept Visa cards, and the top decile of these (Wal-Mart etc) have tens or hundreds of thousands of units, mostly PCs, though the tail often has only a simple terminal. Adding all of these use cases together might get you to 50-100m units. These are neither creation nor consumption, and over time a lot of them will convert to Android.
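
For completeness, the same kind of sketch for the embedded base; the ATM and elevator counts come from the paragraph above, while the point-of-sale range is an illustrative assumption about how many of those 45m merchants run PC-based tills rather than simple terminals:

```python
# Rough count of PCs in embedded roles, in millions, using the figures in the
# paragraph above. The point-of-sale range is an illustrative assumption.

atms_m = 5          # ~5m ATMs worldwide
elevators_m = 10    # ~10m elevators (1m in the USA)

pos_low_m, pos_high_m = 35, 85    # assumption: PC-based points of sale

low = atms_m + elevators_m + pos_low_m
high = atms_m + elevators_m + pos_high_m
print(f"embedded PCs: roughly {low}-{high}m")   # ~50-100m, as in the text
```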