The path for technology for the next decade? Here are 5 guesses.

Apple isn't known for being humble in its keynotes. It modestly grades its own products on a scale that runs from "amazing", "phenomenal", and "magical" all the way up to "revolutionary" and "insanely great". Still, phrases like "setting the path for technology for the next decade" are reserved for big occasions.

Steve Jobs used such expressions in his comeback period when talking about NeXTStep, which became Rhapsody, which became OS X. He called the PowerPC-to-Intel transition a "brain transplant" to "set Apple up for the next decade". And he used it when introducing the iPhone ("5 years ahead of the competition"). But that's about it. "The next decade" remains an expression that hasn't been watered down yet by overuse. And when Apple does choose to use it, it's usually justified in hindsight.

And still, The New Yorker called the iPhone X novelties mundane. Forbes agreed with Apple in a narrow way, saying the iPhone X would change the way we use a smartphone. Others have called it underwhelming compared to the latest Samsung. You're not seeing the big picture here, Miss Lane.

So what is Tim talking about?

If you look at the novelties in the iPhone X vs. the iPhone 7, the most heavily marketed elements are the removal of the Home Button, the introduction of Face ID, and Animoji. These three are hardly decade-defining stuff. Other smartphones have shipped without home buttons; if anything, Apple is late to that particular party. Face ID looks great, and although it is an impressive piece of technology, it's arguably just a marginal improvement over Touch ID in terms of end-user value: it doesn't enable many new use cases, it just shaves a second off the existing ones. And Animoji may end up being popular, but they may also end up like the Digital Touch heartbeat on the Apple Watch - a try-it-twice gimmick.

So no, the big picture is not the Home Button, Face ID, or Animoji.

The thing in the iPhone X that sets the path for technology for the next decade is the array of sensors that enables all of this. And by extension, the A11 Bionic processor (with its "neural engine") that processes the sensor input, even though the A11 also ships in the iPhone 8. I would also include related iOS frameworks such as ARKit, Core ML, and Vision, even though these were announced at WWDC. Together, they form a cohesive larger vision where the whole is more than the sum of the parts.
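
As a concrete illustration of what that sensor array hands to developers, here is a minimal Swift sketch of ARKit face tracking - the class and variable names are my own, but the ARKit calls are real. The TrueDepth camera surfaces as a continuously updated face anchor with expression coefficients, which is the raw material behind Animoji-style apps.

```swift
import ARKit

// Minimal sketch: receiving TrueDepth-based face tracking data via ARKit.
// Requires a device with the TrueDepth camera (iPhone X at the time of writing).
class FaceTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // ARKit delivers an ARFaceAnchor: a 3D face mesh plus "blend shape"
    // coefficients describing the current facial expression.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
            let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            // An Animoji-style app would drive a character rig from these values.
            print("jawOpen: \(jawOpen), smileLeft: \(smile)")
        }
    }
}
```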

In a way, the iPhone X serves Apple like the Apollo program - a moonshot. The big return isn't so much in the achievement itself, but in galvanizing contributors from different fields around a clear and tangible use case. The introduction of depth and infrared sensors, combined with the neural networks to interpret all that data, the silicon to run them on, and the APIs that expose all of this to developers, is a very powerful step forward indeed. Apple can deliver that in a vertically integrated way that Google and Samsung may have a tough time competing with.

That competitive advantage is what Tim Cook is betting Apple's future on. Apple has accepted that the next 2 decades will be dominated by advances in AI, a game that many had assumed Google would win easily. But while Google may be better positioned for deep learning, i.e. training a network to recognize a face, Apple may be better positioned to apply an already-trained network to new input on a fully integrated device. Which is more important, being able to better train the model or being able to better apply it? Time will tell.
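
To make that distinction concrete: the training happens in a data center, but applying the already-trained model is something the phone does locally. A minimal Swift sketch of on-device inference with Core ML and Vision might look like the following, where FlowerClassifier is a placeholder for any hypothetical trained model bundled with the app; only the Vision and Core ML calls themselves are real.

```swift
import CoreML
import Vision
import CoreGraphics

// Minimal sketch: applying an already-trained Core ML model to new input.
// "FlowerClassifier" stands in for any Xcode-generated model class.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: FlowerClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        // The phone never sees the training data; it only runs the trained network.
        print("\(best.identifier): \(best.confidence)")
    }
    // Vision handles scaling and color conversion before the model runs.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```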

In any case, the iPhone X is the first step in delivering on Tim Cook's earlier comments that AR was "big and profound" and that the car project was very interesting "from a core technology point of view". Just like "Hey Siri" and the HomePod, Face ID forces us to shift our perception of our digital devices from tools that we pick up and put down to tools that are permanently aware of their environments through their own eyes and ears.

Where might this new iPhone X tech go from here, in the next decade?

First, one logical step would be to apply the iPhone X tech to the iPad. Getting an iPad X with Face ID at WWDC 2018 wouldn't be unexpected. There is a caveat here though - the way people use an iPad is very different from the way people use an iPhone. The Animoji and Snapchat use cases that Apple demoed for the iPhone will feel less at home on an iPad - leaving us with only Face ID.

But Apple wouldn't be Apple if they didn't come up with a way to make this Kinect-like technology feel right at home on the iPad. One such approach might be to introduce air gestures - just flick your fingers in front of the camera to scroll through lists, or pinch the air to zoom. There should be little doubt that Apple experiments with such things; the question is more whether, having tested them in real-life interactions, they consider them good enough to release.

Second, if it is indeed something they choose to release, a logical next step could be the Mac. iMacs and MacBooks have screens that could easily accommodate the iPhone X front-facing sensors to allow Face ID on macOS, and the way people normally interact with Macs makes them much more suitable for air gestures than for touchscreens (a concept that Apple has always resisted).

Bringing Face ID to the Mac might require an embedded ARM chip running a version of watchOS or iOS, like the hybrid security approach they took for the Touch Bar. If they do indeed decide to bring some type of air gestures to the Mac, that might even be a good moment to simultaneously introduce a UIKit derivative to replace AppKit, something I've written about before.

Mac Pros would be a bit of a different story given the prevalence of third-party displays - perhaps some type of Kinect-like sensor bar could be an option there? Such a sensor bar might also be a way to bring air gestures to the Apple TV. It wouldn't be that much of a stretch from the focus-oriented tvOS controls with the Siri Remote. And it just might (you never know) even be what Steve Jobs was referring to years ago when he told Walter Isaacson he had "finally cracked it".

A third step would be to bring the same sensors to the back of the iPhone. People have been asking on Twitter why Apple didn't just put the sensors on the back to begin with. I think the reason lies not with the sensors themselves, but with building the machine learning models to recognize everything the rear camera might see. You know what the selfie camera will see: a face. That's not easy, but it's a narrow use case to work on. The rear camera can see a room, a landscape, a car, a tree, somebody else's face, ... That's a lot of things to recognize and turn into a 3D model for developers to interact with. Apple did the smart thing by starting small.
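
For comparison, here is a minimal Swift sketch of what the rear camera already gives developers today: ARKit world tracking with plane detection (again, only the ARKit calls are real; the class name is my own). Flat surfaces and feature points, not yet a full semantic model of the scene.

```swift
import ARKit

// Minimal sketch: world tracking with the rear camera.
// ARKit currently reports detected planes and feature points,
// not a semantic understanding of rooms, cars, or trees.
class WorldTrackingController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        session.delegate = self
        session.run(configuration)
    }

    // Each detected surface arrives as an ARPlaneAnchor with a position and extent.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected a surface of \(plane.extent.x) x \(plane.extent.z) meters")
        }
    }
}
```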

Fourth, the car. It's been clear for a while that Apple is building something there, and sensors play a big part in that. But they won't be reliable enough from day one. They need to get their feet wet with face recognition, then gradually increase the scope to hand gestures, and then move towards full real-world modelling. Once the models are mature and reliable enough, and after they've seen enough failures in safe conditions, only then can they be applied to what are essentially life-or-death situations. Apple can't afford to launch a car if the software can't reliably distinguish a moving car from a parked one. I would expect scaled-up versions of the same sensors that now sit on the front of the iPhone X to appear at every corner of an Apple car.

And finally, glasses. This has been discussed less frequently than the car project, but there seems to be little doubt that Apple is working on glasses. It would be no surprise at all if the sensors that line the front of the iPhone X also line the front of their AR glasses, feeding the apps inside with a continuously updated model of the real world for developers to interact with. Hello, WorldKit.

One might think that glasses should be a smaller project than the car - after all, they're a much smaller object physically, and a lot cheaper. But I think the glasses are actually the bigger platform - they enable many more use cases, while the car is focused on bringing someone from A to B. On top of that, Apple has a bigger competitive advantage in glasses than it does in cars. Glasses play more to its consumer marketing strengths, and to its supply chain and manufacturing expertise. And while there is no clear market leader in smart glasses, there is no shortage of competition in cars.

The glasses are Apple's blue ocean strategy. And they just dipped their toe in the water.

Just my two cents.