Apple’s Core ML Brings AI to the Masses

AI is one of the core themes we focus on at Loup Ventures. As analysts, we have heard Google, Facebook, Amazon, and Apple emphasize their focus on AI over the last several years. Google CEO Sundar Pichai has said on each of the past three Google earnings calls that Google is transitioning from mobile-first to AI-first. Facebook has recently spent significant time and resources developing chatbots on its platform and has used AI to improve its news feed and photo recognition. Amazon uses AI extensively for recommendations and is integrating third-party AI models into AWS. While Google, Facebook, and Amazon are each making significant progress in AI, it’s worth noting that Apple was the first of the four to embrace it.

Apple’s AI roots date back to the early 1990s with handwriting recognition on the Newton. In June, Apple announced Core ML, a platform that allows app developers to easily integrate machine learning (ML) into an app. Of the estimated 2.4 million apps available on the App Store, we believe fewer than 1% leverage ML today – but not for long. We believe Core ML will be a driving force in bringing machine learning to the masses in the form of more useful and insightful apps that run faster and respect user privacy.

Apple’s history in ML. Apple’s history in ML dates back to 1993 with the Newton (a PDA Apple sold from 1993 to 1998) and its handwriting recognition software. While this is not a complete list, Apple has since used AI in the following areas:

  • Facial recognition in photos
  • Next word prediction on the iOS keyboard
  • Smart responses on the Apple Watch
  • Handwriting interpretation on the Apple Watch
  • Chinese handwriting recognition
  • Drawing based on pencil pressure on the iPad
  • Extending iPhone battery life by modifying when data is refreshed (hard to imagine, but our iPhone batteries would be even worse if not for AI)

Core ML. Core ML was announced at Apple’s June 2017 WWDC conference. It’s a machine learning framework that sits below apps and third-party, domain-specific AI models but above the processing hardware inside a Mac, iPhone, iPad, Apple Watch, or Apple TV.

[Diagram: Core ML in Apple’s software stack. Source: Apple]

Core ML allows app developers to easily incorporate third-party AI models into their apps. App developers don’t need to be experts in AI and ML to deliver an experience powered by AI and ML within their app. In other words, Apple takes care of the technical side of incorporating ML, which allows developers to focus on building user experiences.
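To make that concrete, here is a minimal sketch of what such an integration can look like in Swift, using Apple’s Vision framework on top of Core ML. The model name FlowerClassifier is hypothetical; it stands in for any Core ML model file a developer drops into an Xcode project, for which Xcode auto-generates a Swift class of the same name.

```swift
import UIKit
import CoreML
import Vision

// Minimal sketch: classify a UIImage with a bundled Core ML model.
// "FlowerClassifier" is a hypothetical model added to the Xcode project;
// Xcode generates a Swift wrapper class with the same name.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    // Wrap the Core ML model in a Vision request and handle its results.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Label: \(best.identifier), confidence: \(best.confidence)")
    }

    // Inference runs on-device; the image never leaves the phone.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The developer never touches the model’s internals; Core ML handles loading the model and running inference locally on the device.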

At WWDC, Apple outlined 15 ML domains whose models can be converted to work in apps:

  • Real Time Image Recognition
  • Sentiment Analysis
  • Search Ranking
  • Personalization
  • Speaker Identification
  • Text Prediction
  • Handwriting Recognition
  • Machine Translation
  • Face Detection
  • Music Tagging
  • Entity Recognition
  • Style Transfer
  • Image Captioning
  • Emotion Detection
  • Text Summarization

What’s different about ML on Apple versus Android? Google provides TensorFlow compiling tools that make it easy for Android developers to integrate ML into their apps. Developer blogs suggest that Core ML makes it easier to add ML models to iOS apps, but we can’t speak to the comparative ease of adoption. We can, however, say the two differ when it comes to speed, availability, and privacy.

  • Speed. ML on Apple devices is processed locally, which speeds up the app. Android apps typically process ML in the cloud. Apple can process ML locally because app developers can easily test against the hardware running the app (iOS devices). In the Android world, hardware fragmentation makes it harder for app developers to run ML locally.
  • Availability. Core ML-powered apps are always available, even without network connectivity. Android ML-powered apps can require network connectivity, which limits their usability.
  • Privacy. Apple’s privacy values are woven into Core ML; the terms and conditions do not allow Apple to see any user data captured by an app. For example, if you take a picture using an app powered by Core ML’s vision capabilities, Apple won’t see the photo. If a message is read by an app powered by Core ML’s natural language processing, its contents won’t be sent to Apple. This differs from Android apps, which typically share their data with Google as part of their terms and conditions.

AI for the masses. In the years to come, iPhone users updating their favorite apps will experience a step function improvement in utility, but may never know that Core ML is behind the curtain making it all possible. We can all look forward to continually improving apps thanks to Core ML.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.
