Apple Core ML: Machine Learning for Everyone


Author: Rajiv Kumar
Email: rajiv.kr.147@gmail.com
Date: 14 August 2017



Machine learning is needed for tasks that are too complex for humans to code directly. Some tasks are so complex that it is impractical, if not impossible, for humans to work out all of their nuances and code for them explicitly. Instead, we provide a large amount of data to a machine learning algorithm and let the algorithm work it out by exploring that data, searching for a model that achieves what the programmers have set out to achieve.

“Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that machines should be able to learn and adapt through experience.”

Source: Simplilearn



Machine Learning Methods

Machine learning tasks are typically classified into the following categories:

1. Supervised Learning: Supervised learning algorithms make predictions based on a set of examples. The computer is presented with example inputs and their desired outputs. Supervised learning is therefore most appropriate, and most commonly used, in applications where historical data predicts future events. There are several specific types of supervised learning: classification, regression, and anomaly detection. Example: predicting occurrences of fraudulent credit card transactions.

2. Unsupervised Learning: Unlike supervised learning, unsupervised learning works on data that has no historical labels. The goal of an unsupervised learning algorithm is to organize the data in some way or to describe its structure. Examples: self-organizing maps, nearest-neighbor mapping, singular value decomposition, and k-means clustering.



3. Semi-supervised Learning: Semi-supervised learning is a bit of both supervised and unsupervised learning, and uses both labeled and unlabeled data for training. This type of learning can again be used with methods such as classification, regression, and prediction. Example: face and voice recognition techniques.

4. Reinforcement Learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (the machine is trained to make specific decisions). The machine is exposed to an environment where it trains itself continually using trial and error, learning from past experience to capture the best possible knowledge and make accurate decisions. Example: robotics and Internet of Things (IoT) applications.



Common Machine Learning Algorithms

Here is a list of commonly used machine learning algorithms. These algorithms can be applied to almost any data problem:

• Linear Regression
• Logistic Regression
• Decision Tree
• SVM
• Naive Bayes
• KNN
• K-Means
• Random Forest
• Dimensionality Reduction Algorithms
• Gradient Boost & AdaBoost

For more info: https://www.analyticsvidhya.com/blog/2015/08/common-machine-learning-algorithms/

Apple Core ML Overview

Machine learning solutions have been available in the cloud for a while, but those systems require a constant Internet connection, often show very noticeable latency on iOS, and create a security risk for sensitive data. With iOS 11, Apple finally introduced a native machine learning and machine vision framework. This opens up opportunities for creating new and engaging experiences. Core ML is a brand new machine learning framework, announced at WWDC 2017, that ships with iOS 11. It allows developers to use machine learning models in their apps without knowledge of neural networks or machine learning algorithms, and it can help you make your app more intelligent with just a few lines of code.

Understanding Apple Core ML

Core ML supports a variety of machine learning models, including neural networks, tree ensembles, support vector machines, and generalized linear models. Core ML requires the Core ML model format (models with a .mlmodel file extension).



“Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable.”

Apple Core ML Architecture and Components

Core ML is an Apple framework that allows developers to simply and easily integrate machine learning (ML) models into apps running on Apple devices (including iOS, watchOS, macOS, and tvOS). Core ML introduces a public file format (.mlmodel) for a broad set of ML methods, including deep neural networks (both convolutional and recurrent), tree ensembles with boosting, and generalized linear models. Models in this format can be directly integrated into apps through Xcode.

Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for image analysis, Foundation for natural language processing (for example, the NSLinguisticTagger class), and GameplayKit for evaluating learned decision trees. Core ML itself builds on top of low-level primitives such as Accelerate and BNNS, as well as Metal Performance Shaders.

Tools for Apple Core ML Models

Coremltools is a Python package for creating, examining, and testing models in the .mlmodel format. In particular, it can be used to:

• Convert existing models to the .mlmodel format from popular machine learning tools, including Caffe, Keras, libSVM, scikit-learn, and XGBoost.
• Express models in the .mlmodel format through a simple API.
• Make predictions with an .mlmodel.
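To illustrate how Core ML sits underneath the Vision framework mentioned above, here is a minimal sketch of classifying an image by wrapping a Core ML model in a Vision request. It assumes the Inceptionv3 model has been added to the project and that `cgImage` holds the image to classify; both are assumptions for illustration, not part of any generated API.

```swift
import Vision
import CoreML

// Wrap the Core ML model for use with Vision.
let vnModel = try VNCoreMLModel(for: Inceptionv3().model)

// Build a classification request; Vision scales the image to the
// model's expected input size for us.
let request = VNCoreMLRequest(model: vnModel) { request, _ in
    guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
    print("\(top.identifier): \(top.confidence)")
}

// Run the request on a CGImage (assumed to exist in scope).
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
```

Using Vision this way spares you the manual image resizing that raw Core ML input requires.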



Key benefits of Apple Core ML:

• Core ML supports a wide variety of machine learning models, from neural networks to generalized linear models. Here are some models to try out:
  o Places205-GoogLeNet Core ML - Detects the scene of an image from 205 categories such as airport terminal, bedroom, forest, coast, and more.
  o ResNet50 Core ML - Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.
  o Inception v3 Core ML - Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.
  o VGG16 Core ML - Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.
• Core ML facilitates adding trained machine learning models to your application. This goal is achieved via coremltools, a Python package designed to help generate an .mlmodel file that Xcode can use.
• Core ML automatically creates a custom programmatic interface that supplies an API to your model. This lets you work with your model directly within Xcode, as if it were a native type.



Interested in seeing how you can integrate Apple’s Core ML into your own apps? Let’s build a simplified demo app that lets the user pick an image from the gallery; the machine learning algorithm will then try to predict what object is in the picture.

Create Your Xcode Project

Open Xcode and create a new Single View App project. Go to your Main.storyboard and set up the UI you would like for the app.

Adding a Model to Your Xcode Project

We use the Inception v3 model, but feel free to try out the other models. Once you have the Inception v3 model downloaded, add it to your Xcode project and take a look at what is displayed.



In the above screen, you can see the type of the data model, which is a neural network classifier. The other information to take note of is the model evaluation parameters. These tell you the input the model takes in, as well as the output the model returns. Here it takes in a 299×299 image and returns the most likely category, plus the probability of each category. The other thing you will notice is the model class. This is the class (Inceptionv3) generated from the machine learning model, so that we can use it directly in our code. If you click the arrow next to Inceptionv3, you can see the source code of the class.



Creating the Model in Code

Now, let’s add the model in our code. Go back to ViewController.swift. First, import the CoreML framework at the very beginning:
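The original layout shows this step as a screenshot; the import itself is a single line at the top of the file:

```swift
import UIKit
import CoreML
```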

Next, declare a model variable in the class for the Inceptionv3 model, and initialize it in the viewWillAppear() method:
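This code was also a screenshot in the original layout. A minimal sketch, assuming the class Xcode generates from the model file is named Inceptionv3 as shown in the model inspector:

```swift
class ViewController: UIViewController {

    var model: Inceptionv3!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Inceptionv3() loads the compiled model bundled with the app.
        model = Inceptionv3()
    }
}
```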

Now if we go back to Inceptionv3.mlmodel, we see that the only input this model takes is an image with dimensions of 299×299, so we first convert the chosen image to those dimensions.

Using the Model to Make Predictions

Next we use the Inceptionv3 model to perform object recognition. With Core ML, all we need is just a few lines of code:
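The original shows this code as a screenshot. Here is a sketch of what the resize-and-predict steps might look like inside ViewController; the helper name `pixelBuffer(from:)` and the `classifier` label outlet are assumptions for illustration, not part of the generated model API.

```swift
// Resize a UIImage to 299×299 and convert it to the CVPixelBuffer
// that the generated Inceptionv3 class expects as input.
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 299, height: 299), true, 1.0)
    image.draw(in: CGRect(x: 0, y: 0, width: 299, height: 299))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    guard CVPixelBufferCreate(kCFAllocatorDefault, 299, 299,
                              kCVPixelFormatType_32ARGB, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer,
          let cgImage = resized?.cgImage else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: 299, height: 299, bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: 299, height: 299))
    return pixelBuffer
}

// Run the model on the picked image and show the result.
func detect(image: UIImage) {
    guard let pixelBuffer = pixelBuffer(from: image),
          let prediction = try? model.prediction(image: pixelBuffer) else { return }
    classifier.text = "This looks like: \(prediction.classLabel)"
}
```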

That’s it! The Inceptionv3 class has a generated method called prediction(image:) that is used to predict the object in the given image. Here we pass the method the pixelBuffer variable, which holds the resized image. Once the prediction, which is of type String, is returned, we update the classifier label to show what has been recognized.



Building and Running a Core ML App Xcode compiles the Core ML model into a resource that’s been optimized to run on a device. This optimized representation of the model is included in your app bundle and is what’s used to make predictions while the app is running on a device.

References

1. https://www.sas.com
2. https://www.simplilearn.com
3. https://docs.microsoft.com
4. https://hackernoon.com
5. https://www.alexcurylo.com
6. https://www.bignerdranch.com
7. https://pypi.python.org
8. https://www.appcoda.com
9. https://medium.com


