Alongside Core ML 2, Apple introduced Create ML, a mechanism for training ML models. The model type required to process image-based data is called an image classifier.
We trained a model with Create ML to recognise black-and-white photos of Einstein available on the internet. The model currently tells whether the person in a picture is Einstein or not. To train the model, we created a folder named "Training". Inside it we created two folders: one named "Einstein", containing photos of Einstein's face, and another named "Not Einstein", containing photos of random human faces. The folder names act as labels: "Einstein" will be assigned to all images recognised as Einstein, and "Not Einstein" to all faces not recognised as Einstein's. All images were obtained using STS Central.
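The resulting folder layout looks like this (the individual file names are illustrative; only the two subfolder names matter, since they become the class labels):

```
Training/
├── Einstein/
│   ├── einstein01.jpg
│   ├── einstein02.jpg
│   └── ...
└── Not Einstein/
    ├── face01.jpg
    ├── face02.jpg
    └── ...
```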
Using the image classifier builder in a Playground
Create a new macOS Playground in Xcode. Replace the default code in the playground with the following:
import Cocoa
import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()
Run the playground and open the assistant editor.
You will see the image classifier's live view. Drag and drop your training folder onto the area labelled "Drop Images To Begin Training".
The training process will start, and you can watch its progress as images from the data set are processed. Once training is complete, your model is ready.
You can test it by dropping a folder named either "Einstein" or "Not Einstein" onto the live view. It shows both the predicted result and the expected result (derived from the folder name), along with the confidence percentage of each prediction.
You may rename your model and save it at a desired location; by default the classifier is saved as ImageClassifier.mlmodel. This model can be imported into an iOS project and used for prediction as given in the implementation details of STS Central. This is one of the methods to train an ML model; the second method is to do it programmatically.
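As a rough illustration of how the saved model could be used for prediction in an iOS app, the sketch below runs it through the Vision framework. It assumes ImageClassifier.mlmodel has been added to the Xcode project, so Xcode generates an `ImageClassifier` class; the function name and error handling are our own, not part of the article's STS Central implementation.

```swift
import UIKit
import Vision

// A hedged sketch: classify a UIImage with the trained model via Vision.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: ImageClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top observation's identifier is "Einstein" or "Not Einstein".
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier) (\(top.confidence * 100)%)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

Vision takes care of scaling and cropping the input image to the size the model expects, which is why it is usually preferred over calling the generated model class directly.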
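For the programmatic route, Create ML exposes the same training flow through the MLImageClassifier API. The sketch below is a minimal macOS example under the article's folder setup; the file paths and metadata strings are placeholders, not values from the article.

```swift
import CreateML
import Foundation

// Hypothetical path to the "Training" folder described above.
let trainingDir = URL(fileURLWithPath: "/Users/me/Training")

do {
    // Each subfolder name ("Einstein", "Not Einstein") becomes a class label.
    let classifier = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingDir))

    // Inspect the accuracy metrics before saving.
    print("Training error: \(classifier.trainingMetrics.classificationError)")
    print("Validation error: \(classifier.validationMetrics.classificationError)")

    // Save the trained model as a .mlmodel file.
    try classifier.write(
        to: URL(fileURLWithPath: "/Users/me/ImageClassifier.mlmodel"),
        metadata: MLModelMetadata(author: "Me",
                                  shortDescription: "Einstein classifier"))
} catch {
    print("Training failed: \(error)")
}
```

This produces the same kind of .mlmodel file as the Playground builder, but it can be scripted and rerun whenever the training data changes.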