What’s new in Firebase Machine Learning and MLKit
In recent weeks, there have been some changes to the Firebase machine learning products. Firstly, there was a name change: Firebase MLKit is no more, and is now known as Firebase Machine Learning. MLKit still exists, but it is now its own product, known simply as MLKit. So you are probably wondering: what is the difference between these two products?
Well, MLKit is now the solution for on-device machine learning for mobile apps. It is a standalone SDK for Android and iOS, and there is no longer any requirement to have a Firebase project in order to use these offline models.
Firebase Machine Learning, on the other hand, covers all the machine learning SDKs for mobile that require cloud-based APIs to make predictions. This includes text recognition, image labeling and landmark recognition as cloud API calls through an SDK. AutoML Vision Edge for creating custom machine learning models, as well as serving your custom .tflite models dynamically to your app, are also still available in Firebase.
For both Firebase Machine Learning and MLKit, you will need to migrate to the new SDKs, as the gradle dependencies/pods have changed. Your current solutions will still work, but they will not receive any updates because the old SDKs have been deprecated. Let’s look at this migration process in more detail.
Migration
Migrating to the new SDKs is very simple and is well documented on the MLKit website. It is mostly gradle/pod changes you will need to make to your app, along with renaming some classes. Something new I noticed when migrating an app is that there are now bundled models and thin models.
Bundled models ship as part of your app and are available immediately when called. This results in fast inference but increases the size of your app.
implementation 'com.google.mlkit:barcode-scanning:16.0.0'
Thin models are not bundled with your app; instead, they are downloaded via Google Play services the first time you run inference. This is great as it reduces the size of your app, but there is a bit of work on the developer’s side: you need to add some metadata to your manifest so that your app downloads the model on first install.
Dependencies:
implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.0.0'
Manifest:
<application>
<meta-data
android:name="com.google.mlkit.vision.DEPENDENCIES"
android:value="barcode" />
</application>
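Once the dependency is in place (and, for the thin model, the manifest entry above), the scanning code is identical for both variants. Here is a minimal Kotlin sketch, assuming you already have a Bitmap named bitmap that you want to scan:

import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

fun scanBarcodes(bitmap: Bitmap) {
    // Wrap the bitmap in an InputImage; 0 is the image rotation in degrees.
    val image = InputImage.fromBitmap(bitmap, 0)
    val scanner = BarcodeScanning.getClient()
    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                Log.d("Barcode", "Raw value: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("Barcode", "Scanning failed", e)
        }
}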
I recently migrated this demo app that uses AutoML Vision Edge from MLKit to Firebase Machine Learning, which proved to be very simple.
There are also a few new things in MLKit to explore for Android.
New in MLKit
With the new SDK there were a few other updates to MLKit. Firstly, MLKit is now lifecycle-aware, which helps it work a lot better with CameraX. All detectors, labelers and translators will automatically invoke the close method when they are no longer being used.
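As a small illustration, here is a sketch of registering a detector with the Activity lifecycle (ScannerActivity is a hypothetical name, and this assumes the detector classes implement LifecycleObserver as the migration guide describes):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.barcode.BarcodeScanning

class ScannerActivity : AppCompatActivity() {

    private val scanner = BarcodeScanning.getClient()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The scanner's close method is invoked automatically
        // when this Activity is destroyed.
        lifecycle.addObserver(scanner)
    }
}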
Object Detection and Tracking
The Object Detection and Tracking library now supports custom models. This is great, as previously you were stuck with the base model that Google provided. The base model is still available, but if you would like to detect objects it does not pick up, you can now do so, as sketched below.
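Here is a sketch of what plugging in a custom model looks like with the new SDK. The file name custom_model.tflite is just a placeholder for a model bundled in your app’s assets:

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

// Hypothetical TFLite model shipped in the app's assets folder.
val localModel = LocalModel.Builder()
    .setAssetFilePath("custom_model.tflite")
    .build()

val options = CustomObjectDetectorOptions.Builder(localModel)
    .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableClassification()
    .build()

val objectDetector = ObjectDetection.getClient(options)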
MLKit also has two new products, Entity Extraction and Pose Detection, for both Android and iOS. These models are part of an early access program that you need to sign up for.
Entity Extraction
The Entity Extraction model allows you to extract entities such as addresses, phone numbers and currencies from a paragraph of text. The model also supports multiple languages, which is great if your app supports localization. This lets you give your users a better experience with text you might be getting back from your back-end service: displaying an address on a map, or letting users tap email addresses to send a mail and phone numbers to make calls, instead of just displaying plain text.
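Since the product is still in early access, the API may change, but the rough shape looks like this sketch (the sample sentence is made up):

import android.util.Log
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

val extractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)

// The language model is downloaded on demand before the first annotation.
extractor.downloadModelIfNeeded()
    .addOnSuccessListener {
        extractor.annotate("Meet me at 1600 Amphitheatre Parkway at 6pm")
            .addOnSuccessListener { annotations ->
                for (annotation in annotations) {
                    Log.d("Entity", "Entities found: ${annotation.entities}")
                }
            }
    }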
Pose Detection
The Pose Detection model allows you to track the physical actions of different subjects and display these data points in an augmented reality view through your app’s camera. This works well for apps tracking people in fitness or dancing applications. At Google I/O ’19 there were some great examples of pose detection applications, which you can see in the video below.
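Pose Detection is also in early access, so the details may shift, but a sketch of using it in stream mode looks roughly like this (bitmap stands in for a frame from your camera feed):

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()
val poseDetector = PoseDetection.getClient(options)

val image = InputImage.fromBitmap(bitmap, 0)
poseDetector.process(image)
    .addOnSuccessListener { pose ->
        // Each landmark has a position you can draw onto an AR overlay.
        val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
    }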
A great repository was also released alongside the new MLKit SDK, showing Material Design and MLKit working together to give a great user experience.
People + AI
Another great resource is the People + AI Guidebook, which provides great guidelines on how we should be thinking about and building ML-enabled features in our apps. I definitely recommend reading through this guidebook if you are looking at building any ML features with Firebase ML, MLKit or TensorFlow Lite.
Final Thoughts
If you are using Firebase MLKit in your app, make sure that you migrate to the new standalone SDK so that you receive updates to the SDK in the future. If you use any of the cloud models, AutoML Vision Edge or TensorFlow custom model serving, make sure you update your dependencies.
If you’re new to these SDKs, the documentation can be found below.
If you have any thoughts about MLKit and Firebase machine learning, comment below.
Stay in touch.