ML Kit: Machine Learning SDK for mobile developers (Google I/O ’18)


August 15, 2019


[MUSIC PLAYING] BRAHIM ELBOUCHIKHI:
Hello, everyone. My name is Brahim Elbouchikhi. I’m a product manager on ML Kit. We as a team, and many of
us are actually right here, are very excited to
tell you about ML Kit. You’ve heard about it from
Dave at the main keynote. And you probably even
looked at our documentation. But in this session, we’ll
tell you a little bit more, some of the behind
the scenes stuff that we’ve been working on. So let me get started. I think it’s important to
look back a couple of years and look at machine learning,
and what’s been happening, the context. And I try to quantify
this the best way I could. And one way is to look at
Google Trends, the most reliable source of data. And in this case, you could see
that over the past eight years we had a 50x increase in
interest in deep learning. Right? And deep learning is
not a new technology. It’s been around since the 70s. What’s changed is
that we can finally kind of deliver on the
promise of deep learning. We have enough compute,
memory, and power on some of these
devices to actually run those models that have been
developed for quite some time. And so I wanted to give you– as a product manager– a very simplified view of
machine learning, a 101. And in this case, you could see
that essentially deep learning uses these layers and tries to
mimic how the brain functions. Right? And it tries to activate
different neurons based on the specific kind of
input it’s getting so that it can ultimately arrive
at that answer of whether this is a dog or cat, or
a hot dog or not. And in this case, as you
could see on that image, the output of that
particular outcome is dog. And what we had before
these sorts of algorithms was rule-based
engines, where you’d have to actually configure
the rules yourself: if this and this and
this and this and this, then that’s a dog, which
does not scale. So because of
that, deep learning has allowed us to get into
so many more use cases and solve so many
more problems than we could before
with rules engines alone. So that’s cool. In particular, over the
past seven years or so, our ability and the
machine’s ability to perceive the world around
it has gotten incredibly good. So in this case in 2011,
we had about a 26% error rate in identifying that
animal as a cheetah. On the right, and as
of now essentially, the error rate is
less than 3%, which is better than what
a human can do. And that’s pretty awesome. The fact that an
algorithm and a machine can now perceive the
world in that way opens up lots of new use cases. But of course, as
a team, our mission is about bringing machine
learning to mobile devices and mobile apps. So when we started
researching this product, we went out there and
talked to many developers, both internally at Google,
as well as externally, to try to understand: what are
you doing with machine learning on devices, and how does it work today? So I’m going to tell
you a bit about that. The first team, one of the
first we talked to, was Google Translate. And of course, you hear a
lot about Google Translate because it’s such a
delightful experience and incredibly useful. But what I like the most
about Google Translate is the fact that
it strings together multiple types of deep
learning and machine learning models and technologies
to deliver this experience. So it does the onscreen
character recognition to extract the text
that it’s looking at. It does the actual
translation itself. And then, ultimately, it
could do text to speech so it can speak the
results back to the user. And we think that when you can
string these things together, when you can use machine
learning in multiple ways within a single experience,
where relevant obviously, really cool stuff happens. So that’s one example. The other one is this
app called Yousician. Yousician is a
music learning app. It essentially
allows you to play an instrument, an actual analog
instrument, and then the device
listens as you play and tries to interpret
how well you’re doing. So it’s listening to the
specific notes you’re playing. It’s listening to how
well you’re playing them. It’s time-stamping them. It’s doing echo cancellation. And it’s also doing
noise cancellation. And finally, personalizing
the learning experience. All of this is done with an
on-device machine learning model. And in fact, we’ll be talking
with this team that built their own C++ runtime to
actually do the inference on-device as
efficiently as possible. This predated TensorFlow Lite
and other things that we do have now
that could have helped with that particular process. The other folks we’ve talked
about, among a few others, are Evernote. So Evernote launched a feature
called Evernote Collect. And the insight behind
Evernote Collect is the fact that we collect
so much of our information now in a visual manner. We’re taking screenshots of
things that we care about. We’re taking
pictures of receipts. We’re taking pictures of
whiteboards after a meeting and asking someone
to transcribe them. Well, Evernote
tries to avoid that. It tries to extract the
text and tag the image with that relevant text
so you can search for it and find it and
make it more useful. So that’s super cool. And so the overall
theme from what we heard through
many conversations was that it’s doable. On-device machine learning is
doable, but it’s really hard. And it’s hard for
three specific reasons. The first is acquiring
sufficient data in both the quantity and
the quality that you need. Think about it. So let’s say you’re
training an OCR model. You can label that data
for your own language. Right? You could get an
actual training set and say I’m going to label what
this data says myself because I understand that language. But in a global audience,
when you have users all over the world, how
do you create an OCR model that actually works for
all those languages? That’s really hard. Or even harder, if you’re
a music learning app, you need to hire world
class musicians to record the perfect note so you can
then actually train against it. That’s expensive. The other aspect is
developing models that are optimized
for mobile inference. And this has many dimensions. This could be in
terms of battery life, in terms of compute, and in
terms of size of the model. And what I’ve learned
in machine learning is that these are always
constant tradeoffs. You can improve on one, but
then decrease on the other one. And so it’s a really
hard challenge to solve. And finally, deployment and
ongoing experimentation are essential to
machine learning. You can’t do one
without the other. However, there aren’t
very many tools today to help you do that. So we set out to build ML Kit to try to
address all of these issues. Of course, it’s the beginning
of a long road and journey. But we think that we have some
exciting stuff for you today. I want to first talk about
our machine learning stack. And in this case,
the very bottom of it is the Android Neural Networks
API on the Android side. And then on iOS it’s Metal. On the Neural Networks API side,
we launched it with O MR1 (Android 8.1). And it’s essentially a hardware
acceleration interface. Since launch, we’ve been working
with every one of our top SoC vendors to build drivers
for the Neural Networks API. And I’m excited to
show you some results. With Huawei’s P20
series device, we’re seeing a 10x
improvement in inference latency with Inception V3. And what’s cool, if you
don’t know Inception v3, it’s a really large model. It wasn’t built for
mobile devices at all. It was built for
server-side inference. And the fact that we can
run that kind of model at 10x the performance,
and actually run it quite efficiently
on a mobile device that’s not connected to a power plant,
is actually super exciting. This means that we have
a lot more headroom to do more with machine
learning on a device. Of course, there’s
another dimension of this where we build
models that are inherently built for mobile, that are
from the ground-up built to be highly efficient
and all of that. And that work continues. So when you pair these
two things up together, we think there’s going to be
a lot of cool stuff happening here. So we’re continuing to invest
on Android neural networks API and of course, reliant
on Metal on the iOS side. The next aspect here
is TensorFlow Lite. TensorFlow Lite, which was
announced last year at I/O and then shipped
around November, is a lightweight
set of machine learning tools and libraries. It works on both mobile
devices and embedded devices. And it was built
from the ground-up, as it says, to be lightweight. Now, I’m not going to steal
any of the team’s thunder. They have a session fully
dedicated to TensorFlow Lite tomorrow. And so I highly
recommend going to see it if you’re at all interested
in on-device machine learning, which I assume
you would be if you’re here. OK. Then we get to the
application layer. And this is where
we looked around, and we said there isn’t really
an easy way to access machine learning technologies
at that layer. You had to either
interface directly with the runtime and
build your own models, or you really had to
build your own stack. And so that’s where ML Kit
comes into the picture. ML Kit is in beta
as of yesterday. So you, obviously, all
can go use it today. And it’s essentially Google’s
machine learning SDK. Our aim is to bring Google’s 15
plus years in machine learning, all of the technology
we’ve developed, bring it to mobile
developers through this SDK. So let me tell
you more about it. Well, first, let me
show you the stack again with ML Kit at the top. And in fact, this is
our on-device machine learning stack. OK. Now, I hope I get to
tell you about it. OK. So the first thing
that’s really important is that ML Kit is both
on iOS and Android. And this was really
important to us because when we
talk to developers we don’t think about
Android machine learning and iOS
machine learning. We think about machine learning. We want to deploy similar models
to users on both platforms. There is not a fork
there, so it was important that we have a
consistent SDK for both. And in fact, every
one of our features is available on both
Android and iOS. We offer two
rough buckets of features. One is what we call base APIs. These are APIs that are
backed by Google models. And so as far as you’re
concerned as a developer, there is no machine
learning involved. The other is a set
of features that help you use your own
custom trained models. I’ll tell you a lot more
about that in a bit. ML Kit also offers both
on-device and cloud-based APIs. Again, this was important
because on-device APIs give you the realtime and
offline abilities, but they have limited accuracy
in comparison to Cloud. However, on-device APIs
are free of charge. But we also wanted to give
you a consistent interface for the Cloud APIs
because in many cases, you do need that
level of precision and that level of scope. And we’ll talk about the
distinctions between the two in a little bit. And finally, ML Kit is deeply
integrated into Firebase. And this was really another
important point for us. We aim to make machine learning
kind of non-exceptional. We don’t want it to be special. We want it to just
be yet another tool, just like you use analytics
or Crashlytics or performance monitoring or Cloud Storage. Just like you use any of
those parts of Firebase, we wanted machine learning to
be right there and right then. And what this also
does is it works well with other features on Firebase. Again, we’ll tell
you a little bit more detail about that
in a few minutes. So that’s the high
level about ML Kit. So what base APIs
do we support today? First, text recognition. This is available both
in the cloud and on-device. The second is image labeling. Next is barcode
scanning, face detection, and landmark recognition. The four APIs on the left
are all available on-device, meaning you can use them
for free and in realtime and offline.
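To make that concrete, here is a minimal sketch of the on-device text recognition call on Android, written in Kotlin against the SDK as it shipped for this launch. The class and package names below reflect the 2018 beta and were renamed in later releases, so treat this as illustrative rather than copy-paste:

```kotlin
// Sketch: on-device text recognition with the ML Kit for Firebase beta.
// Class/package names are from the 2018 release and may have changed since.
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeText(bitmap: Bitmap) {
    // Wrap the frame or photo you already have.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // On-device detector: free, works offline, lower accuracy than the cloud API.
    val detector = FirebaseVision.getInstance().visionTextDetector

    detector.detectInImage(image)
        .addOnSuccessListener { result ->
            // Each block is a paragraph-like region of recognized text.
            for (block in result.blocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // The on-device model may still be downloading, or the image failed.
            e.printStackTrace()
        }
}
```

The cloud-backed detectors follow the same call pattern with different class names, as Sachin walks through later in this session.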
We also have two super cool features coming up soon. One is a high-density
face contour feature. This is over 100
points in real time. And the other is
a Smart Reply API. And this is where we like
to talk about how teams at Google work together really well. So as part of Android
P, there is a feature where you can now insert
response suggestions within the notification
shade directly. And the ML Kit Smart
Reply API could be something that would be useful
to populate those chips. This is the same
technology, not the same API,
but the same technology
used in Wear OS and in Messages, et cetera. So that’s really cool. But of course,
sometimes you simply need to build a custom model. If you’re trying to detect
that particular type of flower, it’s hard to build
a generic model. Obviously, you could
build one, but it will be super large to actually
detect every flower, every dog, every species of everything. So you have to use based
custom models sometimes. And we want to help
with that as well. The first feature we have here
is dynamic model downloads. What this means is you
can upload your model to the Firebase console, and
have it be served to your users dynamically. You don’t have to bundle
the model into your APK. This has a bunch of benefits. First, it reduces your
APK size because now you don’t have to put that
five or 10 megabyte model into your APK,
meaning that you don’t have to take that hit
in the app stores when the user’s trying
to install your app. But the other really
important insight for us was that we decouple the
ML release process from the traditional app release process. We’ve learned that these
teams are typically slightly different teams. Your machine learning
team is probably a different set of
people than the ones that build in your core
software experience. So this gives you
the flexibility to deploy each at
different times. Now, a really cool
benefit of this also is you can now do A/B
testing on different models with literally a
single line of code using Firebase’s Remote Config. This is the coolest part for me. Literally, if you were to do
this today, as in maybe two days ago, before
ML Kit launched, you’d have to bundle two
models into your app. You’re stuck with those two
same models for the duration of the app’s lifecycle. And you have to then
upload all the metrics back and do all of that work. This makes it trivial. And given how important
it is to experiment as part of machine
learning, this is, we think, a real
game-changer for our ability to use machine learning models. And finally, we talked about
that optimization challenge of building models that
are made for mobile. We’re excited that we’ll have
a feature that’s coming soon that will allow you to convert
and compress full TensorFlow models into lightweight
TensorFlowLite model. And Wei is going to
get on stage here a bit and tell you all
about the magic, which we also call technology,
behind that compression flow. So that’s ML Kit,
Google’s machine learning SDK available
on Android and iOS. I want to take a moment, as
always, to thank our partners. We’ve worked with every one of
these partners and many more to launch ML Kit. They worked through so many
bugs, so many challenges. They’ve given us
so much feedback. And the product
wouldn’t be where it is today without their help. So I want to thank them a lot. And in particular, I want to
highlight a couple of things. So we worked with PicsArt. And PicsArt uses ML Kit to
deploy a custom model that powers their magic effects experience. And what’s cool about
PicsArt is they’re using ML Kit both on Android and iOS. We also worked with Intuit. And if you know US
tax dates, tax day is around April, and so they’re
really pressed for time to get the feature out. And so we worked with
them to integrate ML Kit in record time. So that was super
awesome as well. All right. So before I hand
it over to Sachin to tell you more
about ML Kit, I wanted to make kind of a
commitment to you. We’re going to go
out there, and we’re going to knock on Google’s
Research Team’s doors, every single one of them. And we’re going to
go out there and ask them to bring
their technologies to you to be part of ML Kit. We’re going to focus on vision,
speech, and text models. And we’ll also continue to
make use of custom models as easy as possible. So that’s it. I’m going to invite
Sachin to come on up here and tell you more
about how ML Kit works. [APPLAUSE] SACHIN KOTWANI: Thanks, Brahim. Hi, everyone. My name is Sachin Kotwani. I’m a product
manager on Firebase. I was practicing my presentation
at home with my three-year-old. And every time I finished she
would say, again, again, again. And I’m not sure
if she was trying to tell me that I
needed to practice more, or if she really
enjoyed the content. I guess we’ll find out. When we set out to build ML
Kit, we had two main objectives. The first one was
to build something that’s powerful and useful. And Brahim talked about that. The second one was also to
make it fun and easy to use. And I’m going to tell you a
little bit more about that. So if you’ve used
Firebase before, you’re probably already familiar
with products like Storage, the Realtime
Database, Firestore, Remote Config,
Crashlytics, Analytics, A/B Testing, and more. And now, there’s a new
addition to the family. Starting this week
with our launch, if you head to the
Firebase console, you’ll see ML Kit
on the left nav. Clicking on that will take
you to the main screen with the base APIs that
Brahim just introduced you to. As you can see, they are
mostly vision focused for now. But we intend to add
to it in the future. Let’s look at one
specific use case. Let’s say that I’m
building an app, and it needs to determine what’s
the content of an image, what the theme is, what
things are it. I would use the
image labeling API. As you can see, there
are two icons here. This indicates that this API is
available to run both on-device and in the cloud. On-device is free. It’s low latency, no network
required because everything runs on the phone. It supports roughly
400 plus labels. Now, if you need
something more powerful, something that’s going to give
you higher accuracy results, you want to use the
cloud-based API. That is free for the first
1,000 API calls per month and paid after that, but it
supports over 10,000 labels. Let’s look at an example. If you were to feed this
image to the on-device API, you’d get labels like fun,
infrastructure, neon, person, sky. If you feed it to
the cloud one, you’d get Ferris wheel,
amusement park, night. You can see that’s
more accurate, right? OK. So remember I also told you
that it wasn’t just fun. It’s also easy to use. So you have to hold me to it. Let’s say that I want to
implement this API on iOS. I would just include these
three libraries in my Podfile. Similarly, if I’m
doing Android, I would put these three libraries
in my build.gradle file. Next, if I’m doing iOS
on-device image labeling, I would instantiate
the label detector, say labelDetector.detect, and
then I get the results back, and I iterate over and
handle the extracted entities. On Android, the pattern
is very similar. You instantiate the detector. Detector.detectInImage. On success, handle the
extracted entities. Now, pay attention to
the highlighted boxes in gray over there. This is on device. Now, if I want to
do the same thing but call the cloud API
instead, not much changes. It’s just a few class names. The pattern is very similar. The detector,
detector.detectInImage. On success, handle
extracted entities. Pretty cool.
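As a rough Kotlin sketch of that Android pattern (the class names are the ones from the 2018 beta and may have changed since), the on-device and cloud calls differ only in which detector you ask FirebaseVision for:

```kotlin
// Sketch of the on-device vs. cloud image labeling pattern described above.
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun labelOnDevice(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    // On-device: free, offline, roughly 400+ labels.
    FirebaseVision.getInstance().visionLabelDetector
        .detectInImage(image)
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { it.printStackTrace() }
}

fun labelInCloud(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    // Cloud: higher accuracy, 10,000+ labels, needs a network connection.
    // Only the detector class changes; the call pattern stays the same.
    FirebaseVision.getInstance().visionCloudLabelDetector
        .detectInImage(image)
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { it.printStackTrace() }
}
```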
All right, demo time. I was warned not to do a demo. But you know, my wife
says that I don’t listen, so here’s me not listening. And let’s see if this works. OK, so I’m going to show
you the image labeling API. This is typically used for
things like tagging photos if you want to know what’s
in the content of a picture– a still picture, usually. But I thought it’d be
pretty cool to show a live demo with a live stream. So I’m going to take
a picture of this toy. It says toy, car,
vehicle, tire, bumper– wow, it’s actually picking
out all the pieces there. Oh, it says crowd, too. Event. I didn’t get to test this
because when I was practicing, there was no crowd. It was just empty chairs. OK, so face detection. Switch to this. OK. So there is a box around
my face, as you can see. There’s the left eye, right eye. The numbers next to the
left and the right eye are how open they are. So you can tell that I’m awake. Happiness is a
detector for the smile. So look at how that changes. And this works with
multiple people, actually. Again, you shouldn’t trust me. You should ask me
to prove it to you. So I need a few volunteers here. See? Multiple faces detected. Eyes for everyone,
smiles for everyone. Huh? Pretty cool? All right. [APPLAUSE]
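A hedged sketch of what that demo is doing on Android: enable classification in the face detector options so each detected face carries smile and eye-open probabilities. The option and class names below follow the 2018 beta and are stated as assumptions:

```kotlin
// Sketch: face detection with classification enabled, so each face reports
// smiling and eyes-open probabilities like in the demo. Names from the 2018
// beta; check the current reference docs before relying on them.
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

fun detectFaces(bitmap: Bitmap) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        // Turns on the smile / eyes-open estimates shown on screen.
        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()

    FirebaseVision.getInstance().getVisionFaceDetector(options)
        .detectInImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { faces ->
            for (face in faces) {
                println("box=${face.boundingBox} " +
                        "smile=${face.smilingProbability} " +
                        "leftEyeOpen=${face.leftEyeOpenProbability} " +
                        "rightEyeOpen=${face.rightEyeOpenProbability}")
            }
        }
        .addOnFailureListener { it.printStackTrace() }
}
```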
I have a couple more things. Lose It, as Brahim mentioned,
is one of our partners. And they worked on this
really cool feature. So I’m going to enter here. Let’s say I am logging
what I had for breakfast. And normally you
could select foods that are already
in the application, or you could enter it manually. But let’s say that you
want to enter a new food. Apparently, this is
not considered food, so I don’t think it’s
in the application. But I’m going to try it. So, like I said, you
could enter it manually, but let’s try what happens. Oh, it detected it faster
than what I expected. Let me try that one more time. There you go,
nutrition label found. And here’s all the information. The calories, fat,
saturated fats. You want one more? OK. This is stuff that’s
not available yet, but I think it’s pretty cool. This is our face contours demo. It detects over 100 points,
and processes them at 60 frames per second. And you can see my lips, my
eyes, the entire face contour. So this will be
coming pretty soon. There will be a sign up
link if you’re interested, and we look forward to
having it in your hands so you can play with it. All right. [APPLAUSE] Thank you. So that was pretty cool. And hopefully you find
those base APIs useful. But there are
obviously use cases that you might
have that are very specific to your application. What if you wanted to detect
different types of flowers? Or like a musician, maybe you
want to extract a musical note out of a sound stream. You would want to use
your own custom model. And ML Kit helps
with that as well. So there are three
things, three benefits, to bringing custom
models to ML Kit. The first one is
that the ML Kit SDK provides an API layer that
interacts with your TensorFlow Lite model. So you can provide
inputs and get outputs, just like you would
do with the base APIs.
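Here is a sketch of that interpreter flow in Kotlin. The class names come from the 2018 custom-model API (the import paths are approximate), and the model name, tensor shapes, and class count are hypothetical placeholders for your own model:

```kotlin
// Sketch: running inference against a hosted TensorFlow Lite model through
// the custom-model API. Shapes and names below are placeholder assumptions.
import com.google.firebase.ml.custom.FirebaseModelInputOutputOptions
import com.google.firebase.ml.custom.FirebaseModelInputs
import com.google.firebase.ml.custom.FirebaseModelInterpreter
import com.google.firebase.ml.custom.FirebaseModelOptions
import com.google.firebase.ml.custom.FirebaseModelDataType

fun classify(input: Array<Array<Array<FloatArray>>>) {
    // Refers to a model registered under this name (registration is sketched
    // a little further down, where Sachin shows the loading snippet).
    val modelOptions = FirebaseModelOptions.Builder()
        .setCloudModelName("my_model_v1")
        .build()
    val interpreter = FirebaseModelInterpreter.getInstance(modelOptions)

    // Describe the tensors your TFLite model expects and produces; a
    // 224x224 RGB float input and a 5-class output are assumptions here.
    val ioOptions = FirebaseModelInputOutputOptions.Builder()
        .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
        .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 5))
        .build()

    val inputs = FirebaseModelInputs.Builder().add(input).build()
    interpreter?.run(inputs, ioOptions)
        ?.addOnSuccessListener { result ->
            val scores = result.getOutput<Array<FloatArray>>(0)[0]
            println(scores.joinToString())
        }
        ?.addOnFailureListener { it.printStackTrace() }
}
```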
Second, you can upload
the TensorFlow Lite model to the Firebase console. And that provides two benefits. First, and Brahim
alluded to this earlier, is that you can bundle your
model with your application, if you choose. But if it’s big, and you want
to reduce the install size, you can actually just
leave it in the cloud and download it dynamically
so the initial install size is smaller. And the third one, because
it lives in the cloud, you can dynamically
switch the model. You don’t have to submit a
new APK or bundle to the App Store or the Play Store. Here’s a quick snippet on how
you would load that model, how you would refer to it. So let’s say I called
my model “my model v1.” I would put this snippet,
and it would retrieve it from the cloud.
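A Kotlin sketch of that kind of snippet, assuming the model was published in the console under the name "my_model_v1" (a hypothetical rendering of the name from the talk); the class names follow the 2018 beta and the download conditions are just an example policy:

```kotlin
// Sketch: register a cloud-hosted model so it is downloaded on demand and
// can be referred to by name from the interpreter shown earlier.
import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource
import com.google.firebase.ml.custom.model.FirebaseModelDownloadConditions

fun registerHostedModel() {
    val conditions = FirebaseModelDownloadConditions.Builder()
        .requireWifi()  // example policy: only fetch the model over Wi-Fi
        .build()

    val cloudSource = FirebaseCloudModelSource.Builder("my_model_v1")
        .enableModelUpdates(true)          // pick up new versions you publish
        .setInitialDownloadConditions(conditions)
        .setUpdatesDownloadConditions(conditions)
        .build()

    FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)
}
```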
Now, let’s take a step back. When I started,
remember, I mentioned that there are a lot of
other Firebase products that are very useful. And one of my favorite
ones is Remote Config. And what it does is allow
you to dynamically switch values inside your app. It’s typically used for
things like switching the color or the background. You can also use it to switch
call to action strings. It’s really useful for
that sort of thing. But it turns out that it’s
also very useful for ML Kit. So here’s what I did. I went to the Firebase
console, and I created a parameter called my model. Then I created three different
target populations– one for people who speak English,
another one for people who speak Spanish, and
then a default value. So what I’m trying to do here is
I’m targeting a different model to those different populations. And once you do that,
instead of hard coding a model name like
it is here, you just change that static string
for a call to Remote Config. And then every device,
depending on what population it belongs to, will
get its respective model.
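A sketch of that swap in Kotlin: read the model name from Remote Config and register whatever it resolves to. The parameter key "my_model" mirrors the example in the talk; the default value and fetch interval are assumptions:

```kotlin
// Sketch: let Remote Config decide which hosted model this device uses.
import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

fun registerModelFromRemoteConfig() {
    val remoteConfig = FirebaseRemoteConfig.getInstance()
    // Fallback value in case nothing has been fetched yet.
    remoteConfig.setDefaults(mapOf<String, Any>("my_model" to "my_model_v1"))

    remoteConfig.fetch(3600).addOnCompleteListener {
        remoteConfig.activateFetched()
        // Each device resolves its own value, e.g. an English or Spanish model.
        val modelName = remoteConfig.getString("my_model")

        val source = FirebaseCloudModelSource.Builder(modelName)
            .enableModelUpdates(true)
            .build()
        FirebaseModelManager.getInstance().registerCloudModelSource(source)
    }
}
```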
And this is just a
very simple example. You can think of using
A/B testing and analytics. So you can test out
different models, and you can pick the one that
performs best and choose that. So experimentation, as
Brahim said earlier, is important in
machine learning. All right. Before I wrap up and
hand it over to Wei, I want to talk about this
compression and conversion of TF models into TF Lite models. So let’s say you have
a TensorFlow model, but what you need is a
TensorFlow Lite model to run on device. We have a feature for that, and
it’s coming soon, as I said. You would upload your TensorFlow
model along with training data. And once it’s done
processing, you’ll get a bunch of TensorFlow
Lite models to choose from. As you can see, these
would be compressed. And they have
trade-offs, so we have different accuracy,
different inference latency, and different sizes. So depending on what
you’re most sensitive to, you can pick the one
that suits your needs. This flow is, to begin with, only for
image classification models, but we look forward to
adding more in the future. OK, so you’d pick the model
that works best for you, you publish it,
and then it’s just available like any
other custom model that you would
upload on your own. I know this seems super
easy with a beautiful UI and just three steps, but this
is actually really hard to do. It’s an active area of research. It’s almost like magic. And to tell you more
about that magic, I’d like to invite
our resident wizard. Wei, please come up on stage. [APPLAUSE] WEI CHAI: Thank you, Sachin. Hi, my name is Wei. My team– actually, right here. My team has mixed
machine learning experts and mobile developers. It’s been a lot of
fun to be a part of it and build something that we
all believe can be useful. For example, model compression. Now, I’d like to go
deeper into the technology behind the magic. But first of all, let me
explain why we want to support model compression. Running machine learning on
the cloud versus on mobile, one big difference is that the
mobile environment has very limited computational resource. This makes the model size
and the inference speed extremely critical. For today’s hardware limit,
most mobile applications require very small models– ideally less than
a couple megabytes. So, on the other hand, if we
look at the model architectures proposed by researchers for,
say, image classification, to obtain higher accuracy,
the machine learning models can go much deeper and larger– sometimes hundreds of megabytes
for certain applications. After talking to a lot
of mobile developers, we realized that how to make
machine learning models small and efficient enough
to fit on mobile phones is one of the big pain points. With MK Kit, we’d like
to address this issue by providing model compression
tooling and support. So a model compression
service or tool takes a large trained
model as input and automatically
generates models that are smaller in size,
more memory efficient, more power efficient,
faster in inference speed with minimal loss in accuracy. As Sachin just mentioned, this
is still an active machine learning research area. And our compression
service is based on learn-to-compress technology
developed by Google Research. And it combines various
state-of-the-art model compression techniques. For example, one method called
pruning reduces the model size by removing the less
contributing weights and operations in the model. We found that for certain
on-device convolutional models, pruning can further reduce
the model size by up to 2x without too much
drop in accuracy. So another method
is trying to reduce the number of bits used for
model weights and activations by quantization. For example, using 8-bit
fixed point for model weights and activations
instead of floats can make model inference run
much faster, use lower power, and reduce the model size by 4x.
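As an illustration, a generic 8-bit affine quantization scheme (an assumption about the general technique, not necessarily the exact recipe used here) maps each float value x to an 8-bit integer q through a scale s and a zero point z:

```latex
% Generic 8-bit affine quantization (illustrative):
q = \operatorname{clamp}\!\left(\operatorname{round}\!\left(\frac{x}{s}\right) + z,\ 0,\ 255\right),
\qquad
x \approx s\,(q - z)
```

Since each weight is stored in 8 bits instead of 32, the stored model shrinks by roughly 4x, which is where that figure comes from.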
Specifically with
TensorFlow Lite, switching from MobileNet
to quantized MobileNet can speed up inference by
2x or more on Pixel phones. So the third method trains a
compact model called a student model with distilled knowledge
from a larger model– a teacher model. So the student
does not only learn from the ground-truth labels
but also from the teacher.
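A common way to write this down (an illustrative formulation of knowledge distillation, not necessarily the exact objective behind the learn-to-compress service) is a weighted mix of the usual cross-entropy on the labels and a term that pulls the student toward the teacher’s softened predictions:

```latex
% Illustrative knowledge-distillation loss: alpha balances the hard labels
% against the teacher's soft targets computed at temperature T.
\mathcal{L}_{\text{student}}
  = \alpha\,\mathrm{CE}\big(y,\ p_{\text{student}}\big)
  + (1-\alpha)\,T^{2}\,
    \mathrm{KL}\big(p^{T}_{\text{teacher}} \,\big\|\, p^{T}_{\text{student}}\big)
```

Here T is a softmax temperature and alpha weighs how much the student listens to the labels versus the teacher.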
Typically, the student
models are very small in size with much less
weight in the model, and use more
efficient operations for the benefit of
inference speed. For example, for
image classification, the student models can be
chosen from MobileNet, NASNet, SqueezeNet, or any other
state-of-the-art model architectures compact enough
for mobile applications. We can further extend
this distillation idea to simultaneously
train the teacher model and multiple student
models with different sizes in a single shot. One thing to mention is
that very often for all these techniques we need a
fine tuning step for the best accuracy. So in this case,
we do not only need the original model for
the compression process but also your training data. So for ML Kit, we’ll
provide a cloud service for model compression. For now, we only support
image classification use cases but will soon expand to more. So the reason that we
support model compression as a cloud service– as I just mentioned,
model compression is still an active research
area with new techniques and new model
architectures specifically for mobile applications
invented very fast. For example, from MobileNet
v1 to MobileNet v2, it took less than
one year to invent. So our compression
service will automatically incorporate the latest
advances in technology for you. Another reason. The compression
process typically takes quite some
computational resource. It can run multiple
hours on GPUs. So we will run our cloud
service on Google Cloud to use the computation
power there. So what we need
from the developers include a pre-trained
TensorFlow model in saved model or checkpoint format, and your
training data in TensorFlow Example format for the fine-tuning
step I just mentioned. What we generate will be a set
of models with different size and accuracy trade-offs
for you to choose from. Since ML Kit is running
on top of TensorFlow Lite, all our generated
models will already be in TensorFlow Lite format
for you to download or serve through our model
hosting service. So with ML Kit model
compression service, we’re aiming to compress a
model up to 100 times smaller, depending on your use case
and the original model. To give you a real developer
use case as an example, Fishbrain is a
local fishing app. They already have their model
to identify fish species, and the model is currently
running on the cloud. With our model
compression service, the original model
provided by the developer, which was 80 megabytes with 92%
top-three accuracy, can be compressed to
much smaller models with different sizes and
accuracies, shown here. As you can see in
this particular case, the accuracies of
the generated models were even higher than
the original model, which is not always the
case, but it’s possible. And it’s great. To summarize, with
ML Kit, we’d like to make machine
learning accessible to all mobile developers. To achieve that,
we’d like to help with every single step in the
machine learning workflow. Not only how to use the
model, but also how to build and optimize your model. Now, I’d like to conclude the
talk with a summary of what we’ll provide. We are launching in beta
the base APIs for both iOS and Android, including text
recognition, image labeling, barcode scanning,
face detection, and landmark recognition. We are also
supporting custom APIs with TensorFlow
Lite model serving. Please check out these
features on the Firebase website. Meanwhile, we’ll have a set of
new features coming out soon, including high-density face
contour API, Smart Reply API, and model compression
and conversion service. We’ll soon start to whitelist
developers to try them out. If you are interested, please
use this link here to sign up. We are super
excited about ML Kit and how it potentially
can help developers build cool machine learning features. Look forward to your
feedback, and we’re committed to making it great. Thanks for coming. And if you have questions,
we’ll be available right after this talk at the
Fair Bits and Bobs Q&A area. And we are also having some
relevant sessions and talks for you to check out. Finally, please
leave your feedback about this session for us
to improve for the future. Thank you. [MUSIC PLAYING]


  1. Is it possible for one device to communicate with another nearby device to share relevant information using ML?
    E.g., I am searching for "sell product A" and a person in my locality searches for "buy product A"; we could somehow be alerted to meet each other.
    There would be other applications, like going to the same destination, looking for a guide nearby, or, to an extent, knowing whom I am talking to in a real conversation (maybe their profession, company, etc.).

  2. Been waiting for ML to become a tool for programmers vs a project for Maths. It’s happening? Now?

  3. I have seen that Google Vision has very low accuracy compared to other ML vision providers; CloudSight is the best one that I have researched.
