ML Kit: Machine Learning for Mobile with Firebase (Google I/O’19)


[MUSIC PLAYING] CHRISTIAAN PRINS:
What is ML Kit? Well, before we decided
to work on ML Kit, we talked to a
lot of developers. And we asked them how they
were using ML in their apps. And they told us it was
actually quite hard to use. You need to have machine
learning expertise to build models. You need to have a
lot of training data. You need to collect the data. So it wasn’t as easy. So as a team, we made
it our singular goal to make ML easy to
use for developers. We looked around Google
and saw that there were a lot of powerful
ML models being developed as part of Google Research. And so the next step was
actually rather obvious. Like, why not take these
models, wrap them in a nice, easy to use SDK, and
provide them to you? So that is how ML Kit was born. ML Kit is Google’s machine
learning SDK for mobile. It is built on top of the
TensorFlow Lite runtime, and is available both
for Android and iOS. That makes it easier
for you, because you don’t need to integrate
an ML solution twice, one time on Android,
and one time on iOS. Since we started ML Kit last
year, exactly a year ago, we have grown the set of APIs. We started off with APIs for
vision-based applications. Examples are text
recognition, barcode scanning, and image labeling. Earlier this year, we extended
that to natural language processing, with APIs like
language identification, which identifies the
language of a string. And also, Smart Reply. Smart Reply gives you
suggested responses as part of a text conversation. We also have model serving. This allows you to serve a
custom model from the cloud and keep your app
small, and also allows you to do
experimentation. So you can do A/B testing
of two versions of a model in the field with real clients. So have we been doing? Well, we saw a lot of strong
interest from developers. More and more apps
each day use ML Kit to create powerful new
features in their apps. And it’s not just apps. We also see strong
engagement from actual users that use these features. And as we are extending
ML Kit with more APIs, we see this engagement
accelerating. And this is not just on Android. We’re really happy to see that
about a quarter of the apps using ML Kit are
running on iPhones. Here’s a sample of
companies that use ML Kit. And as you see, there’s
a big diversity here. We have very small, mobile-first
companies, as well as larger, well-established retailers
that make use of our APIs. So over the last year,
we learned a lot, not just about building
ML Kit, but also by talking to developers like
you that give us feedback. And in that feedback, we
saw three clear themes. The first theme was, you
really liked the base APIs, but please give us more. The second theme was that
although the base APIs are nice, they help me
directly solve a problem, building a rich and intuitive
user experience around it is rather tricky. And lastly, we realized
that the base APIs help you with certain common challenges. But a lot of times, you
need to solve use cases in your app that are
more niche, that are more tailored toward your app. And for that, it’s still quite
hard to build custom models. Since we built ML Kit
for you, we listened, and we built some cool stuff. And here, we’re really
excited to present that to you today in this session. So let’s get started. We’re adding two APIs to ML Kit. Let’s talk a little bit
about Google Translate. Google Translate is the world’s
number one digital translation solution. It is used by more than 1
billion users each month, and all those users together
translate about 85 billion sentences each day. And of these users,
more than half are running on mobile devices. Now, mobile machine
learning has not been applied to language
translation for a long time. We only introduced neural
machine translation models at the end of 2016, but
this led to a huge boost in translation quality. And this was also
available to you. We provided the
Google Cloud Translate API that allows you to integrate
translation into your apps. Now, we did realize that
users are not always online. That’s why the
Google Translate app introduced offline translation. And it took these NMT models
that were running in the cloud and optimized them
for use on mobile. However, these models were
not available to developers. And that’s what we
want to change today, by offering offline
translation as part of ML Kit. So that is the first API,
on-device translation. It translates
between 59 languages and uses the exact
same models which we use in the Google Translate
app for offline mode. They fully run on device,
and are available to you at no cost. Now, to show you how
this works in a real app, I would like to invite
Shiyu on the stage. SHIYU HU: Thanks, Christiaan. I’m Shiyu. I’m tech lead for ML Kit. So to demonstrate the
translation feature to you, we thought for a while about how to do it. Since I don't want to type on stage, and I only know about two languages, instead of manual language input, why not use some existing ML Kit features to help us? So we built this demo app. It uses the camera stream as input and then translates what it sees. Please go back to the slides first. So, it uses the camera stream as input. Then we use text recognition to recognize the text in the images. Then we use language identification to identify which language it is. And finally, we use translation to translate that language into English.
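To make that flow concrete, here is a minimal Kotlin sketch of the first two stages, using the ML Kit for Firebase vision and natural-language APIs of this era; the helper function name and the Bitmap input are our own illustration, and the translation stage is shown in the code sample a little later in this section.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Runs the first two stages of the demo pipeline on a single camera frame.
fun recognizeAndIdentifyLanguage(frame: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(frame)

    // Stage 1: on-device text recognition.
    FirebaseVision.getInstance().onDeviceTextRecognizer
        .processImage(image)
        .addOnSuccessListener { visionText ->
            val recognizedText = visionText.text
            if (recognizedText.isBlank()) return@addOnSuccessListener

            // Stage 2: identify the language of the recognized text.
            FirebaseNaturalLanguage.getInstance().languageIdentification
                .identifyLanguage(recognizedText)
                .addOnSuccessListener { languageCode ->
                    if (languageCode != "und") {  // "und" means undetermined
                        // Stage 3: hand recognizedText and languageCode to the
                        // translator -- see the translation code sample below.
                    }
                }
        }
}
```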
So let's switch back to the demo. Let's take a look. So, this is the demo app. And then we want to scan
some text on the paper. And this one, I do
not know what it is. I don’t know what
language it is. I don't know what it means. And it's translated to English
as, how are you doing today? I’m doing good, thank you. So let’s try another one. So the next one,
I actually know, from my friend Christiaan,
who is from Holland, is Dutch. And it translates
into English as, when does the train go to Amsterdam? OK. For this question, I’d
probably need to check it out. Since all three features run on the device, it's super fast. If we move between these two lines of text, you can see how fast the translation, text recognition, and language identification happen. Thank you. We'll switch back to the slides. [APPLAUSE] Thank you. So, how does it work? Let's take a look. As Christiaan mentioned, over the last couple of years the models have improved a lot. Before, we were using phrase-based machine translation. That kind of model translates phrase by phrase, so every phrase may make sense, but the whole sentence may not. Then we switched to a neural machine translation model, which translates the whole sentence as one piece, so the whole sentence makes more sense. The ML Kit translation models all use neural machine translation. Let's look at these models in a bit more depth.
Translation is based on language packs, and each language pack is 25 to 35 megabytes. So it doesn't make sense to bundle all of these language packs into your app; that would be way too big. Instead, we provide a download API for you, so that you can download the needed language packs as you go. That helps you keep your app size small. Further, we have 59 languages, so the total number of language pairs would be almost 3,000. To reduce the number of language pairs, we use English as the intermediate language. So here's an example. On the top, as my friend Christiaan told me, is [SPEAKING DUTCH]. In English it means, the food was delicious. So it first translates to English for us, and then it translates to Chinese as, [SPEAKING CHINESE]. So that is how it works.
OK. Let's look at a code sample. To use translation, we start with an options object. In the options, we set what the source language is and what the target language is. With these options, we construct a translator, simply by passing in the options. With the translator, before we do the real translation, we download the model first. The API is called downloadModelIfNeeded. It will download the model for you and then invoke a callback. When the download completes successfully, you can use translate. You call translate to translate the text into the target language. And that is the code for translation.
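For reference, here is a small Kotlin sketch of that flow as described, using the FirebaseTranslatorOptions and FirebaseNaturalLanguage classes from the ML Kit for Firebase SDK of this era; the Dutch-to-English language pair matches the demo, while the function signature and callback handling are our own illustration (failure listeners omitted for brevity).

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Translates Dutch text to English, downloading the language pack on first use.
fun translateDutchToEnglish(sourceText: String, onResult: (String) -> Unit) {
    // 1. Options: set the source and target languages.
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.NL)
        .setTargetLanguage(FirebaseTranslateLanguage.EN)
        .build()

    // 2. Construct the translator by passing in the options.
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // 3. Download the language pack if it is not on the device yet, then
    //    translate once the download callback fires.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(sourceText)
                .addOnSuccessListener { translated -> onResult(translated) }
        }
}
```

Chaining the download and the translation this way means the first translation for a new language pair may take a moment while the pack downloads; after that it is fast and fully offline.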
For that, I will hand it back to Christiaan. CHRISTIAAN PRINS: Thanks, Shiyu. OK. Let's talk about the second
base API that we are launching. This is the object
detection and tracking API. Let’s walk you
through a scenario. I recently visited
Shiyu’s house. And it looks a
little bit like this. He really likes bright colors. And I saw a couple of
items in his living room that I actually quite
like and may fit nicely in my own living room. And in the afternoon, I
happened to be visiting my favorite Swedish
retailer, and I would like to see if they
have similar items available, and if they are in stock. Now, I don’t really want
to bring up a website or open an app and then
browse through a catalog. So how can we use ML to make
this a bit more magical? Now, funny enough, IKEA
had the same question. So we decided to
partner with them. So let’s show you
what we came up with. Please go to the demo. So we’re going to
launch the IKEA app. And then we’re going to
see the live camera view. And it allows us
to scan objects. So let’s try a first object– this lamp, for example. As you can see, we find the
lamp and several other lamps that look like it. OK. Let’s try another object– this clock. We scan the clock. We find the clock and other
items that look fairly similar. Please switch back
to the slides. SHIYU HU: OK. Let’s talk about how it works. CHRISTIAAN PRINS: Please
switch back to the slides. Thank you. SHIYU HU: How does it work? So this is the end-to-end walkthrough of the demo you just saw. It contains two parts: the on-device part and the cloud part. For the on-device part, we use the camera stream as input, and the camera images go into the ML Kit ODT. ODT means object detection and tracking. The ODT will try to find the objects inside these images, and then, optionally, it will also do a coarse classification to classify each object into some categories. As the user moves the camera around to find different objects, at some point they find: OK, this is the object I'm interested in. Then we crop the image down to a smaller image that contains only that object. We send that object image up to the cloud. In the cloud, we do the visual search, which basically does image matching. It matches the uploaded image against the product images, finds similar products, and sends them back to the device. That is the full cloud visual search. For this demo, we're using Google Cloud Vision Product Search, but you can also use other, third-party solutions.
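As a minimal Kotlin sketch of that cropping step, the snippet below cuts the frame down to a detected object's bounding box before upload; the function name and the clamping logic are our own, and the bounding box itself comes from the ODT API shown later in this section.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Crops a camera frame down to a detected object's bounding box so that only
// the object of interest is uploaded for visual search.
fun cropToObject(frame: Bitmap, boundingBox: Rect): Bitmap {
    // Clamp the box to the frame, since a box near the edge can extend past it.
    val left = boundingBox.left.coerceIn(0, frame.width - 1)
    val top = boundingBox.top.coerceIn(0, frame.height - 1)
    val width = boundingBox.width().coerceAtMost(frame.width - left)
    val height = boundingBox.height().coerceAtMost(frame.height - top)
    return Bitmap.createBitmap(frame, left, top, width, height)
    // The resulting crop is what gets sent to the product search backend
    // (Cloud Vision Product Search in this demo, or a third-party service);
    // that upload call depends on the backend and is not shown here.
}
```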
OK. So let's take a deeper look at the on-device part provided by ODT. We started building this feature with two models. The first model is a localizer, and the second is called a classifier. The localizer finds the objects inside the images, and the classifier does the optional classification. We then ran these two models on the live stream, and the result didn't look good. You can see the image is a little bit choppy and the bounding box is a bit delayed. The reason is that it's too slow: it takes about 100 milliseconds in total to process one image, so it can only process 10 images per second. That's not good. Then we asked, do we really need to process every image with these two models? So we introduced a new model called the tracker. The good thing about the tracker is that it runs really fast: it can run in less than 10 milliseconds on a typical Android device. Once the localizer finds an object, it hands that object over to the tracker, and the tracker takes care of it. So we only need to run the localizer and the classifier once for new objects. Here's how it looks. With the tracker, we can process 30 frames per second, so it's real time. And it's real time not only on high-end devices, but also on low-end devices, for example the Nexus 5.
So this makes the feature much more useful. OK. Let's also look at some code. Similar to the translator, we start by setting up the options first. The ODT pipeline is highly configurable: you can use streaming mode or single-image mode, you can enable or disable classification, and you can choose to detect [INAUDIBLE] objects or [INAUDIBLE] objects. For details, check the dev docs. With these options, we create the detector instance, simply by passing in the options to construct it. Then we want to get the input image. The input image uses FirebaseVisionImage. A FirebaseVisionImage can be constructed from different formats; here we're using a bitmap, but it can also take camera byte buffers. Once we have the image, we send it to the detector to process, and it returns a list of detected objects. For every detected object, it first provides a bounding box; that's the bounding box you saw in the demo. If you enable classification, it also provides a category. And if you are in streaming mode, we also provide a tracking ID so you can tell it is the same object across frames in the stream. So that is the code for ODT.
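Here is a small Kotlin sketch of that flow as just described, using the FirebaseVisionObjectDetectorOptions API from the ML Kit for Firebase SDK of this era; the chosen options and the loop body are our own illustration, and failure handling is omitted.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// Detects and tracks objects in one frame of a camera stream.
fun detectObjects(frame: Bitmap) {
    // Options: stream mode with classification enabled.
    val options = FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .build()

    // Create the detector by passing in the options.
    val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

    // Build the input image; a bitmap here, but byte buffers also work.
    val image = FirebaseVisionImage.fromBitmap(frame)

    // Process the image and read out the detected objects.
    detector.processImage(image)
        .addOnSuccessListener { detectedObjects ->
            for (obj in detectedObjects) {
                val box = obj.boundingBox                  // where the object is
                val category = obj.classificationCategory  // coarse category
                val trackingId = obj.trackingId            // stable across frames in stream mode
                // e.g. draw the box, or crop and send the object to visual search
            }
        }
}
```

In stream mode you would call this for every camera frame and rely on the tracking ID to tell which detections belong to the same object.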
And then, one more thing. We're also working with Adidas to demonstrate the power of ODT. In the next demo, we want to show you an in-store experience with the Adidas app. For that, imagine we're actually inside an Adidas store and we find shoes that we really like. I want to try them on. I want to see whether they have my size, or whether it's available in the store. So for that, let's cut over to the demo. Yeah. That should start it. That's the Adidas app. OK. So I think we could
use the normal search. OK. So– CHRISTIAAN PRINS: One second. SHIYU HU: Let’s try it again. Seems there are some
internet issues. OK. Here we go. So there is a product scan. Let’s scan the product. OK. So these are the shoes
I’m talking about. I like the shoes. The reason is, it actually
has ML Kit patterns on it. And actually, the name
is called the ML Kit. So I already want to try it on. So I want to see
whether it has my size. My shoe size is a 9. See whether it’s available. Let’s try. Oh, great. It has 9. I want to try it on,
since I’m in the store. And then, OK. So I have that size. And then bring it up to me. Cool. Thank you. Yeah. Thank you. [APPLAUSE] So, yeah. I will actually try my
shoes on, and then I will hand back to Christiaan. Thank you. CHRISTIAAN PRINS: Thanks, Shiyu. OK. Let’s talk a little
bit about UX. As we were developing the object
detection and tracking API, we realized that just providing
an API is not sufficient. Actually giving users
a good experience with live visual search
is pretty tricky. And so we reached out to
the material design team. And as we were
talking with them, we realized that we had
fairly similar goals. Their goal is to
democratize digital design. And ours is kind of to
democratize machine learning. So we thought, why
not work together to help developers build great
solutions that make use of ML? As part of this
collaboration, we have launched some extensions to
the material design guidelines. These guidelines can be found
on the material.io website. And there we provide
guidance on how to use material design
components to build experiences like visual search
or barcode scanning. We assist you with
building a flow that helps you tackle challenges
that only come up when you integrate ML. And we also help you to do
theming, using color, shape, and typography, to ensure that your brand identity is not lost but is represented in the app, while you still get a great user experience. A second outcome of
this collaboration is that we actually built
a couple of showcase apps. These are real, polished, end
to end experiences that have been tested with actual users. And what is great
is that we’re making these apps available on GitHub,
so that you can also integrate this in your app very quickly. So with this experience,
some of our partners could integrate this
whole experience within a day or less. And it really sped
up development. And it saved them many months
of design and user testing. And with the launch
today, this is something you can do as well. OK. The last topic that we
want to discuss today is about custom models. To show how we're going to make it easier to build custom models, I want to invite Ann
and Sonakshi on stage. SONAKSHI WATEL: Thanks. Thanks, Christiaan. [APPLAUSE] Hey, everyone. I’m Sonakshi, and I’m a
designer on the ML Kit team. So when we got
started with ML Kit, we’d identified two
kinds of developers. On one end, you
can see someone who needs a more turnkey kind
of solution, something more common. They don’t know too much ML. They don’t have access
to too much data. And on the other end,
there are ML experts. They have lots of data. They know what they’re doing. They might come in to
host a custom TensorFlow models on ML Kit. And we thought there were
these two kinds of developers, and that was it. Well, we identified another
kind of developer, someone like me– someone who needs
a more common solution, has access to some
data, and knows some ML, maybe took a few
online courses, or just studied some courses in college. And today, I want to share with
you a really serious problem that I want to solve. I want to identify dog breeds. Well, I find dogs really cute. And there’s so many
amazing dogs at Google. And one such amazing dog is Une. And that’s Christiaan’s
dog, actually. So today, if I want to
know what breed Une is, I might use the on device
image labeling API. Let’s see what that tells us. Well, that just says
that Une’s a dog. But I guess all of
us already knew that. So, then how do I
identify dog breeds? ANN ZIMMER: Well that’s a
great question, Sonakshi, and one a lot of our
developers have been asking us. So we came up with
a solution for you. Hi, everybody, I’m Ann. I’m a back end developer on
the Firebase ML Kit team. We came up with a way of
incorporating AutoML Vision Edge into the Firebase solution. So, you can take
your own data set. You can put that into ML Kit. We’ll train the model and we’ll
export a TF Lite model for you. That gives you an optimized model that you can use and run on device. Why do you want to run
it on a mobile device? Well, there’s lots of reasons. We’re going to talk about
a couple of them today. Latency being the first. It’s faster. There’s no back and forth,
talking to the network, waiting for results. That means that
you get the ability to run on device in real time. You can use your video camera. You can use real time processing
to get a better experience for your users. Also, user data. You can do it all
on your device. You don’t have to upload that
image or send it across the network. You can do everything
right there. So, how does it work? We’ll switch to
the demo, please. So brand new inside of
Firebase ML Kit, you've got the new AutoML tab. We're going to click on that. You get the ability to
create your new data set. So once it finishes
loading, we’re going to click on the
Create Data Set button. And we’re going to give our
data set a name, Dog Breeds. We only want to do single
dog breed classification, so we’re going to
click Single Label. And then we’re going
to create our data set. You have a lot of different
ways of uploading your images. All the standard ones
are usually supported. We recommend that if
you have a big data set, you put it in a zip file with
a proper folder structure. You’ll get the labels
automatically applied as you upload your images. So let’s browse for a
file and drop it in. We’ve got our data set. And it’s going to start
uploading our images. Then it’s going to do
a bit of validation. And then it’s going
to import our images. This will take a while. We’ve got 20,000 images
in this zip folder. So we’ll actually send you
an email when we’re done. So I’ve done that today. You guys don’t want
to wait an hour. So let’s switch to
the next tab, where we have our images available. So as you can see, we’ve
got our labeled images tab. We’ve got a lot of
labels down the side. But you’ll see a little warning,
that little yellow sign. And it tells us
that we recommend that you have 100 images. So I noticed that. I uploaded a few extra images. So let’s switch to
the Unlabeled tab, and we’ll select a few
extra ones so that we can get rid of that warning. It’s really easy to
label your images. You click on them. You hit the Manage Label button. And you can type in
the label you want. In this case, it’s the
Miniature Leonberger and so we’ll apply that label. Very quickly, that
warning is gone, and we’re able to
train our model. So inside the
training model, we’re going to give it a better name. And then you have a couple
of options for your latency and package size. So depending on the type
of users that you have, you’re going to make the
appropriate decision. If you have a lot
of clients that use smaller devices
without a lot of memory, you’ll probably want to
pick a smaller package size. And you’ll choose low latency. It might take a little longer,
but you’ve got a smaller size. If, however, your model
needs a lot of accuracy, you’re going to pick that
higher accuracy model. And it’s going to be bigger. For most people, the
general purpose solution is appropriate. So then the final option
is the training time. So we normally recommend
two to three hours for every 1,000 images. So we’re going to
pick eight hours here. The good news is that once
your model is optimized, we stop training it,
and we don’t charge you for any extra time. So we’ll train the model. And again, we just
told this model that it could take eight hours. You’ll get an email
when it’s done. So we’re going to
switch to a tab where I’ve already trained
this model for you. So you can see here, the
training is complete. We’re going to go inside. And you’ll see that we
actually only took five hours. So we weren’t charged for the
full eight, only for five hours for training. If you look at the evaluation
details, you have a threshold. This threshold is a number that you, as the developer, will set in your code. It tells the model what confidence level you want for your results. So if we moved it all the way up to 0.9, it would only give you back a label if it was confident with 90% confidence. Now, that might be important to you in some cases; it might not be in others. For example, in our dog breeds, what if we've got one that's coming back at 48% this and 54% that? Would we want no label, or do we want to lower that threshold down to 25% and just give me the best guess, which would be the one with the 48% confidence? Scrolling down a little further,
we have our confusion matrix. And this identifies
areas in your model that have the potential
for confusion. So here we’ve got the Siberian
Husky and the Eskimo dog being identified at the top. Now, as a human, I get
these confused all the time. So I’m not too
surprised that the model has some problems with that. You as a developer have
a couple of choices here. You can add more
images and retrain, hoping the model
gets more accurate. Maybe you want to merge them
into a Husky-Eskimo hybrid. It depends on what’s
important to you. You might also find that if
you look through those images, some of them are
actually mislabeled. I actually had that when
I trained a flower set. So this confusion
matrix will give you some information that’s making
your model more efficient. And we’re going to go
back up to the top. And we did this
because we wanted to find out what breed Une is. So let’s take a picture
of Une and test our model. So Une is– Une is a Staffy. And I asked
Christiaan and Une is in fact, about a third Staffy,
so a fairly accurate guess here. Now, that's great. We've got our model. And it's really easy to use, just like all of the other APIs that we provide for you. You just put your model into your app.
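As a rough Kotlin sketch of what that looks like, the snippet below bundles the exported model in the app and runs it through ML Kit's on-device AutoML image labeler; the builder classes named here (FirebaseAutoMLLocalModel and FirebaseVisionOnDeviceAutoMLImageLabelerOptions) are our recollection of the ML Kit for Firebase AutoML API of this era and changed across SDK releases, and the asset path, threshold value, and callback body are our own illustration.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
// FirebaseAutoMLLocalModel and FirebaseVisionOnDeviceAutoMLImageLabelerOptions come
// from the firebase-ml-vision-automl artifact; their package paths varied across
// releases, so the imports are not spelled out here.

fun labelDogBreed(frame: Bitmap) {
    // The model exported from the console, bundled in the app's assets folder.
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("automl/manifest.json")   // hypothetical asset path
        .build()

    // The confidence threshold discussed on the evaluation page is set here.
    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.25f)
        .build()

    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(FirebaseVisionImage.fromBitmap(frame))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                // label.text is the breed name, label.confidence its score.
            }
        }
}
```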
So we actually have a demo app. If we could switch to the demo app, please. To the phone. Thank you. So the other day, Christiaan
was given a Christiaan and Une look-alike. So let’s scan across
and see what happens. Oh, it’s not quite
Christiaan, but close enough. Is this Une? Is this Une? SONAKSHI WATEL: Oh, OK. ANN ZIMMER: There it is. We had it for a second. Siberian Husky is
pretty close too. SONAKSHI WATEL: That’s OK. ANN ZIMMER: It’s OK. Back to the slides please. [APPLAUSE] As you can see, we were
able to use our model. Oh my goodness, I
skipped a whole section. So normally, what
you would have had to do to get to this stage
was, traditionally, you would’ve had to build your
model out by preparing your training data, developing your model, training and tuning it, deploying it, and then giving it to your developer to run predictions. This would have involved a
lot of people, traditionally. And I’ve skipped ahead
and I apologize for that. So now you get to
do it all yourself. So you saw the model. How does it compare
to what’s out there, those handcrafted models
that have experts? It’s 1.8 times faster. How did we do that? We incorporated speed into
our search algorithms. And that meant that the
models are faster and just as accurate. The beta is now available. You can start at no
cost and pay as you go. For most developers,
that free trial is enough to generate a model
that you can deploy and use in your app and give your
customers a good experience. SONAKSHI WATEL: Thanks, Ann. So as you saw, Ann had a really
comprehensive data set of dogs. It had around 20,000 images. But if you’ve done machine
learning in the past, or tried, like I have,
you know how difficult it is to just start by
preparing your training data. And getting a comprehensive
data set can be really tough. And that’s why we have
Custom Image Classifier. It’s a sample app that we
made to collect, label, train, and test all from
within one app. And what’s even more amazing
is that it’s open source and you can find it on
GitHub at this link. So let’s just get into a demo. And I’m going to show you a
flag classification sample app. So if we could go back to the
demo, switch to the phone. Cool. So let me just tell you
what I actually want to do. I want to set up a use
case to classify flags from different countries. So if we go into the app
and open the data sets page, we can actually go
and create a data set. And let’s create a
data set called flags and describe it as
flags of the world. As Ann is typing that out,
you can see on the screen that you can actually share
your data set if you wanted to, and make it public to
share it with other people. We really encourage
collaboration through this app. And as your data set is
created, the next step is to create labels for flags
from different countries. And this will be faster
in real life, trust me. Demos sometimes do weird things. But as that’s
happening, you will be able to see our data
set with labels on it. And in those labels– let’s just
switch to the world flags data set, where you can see these
labels that we’ve already created for different countries. So you could create a new
label, let’s say, for Canada. I’m not sure if we already have
that, but let’s try it out. And you can see Ann
making this new label, saving it, and adding
more images to it. So Ann is from Canada, so she
has a Canadian flag with her. And she’s going to just
take images of this flag. What’s really cool is that you
can actually even take a video and it will break it
down into images for you. And that’s really important
because you can take images from different angles, and
that helps when your model is identifying images. Let’s go back. And you might be
able to see that you need at least 100 images
for improved accuracy. But for now, we
have at least 10. So we can begin training. And we’ve already trained
this model for you. But if we had to go in and
train this model again, we would click Train Model
and decide how many hours we want to train it for. Like Ann said previously,
the recommendation is two to three hours
for every 1,000 images. We’re not going to do the
training now, because that’s going to take a long time. But let’s go back
and actually try to see if this
model really works. So Ann has a flag from
the country I’m from, but if some of you
in the audience know it, don’t say it out yet. Let’s see if our model
can recognize it. Let’s take a picture– and it’s trying. And it knows that it’s
India, with a 0.57 accuracy. Thanks Ann, for that demo. [APPLAUSE] And if you all are
as excited as we are when we created
this product, please do come stop
by the ML AI sandbox and check it out for yourself. Can we switch back to
the slides, please? And now I’m going to hand
it over to Christiaan to take us home. [APPLAUSE] CHRISTIAAN PRINS: Thank
you, Ann and Sonakshi. OK, after a bit of a whirlwind
tour, let’s do a quick recap. So we reiterated what ML Kit is. And we showed a bit on
what our adoption was. And we talked a bit about
the APIs that we already had. But then we walked you
through the new features that we’ve been adding
at I/O this year. We added two new
base APIs, which are Object Detection and
Tracking, and on device translation. We talked a bit
about material design that has extended the
guidelines to include ML. And we also brought
up the showcase app that now allows you
to quickly integrate these rich experiences
into your app. And lastly, we talked
about AutoML Vision Edge, to make it easier for you to
build your own custom image classification model. And then we also
talked about how to make it easier to collect
data with the Custom Image Classifier sample app. Now, the great thing is that all of this is available today for you to use in your apps. Please visit us at
g.co/mlkit to learn more. [MUSIC PLAYING]
