Articles

Livestream Day 2: Amphitheatre (Google I/O ’18)

August 10, 2019


>>Welcome to Google I/O 2018. We’re happy you joined us to celebrate platform improvements at Google. By now you’ve checked into registration to pick up your badge. It must be visibly worn at all times, and you’ll need it to board the I/O shuttles and enter the end-of-day activities on days one and two. Make sure to visit google.com/io or download the Google app for the most up-to-date information. Chat with any staff member wearing a yellow badge. Sessions will take place in the amphitheatre as well as the eight stages throughout the venue. If you haven’t already, reserve a seat for your favorite session on the app. If you miss a session, don’t worry, recordings will be available shortly after they end. Be sure to visit the codelabs; Google staff will be on hand for helpful advice and to provide direction if you need support. Make some time to visit the office hours to meet one-on-one with Googlers, ask your questions, and get feedback on your projects. If you’re looking to network with Googlers and fellow developers, join one of the several meetups hosted in the lounges. Finally, visit the sandbox domes where you can explore, learn, and play with the latest demos, physical installations, and more. After sessions are done for the day, stick around for food, drinks, music, and fun, and be prepared for some surprises along the way. We would like to take this opportunity to remind you that we are dedicated to providing an inclusive event experience for everyone, and that by attending Google I/O you agree to our code of conduct posted throughout the venue. Your opinion is valuable to us. After the event, look out for a feedback form to share your experience. Thanks for attending and have a wonderful time exploring Google I/O. (music)
>>At this time, please find your seat. Our session will begin soon. (music)
>>Come on up, guys. (Applause)
>>Hello everyone. Good morning and welcome to Fragments in an Architecture Components World. With architecture components, a lot of people have been asking questions like, how do all of these things fit together? Hopefully we’ll be able to answer some of those questions today.
First, just a little bit of history. The Fragment APIs go back to 2010, when they were first written, and 2011, when they became more widely available. At the very beginning, everybody just started with the activity when writing an Android app. It was the main method with the lifecycle, the entry point to the application. The launcher launches the activity, you get the create, start, resume, et cetera, events that you all know and love. It will create views for you, it creates the window, gives you a place to put your view hierarchy where you do most of the UI work. It binds the content from the app state, or at least this is where people tended to do it. Keep in mind this was fairly early in the Android public lifecycle. Everybody had a habit of piling everything into one activity because apps were a lot simpler than they are today.
Of course, you listen to UI events and update that app state from there as well. There were so many incoming events that it really kind of became this plumbing exercise to try to break it up yourself, and so people ended up with these very monolithic activities. Then Android tablets became a thing, and we thought, how do you make a
tablet UI? You take one phone UI, you take another phone UI
and smack them together and you have a tablet UI, right. What
could go wrong? So we needed to answer the
question of how to stick two phone UIs together and make sure
it still works on a phone. And fragments were kind of our
answer to this question. So as an example here, this
approach actually worked pretty well for some applications. Gmail is a great example. If
you take a look, you have the list of conversations next to the current conversation that you’re looking at, and there is not a whole lot of difference here between the phone version that you drill into versus something that you can show side by side in a two-pane UI. Fragments allowed us to split those up into two UI classes. We were seeing the pain developers were
feeling and activity classes were getting really big and hard
to manage, so we wanted to see, could we allow for you to
decouple some of the various things that your activity
classes are doing to sort of relieve a little bit of that
pressure. So that meant that anything an activity can do, a fragment had to be able to do too, and that guided a whole lot of the API design from the beginning. That meant that you needed to have lifecycle events, and that meant that you needed to be able to manage a view hierarchy. It meant that you needed to be able to deal with saved instance state, packaging everything up into a bundle or parcelables so that if your application got killed in the background you could restore from that in a clean way. And other things as well, like non-configuration instance object passing, which is always kind of a mouthful, but it allows you to pass an object across a configuration change so it isn’t destroyed along with the activity. And if you were navigating within a single activity, we wanted to make sure you preserved the same back stack behavior you’d get from just launching activity after activity from screen to screen. So, along the way we asked the
question, can we fix some other APIs that were really kind of a
pain in the neck in Android to use? onRetainNonConfigurationInstance, again kind of a mouthful. This had some problems because you only got one object that you could pass from activity instance to activity instance. We didn’t give you any easy or standardized way to multiplex different objects together, so for example if you were composing multiple libraries, we didn’t give you a whole lot of help. This was all boilerplate that you had to write yourself. Activity.showDialog. Anybody
remember this? A few people. Okay.
>>IAN LAKE: Please forget it immediately.
>>ADAM POWELL: Yeah, please forget it. This was
essentially a way to get the activity to do something that we
knew was missing and kind of a pain in the neck. It allowed the new instance of your activity, on the other side of a configuration change, to re-show what is semantically the same dialog.
We also had this thing, LocalActivityManager and TabHost, does anybody remember this one? Wow! That is a lot more
hands than I thought I would see.
So some of you might remember this from say the
example of the original dialer and contacts app back in kind of
the Android 2.0 days. And as you swapped between tabs,
between the normal number pad and your contact list or call
log, et cetera, those were actually separate activities
showing within a smaller window within an activity, this idea of
nesting activities was something that we started offering very
early. It had a lot of other issues though. Activities made
a whole lot of promises that local activity manager had a lot
of trouble trying to uphold, so we tried to tighten up that interface into something composable like fragments that would be able to do the same thing but with a
little more expected behavior. >>IAN LAKE: Right, so
this whole idea of being able to break up your activity and make
it into kind of composable pieces, you know, we have this giant monolithic activity, and there are other methods of factoring it out into different pieces, but for
fragments our approach was to move the loosely associated
code, the view and all of the state around that view, into a
separate fragment. And you basically just keep
doing this until you’ve kind of segmented out your activity into
something that looks slightly more reasonable, right. You may
have a couple of fragments in here that kind of have their own
specific requirements and their own specific UI, and your
activity just becomes more of a shell for this.
But soon after we started getting into this, around Jelly Bean, people were like, but this isn’t enough. What if one of my fragments has a view pager in it that also has fragments as each one of its pages? So we get into this, all right, well now let’s break up the fragments into smaller pieces, and of course, if fragments aren’t enough, then you just add more fragments, and in this case child fragments became a thing. And so now each one of these fragments could then be broken up into smaller pieces, so this led us to again kind of decouple things, and we built a lot of code around trying to make this nested state work really well. To varying degrees of success, but it’s gotten a lot better over time. So, with these things in mind,
every one of those fragments, whether it’s a small thing just managing a view or a retained instance fragment, every single one of these fragments gives you lifecycle hooks, every single one of them has back stack management associated with it, every single one of them has a retained object that it can carry across configuration changes, and all of them are stored statefully in your fragment manager. That means that the existence of that fragment is actually part of the state of your app, so when your app dies and comes back, that existence in the FragmentManager is actually a really important part, at least from the fragment’s perspective, of what we’re going to do on your behalf. And of course, they all have their own ability to have a view subtree, that piece of your UI that they manage themselves. And of course, we also allow things like fragments being inflatable, put right into your XML files, and able to be reused across multiple layouts, and again this is also kind of hooking into these same processes. But the biggest issue we found is that all fragments do all of these things at once whether you want them to or not. And most of the time you’re only actually using one or two of these things and not the full 16 things that fragments do that you get as kind of a bundled deal. So that means that some of these
things like the stateful restoration mean you have a lot of patterns that look like this, where you have to make sure that the FragmentManager is in the right state. In this case we’re actually checking, is there no saved instance state? I.e., are we in a fresh run of this activity? And oh, now I need to add my fragments to my layout. If you’ve ever run into the issue where you end up with one fragment layered over another one, yeah, that is this kind of stateful restoration actually doing something that was a useful bit but at the same point maybe not exactly what you were expecting, or something that you even wanted in some cases. But for fragments, you kind of got all of them together in a package deal.
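As a rough illustration of the pattern being described, here is a minimal Kotlin sketch; the layout, container ID, and ConversationListFragment are placeholder names, and the package names assume the pre-AndroidX support libraries in use at the time of this talk.

    import android.os.Bundle
    import android.support.v7.app.AppCompatActivity

    class MainActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)
            // Only add the fragment on a truly fresh start. After process death or a
            // configuration change, the FragmentManager restores the old one itself,
            // and adding another here is how you end up with fragments layered on
            // top of each other.
            if (savedInstanceState == null) {
                supportFragmentManager.beginTransaction()
                    .add(R.id.container, ConversationListFragment())
                    .commit()
            }
        }
    }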
But last year, we kind of took a more holistic approach to how we do
API, and so all of the architecture components have
really tried to focus in on doing one thing well rather than
being kind of the kitchen sink kind of
approach. So for example, lifecycle observers: the ability to observe lifecycle events like onStart and onResume as a piece that is completely independent from other things, something you can register just if you care about lifecycle. Similarly, for ViewModel,
we created a new primitive, a new object specifically for that
kind of retained instance state across configuration changes,
and of course this year, just yesterday, we announced a new
navigation architecture component, working at a much higher level of the architecture on how you build
your UI and put these things together and really trying to tackle just these specific
problems of how do you move from one screen to the next screen.
All of these things were really kind of focused on doing one
thing well, and we’re trying to bring a lot of that to
fragments, but that means that, you know, we kind of have a lot
of legacy to go along with them. >>ADAM POWELL: That’s
right. So part of this is that fragments were really designed around this idea of very loose dependencies on the activities that go around behind them. Question for the audience: how many of you know whether a child’s onCreate method runs before or after the parent’s onCreate method? How
many of you are confident you know the answer? Not a whole
lot. How many of you —
>>IAN LAKE: No idea? Is that an option?
>>ADAM POWELL: How many of you thought you knew the
answer before I asked the question and then you started
questioning yourself? Yeah. Okay. I’ll fully admit that sometimes I have to double-check this as well, because the answer really kind of depends on whether the parent is an activity or a fragment, android.app fragments or support fragments. The answer is different all the time. We changed this around. If you were trying to design a fragment that worked across all versions of Android, you had to deal with these rules changing under you a bit. You couldn’t count
on anything. So one of the things we allowed
for lifecycle observers is that we really wanted to give a very
strict ordering of the callbacks. You have a strict
last in and first out callback ordering for all lifecycle
observers in any given lifecycle. This is important
because it means you can set up dependencies between lifecycle
observers and use lifecycle observers as the implementation
details of a particular activity instance or fragment instance as
the case may be. You can make sure to maintain
guarantees between certain lifecycle events so that any
other lifecycle observers or other libraries that you may be
using can take advantage of those guarantees that you’ve
configured. Lifecycle observers are created
by you. We don’t try to recreate them via reflection,
which means that you can use any sort of creation pattern
that you may find useful within your app.
So this seems like a really small thing, but it was a big
piece of feedback that we got with fragments. The implication
of that though is that you no longer get stateful
restoration by the system. In some cases this is exactly what
you want. You want to make sure that you have a single control
flow path as you’re configuring your lifecycle observers.
You don’t want to be checking, hey, is my instance state going
to bring this back for me? I don’t want to end up with two
of these things and so on and so forth.
>>IAN LAKE: Right. One of the really common patterns we
saw a lot of app developers using is a headless fragment, a
fragment without a UI. That was used basically only to
receive onStart, onStop lifecycle events, and so for a lot of these kinds of approaches, lifecycle observers are going to give you the same kind of API surface but with a much smaller API, where you don’t actually need all of the things a fragment does.
So what does this actually look like? You might have just a simple
lifecycle observer that does analytics calls and your
analytics library might want the onStart and onStop events of a particular lifecycle.
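A minimal sketch of what such an observer might look like, assuming a hypothetical AnalyticsTracker and the annotation-based android.arch.lifecycle observer API that was current at the time:

    import android.arch.lifecycle.Lifecycle
    import android.arch.lifecycle.LifecycleObserver
    import android.arch.lifecycle.OnLifecycleEvent

    // Hypothetical analytics API; only the observer wiring matters here.
    interface AnalyticsTracker {
        fun startSession()
        fun endSession()
    }

    class AnalyticsObserver(private val tracker: AnalyticsTracker) : LifecycleObserver {

        @OnLifecycleEvent(Lifecycle.Event.ON_START)
        fun onStart() = tracker.startSession()

        @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
        fun onStop() = tracker.endSession()
    }

    // In any activity or fragment (both are LifecycleOwners):
    // lifecycle.addObserver(AnalyticsObserver(tracker))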
So the thing here is that this
is an independent piece from fragments, from activities. That has a
couple of super important things. One, you can test it independently; you don’t have to spin up the whole world just to test this piece. You can test it in isolation. Also, because you create this observer yourself, you can use whatever construction method you want. If you want to use a
dependency injection kind of model and inject in your
dependencies, like that is really easy to do in this model
rather than trying to do it in the scope of an activity or
fragment or doing that kind of thing. So this gives you a lot more power to split this out into a
composable piece, so if you’re building a library, you can just use this
same lifecycle observer across all of
your fragments or all of your activities and not have to
duplicate that data or worry about, like okay, what state is
my instance state in. You can always say, add the observer,
and we’re done. There is no more work you need to do here,
so this offers a much simpler alternative for this specific use case.
>>ADAM POWELL: Okay. We’ve got some more slides to get through. So retained instance
fragments were designed to outlive an activity. These were
basically just a setter on a fragment to say, hey, retain my instance state across
configuration changes. So it’s a pretty useful thing
because it meant that you could save handles to expensive data or expensive operations. If you start some sort of query that might take a bit to complete, you don’t want to restart it just because the screen rotated or something got resized.
It meant that you could reconnect to these same
operations after that configuration change, but really
please don’t do UI during this. This was another one of those
kind of headache pain points sort of things, fragments do
everything all at once, but these were two things that just
really did not taste great together. The reasoning for
this is, of course, that views hold on to the context that
created them. Well, what context do you use to create
views most of the time? It ends up being the activity. You
don’t want to hold on to a reference to an old activity after a configuration change and end up reusing the same views. Technically it’s possible to do this right and drop all of your references in onDestroyView, recreate the view after you come around the
other side, but this was way too difficult to get right all the
time, just because references to the views end up leaking out
and getting everywhere no matter what you try to do, so this was
something that was really just kind of a foot gun waiting to
hurt people. >>IAN LAKE: But of course
in the architecture component world, there is a different
solution for this retained instance pattern, and that comes
in the model of ViewModels, and this is a new
class, and to be honest it doesn’t really do a whole lot; it’s how it connects to the overall system. Instead of being, again, part of the Fragment APIs, this is a more general purpose thing, something that you can attach to any activity or any fragment or any ViewModelStoreOwner if you build your own framework. The main thing that’s
different here, just kind of like the
lifecycle observers, is that we are not
the owner of creating those objects, so by default we provide a factory that just creates a new instance, or creates a new instance and attaches your application context to it, but at the same point, this is
a super powerful hook for again kind of adding in those extra
dependencies, so if you’re using dependency injection, like again
you have kind of the hooks you need to hook into the ViewModel creation and pass your dependencies in as construction parameters; there doesn’t need to be a more
complicated structure than that. So but you get into the
same problem of like, okay, well how do I connect my UI? And the
other piece of the puzzle here is a different architecture
component called LiveData, and LiveData is all about kind of a lifecycle-aware observable, so in this model, in this world, your ViewModel is the
one who is storing the data, right, and
it’s the LiveData that it has that it can pass to the activity
or fragment as they get created and recreated, and that
fragment or activity can just observe the LiveData that the ViewModel owns, and so this separates it out: instead of the ViewModel needing to hold on to a reference to the view, the activity
can then pull the information out of the ViewModel.
So I think for a lot of use cases, where you’re using a
retained instance fragment specifically to store this expensive data and do that kind of work, ViewModels offer a really good alternative to retained instance fragments. So what does a super simple one look like? Well, in this case we’re going
to use an AndroidViewModel, so here we get a reference to your application, your application context. This is really useful if you’re doing things like a service locator kind of pattern, but the important thing here is that the ViewModel stores an instance of a LiveData, in this case a LiveData of a list of our fancily named ExpensiveData class. Normally you’d put this inline, but slides are hard, so here we’re getting our data from a Room database, a simple ORM database.
But you can really get this data from anywhere or set up
kind of the correct lifecycle events here so
that your LiveData can kind of get
into the right state. But that means that your actual fragments
or activities don’t need to know where this data came from,
right. It’s encapsulated in the ViewModel, and again we’re in the same place where we can now test this ViewModel in isolation; we don’t need to create an activity, we don’t need to create fragments to get to this
same view model. We can just use them in isolation and make
sure that all of the work we need here to get our expensive
data and make sure that it’s in the right state can all be done kind of separate
from the actual UI pieces of the framework.
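A hedged sketch of the kind of ViewModel being described; ExpensiveData, its DAO, and ExpensiveDataDatabase.getInstance() are placeholder names standing in for the Room entity and database on the slide, and the package names assume the original android.arch releases.

    import android.app.Application
    import android.arch.lifecycle.AndroidViewModel
    import android.arch.lifecycle.LiveData

    class ExpensiveDataViewModel(application: Application) : AndroidViewModel(application) {
        // A Room DAO query returning LiveData runs off the main thread and keeps
        // itself up to date as the underlying table changes.
        val expensiveData: LiveData<List<ExpensiveData>> =
            ExpensiveDataDatabase.getInstance(application).expensiveDataDao().loadAll()
    }

    // In a fragment (or activity): the ViewModel survives rotation, while the
    // observer is automatically removed when this piece of UI is destroyed.
    // val viewModel = ViewModelProviders.of(this).get(ExpensiveDataViewModel::class.java)
    // viewModel.expensiveData.observe(this, Observer { list -> showList(list) })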
>>ADAM POWELL: All right. So
everybody’s favorite topic when it comes to the Fragment API is
fragment transactions because this is a
statement that is said by pretty much no
one ever. (laughter). Fragment transactions are
asynchronous by default. If you call commit, it gets posted for later and it will occur on kind of the next handler post on the main
thread, or if another lifecycle event is coming up we’ll execute
the pending transactions, bring it to a consistent state before
we move to the next state. Now, this actually has a few benefits to it, believe it or
not, for as much as everybody gets kind of frustrated about it, and the biggest one is that there is no re-entrant fragment operation. If you manipulate fragments within, say, the onCreate of another fragment, you don’t start getting into a state where you have a half-initialized fragment while you’re waiting to bring another fragment up to speed. This is a source of countless bugs, and I think it was earlier this week or last week that we were helping developers troubleshoot some other re-entrant behavior bugs coming from view callbacks, for example. This is the sort of thing that can be really subtle. The asynchronous approach sidesteps all of it, and that was kind of convenient. The drawbacks are, of course,
that the observed state at any given point in time doesn’t
actually reflect any of the queued transactions, and so this
is a huge problem. This basically means that if one piece of your code submits a fragment transaction and another piece of your code tries to read something out of the state that that fragment would have set up for you, whether that’s a view hierarchy that would have gotten attached to a subhierarchy of the UI, or whether it’s just some data about a pending operation for one of the retained instance operations,
this means you can’t trust anything
about the current state of the app. It means you can have
various data races within the main part of your application. We’ve even seen people run into fragments layered on top of each other because of this issue. They were checking the fragment manager to see, what is my current UI pane, for example, that’s currently there? It said, oh look, nothing, I don’t have anything to do, let me make a new fragment transaction and stick something there, but one was already in flight. This was a pain in the neck, and everybody loved it when the commitNow API was added, because it lets you do something immediately as long as you promise not to do anything with the back stack as part of that transaction.
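To make the difference concrete, here is a small sketch from inside an activity; DetailFragment and R.id.content_frame are placeholders:

    // commit() is asynchronous: the FragmentManager will not reflect this
    // transaction until it is executed on a later pass of the main thread, so a
    // findFragmentById() immediately afterwards can still see the old state.
    supportFragmentManager.beginTransaction()
        .replace(R.id.content_frame, DetailFragment())
        .commit()

    // commitNow() executes synchronously, but it cannot be combined with
    // addToBackStack(), since the back stack needs a single global ordering.
    supportFragmentManager.beginTransaction()
        .replace(R.id.content_frame, DetailFragment())
        .commitNow()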
>>IAN LAKE: But we kind of get to this point where,
okay, now we have all of this fragment transaction code in your app, and very subtle changes that you may not pick up in a simple
CL are all of a sudden changing the whole behavior of your app.
And we really took a look at this and we said, well, like
should you really have to be writing all of these fragment
transactions by hand? Like, for so many of the use
cases we can do a little bit better here. We can do a lot
better here, especially for the really common cases. So we built Navigation, a
brand new architecture component that’s really kind of focused on
the screen-to-screen transitions between fragments or
other UI pieces because we really wanted to make it work
well with fragments. Fragments are a really great way of getting access to lifecycle observers and ViewModels and also, you know, being an owner of a view, and
those are really kind of the main pieces you need when you’re trying to build a
good Android app. At the same point, we realize
there are a lot of things that fragments do that we don’t
really want you to have to deal with yourself, so one of
the things with navigation that we really focused on was actually being able to
take ownership of the back stack and make sure that the
navigation component can actually handle that back stack
for you, whether you’re using fragments
or something else, the ownership of that back stack goes to Navigation, and that gives us a lot more flexibility
and a little more ability to be a little more prescriptive and opinionated on what is a good
thing to do and what is a not so great thing to do and kind of
give you a lot of the same power of fragment transactions without
actually having to touch one with your bare hands. Because
again, we’re trying to make things so that, you know, you
have all of the power without necessarily
the headache and the problems that run across some of the
things that Adam pointed out. So we’re going to be
talking in detail about navigation tomorrow morning at
8:30. I know you’re all early risers, but I wanted to give you
a little bit of a preview of kind of how simple this could
be, so we actually offer a helper method where if you want
to click a button and do a full screen transition to
another fragment, we have a createNavigateOnClickListener helper that literally does just that. All you need to do is pass in the ID of one of those destinations
that you’ve set up in Navigation.
Now, this looks a little bit magical, and so what is this
actually doing under the covers? Well, really what it’s doing is
using this NavController object. And one of the things that I find super useful about Navigation is you can find the NavController from any view or any fragment that’s been created by Navigation, so here we’re using a Kotlin extension which makes this a lot easier. If you’re in Java land it’s just a static method, but here, from any view, we can just say findNavController and get an instance of the NavController and then just call navigate. And this navigate call knows how to do fragment transactions, so our NavController in this case is set up with a FragmentNavigator, our own class that knows all about the correct behavior of doing fragment transactions and how to do all of that work for you, so that from your code’s perspective, you can just call navigate and we’ll do the right thing for you.
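In code, the two approaches Ian describes look roughly like this, assuming a button view and a destination with the placeholder ID R.id.detail_dest defined in the navigation graph:

    import androidx.navigation.Navigation
    import androidx.navigation.findNavController

    // The helper: wire a click straight to a destination in the graph.
    button.setOnClickListener(
        Navigation.createNavigateOnClickListener(R.id.detail_dest))

    // The same thing by hand, using the Kotlin extension to locate the
    // NavController from any view created by Navigation.
    button.setOnClickListener { view ->
        view.findNavController().navigate(R.id.detail_dest)
    }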
So obviously there is a lot more here, like if you want to pass
arguments, like you can do a bundle approach and we also have
a lot more things around that specific case that we’ll be
talking about tomorrow as well, as well as setting things like
separate animations and other objects in here. So you have a lot of the same
power, but you can also make it really easy to use and move a
lot of the code that used to be a fragment transaction that
you’d have to make really sure that you got right into
something that Navigation can understand and help with. So Adam, we’ve talked about a lot of things like why not fragments, but this is kind of a fragment talk, right?
>>ADAM POWELL: I think that’s why everyone is here.
>>IAN LAKE: Yeah, so why fragments? It’s 2018.
>>ADAM POWELL: It is 2018. So
what are fragments still good for after we’ve sort of carved
off some of these extension points?
But before we talk about that, I think it’s worth going into a little bit of the package layering involved in the Android framework, specifically android.widget versus android.app. Now android.widget is really kind of
designed to hold the mechanism for your UI. That means that
this is all the stuff that shows state to the user and things
that report user interaction events back to other higher
level portions of your app. Really, that’s it. Everything
that’s in the widget package is really meant to only do these
two things, whereas Android.app where fragments live, activities
live, so on and so forth, that’s where your policy goes. This is
what defines what state to bind to widgets. This is what
defines how you should respond to those user interaction events
and how to issue changes to your data model, anything kind of at
that layer or above, no matter how you may factor it, this is
kind of how it breaks down. So this also brings us to the
idea of inflatable components which is something that comes up
quite a bit. Since you can inflate a fragment from a
layout, people found out really quickly that you can use a
fragment to compose some of your views together and create,
basically like a custom view but with a fragment.
And really some of this kind of comes in because people
start asking, you know, just if this view had a lifecycle I
could... something. But really you want to use that as a stopping point and ask yourself, what is it about this particular view that needs a lifecycle? Remember, mechanism only: a view should only be showing state or publishing user actions when the user clicks or performs something themselves. None of those need lifecycle events to accomplish that goal. So you should watch out and see whether what you really need is some policy layer that sits above that.
This is where fragments are really useful, because you can use them to compose higher-level controls. It means that they
can be self-sufficient, they have their own lifecycle, they
know how to hook into some of these things. It means
that inflated attributes can actually become fragment
arguments so you can make these things super self-sufficient and
implement cross-cutting UI policies that you don’t have to
worry about in terms of the overarching top-level parent
context. This ends up being really useful for things like, I
mean even just something as simple as ads, something where
you really do need to control for a lifecycle about when a
user comes and goes, perform a request out to the network to fetch the content that you’re going to show, but really you don’t want that code spread throughout the rest of your application logic. You can use this for other sorts
of like independent info cards, things that are, again, fully
self-contained. The parent doesn’t actually need to be
involved in any of the data routing from your central
repository or whatever other sort of model you’re using under
the hood in order to get that on screen and get the user interacting
with it. So one example of using the
fragment arguments for this is on the screen here. This is the
sort of thing that could probably benefit from some extensions or helpers, but overall it’s not too bad: you define a styleable to go ahead and inflate some of those arguments, just assign the values to the bundle, set it
there, and then you can route all of this logic through your
normal fragment argument handling that appears throughout
the rest of your fragment implementation.
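A sketch of that pattern, with a hypothetical AdCardFragment, styleable, and adUnitId attribute; it assumes the support-library Fragment, and that assigning arguments is still allowed at inflation time because onInflate runs before the fragment is attached.

    import android.content.Context
    import android.os.Bundle
    import android.support.v4.app.Fragment
    import android.util.AttributeSet

    class AdCardFragment : Fragment() {
        override fun onInflate(context: Context, attrs: AttributeSet, savedInstanceState: Bundle?) {
            super.onInflate(context, attrs, savedInstanceState)
            // Read the XML attributes declared in a <declare-styleable> and fold
            // them into the fragment's arguments bundle.
            val a = context.obtainStyledAttributes(attrs, R.styleable.AdCardFragment)
            arguments = (arguments ?: Bundle()).apply {
                putString("adUnitId", a.getString(R.styleable.AdCardFragment_adUnitId))
            }
            a.recycle()
            // From here on the value flows through normal getArguments() handling.
        }
    }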
>>IAN LAKE: So one of the other use cases we found really
useful for using fragments is this kind of app screen kind of
model, so where a fragment is taking the vast majority of the
screen. It’s really the main content of
your app, and this is really kind of
where Navigation comes in and really kind of takes away a lot
of the rough edges of fragments so you’re just left with the
architecture component hooks and a view. So in this model, it’s a lot
easier to move towards a single
activity app. If you listened to Chet yesterday, this is kind of our recommendation going forward: use activities just as an entry point into your app, and then your content, the actual pieces of UI that users actually work with, are the fragments of your app, right,
the actual content is in fragments. So with this model, the only
thing the activity needs to do is handle kind of common app chrome. So if
you’re still using an action bar or if you’re using bottom nav or side nav, these would be good things for the activity to manage, but the rest of it could be decoupled destinations,
decoupled screens, all of these fragments can be much more
independent from each other rather than being tightly
coupled together. So this allows us to kind of
move a lot of the things that normally you would have to deal with manually
into a system that we can help you out a lot more on, so for navigation, for
example, we’re going to give you the hooks that you need to set up the
transitions and animations needed to transition between these screens, and it’s
something that can be done in Navigation rather than something
you have to do by hand. Of course, like these aren’t
neutrally exclusive, right. You can still add a fragment to
your layout or have nested fragments underneath these
things, so it’s not an all or nothing kind of approach. It’s just one method that, yes,
this larger kind of content-based
fragment is still a really good way of kind of bridging the gap between the android.app world and the android.widget world.
Another use case that is still super useful is around dialog fragments. Again, we talked about Activity.showDialog; I hope everyone has forgotten that by now like I said, but DialogFragment is really kind of the same approach, where instead of it being something that you need to manage and make sure is visible or not visible, it really kind of helps
encapsulate the interaction between a floating UI, like you have a
really important question to ask the user, a dialog is probably a useful approach in some manner, but it’s also something that we’ve started to see a lot more of. In Android P, for instance, the biometric prompt for fingerprint is now something that is in a dialog kind of model, so this is actually something that we’ve found really useful.
And part of it is that we can leverage some of that instance
state restoration that you get for free as part of fragments
with your dialog. This means that you’ve asked a really important question, otherwise why would you show a dialog and interrupt the user? But you really don’t want that to disappear when you resize your screen on a Pixelbook or rotate your device. It’s an important question and we want to make sure we do some of that state restoration for you.
At the same point, dialogs aren’t just a floating UI, right. There are also bottom sheets and BottomSheetDialogFragment, specifically built around having different kinds of floating UIs, UIs that live above your main content without necessarily being the traditional dimmed-background kind of dialog. So we’re really trying to get to this point where, yes, using a dialog fragment is great if you have this UI that you really don’t want to lose; it’s really important for the user. Obviously, for things that the user could afford to lose, it’s probably not a dialog you want to use, it’s probably a snack bar or another approach here. This is my favorite part
about dialog fragments: there is literally nothing more you need to do. You call show, or showNow for the synchronous version, and that’s it. It’s very much this kind of
model where we’re going to do the fragment transactions for
you. Again, like we don’t want to be in a model where you have to do a lot
of complicated work yourself.
For a dialog fragment you can literally just call show and
it will do the right thing and get you back into the right state.
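For example, with a hypothetical ConfirmDeleteDialogFragment subclass of DialogFragment, shown from inside an activity:

    // show() queues the transaction for you; no FragmentTransaction code needed.
    ConfirmDeleteDialogFragment().show(supportFragmentManager, "confirm_delete")

    // Or, if you need it on screen synchronously:
    // ConfirmDeleteDialogFragment().showNow(supportFragmentManager, "confirm_delete")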
>>ADAM POWELL: All right. So
one of the other things that people tend to use fragments for quite
a bit is managing options menus and
specifically merging menus into a common toolbar or action bar
that’s global to the activity. So fragments support options menus because activities support them. This goes back to the
design goal, where fragments had to do
anything the activity could do. The most common way is setSupportActionBar for your toolbar, which kind of binds whatever your current context is to a particular toolbar within your UI. This is great if you have fixed common chrome. In the navigation world, if your toolbar never changes between screens, then this is really handy, because this means that you can populate it with whatever actions are relevant for what’s in your main content.
FragmentPagerAdapter is another really great one that people use this for quite a bit. If the actions change across
pages, then all of this is just kind of handled for you, the
system takes care of it. Now, the alternative to this is
that you can go ahead and directly manage the menus as toolbar view data. Most of the time today your UIs are such that the
toolbar or other sort of menu management is really kind of
part of the content. I mean, this was something that you saw
back in a lot of Holo UIs where the action bar was always fixed common chrome, but now you have collapsing app bar layouts and things that are much more sophisticated, to the point where doing the global wiring is almost jumping through hoops that you don’t really need, so don’t make it more complicated than it needs to be. If you don’t have common chrome for this, then directly managing some of the toolbar menus as view data is as simple as this.
It’s a couple lines of code. There is not a whole lot to it.
You inflate the menu, attach a listener, and keep it more
self-contained rather than leaking out into the whole
options menu at that point.
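A sketch of that, inside a fragment that owns a Toolbar (android.support.v7.widget.Toolbar) in its own layout; R.menu.detail, R.id.toolbar, R.id.action_share, and share() are placeholders:

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        val toolbar = view.findViewById<Toolbar>(R.id.toolbar)
        // The menu is just data on this fragment's own toolbar, not merged into
        // the activity's global options menu.
        toolbar.inflateMenu(R.menu.detail)
        toolbar.setOnMenuItemClickListener { item ->
            when (item.itemId) {
                R.id.action_share -> { share(); true }
                else -> false
            }
        }
    }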
One of the other things we get asked a lot about is testing fragments. Testing fragments is
a whole lot easier than testing activities. You don’t have to
spin up the entirety of the world, entire new activity with
sort of the back and forth in an instrumentation test to the
activity manager to make this work. The nice part about this
is that, from quite a few years ago, we’ve had this class, FragmentController, that is able to drive the lifecycle, so you can go ahead and create just a test fragment manager or test fragment
controller that can test those larger fragment components in
isolation away from everything else that might be happening in
the activity manager. We use this pretty extensively
for the existing support library tests ourselves. Now, we make
this possible but truth be told, this is really not the best
interface for doing it. It ends up looking a little bit like
this or you could go through some of the support library tests in AOSP
and see kind of the example of some of the utility methods that
we’ve set up for this. We should really probably wrap this in some type of moveToState method. This is a case where we fell into a trap ourselves: something is possible, it’s maybe not great, the rest is kind of left as an
exercise to the reader and we just kind of need to do better
on this part.>>IAN LAKE: Yeah, so one of
the other pieces that kind of came alongside fragments, more as a convenience than anything else, were loaders.
A kind of precursor to a lot of what we’ve done in architecture
components on doing retained instance states, and one of the
things we’ve done recently is actually tried to decouple them
from fragments. So how do we actually do this?
Well, instead of being their own special thing, we actually
rebuilt them on a lot of the architecture components using these new primitives we
have of LiveData and ViewModel to make
these a totally independent thing. Now you can actually use
loaders in any LifecycleOwner or ViewModelStoreOwner class, activity, fragment, if you’re building
your own kind of thing, it works equally well in all of those
cases. What does this mean? It really
just means that you have the same loader you always do, but
instead of calling getSupportLoaderManager, you just call LoaderManager.getInstance. We’ll do all the hookup for you. We know you’re a lifecycle owner and so we know how to hook all that stuff up for you, and we really were able to kind
of cut out a lot of the stuff — cut out a lot of
the custom behavior here so that, you know, we can kind of
rely on these new things to give you stronger guarantees around
lifecycle.
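The change in code is essentially one line, called from an activity or fragment; LOADER_ID and loaderCallbacks are placeholders for your existing loader setup:

    // Before: getSupportLoaderManager().initLoader(LOADER_ID, null, loaderCallbacks)
    // Now: works from any activity or fragment, because both are LifecycleOwners
    // and ViewModelStoreOwners under the hood.
    LoaderManager.getInstance(this).initLoader(LOADER_ID, null, loaderCallbacks)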
So we’ve talked a lot about fragments, we’ve talked a lot about fragments in the past, and
we’ve talked about fragments in 2018, but
there is another question. Where are we going with
fragments? What do we still have to do? So some of this is just coming
down to building in those stronger guarantees, trying to
separate the desired behavior, things that we specifically set out to put in
versus incidental behavior, more behavior that has just been around for six
years and is not necessarily something
that is a codified standard. We’re trying to move more toward
a model where you can be very confident in exactly what’s
going to happen when you’re using fragments.
Another case is trying to do the same thing that we did with
Loader Manager in reimplementing some of these existing APIs on
top of these new primitives. For example, like being able to
rewrite a lot of the internals of fragments so they’re actually
using lifecycle observers under the covers, right. So these
kind of things allow us to kind of make sure that all of
our components are on the same page and really doing the same
thing. So a few other things. Again, trying to move towards a
world where we have the correct set of signals and activity callbacks so that any interested component can receive them. That means that fragments are not a special snowflake anymore
and they’re just another component, another tool in your toolbox using the same
composable hooks that you always have. So really trying to play
well with other components so that it’s not an either/or, it’s a both: we can work together, and you can use fragments alongside other components and it works.
>>ADAM POWELL: So options menus are of course one other thing that we still don’t have a great answer for in terms of how we want to decouple them. In a lot of ways I would like options menus as a concept to be deprecated, but we have to make sure you have them for when you’re using common chrome or paging through some content. Next one? Maybe? There we go.
android.app.FragmentManager: it’s deprecated. This works a lot better in the AndroidX packages, and the support library before them, because it means you no longer have to worry about all the bugs that were fixed over time, and it allows us to make larger changes to the behavior and API. Send us your wish list for the things you would like to see
us do for some of this too. We are going to be in the sandbox
area after this so please come and say hello and tell us what
you would like to see next. Thank you very much for coming.
(Applause). (music)
>>Thank you for joining
the session. Ambassadors will assist with
letting you through the designated exits. We’ll be
making room for those that registered for the next session.
If you registered for the next session, we ask that you please
leave the room and return via the line
outside. Thank you.

May 9, 2018, 9:30 a.m. PT

What’s New in Android Things?

>>Welcome. Please fill in
the seats near the front of the room. Thank you. >>At this time, please
find your seat. Our session will begin soon. >>Good morning and welcome
to the second day of Google I/O. Thank you all who made it here
bright and early and a big hello to folks on the live stream.
This session is about Android Things, the version of Android that was
specifically designed for IoT products. My name is Vince and
I lead product management for Android Things. Yesterday you
heard about incredible momentum on Android. Over the years we
have extended Android to many different types of devices
beyond phones and tablets, wearables, TVs, auto, and last
year we announced a new version of Android called Android
Things, specifically designed for embedded devices in
IoT products. I know many of you have been tracking our regular developer
previews and are already building great things. I love
reading about the projects, it’s so awesome. For some of you, this is entirely new; you’re just getting started and want to know what
Android Things can do for you, so let me just get started with
some of the obvious questions. What is Android Things for? What problems does it solve?
People have been making connected devices for many
years. What’s different now is the profound revolution in
Machine Learning. Powerful models and tools are available
for all developers and there is ever increasing demand for
smarter devices. Traditionally, your only option
when building smarter devices was to delegate some of the
serious processing to the Cloud. Now, it’s possible to run much
of it on device, which is useful if you need low latency or are handling sensitive data and so on. And
this growing area is exactly where Android Things is
targeted. It is for devices that need
powerful on-device intelligence. When we look around the
devices that populate your homes and businesses, IoT, of course, is very broad and diverse, so in a sense the product categories are all familiar. We all know these. It’s just that user expectations have gone up, way up. Let me just pick on two use cases. At home, we increasingly expect
to be able to talk to our devices and they’ll know who we
are. Another one is Smart Building
devices such as cameras that can now detect how many people are
in a room and do something useful with that, for example
talk to the HVAC system. We know users want these kinds of
devices and we know we could build them, but of course
building these devices is actually very hard, especially
maintaining them over time. It’s actually very expensive to
do this. We know this at Google because we’ve been shipping
these kinds of products. What’s so hard? Well first off,
we’re talking about an entire device, a whole physical product, so you’re responsible for the entire software stack from the bottom up to the product
experience. Now of course the product experience is where you
ideally want to spend most of your time, because that’s where users are spending time and interacting with it. Below the product experience, IoT devices are currently built using many different flavors of roll-your-own Linux or Android, and every device
is different. It’s hard to expect consistent
and reliable APIs. When you go from
project to project you want to reuse what you
built, you want to get support from a large community.
At the same time, we’re talking about richer
experiences with these devices, so obviously you need
to get your app working, chances are you need to get third-party
apps and services working. Each time you need to figure out
basic building blocks like a reliable
setup flow from a smart phone for instance. You could do it
all yourself, but it just feels like extra time and effort.
Second, when you get down to the hardware, building prototypes
with microprocessor SoCs has gotten easier over time. People
use all sorts of prototyping tools to get that done, but
those don’t get you through to production, let alone commercial
launch. We’re talking about PCB design and layout involving high-speed, highly integrated components, SoC vendor relationships for general support and bring-up, component supply chain, access to parts in low volume, and so on.
These are all very different skillsets that you need to hire
for your team. Again, prototyping solutions get
you off the ground, but that’s it. Finally, while it’s amazing how
much these devices can do on our behalf these days, the flip
side, of course, is that they have the keys to our home, our
businesses, our banking systems. Clearly there is more to gain
from attacking IoT devices. Perhaps less obvious is why it’s hard to provide that security. Well, it turns out it’s incredibly expensive to provide security on products out of the gate and keep them updated over time. It involves a lot of backend
infrastructure to do this, and we know this at Google
because we’ve been doing this for many products.
Meanwhile there is immense cost pressure from end
consumers, of course, and it’s hard to amortize costs without large
volume. So those are the issues that our
engineering teams and business teams focused on when we built
Android Things. The thinking was we could take
care of all the challenges that make development expensive, we
provide Android Things free of charge and you have less to
worry about. Our goal is to reduce the barrier to building
powerful, intelligent edge devices and make it more
accessible to everyone. Now, let’s take a look deeper.
About building on reliable and consistent APIs, Android Things
is Android. As you would expect, you can rely on the same Android APIs and Google
services as when you build for phones. Familiar tools such as
Android Studio and the Android SDK are
available. You can build on Google Play
services, Google Cloud Platform, Firebase
and the ML Kit that you heard about yesterday, and more.
Of course, you’re building hardware and so it’s also about
component providers, SoC vendors and so on. As a device maker,
usually you spend a lot of time interacting directly to get to a
working image. With Android Things, we’ve already done the
hard work for you and that means you have access to hardware
reference designs, it means the board support package comes from
Google, and you get a working image right
away. You can scale to production
because we’ve signed agreements with the SoC vendors to
guarantee support over the long term. Literally, our goal is if
you know how to build an Android app you can now prototype IoT
products and take them all the way to production. You don’t
have to be a hardware engineer or firmware engineer to
do this. Finally, on security, our
strategy is to have security features built into the
platform. OS hardening is on by default
and you don’t have to spend time figuring it out yourself. What
about fixes over time? Security patches come regularly from Google. Google provides all the backend infrastructure
for you to be in control, and you can decide when those go
out. Now, let’s go a bit deeper. How
do all the pieces that I just talked about fit together?
Developers often ask me, how is Android Things different from
Android? How is it different from AOSP?
I found that the best way to describe Android Things is
that it’s a fully managed solution beyond an
AOSP model. At the lowest level we define
turnkey hardware in partnership with the silicon providers; this is in the form of a system-on-module architecture, and I’ll talk more about that in a bit. The way it works though is we work closely with SoC vendors to land code in the Google tree. Google is hosting the BSPs. In fact, we have the systems in our test labs hooked up for continuous integration. Moving up to Android in the
framework layers, Android Things is Android. What we’ve done is
we took Android and we shrunk it down to around 50%
of the footprint of AOSP, so it can run with lower compute and storage requirements.
Security features are built in, meaning they’re done for you
and they’re on by default. Up at the top is the app layer. We formalized a boundary so all
the product-specific code lives there. We make sure the Google
services you might want to work with work well out of the box so your apps can rely on
Google Play Services, Firebase, Cloud
Platform, Assistant, and more. Finally, what’s pictured on
the left side is the management
console. It shows everything from bottom
to top because it’s where you control everything that goes
into your device, where you get access to the BSP that Google
and SoC vendors are supporting, it’s where you create full
product builds with all your apps and services included, it’s
where you manage releases and update channels over time.
As you can see, that’s why I described Android Things as a
fully managed solution. It gives you a portfolio of reference hardware to choose from, it gives you Android with security and key services working on top, and it gives you a console to
configure what exactly goes into your devices. It’s all of the
ingredients necessary to prototype and scale to
production. On behalf of the entire
Android Things team, I’m super excited to share that we are now
generally available. Version 1.0 is the first long-term supported release. It came out two days ago on Monday, fresh off the press. Many of you have
helped us along the way. We’ve seen more than 100,000 downloads
of the Android Things SDK. We’ve received feedback from
more than 10,000 developers and it’s all critical to get us to
this point. Thank you so much. Now, what does Version 1.0
mean for you? Most importantly, it means Android Things is a
stable base for you to build commercial products. In fact,
we’ve worked with a few partners to put Android Things through its paces and vet whether we
could power real shipping products.
Let me show you a few. What you see here is our first
shipping product and it really put our program to the test.
This is a smart speaker by LG and it landed on shelves last
month. So the story is LG wanted to
build a smart speaker with high-end audio and go to market
quickly and they felt we would be a good fit.
One of the first things was to determine a good hardware
reference design. The system-on-module architecture that we talked about before is in all the shipping products. From your point of view, it is
all done so you can just pick it up, it will be an officially
supported platform and you’re good to go. Well, LG was also concerned
about ongoing security fixes and infrastructure to send over the
air updates. Bottom line, these guys want to work on amazing audio, not staff up a team to do security and OS bring-up. This is a smart speaker, so there’s nontrivial audio processing and audio crafting for the Google Assistant; we make sure those work well and they’re part of the full solution as well.
There are more OEMs working on smart speakers and they’ll be
coming out in retail soon. Soon after, partners chose us for a new category called Smart Displays; the idea is a visual Google Assistant, with deep integration with Photos, Maps, video calling,
YouTube, and so on. What you see are devices from
Lenovo, JBL and LG. All of these guys wanted to move
quickly and they felt that we could help them do that.
At the time, we were already working with SoC vendors, expanding our portfolio, and we locked in a common system module for all of these products. In other words, if
you looked under the hood, all of them are using
the same system-on module which is quite cool. Again, when you
work on your project, you can just skip all of that upfront
SoC work and just pick up the supported design. The OEMs were also concerned
about ongoing updates especially because this is a new product
category. They’re expecting frequent updates and during
development, they wanted to partition folks into different
groups and test the user experience. This is where our console was
put to the test to manage all the
builds, release channels, and frequent updates.
Of course, IoT is broad and diverse. What’s most exciting for me is seeing what everyone has been building based on our developer previews. Here are two examples.
The product on the right is made by a company called byteflies, a
wearable startup based in Belgium. The hand is holding a sensor
that monitors vital signs and it sends the data back to the
Cloud. They let us know Android Things
helped turn the docking station into a secure hub. They don’t
have to worry about ongoing fixes and how to update the
device. The product on the left is made
by a company called Mirego, based in Montreal. They built a network of public, connected photo booths in downtown Montreal, which sounds really fun. They wanted a solution to help them move quickly, and Android Things could help them do that. If you’re building a new product with Android Things, we want to work
with you too. We introduced a special limited
program to partner with the Android Things team for technical
guidance and support. If your company is interested in
learning more, please let us know at the link.
So now that I’ve shown you some real products, let’s switch
gears and go back to how all the pieces fit together starting
from the bottom. We talked about the hardware reference
designs, we talked about the system modules, you might be wondering,
what do they actually look like? So on the right here is a
system-on-module, SoM for short. It’s the NXP i.MX7D. All of the complicated components are on the SoM: the key components such as the SoC, memory, flash, WiFi, and Bluetooth. The baseboard is what you see on the left. The one shown here
is actually one of our development baseboards, but you
can imagine designing
product-specific baseboards with just the right set of connectors and plugging the SoM
on top. The SoM is complicated and quite
costly to design if you were to do it yourself, and we’ve done
the hard work for you. It would also be very expensive at low
volume, but the SoMs are what everyone will be using and that
helps drive up volume for everyone.
The baseboards are much less expensive to produce, even
at lower quantities. The electronics are much lower speed
and less dense. What I’ve shown here is also
referred to as a physical SoM because the SoM is on its own
board. You can also imagine designing
it in so it’s all on one single board and we call that the
virtual SoM. This would allow you to save on costs at high volume or to
better match XY dimensions for your product. Going back to the
three Smart Displays I showed earlier, they all use different
tactics, some use physical SoMs and some design it in virtually. For each SoM, Google hosts the
board support package, BSP for short. In other words, you
don’t have to interact with the SoC vendors. There is a stable
layer of separation with the BSP that makes the code portable.
You could imagine starting development on one SoM and then
swapping to another at later stages of production if your
design changes without having to change your code.
Now, part of making the SoM architecture viable is
guaranteed support. Behind the scenes we’ve been working really
hard with the SoC vendors to work out the collaboration so
you don’t have to talk directly to them. We
are announcing SoMs that are certified for production. The
list includes hardware from Mediatek, NXP and Qualcomm.
More specifically, SoMs on this list have guaranteed
long-term support and all security features built in, making it possible to bring
prototypes and go all the way to production. Hardware and reference designs
for these SoMs will be available in the coming months, so stay
tuned. SoMs supported for development are already available now: hardware from Raspberry Pi and NXP. Development hardware has a large community of support and is readily available in retail even at unit quantities, usually for cheap. The beautiful thing about this model is you can swap over to SoMs certified for production once you're ready.
Everyone has been wondering, for the list of SoMs
that are certified for production, the ones on the
left, how long is the support and how much does it cost? Each SoM certified for
production is supported by Google for a minimum of three
years from the time it was first made available. What this means
is you will get regular security patches and stability fixes from
Google for you to push to your devices. You can even set your
devices to auto-update and not worry about it.
In other words, you get to fully harness the power of the
Android Security Team. Of course, you can push your own
updates to your devices any time you want for the entire product
lifetime. What about fees? There are no
charges to using Android Things. There are no license fees for
the OS or the management console. The API surface is stable,
meaning the length of support is provided for each major version.
This is why we refer to 1.0 as long-term support. We know
moving to a new major version can be disruptive and it may not
make sense for many IoT products, so there is no need to do that.
Let me show you in a bit more detail. Each line on this
chart represents a major version of Android Things, and of course
2.0, 3.0 in the future. As I said, every major version will have long-term support, the minimum three years we talked about. These major versions do map to underlying Android desserts, and we'll pick the right points to intersect over time. The important point is when 2.0
comes out, 1.0 and 2.0 are supported in parallel. The security patches and
stability fixes come regularly, and that's what each white dot on the lines represents. We'll also have minor feature updates on the latest major version; this is so we can continue to address community feedback in a timely manner. So we won't wait for
the next major version to push these out.
In fact, in the coming months, 1.1 will come out with
improvements for you. One last note on SoMs.
Obviously the work we do behind the scenes with the SoM vendors
is never ending. It’s an ongoing process. We’ll continue
to expand the SoM portfolio. You saw the vendors that already
have something listed officially, and we're deeply engaged with others too, like Rockchip. The point is once we put a SoM
up as certified for production, it’s a commitment. The ones you
saw all support 1.0 and when we eventually add 2.0, we’ll be
explicit about which SoMs support it. Now, we've been talking about the next layer up already, since Google hosts the board support package for every SoM, but what does it actually look like? Let's go into that in a little bit more detail. What you see here is a
simplified software stack for a traditional Android Mobile
Device. Going from bottom to top, the kernel up to the HAL
layers is primarily focused on enabling hardware support. The
Android framework provides a rich set of experiences for apps and
the applications provide useful features, right.
In IoT devices you can imagine many of the included components not
making sense. Well, you know, embedded devices are usually
single purpose, they may not have a screen, users are not
installing apps, and so on. So first, Android Things removed most of the user-facing applications and, of course, the framework components that don't make sense. There is no use for a
messaging app. This removes unnecessary complexity and
reduces the footprint of the distribution.
If you build a device with a display, you have full
control. There is no system UI to get in the way. Now with or without a display,
the UI toolkit portions of the framework remain available to
the apps. Activities are still the primary component of an
Android app. The framework delivers all input events to the
foreground activity which has focus.
Next, we formalized the boundary between the system and
the product-specific code necessary for the device. Product-specific code is above
the line in the app layer. Everything below the line is
provided by Google and our SoC partners. This way you don’t
have to figure out how to put together a working image.
Google can sign the full image and guarantee automatic security
over the support timeframe. You might be wondering how this can work in practice. Well, one crucial piece is the Android
Things Support Library. I’ll touch upon the API surface
briefly. For example, you may be wondering how to control
hardware. The peripheral I/O API allows
apps to interface with low-level peripherals using industry-standard interfaces like GPIO. It also allows apps to inject
hardware events into the Android framework.
You might be wondering how to call APIs that usually
involve a dialog or a view shown to the user. These don't make
sense when displays are optional. So we’ve added APIs
to control settings, control device updates, and configure
local area connectivity, including Bluetooth and LoWPAN.
I’d like to show you the complete picture. We’ve already
talked about the portfolio of SoMs you can select from and
these come in at the bottom. There are actually two more
building blocks. The first is device configuration and the
second is security. Let’s start with device
configuration. It’s the other crucial piece along with the Android Things Support
Library that helps maintain the formal like we talked about.
SoMs can support a wide variety of hardware, and so in many cases the OS has to know what is connected in order to properly connect those interfaces. For instance, which audio devices are connected where, and what are their formats? We provide a configuration system so OEMs can describe the connected hardware in the console, which is then incorporated into the image at build time.
Cool, let’s talk a little bit about security. First off, the OS hardening that
we do on Android is enabled to Android Things so these are
things like permissions, the application sandbox, mandatory access control with SELinux. Kernel Cisco filtering and so
on. Images of sign, you don’t need to run any signing
infrastructure yourself. Google provides the infrastructure to
build the full image for you. These partitions are signed for
security. Verified boot, which is full he’d implemented for
SoMs certified for production, make sure you’re running a valid
image at all times There is lots of interesting
scenarios that are handled for you, and I always get a kick of
thinking about them. For example, what about running an
older image with a vulnerability that’s since been fixed? Robot
protection makes sure this doesn’t happen. Another one is product key, so
images only run where intended. You can ensure only your product images run on your devices
Finally you don’t need to run update infrastructure.
Google takes care of building and serving updates, the same
infrastructure we use for our own products
All right. Now we’re moving on to the app layer where
all of the product-specific code lives. What does building an
app on Android Things look like? To get started, we want access
to the Android Things Support Library we talked about. It's built into the device as a shared library. We're showing code snippets from two files here. At the top is a snippet from the build file: declare the SDK dependency there and it will get pulled in. The bottom is a snippet of the Android manifest file, and you'll notice we use the uses-library tag so the app can access the device's shared library. Below that we supply the HOME category to one of your activities. What this does is the system will launch that activity automatically on boot and relaunch it if the app terminates. This is of course exactly the behavior you would expect for embedded devices.
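For reference, the manifest side of that setup is only a few lines. The sketch below is illustrative rather than copied from the slides: the package name is made up, and the build-file dependency it assumes is roughly compileOnly 'com.google.android.things:androidthings:1.0' (version number assumed).

    <!-- AndroidManifest.xml: minimal sketch; the package and activity names are illustrative -->
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.thingsbutton">
        <application>
            <!-- Gives the app access to the Android Things shared library on the device -->
            <uses-library android:name="com.google.android.things" />
            <activity android:name=".MainActivity">
                <!-- HOME category: launch on boot and relaunch if the app terminates -->
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.HOME" />
                    <category android:name="android.intent.category.DEFAULT" />
                </intent-filter>
            </activity>
        </application>
    </manifest>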
let’s do something useful. The example here is very simple. It
detects a button press, but it’s a good demonstration of how to
interface directly with peripherals using the peripheral I/O API. The code at
the top is the Android manifest file. This is where we declare
the required permissions to use peripheral I/O. The code
snippet below is the activity. There we use the peripheral I/O API to open a connection to the GPIO pin where the button is connected, we configure the pin as an input, and we attach a callback, and that's pretty much it. Good to go. A similar workflow is used for all of the peripheral I/O interfaces, so you can imagine adding a few lines and lighting up an LED.
Of course, it's not just about simple stuff. There is a wide variety of peripherals out
there. Your app can interact with all kinds of sensors, actuators and so on
this way. we’ve expanded our community hub
this week as part of the 1.0 announcement. The site is
Androidthings.withgoogle.com. It brings together all of the
information in one place, samples, documents, forum.
Also, as you probably guessed, much of the code we showed before about driving hardware peripherals isn't exactly product specific. You can get up and running quickly with prebuilt drivers from the peripheral driver library. These drivers abstract the low-level communication details associated with many common hardware peripherals. You can browse the driver library at the community hub, and we're now accepting community contributions as well, so let's all make this better together. Be sure to check that out.
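As an illustration, the button driver from the contrib driver library turns a GPIO press into an ordinary key event in a few lines. The artifact coordinates, pin name, and key code below are assumptions; check the driver library listing for the exact values.

    // Sketch only: using the button contrib driver to turn a GPIO press into a key event.
    // Approximate build dependency: implementation 'com.google.android.things.contrib:driver-button:<version>'
    import android.app.Activity
    import android.os.Bundle
    import android.view.KeyEvent
    import com.google.android.things.contrib.driver.button.Button
    import com.google.android.things.contrib.driver.button.ButtonInputDriver

    class DriverActivity : Activity() {
        private lateinit var buttonDriver: ButtonInputDriver

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // The pin name is board-specific and illustrative.
            buttonDriver = ButtonInputDriver(
                "GPIO6_IO14", Button.LogicState.PRESSED_WHEN_LOW, KeyEvent.KEYCODE_SPACE)
            buttonDriver.register()  // the driver now injects KEYCODE_SPACE events into the framework
        }

        override fun onKeyDown(keyCode: Int, event: KeyEvent): Boolean {
            if (keyCode == KeyEvent.KEYCODE_SPACE) {
                // handle the button press
                return true
            }
            return super.onKeyDown(keyCode, event)
        }

        override fun onDestroy() {
            super.onDestroy()
            buttonDriver.unregister()
            buttonDriver.close()
        }
    }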
Where do you go from here? You have full access to the
Android SDK and NDK when building apps. You can use
Android Studio, develop your apps, leverage standard
developer tools and third-party libraries. Many projects find Google Play
Services, Firebase, Google Cloud Platform, et cetera, very
useful. In fact, you can create an end-to-end demo very
quickly. For instance, if you’re collecting sensor data,
you can use Cloud IoT Core to help you ingest data and do something with it in the Cloud. If you're taking photos, the Cloud Vision API, for example, can help you with object detection. Of course, there is a lot more to it. We'll have many
samples in the community site for you to check
out. Let’s say a bit more about
Google Play Services because we’ve created an optimized
lighter weight version of that for Android Things. It’s
optional so it doesn’t take up space if you don’t need it.
Many developers do find it very useful though. You can literally tick a checkbox and add it to your build from the console. You can find the Firebase APIs in there, which are used in many of our samples. For example, the Realtime Database is very useful when you sync device data. You can also use the new ML Kit to integrate Google's machine learning technology into your app. There is of course a lot more in Play Services: the location APIs, Nearby for setup, maps, and
more. The final piece to Android
Things is a console for managing what’s running on your devices.
It’s shown vertically cutting across everything from bottom to
top because that’s where you manage all the builds and
releases for your entire device fleet at every stage of the
product development process. Let me explain. Here is a very
simplified view of the product development flow. Every product
goes through the prototyping phase where you just want to get
something working, then a series of hardware validation
manufacturing stages, and finally commercial launch.
At the beginning, you’re going to pick up one of the
development SoMs because that’s the easiest thing to do, and
then later swap to one of the SoMs certified for production.
After you get the hardware, you might be wondering well, how do I
assemble an image and push it to the device? In the prototyping phase, you
probably just want quick access to a generic image to get you
going. You might only care about one
device, so you go to the console and tell it which SoM you’ve
chosen and that’s pretty much it. You can then flash the image and
siloed apps to your device As your product evolves you come
back to the console to customize a lot more. For instance, you
may want to change the partition sizes, you may
want to include Google Play Services because your apps need
it, you may want to modify the device configuration at build time based on the peripherals, and you may
start caring about a lot more devices.
For the Smart Displays and smart speaker devices that we
showed earlier, the OEMs use the console to split the devices into groups, so
developers, QA, early beta testers, they’re all in
different groups, they all have different builds, and on
different update channels. All of this is configured from
the console. When you go to production, you obviously need to assemble the entire factory image that includes apps and services. You want to update your apps over the lifetime of the device, so it's important to have control over rollout. For example, roll back if something goes wrong. Gradual percentage rollout so you can go
slow and check that things are going well. Access to metrics
so you can see that things are going in the right direction. All of this is why the console
exists. The underlying backend infrastructure is the same as
what Google uses for our own products. So I just finished explaining
how Android Things is a fully managed solution. We have a lot more to show you
at I/O. Come and see us at our sandbox dome, it’s the giant
bright orange one, it’s hard to miss. The entire dome is filled
with Android Things demos. You can see how real commercial
products evolve from prototype to production, Smart Display,
you can see them all there. We talked about devices getting
increasingly smarter, we have demos showing Machine Learning,
ingesting realtime video so it can recognize emotion, faces,
handwriting and so on. By the way, you notice Android
Things powering devices outside of the dome as well. It’s
powering the survey boxes all around I/O for instant feedback. There is a five-foot tall robot
roaming around somewhere, hard to miss that one, Android Things
in there. If you like to code, come to our codelabs and get
hands on. We have one showing how to work with TensorFlow,
Cloud IoT, how we make the survey box and more.
There are eight more sessions coming up today and
tomorrow that go deeper into specific topics. For example,
the session at 11:30 today talks about how we built the Smart
Display product, the journey all the way from prototype to
production. The last session we have tomorrow talks about the
console in detail and how to manage your devices at
different stages of the development process.
Finally, let’s all have some fun at I/O, join our scavenger hunt at
the link. You might have seen this already. There are very cool machine-learning-enhanced flowers that you'll notice; solve the puzzles and you get a free hardware developer kit.
With that, let’s all go build some hardware. If you
know thousand build an Android app, you now have a new
superpower. You can prototype IoT products and bring them all
the way to production with Android Things. You don’t have
to be a hardware engineer or firmware engineer to do this. Pick up a kit from the scavenger
hunt and head to Android Things. withgoogle.com to get started.
This concludes the talk. My name is Vince. Thank you and
good luck. (Applause) >>Thank you for joining this session. Grand ambassadors will assist with directing you through the designated exits. We'll be making room for those who have registered for the next session. If you've registered for the next session in this room, we ask that you please clear the room and return via the registration line outside. Thank you. May 9, 2018 10:30 a.m. PT A Fresh Look at Advanced Topics for Android Experts >>Welcome. Please fill in
the seats near the front of the room. Thank you.>>At this time, please find
your seat. Our session will begin soon. >>RETO MEIER: Wow.
(Applause). >>RETO MEIER: Good
morning, everyone. My name is Reto Meier, the
Developer Advocate here at Google. Technically, I’m a DA on the
Google Cloud dev rel team. So if any
of you bump into my boss or anyone from the Google Cloud
Team and they ask how the presentation went, if you could
tell them that I presented how to build a Cloud backend mobile
app, that would really help me out.
What I am presenting is hopefully what everyone is here to see: Android Protips, a fresh look at
expert advice for advanced Android developers.
As the title suggests, this is a very technical talk.
There will be a lot of code. If you’re not comfortable with
code and you came to see the Grateful
Dead, you’re in the wrong place and also about 20 years late.
To make sure we have the right people in the audience,
how many of you are developing for Android right now? Not now,
now, but at home and at work? Most of you, not everyone.
That’s cool, like there are some Android curious folks here, some
Dead Heads in the wrong place and time. Some folks on the
bank working on their tan, and that’s cool, too. Everyone is
welcome. Thank you for coming. Now, this is actually a
pretty special year for Android. This marks the 10-year
anniversary of Android development, which is super
exciting. Obviously not everyone has been developing on Android quite that
long. How many of you have been developing for more than two
years? Yeah? Wow. Five years? A few hands down. What about 10
years? Do we have any 10-year veterans? A few. Nice!
Congratulations! Yeah, a round of applause for the 10 years.
(Applause). I too have that honor, that
privilege of having been developing for Android for 10
years; I even brought receipts. This is my very first question on Stack Overflow, asking about the maps API for the 0.9 beta SDK, which the 10-year veterans will remember — and hopefully not remember this question. This is not my first Google I/O, and not my first Protips session. This will be the last question I ask for a little while: how many have been to a previous I/O and seen one of my previous Protips sessions? A decent handful. All right.
And you still came to this! (laughter).
That’s reassuring, so thank you for that. I actually spend
a fair bit of time over the last couple of months looking back
over these session, and at first it was because I wanted to gain
some inspiration, you know, like what are the cool things we used
to talk about, gain some of the excitement from the early days
of Android. But I quickly realized that not
everything that I had said in those early days has held up all that
well. So as a result, what I’m
actually going to be presenting this year is looking at all the
things that we’ve said in previous years and giving it a
bit of an update. So the real, real title slide for today’s session is actually,
Android Protips Fixed. (laughter).
I’m sorry. So the goal here is to provide you with all
the tips, tricks, best practices, and good advice that's going to help us develop great apps for the next decade of Android development, and in order to do that effectively, I want you to
take the same journey as me. So we’re going to start by going
back in time. We’re going to go back through previous Google I/Os, previous
Protips sessions, all the way back to the very, very
beginning. (laughter).
We may have gone too far back. This is me in kindergarten, a
happy child. I don’t know what a fragment is and I’ve never
heard of a content provider. (laughter).
That will change. Also, a side note. I didn’t know I was presenting
in the amphitheater and there would be a 400-foot screen, and
if I had known that I may have included fewer pictures of
myself. Take note if you’re ever in this situation.
Clearly we’re too far back so let’s just pump it a few
years forward, I think I can do that with one of these buttons,
let’s have a look. Yes. Yeah. This is more like it. Don’t I
look young? This is 2008 and I remember this because this is
actually the cover photo for my first Android book and we took
this photo just after I had gotten the first round of editor
notes back from the first draft, and the next
day Android released the 1.0 SDK, and up until then I was
writing based on the .9 beta. I knew it was coming, the clue is
in the name, but I figured how much is going to change. 1.0
from .9. .8 to .9 was minor, so it should just be a couple
little things. Turns out a lot can change. Like really a lot.
(laughter). I had actually written an entire chapter on the GTalk data API.
It was pretty cool but didn’t exist in the 1.0 SDK. 30 pages
of hard work. I didn’t remove it. I ended up leaving it in
the book because by the time it came out, who knows, right.
Android moves quickly. It’s been 10 years. It never got put
back in, and eventually I removed it from
future revisions of the book. The reason I mention this
is because I spent most of the past 10 years going around
talking about Android, giving advice on the best way of doing
things, the latest and greatest APIs that you can take advantage
of to build great apps. And I do this with the knowledge that sometimes within months and even weeks, these things are going to change. They're going
to be replaced with something more efficient, more
comprehensive, and in fact weeks and months is a best case
scenario. I remember one particular incident. I had just
finished a keynote. Very exciting, a developer day
in Prague I think it was, What's New in Cupcake. I'd been doing this for a while. I stepped off stage, and as my foot hit the ground we released Donut, and in that instant I
had 3 nanoseconds of being current before everything I said
was effectively outdated. Now despite how all of that may
sound, this is actually a really good thing. Well for you and
not so much for me. For you it’s a good thing. It
demonstrates how quickly Android is evolving to give users and developers the things they need to be able to be successful.
There has always been a tradeoff and I know you all have
felt this pain. A new platform comes out, with new APIs, great new functionality to take advantage of, but you know it's going to take a while before lots of users are on that platform, and at the same time you know the old platforms are going to take a few years
before they go away, so you have to do this tradeoff. Is it
worth putting in the time and effort to take advantage of the
new API, the new functionality, knowing that it’s going to have
to be backwards compatible and you’ll have to maintain that
code, potentially for years, so when do you do that? Well, the thing is, what makes
us really excited is now we have Jetpack which incorporates all
of the Android Support Library, Android
Architecture components and if you take Jetpack and combine
with Google Play Services, we end up with an increasing surface
area of APIs which don’t rely on having the latest platform
version, so you can take advantage of the new
functionality that’s available and we’ll take care of writing
all of the boilerplate code to make it backwards compatible for
you. Right, so this is a nice
advantage, right. Suddenly that tradeoff isn’t quite as
expensive. Now, it’s still one of the big
pluses of this as well is we’re no longer tied to the framework,
right. So we can continue to launch new features, fix bug, encapsulate
more best practices, take care of
education and do all of this between releases. It’s great
for developers, not so great for those of us trying to talk about
it, things change more rapidly. There is a price, if you
want to take advantage of this cool new stuff, you have to
reconsider some of the fundamental architectural
changes in how we build our app, so I’m going to talk today a
little bit about why I think now is the right time to do that.
If we bring it down to the real fundamentals, what we’re
trying to do is keep our app looking fresh,
lightweight, full of potential. If not, we quickly risk becoming visibly older, outdated, a
little bloated maybe, harder to maintain, harder to manage, and there is
an analogy for high cholesterol but I couldn’t figure it out.
You get the idea, right. So before we sort of dive into some of the tips which we do need to update, I did want to spend just a little bit of time highlighting a handful of protips which have stood the test of time and that, like a fine wine or sharp cheddar, have actually aged pretty well.
There is probably more than just the ones on this list, but
this is the highlight, and what you’ll notice about these particular protips is
they’re not really specific to Android. They’re not even
really specific to mobile development. This is just good
best practice that you should always be keeping in mind when building any app. So let's not dwell too much on
the things that we know are still good ideas. Let’s take a
look at some of the things which have changed, which have perhaps
not aged quite as well. Now, everything on this list you can pretty much still do, but
it’s all been replaced with newer APIs that are more
efficient, that encapsulate a bigger amount of functionality. Sometimes the underlying hardware has changed or the UI paradigm has changed, but a lot
of it you can still do. That’s actually kind of a theme
for a lot of the things that we’re going to be looking at.
There have always been ways of achieving certain things, but there are now significantly better ways, simpler ways which involve less work and fewer hacky workarounds, and which make it more predictable and easier for you to manage and maintain your code bases going
forward. So we know things have changed.
I’ve said that a few times, but there are certain things we’ve
always been able to rely on, right. You write your Android
code in Java language syntax, and all of the
application components are basically the same. We use the same core components,
they interact the same way, they achieve the same end results.
That’s something we’ve been able to rely on until, basically,
until now. Last year or so, things have
fundamentally changed. Android Studio 3 introduced Kotlin as a first-class language for Android development, and that's actually what we're encouraging people to use as their primary development language for Android. We saw the introduction of Android Architecture Components, which introduced ViewModels, LiveData, and the Room database to significantly simplify the
way we manage data and have it interact properly with the
activity lifecycle. We also have the job scheduling,
which almost entirely eliminates the need to create your own
custom background services. And the intent receive is that we
sometimes set up to monitor device state to control when our
services are run, which is a good thing, because
in Android we depricated to have
long-running services when your app is in the background as
well. And most of the intent used to monitor device state have also largely
been depricated. Job scheduler, already
depracated as of yesterday morning, so now we have work
manager, which is nice because it does the same stuff as job
scheduler but backwards compatible, right, so you’re
starting to see some of the trends of what we’re trying to
achieve here. We should dig into some of this
stuff in a little more detail. Something which I've talked about at most of the protips sessions, the first thing I talked about, and it comes up every year: I like to focus on what's happening in the background.
(laughter). Sometimes it’s easy to miss
until you really focus on it. So the first protip I gave back
in 2010, the first session, the first Android session, the first
real session on a big stage. It was very exciting. And this is
the first protip so I thought let’s have a look at what I said
and see how well it holds up. This is the first thing I ever
advised.>>So it’s important that you
move anything which can in any way take longer than just a fraction of a
second and interrupt the UI flow and move it into the background.
The most simple and straightforward way is to use Java threading. What we've created in Android is a class called AsyncTask.
(Applause). >>RETO MEIER: You know
what they say, the giant screen actually removes 20 pounds.
(laughter). So it’s kind of true, right.
Like kind of, like we don’t want to block the UI thread but the
context around what I was talking about is trying to
eliminate these application not responding dialogues which have
evolved over the years, they look a little bit different now.
They used to be very common because it was easy to block the foreground
thread. Right, if you tried to do a data transfer, for example,
it would block and that would be a bad experience, and so with Android 3.0 we introduced a runtime exception: if you tried to access the Internet on the main thread, the app would crash. Strong encouragement for developers not
to do that. More recently we’ve done similar things with file
system access and with database access as well.
Now in Android P this is no longer a problem because we don't show the dialog, we just crash the app. That's not a better outcome
for your users, but it’s a much stronger signal to us as
developers that we need to make sure that we’re taking care of
this particular problem. Now, I did mention one solution
and it got some love: the old AsyncTask. It's a reasonable solution to the stated problem. We need to move everything off the foreground onto a background thread, and that's what AsyncTask does. It creates a background thread and has handlers to marshal back to the UI thread so you can update when you have progress or a final result to display.
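As a reminder, the pattern being described looks roughly like this in Kotlin; the task name, result type, and fake workload are illustrative.

    // Minimal sketch of the classic AsyncTask pattern described above.
    // The load() function and String result type are illustrative assumptions.
    import android.os.AsyncTask

    class LoadDataTask(private val onResult: (String) -> Unit) :
        AsyncTask<Void, Void, String>() {

        // Runs on a background thread.
        override fun doInBackground(vararg params: Void?): String {
            return load()
        }

        // Marshaled back onto the UI thread for you.
        override fun onPostExecute(result: String) {
            onResult(result)
        }

        private fun load(): String {
            Thread.sleep(1000)  // stand-in for the time-consuming work
            return "done"
        }
    }

    // Usage from an activity (hypothetical textView): LoadDataTask { text -> textView.text = text }.execute()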
I would like to tell you that the only thing missing from that is that now you can write it in Kotlin; it's better, it looks nicer, it's more concise. But you still have the same fundamental problems: if you're doing this stuff in an activity and the user rotates the phone, the activity is recreated while your AsyncTask keeps running, you may leak the context, or try to update a UI element that doesn't exist because it's in the process of being destroyed and recreated. It's problematic. Back in the first I/O session I
had a solution for this, it was a great solution. The advice was you should move
it to a service, which it’s fair to say
this advice has not aged spectacularly well. It's the same principle: you have
something long running. You’re in the activity, you use an
intent to start a service, probably an IntentService, and
that goes and does the time consuming work, maybe it’s
pulling something down from a server or interacting with a
database or performing some complex calculation, and when it's finished it sends an intent back to the activity, which then updates the UI
accordingly. Basic stuff, and it’s a reasonable approach, but
it does introduce some new challenges.
Services are heavyweight components, right. We don't really want to create them, and we want to kill them as quickly as possible to get back some of the system resources, and they
keep running even when your activity isn’t, which is kind of
the point but now you have to keep in mind your activity lifecycle,
what’s it doing, have you actually exited
the app, have you transitioned to a new activity, should the
service keep running, should it be canceled? All of this stuff
you have to keep in mind and it’s doable but a lot of work to
get right. Of course, because services are part of the high priority it
prevents the system from killing your app
if it needs to and there is no guarantee the app will keep
running if the app is force quit.
So there is another problem: in Android O you can't guarantee your background services will be running when you want them to. You can't start a background service from a broadcast receiver when your app is in the background, and services will be killed a few minutes after the activity leaves the foreground. Now, for really time
consuming things, which you need to have run, this is still a
reasonable approach. You have an activity, it uses a service
to then download data from a server, say, it’s a nice time
consuming operation, and then that saves it to a database and
then your activity just monitors the database, so it’s kind of a
one-way flow, which is nice. There is a catch. There is
always a catch. And the catch is that this is not an efficient way to use the cell radio. Back in 2012 — and I'm not going to go into detail of how it works — when you transmit, the radio transitions out of standby and draws more power, and it stays at a higher power state for a while; the goal is to reduce latency for future transfers. That's all great, and the reason it's important for us to know is that if you're doing lots of transfers you can keep the radio turned on, and it's going to draw power more quickly. So we have some specific advice
for this, which was you should deal
with that. You know how it works now. Efficiency is important so fix
it. I had this whole metaphor to help. It was about cookies; I have a friend who is an artist, and it was an excuse to get him to draw things. The idea here was to use the big cookie model, where you're minimizing the total number of transfers that you're doing across the app lifecycle. So if you have something that's not time sensitive, like analytics, you queue that up and wait, and when there is a user-initiated refresh you send the queue, and you look into the future: is there anything scheduled to happen in a few minutes? Let's just do that first. So it was a way to manually batch and bundle all of
the transfers so they happen at the same time and you get some
increased efficiency. Now, as I’m sure you can
believe, doing all of this and understanding how to do it right was complicated enough that you had a room full of engineers, much like yourselves, back at Moscone, believing this was a legitimate equation you needed to solve in order to figure out how to do this right. It's not.
This is just made up. By 2012 we had a solution, a
solution which was the sync adapter and the sync adapter we
claimed was pretty straightforward. Which is fair to say was a lie.
(laughter). Now, it’s still better than
doing it all yourself. Once you set it up, it’s not that
complicated, right. You’re able to trigger a one off for a periodic transfer and a
syncadapter will handle all of the background work, doing the
batching and time shifting for multiple of your requests that
happen within a sort of time window, so you don’t have to
think about when you trigger it, you just do and
trust the syncadapter to handle the timing for you.
Unfortunately, to make this all work, you needed to have a
sync adapter, fair enough. The sync adapter runs within its own
service. Of course, you also need to have an account manager,
which isn’t used but it’s part of the API, and the
account authenticator also needs its own service to run in.
Okay, I’m seeing a trend. You also need to have a content
provider, again, doesn’t need to be used but needs to exist. You do all of this, write it all
in a Java, and then maybe a dozen XMH
files you need to right and then you’re good to go, the
syncadapter works. It’s fair to say that a lot of people trying
to make this work found it to be kind of frustrating, and of
course it only batched and bundled your transfers within your own app, which is
why 2013, great year Android 5 we
introduced the job scheduler API which has now been replaced.
Job scheduler works at the platform level and does the
batching and bundling for all of the apps at the same time to be
able to take advantage not just of the cell radio battery improvements, but all of the resource improvements that we're trying to do, things like standby and Doze mode and all of this. So within your app you create a new JobService that does the asynchronous background updates — in this case we're using an AsyncTask, which we can use now because we don't have the same memory leak problem —
and then within your specific activity you just define the
criteria you need in order for that job to be successfully
executed. If it needs to be on the Internet, it needs to be
charging, et cetera, how urgent, what is the preferred timing, is
it periodic, all of that sort of stuff. The job scheduler will
then handle all of the background work for you in the
most efficient way possible.
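A rough sketch of that wiring, with a made-up job ID, service name, and constraints, might look like this.

    // Sketch of scheduling a job with JobScheduler; names and constraints are illustrative.
    import android.app.job.JobInfo
    import android.app.job.JobParameters
    import android.app.job.JobScheduler
    import android.app.job.JobService
    import android.content.ComponentName
    import android.content.Context

    // The service that does the asynchronous background work.
    // Remember to declare it in the manifest with android:permission="android.permission.BIND_JOB_SERVICE".
    class SyncJobService : JobService() {
        override fun onStartJob(params: JobParameters): Boolean {
            Thread {
                // do the time-consuming work here...
                jobFinished(params, false)  // false = don't reschedule
            }.start()
            return true  // work is continuing on another thread
        }

        override fun onStopJob(params: JobParameters): Boolean = true  // retry if interrupted
    }

    // From your activity (or anywhere with a Context): define the criteria and hand it off.
    fun scheduleSync(context: Context) {
        val job = JobInfo.Builder(42, ComponentName(context, SyncJobService::class.java))
            .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED)  // needs Wi-Fi
            .setRequiresCharging(true)                               // needs power
            .build()
        val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
        scheduler.schedule(job)
    }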
The only thing which would be better than JobScheduler is if it worked in a backwards compatible way, which is why we launched Work Manager, part of Jetpack. Work Manager works
basically the same way. You create a new worker and that is
where you’re defining the background work that needs to be
done, and then within your activity you do the same thing
and you specify the constraints that need to be met before this
work can be accomplished, and then work manager will handle
all of this for you in the background, same as job
scheduler, but it now also encapsulates the
best practice solution for earlier platform releases. I
think it goes back to 4.4, so it will use the latest and greatest
job scheduler if you’re running on the latest platform, all the
way back to probably alarm manager for some of the earlier
platform releases. Again, you don’t need to write any of that
boilerplate code for backwards compatibility. We do all of
that for you, so you don’t have to maintain it, find the bugs,
find the education, you can rely on work manager to take care of
that, and of course this is going to continue to improve in
future releases so as job scheduler continues to improve
on the platform, work manager is going to be able to evolve to
take advantage of some of those changes.
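In code that might look roughly like the following sketch. It uses the stable WorkManager API, which is slightly newer than the alpha shown at I/O 2018, and the worker name and constraints are illustrative.

    // Sketch of Work Manager usage; names and constraints are illustrative.
    import android.content.Context
    import androidx.work.Constraints
    import androidx.work.NetworkType
    import androidx.work.OneTimeWorkRequest
    import androidx.work.WorkManager
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    // Define the background work that needs to be done.
    class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
        override fun doWork(): Result {
            // do the time-consuming work here...
            return Result.success()
        }
    }

    // In your activity: specify the constraints that must be met, then hand the work off.
    fun enqueueSync(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.UNMETERED)
            .setRequiresCharging(true)
            .build()
        val request = OneTimeWorkRequest.Builder(SyncWorker::class.java)
            .setConstraints(constraints)
            .build()
        WorkManager.getInstance(context).enqueue(request)
    }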
You can learn all about Work Manager in Stage 2 today at
5:30. I recommend going along. It’s really a cool new API. So Work Manager, it’s great if
it’s something which is genuinely time consuming and
needs to be initiated on the client side. Now, one of the
things which we’ve always said and which really hasn’t changed
is that wherever possible, you should eliminate periodic
refreshes, and any updates that need to happen based on changes on the server side should be initiated by the server, preferably using Google Cloud Messaging — by which of course I mean Firebase, because GCM was deprecated in April. Things move quickly.
Firebase Cloud Messaging takes advantage of the same
scalable reliable GCM infrastructure that we’ve been
using in the past, but also improves delivery reporting and
reliability. It also adds a web console so you can send
notifications to your apps without having to have your own server
instance. Now, for transfers this gives us a pretty good solution, but there is really no way this is a viable solution for being able to maintain a small amount of data whenever the phone rotates. And for that matter, using singletons or overriding the application class, which I may or may not have suggested in previous years, is similarly considered somewhat distasteful. Which is why in Android 3.0 we introduced headless
fragments and loaders. I assume everyone is familiar with these. Fragments were designed to break
your UI up into chunks that were independent, reusable, and
interchangeable. The key here was they were also aware of the
parent activity lifecycle. Now, the trick with headless fragments was that they didn't have a UI, so you could configure them to be retained across an activity restart caused by a configuration change, and this was really nice because it meant now we had somewhere to store our data model independent of the activity being recreated every time the configuration changes.
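For anyone who never used the pattern, a retained headless fragment was a sketch along these lines; the fragment and property names are made up.

    // Sketch of the old retained, UI-less fragment pattern (pre-architecture components).
    import android.app.Activity
    import android.app.Fragment
    import android.os.Bundle

    class DataHolderFragment : Fragment() {
        // The data we want to survive configuration changes.
        var cachedResult: String? = null

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            retainInstance = true  // keep this instance across activity re-creation
        }
    }

    // In the activity: find the existing holder, or create one the first time.
    fun Activity.dataHolder(): DataHolderFragment {
        val existing = fragmentManager.findFragmentByTag("data_holder") as? DataHolderFragment
        if (existing != null) return existing
        val holder = DataHolderFragment()
        fragmentManager.beginTransaction().add(holder, "data_holder").commit()
        return holder
    }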
Managing fragments was tricky. If you attended the
first Android session here this morning, you see just how tricky
it was. You can add and remove fragments from any activity,
have multiple instances of the same fragment within the same
activity, the fragment transactions are all done
asynchronously so you can’t guarantee when something is
there, and it’s got its own lifecycle to manage.
Now, none of this was great. Right. You were able to
achieve what you wanted, but it was complicated to make this all
work. Now, loading, loading was
better. Loading has made it easier to
asynchronously load data within your fragments and activities,
and then you can observe and react to changes
in the activity based on whenever the underlying data
changed, and in most cases you had to handle all of that
plumbing yourself, which meant you needed to load the data
from wherever it was stored, you needed to then observe changes
within that underlying data and bubble it back up through the
loader to have the simple code in your activity. That is, unless you used a CursorLoader, which did most of the work for you. You initialized it at creation time and then you filled in a number of callbacks to create a new cursor and then to receive the results, so that you can update the UI accordingly. Initializing the CursorLoader, you supply the SQL statement that you want to use to get back your results, and all of it gets passed into a content resolver, then to your content provider, and you get back a cursor which you can then use to update your UI. But by definition that
meant that you needed to make your data available via a content provider, and
content providers are — actually, you know what? I’m
going to come back to content providers, I’m going to leave it
in there for a moment. So okay, so take a step back.
We’ve got headless fragments and loaders and technically that
enables us to do everything we need to do. There is a problem
though, it’s really hard to get them to work together,
understand how to make them work across multiple activities and
fragments in the various different implementations of
both. In fact, getting it all to work reliably and well was,
again, it was frustrating. Which is why we’re all so
excited to see Android architecture components.
Android Architecture Components introduced ViewModels, LiveData, and the Room database. It practically replaces fragments and loaders as a simple way to separate your UI and your UI data. The ViewModel itself is really
simple to create. You start with a property that stores the
data that you want to expose, in this case a MutableLiveData because LiveData is cool, and then you expose a method which is going to return that property. The first time, it's going to populate that property, and from then on it will return that cached version. Then you have a method which does that population, and again this is done asynchronously; in this case, again, we're using our old friend the AsyncTask. Then whenever the value of your MutableLiveData changes, that is going to trigger any observers that you have attached to it. So back in your activity, all you need to do is request the ViewModel provider, ask for the specific ViewModel class that you've just created, and then call that ViewModel's data access method, which will either populate the data or return the cached version. Either way, you add an observer to that, and your observer is going to get called both the first time you attach it and any time the underlying LiveData changes. Now, the beauty of ViewModels is
they survive activity restarts, so we don't have to deal with that particular scenario, and the beauty of LiveData is that it understands the activity lifecycle. So the first time the app is run, your ViewModel gets created, populates the LiveData from the underlying data source, and sends the results back via the observer within your activity. Then, if someone rotates their phone, the activity gets destroyed and recreated, but the existing ViewModel is returned. The existing LiveData is returned via the observer, and it's triggered with the latest version of the data, whether or not it's been updated in the interim. In fact, LiveData is really smart, because it only sends updates via the observer when the activity is active — you don't need to update the activity if the UI isn't actually visible — and if there is new data, it will wait and deliver it once the activity becomes active. The net result is that what we do feels less like this and much more like this, which is always our goal.
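Putting those pieces together, a minimal sketch looks something like this. The class names are illustrative, and a plain background thread stands in for whatever asynchronous loading you would actually do.

    // Minimal ViewModel + LiveData sketch; names and the fake load are illustrative.
    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity
    import androidx.lifecycle.LiveData
    import androidx.lifecycle.MutableLiveData
    import androidx.lifecycle.Observer
    import androidx.lifecycle.ViewModel
    import androidx.lifecycle.ViewModelProviders
    import kotlin.concurrent.thread

    class GreetingViewModel : ViewModel() {
        private val greeting = MutableLiveData<String>()

        fun getGreeting(): LiveData<String> {
            if (greeting.value == null) loadGreeting()  // populate the first time only
            return greeting
        }

        private fun loadGreeting() {
            thread {
                Thread.sleep(1000)                 // stand-in for the asynchronous work
                greeting.postValue("Hello, I/O")   // postValue is safe from a background thread
            }
        }
    }

    class GreetingActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val model = ViewModelProviders.of(this).get(GreetingViewModel::class.java)
            // Called when the data first arrives and whenever it changes,
            // but only while this activity is active.
            model.getGreeting().observe(this, Observer { text ->
                title = text
            })
        }
    }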
I’d come back to don’t providers, and
just some context. This is what happens — this is the number of students enrolled in each of the lessons of our Android class, and the highlighted columns are when we get to providers and loaders. You get the impression this is something that is a little frustrating. We've improved it since then and it's not so grim now, and some of what we're going to talk about now actually explains some
of those improvements. Why is it so frustrating? This
is the boilerplate code for creating a simple content
provider, I’m sure many of you have built something like this.
In truth, it’s really just the first couple of methods. It
goes on for a little while, and keep in mind that this isn’t
doing anything particularly special. It's just passing content resolver requests straight through to a SQLite database. Obviously, that's just the content provider, and then you need to create a contract that defines the column names that you want to use to be able to interact with it, and then of course you need to create your own SQLiteOpenHelper, which is how you create, update, and access the underlying database. Now, all of this operates on cursors, which is probably not what you're using in your UI, so you'll need to create a class to represent the records in the database — which is nice, and actually quite simple in Kotlin — but you also need code to observe changes, add rows to the tables, and convert between cursors and lists of class objects. Somewhere along the line you need a list of observers so that if the database changes, your list of objects also gets updated.
It’s a lot of code for something which should be relatively
straightforward. Quick one, there anyone here who
gets paid per line of code? (laughter).
No. Okay. So if there is and you were just too shy, that
old method actually works really well, stick with that.
For the rest of us, this is where the room database really
comes into its own. So it’s part of an Android
Architecture components, now part of Jetpack, and all you
need to do is start with the class definition, defines the
columns that you want within your class, and you just add an annotation
to say this is an entity I want to use in a database and this is
the primary key for this table. Then you create a new room
database, which includes each of the tables that you want to have
within that database and you return an instance of the data
access object, and that is the class where you specify all of
your database interactions, inserts, updates,
and query, as many as you want and as many different styles as
you want. Now, you notice the queries here
return livedata and that’s really cool because if you’re storing your
UI data in the UI model obviously you are because that’s
the right way to do things now, it simplifies the
viewmodel significantly. You request access to the room databases access object, execute
the query, which returns livedata itself and then you
just pass that livedata right back through to any activity or
fragment which requests it. Back in the activity, it works
the same way. You’ve got the same observe handler, getting the same livedata
representing a collection of objects, and you can use that to
update the UI whenever the underlying database changes.
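A condensed sketch of those pieces, with illustrative table and class names, looks something like this.

    // Condensed Room sketch: entity, DAO returning LiveData, and the database class.
    // Table, column, and class names are illustrative.
    import android.content.Context
    import androidx.lifecycle.LiveData
    import androidx.room.Dao
    import androidx.room.Database
    import androidx.room.Entity
    import androidx.room.Insert
    import androidx.room.PrimaryKey
    import androidx.room.Query
    import androidx.room.Room
    import androidx.room.RoomDatabase

    @Entity
    data class Attendee(
        @PrimaryKey val id: Long,
        val name: String
    )

    @Dao
    interface AttendeeDao {
        @Insert
        fun insert(attendee: Attendee)

        // Returning LiveData means observers are notified whenever the table changes.
        @Query("SELECT * FROM Attendee")
        fun loadAll(): LiveData<List<Attendee>>
    }

    @Database(entities = [Attendee::class], version = 1)
    abstract class AppDatabase : RoomDatabase() {
        abstract fun attendeeDao(): AttendeeDao

        companion object {
            fun create(context: Context): AppDatabase =
                Room.databaseBuilder(context, AppDatabase::class.java, "app.db").build()
        }
    }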
So you’ve got the same outcome that you did before, but
10 times less boilerplate code, 10 times less code that you have
to maintain. And if you change a column name — before, there were a lot of places where that could go wrong; now you just modify the class. Now, if you combine that with Work Manager, you actually get a really clean solution to this whole process, right. The Work Manager downloads data from a server and updates the Room database accordingly, which then sends the LiveData to the ViewModel within the activity or fragment — or all the fragments you want to use — which then, via the observer, updates the UI. So if you're not one of the
lucky folks still getting paid by line of code, this is
probably a better solution, right.
So speaking of better solutions, what else have we
got? I wanted to talk a little bit about Android location
services because this is again a topic that I’ve spent a lot of
time talking about. Being able to find the current
position and put it on a map somewhere is something that’s
been part of Android since the very first beta SDK. So by 2011, I had actually
managed to amass quite a selection of pro tips for how to
take advantage of the location manager in the most efficient
and effective way possible. It was a big topic in a couple of
my Android pro tip’s talks.
So I went to the location team and I said I’ve built a
project which encapsulates all of our best practices, all of
the things which we tell people to do, and it’s fair to say that
they were impressed. By which I mean it really left an
impression. They then went away and did what they do, which is fix everything, and so where beforehand I had all of these classes and interfaces, they went away and gave me an API as part of the fused location provider, which does all of the magic for you and encapsulates all of those best practices within a small number of simple APIs. All you need to do here is tell it what's more important, battery life or accuracy, and how often you want to get results, and the fused location provider goes and does all of the hard work for you.
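In code, that boils down to something like the following sketch; the priority and interval values are arbitrary, and the location permission is assumed to have been granted already.

    // Sketch of requesting updates from the fused location provider.
    // Assumes the location permission has already been granted; interval values are arbitrary.
    import android.annotation.SuppressLint
    import android.content.Context
    import android.os.Looper
    import com.google.android.gms.location.LocationCallback
    import com.google.android.gms.location.LocationRequest
    import com.google.android.gms.location.LocationResult
    import com.google.android.gms.location.LocationServices

    @SuppressLint("MissingPermission")
    fun startLocationUpdates(context: Context) {
        val request = LocationRequest.create()
            .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)  // battery vs. accuracy
            .setInterval(60_000)                                            // how often you want results

        val client = LocationServices.getFusedLocationProviderClient(context)
        client.requestLocationUpdates(request, object : LocationCallback() {
            override fun onLocationResult(result: LocationResult) {
                val location = result.lastLocation
                // use the location...
            }
        }, Looper.getMainLooper())
    }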
Now that said, things have changed a little bit in Oreo. We made some changes, new restrictions around how you can receive location updates while the app is in the background, in order to save battery life. Specifically, that means if you're in the background you only receive updates from the location manager a few times an hour. There are a few ways to deal with it: make a foreground service, or another approach is to use geofences. For those of you who remember, the very first version of geofencing on Android was proximity alerts, and proximity alerts were bad. They were easy enough to set up — specify a lat, long, and radius around a point — and they would reliably figure out if you went across the boundary. The thing is, the way it achieved that was by turning on and leaving on your GPS; if you had a proximity alert set, GPS would run constantly. It's not what the documentation said, but it is what happened.
Now, the fused location provider also gave us a real geofencing API that had all the functionality that was claimed for proximity alerts, but also gave you a few additional improvements, the most important of which is that it didn't keep the GPS locked permanently on. We then took our geofences to the next level, which allows us to make more complicated fences: we can add date and time, proximity to a beacon, user activity — walking, cycling, driving — and device state. You can combine all of these conditions together, such as in this example, where we have one fence which is triggered if you drive within 1,000 meters of a given location between 10 and 6.
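That kind of combined fence maps onto the Awareness API's fence builders. The sketch below is illustrative only — the coordinates, times, fence key, and PendingIntent are placeholders, and the exact method signatures may differ across Play Services versions.

    // Rough sketch of a combined fence: driving, within 1,000 m of a point, between 10:00 and 18:00.
    // Coordinates, fence key, and the PendingIntent are placeholders.
    import android.annotation.SuppressLint
    import android.app.PendingIntent
    import android.content.Context
    import com.google.android.gms.awareness.Awareness
    import com.google.android.gms.awareness.fence.AwarenessFence
    import com.google.android.gms.awareness.fence.DetectedActivityFence
    import com.google.android.gms.awareness.fence.FenceUpdateRequest
    import com.google.android.gms.awareness.fence.LocationFence
    import com.google.android.gms.awareness.fence.TimeFence
    import java.util.TimeZone
    import java.util.concurrent.TimeUnit

    @SuppressLint("MissingPermission")  // assumes location and activity recognition permissions
    fun registerCommuteFence(context: Context, pendingIntent: PendingIntent) {
        val fence = AwarenessFence.and(
            DetectedActivityFence.during(DetectedActivityFence.IN_VEHICLE),
            LocationFence.`in`(37.42, -122.08, 1000.0, 0L),  // lat, lng, radius (m), dwell (ms)
            TimeFence.inDailyInterval(
                TimeZone.getDefault(),
                TimeUnit.HOURS.toMillis(10),   // 10:00
                TimeUnit.HOURS.toMillis(18)))  // 18:00

        Awareness.getFenceClient(context).updateFences(
            FenceUpdateRequest.Builder()
                .addFence("commute_fence", fence, pendingIntent)
                .build())
    }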
We have a little bit of time left. One thing I learned in 2018 is that there is a new
way to — I’m sorry. That should be there is a new way to
control media. Right. Now, back in the day,
controlling media playback was done with the media player. I’m
going to buzz through this pretty quickly. Mainly it meant
that you had to listen to media buttons getting clicked. Now,
of course, you can control media a bunch of different ways so we
have a better solution, which is ExoPlayer. It effectively replaces the framework media player and is easy to customize and extend. I'm not going to go through the details — I have been speaking so slowly — so I recommend you go to Mark and Andrew's talk tomorrow morning at 10:30 and get all of the details.
Now, not all of the changes in Android have been in code.
Some of the biggest changes are actually in the availability of
Android expertise. Back when we started we didn't have Slack or Google Developer Experts or Stack Overflow. We had IRC, Google developer groups, and dev rel people that spoke at conferences, and that still kind of happens. The big change now is that we're just voices within a much larger developer ecosystem. There are now lots and lots of people around the world who help each other. This is just a small selection of the 90 Googler experts specializing in Kotlin around the world, a small selection of the
engineering team that we have helping the community.
Similarly we have Android conferences all around the
world. This is a small selection of some of the Android
events that people pointed out to me on Twitter a couple of
weeks ago when I asked what the best conferences to go to were
because really the biggest tip that I can give you is that much
more than changes in the tooling or the frameworks or the APIs or Jetpack or anything else, much
more than that the biggest thing driving Android development
forward is the community, it’s the people around you, the
people online who are all helping each other with Android. And it's important because we know the one thing that doesn't change is that things are going to change. So take advantage of things like
Jetpack to make things simpler, work with the people around you,
and hopefully your apps will survive much longer than my pro
tips. Thank you. (Applause). >>Thank you for joining
this session. Grand Ambassadors will assist with
directing you through the exits. We’ll be making room for people
in the next session. If you registered for the next session
in this room, we ask that you please clear the room and return
via the registration line outside. Thank you. May 9, 2018
11:30 a.m. PT What’s New in Architecture Components >>Welcome to Google I/O
2018. We’re happy you’re joining us as we celebrate
platform innovations at Google. Right now you checked into
registration to receive your badge. It must be visibly warn at all
times, and don’t forget you need it to
ride the shuttles and enter the end of the day activities on
days one and two. Throughout the event visit Google.com/I/O.
Or download the app for the most up-to-date schedules and
conference map. For questions or assistance stop by the
information desk located across from the Codelabs building or
chat with any staff member wearing a yellow badge. Sessions will take place in the
aphitheatere as well as eight stages throughout the venue. If you haven’t received a seat
for your favorite session, or if you miss a session, don’t worry
recordings will be available shortly after the sessions end. Be sure to visit the Codelabs
with red-to-code kiosks and Google staff will be available
for helpful advice and provide direction if you need support
Make some time to visit the office hours and app reviews
tent to meet one-on-one with Googlers and ask your technical
questions and get feedback on your ploj projects. If you’re
looking to network join one of the several meetups hosted in
the community lounge. Finally, we would like to invite to you
visit the sandbox domes where you can explore, learn, and play
with our latest products and platforms, interactive demo,
physical installations, and more. After sessions are done
for the day, stick around for food, drinks, music, and fun.
Be prepared for some surprises along the way. We’d like to
take this opportunity to remind you that we are dedicated to
providing an inclusive event experience for everyone, and
that by attending Google I/O you agree to our Code
of Conduct posted throughout the venue
Your opinion is valuable to us. After the event, look out
for a feedback form to share your experience. Thanks for
attending and have a wonderful time exploring Google I/O. May 9, 2018
11:30 a.m. PT What’s New in Architecture
Components >>Good morning.
(Applause) >>LUKAS BERGSTROM: It’s
great to see everybody here. I’m Lukas Bergstrom, product
manager. How many of you were here last year
when we announced architecture components for the first time?
Awesome. This is a much bigger venue, so that’s actually a
decent percentage. When we did launch architecture components
last year we were doing something sort of different for
us. For the first time we were offering explicit guidance on how to
architect your app and giving you components to help you do
that. And frankly for us, it was a
little bit of a journey into the
unknown, so 12 months in let’s check in on how we’re doing.
We’ve shipped 26 releases since May of last year, so we’ve been
constantly iterating and improving the core set of
architecture components. As with any healthy Open Source
project, we have a very active issue tracker and I’d like to
thank everybody that’s taken time to file a feature request or even a bug and we’ve
closed a lot of those. We launched a major new library
Paging, which is now stable and we’ll talk a little bit more
about it today. And I’m pleased to say based on our survey data,
over half of you are using or planning to use architecture
components, and this survey was done only a few months after we went
stable, so this data is pretty out of date by now, but I’m
pretty proud of the fact that only a few months after
launching to stable, over half of the
Android developers we talked to were planning to use this stuff. But more importantly than any of this, you’ve told us that
architecture components actually make it easier for you to build
robust apps, that having a clear path on how to architect your
app and components that help you realize that has actually made a
difference in the real world in how you build your apps. And
we’ve heard that not just once but we’ve heard that over and
over from a lot of developers that have taken time to speak to
us. So architecture components has
grown and we’re going to continue to invest here. This is foundational, we think,
for Android apps going forward. But we’re not just investing in
architecture components: this year with Jetpack, we’re going
to take that same approach that we took with Architecture
Components, a blank sheet of paper approach to how Android
developer experience should be and how we can improve things
for you. Jetpack is going to take that and apply that to the
rest of the developer API surface that we offer. So now I’m going to turn it over
to Yigit to talk about what’s new in Architecture Components.
(Applause).>>YIGIT BOYAR: Thanks, Lukas.
So we’ll talk about what we’ve been doing over the last year.
We’ll look at existing libraries, talk about improvements, and also
look at the new shiny stuff, paging,
navigation, and work manager. Let’s start with
lifecycles, so we have shipped lifecycles in the
last I/O, but to better understand why we created this component, let’s go
back two years. In 2016 we did a bunch of surveys and
asked developers: what is the hardest part of Android
development? And at the top of the list, big
surprise, was lifecycle management. We
were wondering what can be hard about a phone rotating or users
leaving applications — this happens all the time on Android,
Android is built for this. But if you look at the problem in
detail, if you want to handle these events properly in your
application, you need to understand these graphs very
well, and when the two are interlinked it becomes very
confusing, so we created the lifecycles component to
get at these problems. And it seems to be working, because many developers
have given testimonials that a whole group of
problems just disappeared from their applications when they
started using these libraries. Another important change of
lifecycles is that one year ago we introduced them as an optional
library, but now they’re a fundamental part of Android
development. You don’t need to add any additional libraries to
start using them: both AppCompatActivity and the support library Fragment
now include a lifecycle. Another interesting thing that
happened is the community adoption. We ourselves create
new APIs that work with lifecycles, but we also see other
people doing the same in their libraries, and this is so much easier because
AppCompat already has the dependency, so you can easily depend on lifecycles
in your own libraries. One great example is the AutoDispose
library from Uber: if you’re using RxJava but you want
automatic lifecycle management,
you can just apply it to your stream,
give it the lifecycle, and it will manage the subscription for you
for free. Now, working on these things
we’ve also uncovered more problems. One of them is the fragment view
lifecycle. Fragments have a very complicated lifecycle.
Let’s look at an example. Say you have a fragment that starts observing a
LiveData when it’s created. It goes through creation, we create a view for
it, and it goes to the resumed state. At this point, if values arrive in
your LiveData, the UI will start displaying them and
everything works fine. Later on the user hits a button, so you
want to detach this fragment because they’re going to another
fragment, and you’re going to stop that fragment. Once the
fragment is stopped, we don’t need its view anymore. We would
like to reclaim those resources, so we destroy the view. Later on the user hits the back
button, you go back to the previous
fragment, you reattach, and because we have destroyed the
view we create a new one. It goes through the
regular creation cycle, and now we have a
bug: this new view will never receive the state of that live
data, because you’re using the fragment’s lifecycle to observe
it, so we don’t have any reason to redispatch the same value.
Of course, if livedata receives any value, the UI will
be updated but it’s kind of too late because on recreate your UI
has a bad state. Now, this left you with two
options: you either subscribe in onCreate, which looks very clean —
a one-time setup — but it will fail if the view is recreated,
so you need to manually update the new view; or you subscribe in onCreateView, which handles the
recreation, but you risk double registrations
and lose automatic lifecycle management.
So the problem here is fragments have not one but two
lifecycles and we have decided to embrace it and
give the fragment view its own lifecycle. So now, starting with Support
Library 28 and AndroidX 1.0, while observing LiveData you
can specify the view lifecycle: if what you’re observing
is about the view, you use the view lifecycle, otherwise
you use the fragment lifecycle, and we manage the subscription
for you for free.
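To make that concrete, here is a minimal Kotlin sketch of observing with the view lifecycle; the fragment, ViewModel, and layout names are hypothetical, but getViewLifecycleOwner is the API this feature adds.

    import android.os.Bundle
    import android.view.LayoutInflater
    import android.view.View
    import android.view.ViewGroup
    import androidx.fragment.app.Fragment
    import androidx.lifecycle.ViewModelProvider

    class HouseListFragment : Fragment() {

        override fun onCreateView(
            inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?
        ): View? = inflater.inflate(R.layout.fragment_house_list, container, false)

        override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
            val viewModel = ViewModelProvider(this).get(HouseListViewModel::class.java)
            // Observe with the *view* lifecycle, not the fragment lifecycle: the
            // subscription is cleared when the view is destroyed, and the latest
            // value is redelivered when a new view is created after Back.
            viewModel.houses.observe(viewLifecycleOwner) { houses ->
                // bind houses to the freshly created view hierarchy
            }
        }
    }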
Okay. Data binding. When we started to look at our offerings as part of Jetpack,
we decided to move data binding into Architecture Components. If you’re not already using it,
data binding is our solution for boilerplate UI code: if you have a binding layout like this in your
application, you can reference your objects in it and we
take care of updating the UI for you.
In data binding 3.1 we have added native support
for LiveData. So if you have a LiveData of a
user and pass it to the binding, you can
use the LiveData like a regular field; data binding will
understand that it’s LiveData and generate the correct code.
Unfortunately that alone is not enough for it to observe the data,
because data binding doesn’t have a lifecycle. To fix that,
when you get your binding instance, you just tell it which
lifecycle it should use and it will start observing the live
data and keep itself up to date — you don’t need to write any code
for this.
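As a rough sketch of that wiring (the layout, generated binding class, and ViewModel here are hypothetical; setLifecycleOwner is the data binding API being described):

    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity
    import androidx.databinding.DataBindingUtil
    import androidx.lifecycle.ViewModelProvider

    class ProfileActivity : AppCompatActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // ActivityProfileBinding is generated from a layout whose <data> block
            // declares a viewModel variable exposing LiveData fields (hypothetical).
            val binding: ActivityProfileBinding =
                DataBindingUtil.setContentView(this, R.layout.activity_profile)
            binding.viewModel = ViewModelProvider(this).get(ProfileViewModel::class.java)
            // Tell the binding which lifecycle to use; it observes the LiveData and
            // keeps the views up to date with no further code.
            binding.lifecycleOwner = this
        }
    }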
We have also reworked the data binding compiler to be more incremental. If you have a project with
multiple modules it’s going to compile a lot faster, and we’re
also working on making the compilation within a module more incremental, but
that’s not finished yet. This new compiler also gave us
the ability to support instant apps, so now you can use data
binding in the feature modules of your instant apps.
Okay. Room, my favorite Architecture
Component. Room is our solution for object mapping; it bridges the gap
between SQLite and your Java code. One important change in Room
1.1 is support for better multithreading. If you are
using Room 1.0 and you have one thread trying to insert a lot of data
into the database and another thread trying to read data,
while your write is executing your read will be blocked, and it
can only execute after the write is complete.
In Room 1.1 they can run in parallel, and
another nice side effect is that your
writes run a lot faster than before. The best
part of this change is that you don’t need to do anything to take
advantage of it: if the device is running Jelly
Bean or newer and is not a low-memory device, we’re going to
enable it for free. Okay. Another important addition to
Room was the support for @RawQuery, but
to understand why we needed @RawQuery let’s talk about the
query annotation. While using Room, you can specify your query and its parameters,
pass them as regular function arguments, and tell Room what to
return. The best part of this setup is that Room is going to
validate all of this at compile time, so if there is a mistake in
your query — if your parameters don’t match, or what you try to
return doesn’t make sense — it’s going to fail the compilation
and let you know what is wrong.
Now, this is all cool, except what if you don’t know the query at
compile time? What if you’re writing a real estate
application where my user can search houses by price,
maybe they want to specify the number of bedrooms or bathrooms,
or whether it has a yard? If you needed a @Query method
for each variation of this, that would be impossible to maintain. So what you do in this case is
create the query at runtime with the
options provided by the user, and you provide the arguments
for that query. You obtain an instance of the Room database and use its query
method to get the result. So far so good. The problem with this approach
is that it returns a Cursor. Who wants a Cursor when you’re
trying to get a list of houses? This looks like a failure of the library,
hence we decided to introduce @RawQuery. It looks very
similar to @Query, except instead of specifying the query in the
annotation, you pass it as a parameter to the function and
then you tell us what you want us to return. Now Room cannot validate
the query anymore, so it’s kind of like you promise to send us
the right query and it will take care of the rest. Once you have that,
going back to our previous example, we can get an instance of our DAO,
merge the parameters and the query into a SimpleSQLiteQuery, which is a
basic data holder, and then pass it
to the DAO and get the list of houses — no more Cursors and no
more boilerplate code.
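Here is a minimal sketch of what such a DAO might look like; the House entity, column names, and filter helper are hypothetical, while @Query, @RawQuery, and SimpleSQLiteQuery are the Room APIs described above.

    import androidx.room.Dao
    import androidx.room.Query
    import androidx.room.RawQuery
    import androidx.sqlite.db.SimpleSQLiteQuery
    import androidx.sqlite.db.SupportSQLiteQuery

    @Dao
    interface HouseDao {
        // Validated at compile time: query, parameters, and return type are checked.
        @Query("SELECT * FROM House WHERE price <= :maxPrice")
        fun findCheaperThan(maxPrice: Int): List<House>

        // Built at runtime: Room can no longer validate it, you promise it is valid SQL.
        @RawQuery
        fun search(query: SupportSQLiteQuery): List<House>
    }

    // Hypothetical helper that assembles the WHERE clause from the user's filters.
    fun searchHouses(dao: HouseDao, maxPrice: Int?, minBedrooms: Int?): List<House> {
        val conditions = mutableListOf<String>()
        val args = mutableListOf<Any>()
        if (maxPrice != null) { conditions += "price <= ?"; args += maxPrice }
        if (minBedrooms != null) { conditions += "bedrooms >= ?"; args += minBedrooms }
        val where = if (conditions.isEmpty()) "" else " WHERE " + conditions.joinToString(" AND ")
        // SimpleSQLiteQuery is just a holder for the SQL string and its arguments.
        return dao.search(SimpleSQLiteQuery("SELECT * FROM House$where", args.toTypedArray()))
    }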
Okay. Paging. Paging is our solution for lazy loading in a RecyclerView,
but to better understand why we built this component, I want
to go through an example. So you have a list like this;
it’s very common, every single application has one. The user can scroll, but
it’s backed by a much larger list than
what’s on the screen. In a real application it
could be a very, very long list, so you probably cannot keep it all in
memory, and it’s very inefficient to load all of it. What you
would do is keep some of it in memory, the rest of it in the
database, and you also have your server component where you pull
this data from. This is actually hard to
implement properly, and that’s why we have created the Paging library, to make this
common flow easy and efficient to implement. The Paging library comes with a
PagedList, an actual Java List
implementation, but it works with a DataSource. Every time you
access items in the PagedList it lazily pulls data from the
DataSource, bringing in
more data as needed. How can we get an instance of
these? It’s actually super easy if you’re using Room. Room
knows how to create the DataSource, so it’s a great example
of how the Architecture Components work very well
together. You can just tell Room to return your DataSource
or a DataSource.Factory, and in this case I’m using a
factory because data is something that changes and each
DataSource represents a snapshot, so we need a factory so
that we can create new DataSources
when the data changes. Once you have this you can use
the LivePagedListBuilder and build
it; it’s going to give you a LiveData of a PagedList of users,
which is almost the same as a LiveData of a List of users.
Now, in your activity or fragment, you would use the
PagedListAdapter, which is a RecyclerView adapter that works
with PagedList. You observe the LiveData; any time
there is a new PagedList you give it to the adapter, and then the
adapter can just call the getItem function, typed to the
User object. It’s super simple code to write, and we take
care of all the hard work of paging lazily for you.
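A condensed Kotlin sketch of that pipeline, assuming a hypothetical User entity and DAO; DataSource.Factory and LivePagedListBuilder are the Paging APIs described above.

    import androidx.lifecycle.LiveData
    import androidx.lifecycle.ViewModel
    import androidx.paging.DataSource
    import androidx.paging.LivePagedListBuilder
    import androidx.paging.PagedList
    import androidx.room.Dao
    import androidx.room.Query

    @Dao
    interface UserDao {
        // Room knows how to generate a factory of snapshot DataSources for this query.
        @Query("SELECT * FROM User ORDER BY name")
        fun usersByName(): DataSource.Factory<Int, User>
    }

    class UserListViewModel(dao: UserDao) : ViewModel() {
        // Pages of 50 items are pulled in lazily as the list is scrolled.
        val users: LiveData<PagedList<User>> =
            LivePagedListBuilder(dao.usersByName(), 50).build()
    }

In the fragment you would then observe users with the view lifecycle and hand each new PagedList to a PagedListAdapter via submitList; the adapter's getItem calls are what pull further pages in.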
Even though I have shown all of these examples with
LiveData, Paging supports RxJava out of the box:
if you are using that and you want an
Observable of PagedLists, you can take the same factory that Room
generates but instead use the RxPagedListBuilder to
build it. I’ve shown paging from the
database here, but it also supports paging from
the network, or you can combine both the database and the network
for the best user experience. To learn more about it, please
join us tomorrow at 2:30 in the Paging session.
All right. Now —
>>LUKAS BERGSTROM: Next up is Navigation. When you think about the core
problems that almost every app has to deal with, in-app
navigation has to be close to the top of the list.
Right now the framework doesn’t really offer anything for you to
do — anything for you to use there other than start activity, which for
various reasons is not the best option.
So that means that for navigation there are a bunch of
things that you have to figure out on your own, and that ranges from executing a fragment
transaction without throwing an exception, hopefully, passing
arguments from place to place, possibly
even with type safety if you can figure that out. Testing the
navigation is working and that the right things are happening
when you navigate from place to
place, making sure the up and back buttons work correctly and
take the user where they’re supposed to go, mapping
deep links to destinations in your app and having that work,
and by the time you’ve solved all of these problems, you’ve
typically gone one of two directions: you’ve either
written 60% of a navigation framework just for your app, or you’ve got a lot of
error-prone boilerplate, so everywhere navigation needs to
happen you have a bunch of parallel lines of code that need
to be changed any time the navigational structure of the
app changes, and this is all pretty brittle and can end up
being a headache. Individually these problems are
pretty tractable, but they compose into real-world
examples. Say I have an item screen in my app, maybe a
product screen, and that screen is accessible via deep link, but
if the user had
navigated here by opening the app from the home screen, they
would have come via the home screen and
the category screen, and I want the Up button to take
them back through those screens rather than exiting the app.
That means if somebody deep links into the app, I need
to synthesize those screens and add them to the back stack, but
only on a deep link. Talking to a third-party developer, he
said that when you’re in the middle of writing code to
intercept any deep link into your app and
synthesize the back stack correctly, you
start to feel maybe this is a failure of the framework, so
that’s why we’re really happy to be launching Navigation
which is both a runtime component
that performs navigation for you and a visual tool that works with XML to
define the navigational structure of your app, and then
allows you to navigate at runtime with a
single navigate call. The kinds of things that
you get for free — things you simply
define in XML and the navigation framework handles at runtime for you — are animations, passing arguments in
a type-safe way from place to place, making sure that Up and Back work
correctly, and mapping deep links to various screens in your
app. And last but not least, no more fragment transactions, ever.
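For a sense of what that single call looks like at runtime, here is a minimal Kotlin sketch; the action ID and argument key are hypothetical values that would come from your navigation graph XML (Safe Args can generate type-safe equivalents).

    import android.os.Bundle
    import android.view.View
    import androidx.navigation.Navigation

    // Click handler on the home screen; R.id.action_home_to_item is a hypothetical
    // action defined in the navigation graph.
    fun onItemClicked(view: View, itemId: String) {
        val args = Bundle().apply { putString("itemId", itemId) }
        Navigation.findNavController(view).navigate(R.id.action_home_to_item, args)
    }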
(Applause). So I’ll show a couple of demos
of this in action. The first one is just to kind of
give you an idea of what this all is, so we’re looking at a
set of fragment destinations in my app, and I’m adding a new
one, and now I’m creating an action and this action is the
thing that I’m actually going to call at runtime to go from place to place.
You can see that there are a bunch of other options that
we’ll get into more in the navigation talk, but the one
thing I do want to show you in more detail right now is the example
we went through before. So this is a simplified version of that
where there isn’t a category screen, we just have the home
screen and then the item screen, but I’m
going to right now configure this to both
have a deep link pointing at the item screen and make sure that
if somebody deep links into the app that they go to the home screen first when they hit
Up or Back rather than just exiting the app right away.
So first I’m just going to configure a deep link on this
screen, and the curly brackets that I’m going to put around Item ID indicate that
I want to extract a variable there and pass it as an argument
into the item screen. Okay. Now that’s ready to go. If I
compile and run my app, that will just work and navigate to
the right destination. Now I just set the home screen
to the start destination, and that means that it’s the
hierarchical parent of all the other screens in the graph, so
when someone deep links into the item screen and then hits Up,
they’re going to go directly to that home screen, so
now I’ve just solved in 30 seconds what would have been a
really terrible and time consuming task in Java or
Kotlin back in the old world. So now I’m going to pass it
over to Yigit to talk about Work
Manager.>>YIGIT BOYAR: After using these
demos for the past couple of weeks it
feels like magic; I hope you all like it. Work Manager is our solution for
deferrable, guaranteed execution. What do I mean by this?
There are kinds of work on Android that you really, really
want to run when the user does something. If the user tries to
send a tweet, you want to send it now, but if there is no
network connection you want to send it as soon as device is
connected to the Internet. There are things like uploading
logs you may want to do if the device is charging or you may
want to periodically sync your data with your backend. Now, we
know this is not a new problem on Android; we had some
solutions for this. We introduced JobScheduler
in Lollipop, Firebase JobDispatcher brings that functionality
to devices that have Google Play Services, and we also have AlarmManager for
exact timing. Each of these has different behaviors
and different APIs, so it becomes very hard to implement. Hence
we have built Work Manager, which sits on top of them and provides
a much cleaner API with new
functionality. Work Manager has two simple concepts: you
have the workers that execute these actions and you have the work
requests which trigger these workers.
Now, if you look at a sample worker, this is
basically all you do: in your Worker class you
implement one function that does the work, and that function
just needs to return what happened as a result of that
work, so you do whatever you need to do and you return the result. There are no services, no
intents, no bundles, nothing like that.
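A minimal sketch of such a worker, assuming a hypothetical uploadLogs() helper; the signature below matches current WorkManager releases (the alpha shown at I/O was slightly different).

    import android.content.Context
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    class UploadLogsWorker(context: Context, params: WorkerParameters) :
        Worker(context, params) {

        override fun doWork(): Result = try {
            uploadLogs()       // hypothetical helper that does the actual work
            Result.success()   // tell Work Manager what happened
        } catch (t: Throwable) {
            Result.retry()     // retried later according to the back-off criteria
        }
    }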
Once we have the worker, we need to create a work request. You can
use the one-time work request builder, or there is a periodic version
of it, and you specify the Worker class. But now you can also add
constraints: you can tell it to run only if there is a network connection or if the device is
charging, or set back-off criteria — if the
work is failing, how should we retry it — and you can also pass
input parameters to these workers. Once you build that
work request, you get an instance of Work
Manager and enqueue it, and now Work Manager
will take care of executing it.
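And a sketch of building and enqueuing the request with constraints, using the hypothetical worker above:

    import android.content.Context
    import androidx.work.Constraints
    import androidx.work.NetworkType
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager

    fun scheduleLogUpload(context: Context) {
        val constraints = Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED) // only run when online
            .setRequiresCharging(true)                     // and while charging
            .build()

        val request = OneTimeWorkRequestBuilder<UploadLogsWorker>()
            .setConstraints(constraints)
            .build()

        // Work Manager persists the request and runs it once the constraints are met.
        WorkManager.getInstance(context).enqueue(request)
    }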
One of the important distinctions of Work Manager is that it has input
and output semantics: workers can receive input, but
they can also output some data. You can observe this data
through Work Manager, but it’s actually very useful for chaining
your workers. So imagine you have an
application where a user picks an image from their device and
you want to run some image processing on that picture, and
then once it’s done you want to upload it to your server. Now,
these are two different units of work: you can process the
image when the device is idle or you
can do it any time, but to upload it to the server you need an
Internet connection — and you don’t want the
processing to wait for the Internet connection, because it doesn’t
need it. This is super easy to implement
in Work Manager. We have two different workers, each with a
single responsibility: one of them does the image processing
and the other one does the upload to the server. Okay, the
helper function that receives an image file and creates the
process image work request, prepares the input, uses the
same builder to produce the request. We want our
network upload to wait for the Internet connection, so we set a
constraint that says wait for an Internet connection before
trying to run this work request, and then create the upload-image work using that
constraint. Once we have them, you tell
Work Manager: okay, begin with the process-image work, and
once you’re done run the upload-to-server work, and you enqueue both of these as
one atomic operation. Your device
can restart and anything can happen in between, and we will take care
of running the two. You can also use this API
to run image processing in parallel in the same
way: if the user picks multiple photos and you want to process
all of them but upload them to the server once they are all
done, you can easily do that with Work Manager. We just use the same
function to create three work requests, one for each of the images
the user picked, we create the upload-image work the same
way we did before, and now we say:
begin with all three of these work items, and once all of them
are done run the upload work, and you enqueue that
as an atomic operation, and Work Manager takes care of running it.
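A sketch of that chain, assuming hypothetical ProcessImageWorker and UploadImageWorker classes:

    import android.content.Context
    import android.net.Uri
    import androidx.work.Constraints
    import androidx.work.NetworkType
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager
    import androidx.work.workDataOf

    fun processAndUpload(context: Context, images: List<Uri>) {
        // One processing request per picked image.
        val processRequests = images.map { uri ->
            OneTimeWorkRequestBuilder<ProcessImageWorker>()
                .setInputData(workDataOf("uri" to uri.toString()))
                .build()
        }
        // The upload waits for a network connection.
        val upload = OneTimeWorkRequestBuilder<UploadImageWorker>()
            .setConstraints(
                Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
            )
            .build()
        // Run all processing in parallel, then upload; enqueued as one atomic operation.
        WorkManager.getInstance(context)
            .beginWith(processRequests)
            .then(upload)
            .enqueue()
    }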
Another important feature of Work Manager is that it doesn’t just delegate to a job
scheduler — it can execute work itself. This is why
opportunistic execution is important. Say you have an application where
the user can send an email and they hit the send button: you send the job info to
JobScheduler or Firebase and it will eventually call you
back to execute it. The problem here is that you don’t
know how long that will take. Even if your device currently
has a network connection, it may take a couple of minutes
for the job scheduler to call you back, and you have no control
over it — you just don’t know. That results in a bad user
experience. To work around it, what you usually do is
also keep your own thread pool: whenever the user hits send, you try
to run the same thing there as well, and you take care of
de-duplicating when the job scheduler calls you back.
If you are using Work Manager, you don’t need to think
about this, because when you send the work request to Work
Manager it puts it into its own database. It tells JobScheduler,
or whichever scheduler it has on the device, “I
need to be invoked when these constraints are met,” but it also
checks those constraints itself. If they are already satisfied, it will
instantly execute the job. Later on, if JobScheduler
asks it to execute, Work Manager knows whether the work already executed
and handles the request properly. To learn more about
Work Manager please join us today at 5:30 in the Work
Manager session.>>LUKAS BERGSTROM: All right.
What’s next? Okay. So I think it’s been a pretty great year in
Android App Development. Hopefully you agree. We
launched a set of great new components last year and we kept
working on those and iterating on those. We’ve launched three
new major components, Work Manager,
Navigation, and Paging since then. So does that mean we’re
done? Obviously not. We have a lot more to do, and the first
thing we want to do is make Architecture Components the
default way that people build Android apps. That doesn’t
mean it’s going to be required,
but it does mean that we want to make sure that as many people
as possible get Architecture Components regardless of how
they get into Android development, so that means not
only are we going to be building more tools like the Navigation Editor in Android
Studio that are sort of Architecture Components aware,
but we’ll also be adding more templates that include things
like ViewModels so that people starting a new project have the easiest
possible onramp into Android development.
In terms of libraries, not only are we going to be building
more architecture components and not only are we going to be
building in more of the kind of core architecture components
goodness like lifecycle awareness into Jetpack, but we want to look at
other Google APIs as well and see how we can make those Architecture Components aware,
where if you call another Google API that’s asynchronous it
already has that built in, you’re getting architecture
components whether you know you’re using it or not.
Finally, we’ve heard from everybody that you want us to
speak with a single voice, you want us to give clear and
consistent guidance, so that means that in terms of
education, and that means not just documentation, but it also means sample apps, Codelabs,
that all of this stuff is going to be
refactored to be built on Architecture
Components, so that whether you start with the guide to app
architecture or download a simple media player app and
start customizing it, regardless of where your on-ramp
into Android development is, we get you to the best possible
place. We know that we still have some
areas left to address in the core and you can see some of those here,
so this is just to say we’re definitely not going to stop
investing in the original set of architecture components, and
there is some not just problem-solving here but some
exciting stuff that we can do around how to make
architecture components as fun as possible to
use for people using Kotlin. There is still a lot to be done
in the core set of app architecture and lifecycle
problem areas, so we’ll keep working there.
But beyond that I think you’ll see something interesting
about our trajectory if we look at all the
components we’ve launched to date. Last year’s
set of architecture components were relatively small
pieces, relatively small APIs designed to be used in
a lot of different places in your app. Then if you look at
Room, Navigation, and Work Manager, these
are much larger and richer APIs, but they’re still relatively
self-contained. They solve a single problem and do it really
well. Paging also solves a single
problem and solves it well, but in this case we took a very
specific use case, right, so lazy loading for recyclerview,
and we’re actually in this case orchestrating multiple
architecture components and pieces of Jetpack to solve that
problem, so Paging is a little bit higher level, and it’s not
just sort of here is your object mapping layer, right, it
actually takes a very specific case — I
have a RecyclerView with more data than I can fit in memory —
and it uses multiple pieces of architecture components to make
that as easy as possible. And we want to continue to, you
know, build more stuff like that so not just — we’re not just
going to keep investing in the core areas of app architecture
and lifecycle, but we want to start solving higher-level
problems and make more and more as easy as possible.
But I can’t leave today without thanking everybody that
helped us get here. The reason that we were able to have a really high quality bar for
architecture components was because a lot of people, many of
whom are here today, were really generous with their
time and that includes not just filing issues on the issue
tracker but also, you know, testing pre-release components,
having one-on-one conversations with us
to tell us what your biggest problem areas with Android app
development were. This has been critical to us in making sure that we’re kind of focusing
on the right problems and delivering solutions that are
going to work for you, so I really have to thank everybody
in the community that’s been so helpful.
>>YIGIT BOYAR: Thank you. (Applause). >>LUKAS BERGSTROM: And
there is a lot more. We had to fly over a lot of content today,
but you’ll be able to go into a lot more depth in the talks on
Navigation, Work Manager, and Paging. So thanks a lot, and I
hope to see you there.>>YIGIT BOYAR: Thank you.
(Applause). May 9, 2018 12:30 p.m. PT TensorFlow for JavaScript >>Welcome. Please fill in
the seats near the front of the room. Thank you.>>At this time please find your
seat. Our session will begin soon. (Applause) >>Hi, everyone. Thanks
for coming today to see our talk. My name is Daniel.
>>NIKHIL THORAT: My name is Nikhil.
>>NICK KREEGER: And I’m Nick.
>>DANIEL SMILKOV: Today we’re very happy to talk about
JavaScript. If you’re in the world of Machine Learning — training
a model or doing anything with Machine Learning — you’re most certainly dealing with Python; it’s been one of the
mainstream languages for the last decade. It has a lot of tools. It
doesn’t have to be that way. Today we’re here to convince you
that the browser and JavaScript have a lot to offer to the world
of Machine Learning, and TensorFlow Playground is a great
example of that. How many people here have seen this
visualization? Quite a few. I’m glad.
For those of you that haven’t seen it, TensorFlow Playground is
an in-browser neural network visualization that shows you what is happening inside the
neural network as it trains. When we released
this, it was kind of a fun small project, and it had tremendous
success, especially in the educational domain, and even
today we get emails from high schools and universities around
the world thanking us for building this and they’re using
it to teach students, beginners about Machine Learning. When we saw the success of
TensorFlow Playground we started wondering, why is it so
successful? We think the browser and JavaScript have a
lot to do with it. For one thing that’s very special about the browser, it has no
drivers and no installations. You can share your app with
anyone and all they have to do is click on a link and see your
application. Another important thing about
the browser is it’s highly interactive. In the playground
app, we have all these controls, drop-down
controls that you can change and quickly run different
experiments. Another nice thing about the
browser is it runs on laptops, it runs on mobile devices, and
these devices have sensors like the microphone, the camera, and the accelerometer, and
they’re all behind standardized web APIs that you can easily
access in your web app. We didn’t take advantage of this in
the playground, but we’ll show you some demos later.
Most importantly, when you’re building web apps, these apps
run client side which makes it easy to have
the data stay on the client and never have to send it back to a
server. You can do processing on device, and this is important
for privacy. Now, to come back to the
Playground example: the library that powers the visualization is not
TensorFlow. It’s a small, custom neural network library, and it
became clear when this was so successful and popular that we
should go on and build such a library, and we went and built
deeplearn.js, which was released in August of 2017. We figured
out how to make it fast and scalable by utilizing the
GPU of laptops and cell phones through
WebGL. For those of you that are not familiar, WebGL is a
technology originally meant for rendering
3-D graphics, and the library that we built allows for both
inference and training entirely in the browser.
When we released deeplearn.js we had incredible
momentum. The community ran with it instantly and took pre-trained models from
Python and put them in the browser. One example I want to
show you of that is the style transfer demo. Someone went and
took the pre-trained model and this demo has a image, a source
image on the left and an artist on the right, and then it can
mash them together and they made this in a creative, interesting
application. Another demo: people took models trained on a lot of text,
which can generate new sentences, ported them
to the browser, and built a novel interface where you can
explore all the different endings of a sentence.
In the educational domain, people took the standard convolutional
neural nets and built a fun little game where you can train
your own image recognition model just by using the webcam and
this was very popular. And lastly, researchers took
a font-generation model that was previously trained on a lot of font
styles and can generate new fonts, and they built a novel,
highly interactive interface in the
browser where you can explore different
types of fonts. Now building on that incredible
momentum we had with deeplearn.js, about a month ago at the
TensorFlow Dev Summit we announced that we’re joining the TensorFlow family, and with
that we introduced a new ecosystem of tools and libraries
around JavaScript and Machine Learning called TensorFlow.js.
Now before we dive into the details, I want to go over
three main use cases of what you can do with TensorFlow.js today.
You can write models directly in the browser. We have sets of
APIs for that. You can also take pretrained
models that were trained in Python or other languages and
then port them for inference in the browser. You can also take
these existing pretrained models and retrain
them, do transfer learning right there on device. To give you a schematic view of
the library, we have the browser that does all the computation
using WebGL. TensorFlow.js has two sets of APIs that sit on top
of this: a Core API which gives you low-level building blocks,
linear algebra operations like multiply
and add, and on top of it we have
layers API which gives you high-level building blocks and best practices to
build neuralnets. And on top of that because there
are so many models written in Python today, we offer tools to
take an existing model, a keras model and these tools will
convert that model to run in the browser.
Now, to give you an example of our Core API, I’m going to
go over code that trains a
model to fit a polynomial curve, where we have to learn the
three coefficients, A, B, and C. Now this example is pretty
simple, but the code walks you through all the steps of how you
would train such a model, and these steps generalize across
different models. So we import TensorFlow from
@tensorflow/tfjs, and for those of you not familiar, this is standard ES6 modern
JavaScript. We have our three variables that we’re trying to
learn, A, B, and C. We mark them as variables, which means that the optimizer —
the machinery that runs and trains
our model — can change them for us. We have our function, f(x),
which given some data computes the output; this is just a polynomial,
a quadratic function. On top of the standard API like
tf.add and tf.multiply, we have a chaining API, which has
been popular in the JavaScript world: you can
call these mathematical methods on a tensor itself, and this reads
closer to how we read math.
That’s our model. To train it we need a loss function, and
in this case we’re just measuring the distance between
the prediction of the model and the label, the
ground-truth data. We need an optimizer — the
machinery that optimizes and finds those coefficients — and we specify a
learning rate there. Then, for some number of epochs, passes over
your data, we call optimizer.minimize with our loss and f of
the xs and the ys. So that’s our model.
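A minimal sketch of that Core API flow, written as TypeScript/modern JavaScript; the synthetic training data is made up for the example.

    import * as tf from '@tensorflow/tfjs';

    // Coefficients to learn, marked as variables so the optimizer may update them.
    const a = tf.variable(tf.scalar(Math.random()));
    const b = tf.variable(tf.scalar(Math.random()));
    const c = tf.variable(tf.scalar(Math.random()));

    // f(x) = a*x^2 + b*x + c, written with the chaining API.
    const f = (x: tf.Tensor) => a.mul(x.square()).add(b.mul(x)).add(c);

    // Loss: mean squared distance between prediction and ground truth.
    const loss = (pred: tf.Tensor, label: tf.Tensor) =>
      pred.sub(label).square().mean() as tf.Scalar;

    // Synthetic data for y = 2x^2 + 3x + 1, just for the sketch.
    const xs = tf.linspace(-1, 1, 100);
    const ys = xs.square().mul(2).add(xs.mul(3)).add(1);

    const optimizer = tf.train.sgd(0.1 /* learning rate */);
    for (let epoch = 0; epoch < 100; epoch++) {
      // Each call nudges a, b, and c to reduce the loss.
      optimizer.minimize(() => loss(f(xs), ys));
    }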
Now, this is clearly not how everyone writes Machine Learning models today. Over the years we
have developed best practices and high-level building blocks, and new APIs emerged, like TF
Layers and Keras, that make it much easier to write these
models, and for that I want to walk over our Layers API to show
you how easy it is. We’re going to go through a
simple neural network that sums two numbers, but what’s special
about this network is that the input comes as a string,
character by character — so “90 + 10” is the input to this network, being
fed as a string — and the network has an internal memory where it
encodes this information and saves it, and then on the other end the neural
network has to output the sum, 100, again character by
character. Now, you might wonder why go through such trouble to train
this neural network like this, but this example forms the basis
of modern machine translation and that’s why we’re going over
this. To show you the code, we import
TensorFlow from @tensorflow/tfjs, we have our model, and we
say tf.sequential, which means it’s just a linear stack of
layers. The first two layers I’m not going to go into
in detail, but they are building blocks that take
the string into a memory, into an
internal representation. And the last three layers take
that internal representation and turn it into numbers, and that’s
our model. To train it we need to compile
it with a loss, an optimizer, and a
metric we want to monitor — in this case accuracy —
and we call model.fit with our data. One thing I want to point
out about model.fit: training for this example can
take 30 or 40 seconds in the browser, and while that’s
running we don’t want to block the main UI thread. We want our
app to be responsive, so this is why model.fit
is an asynchronous call, and
we get a callback once it’s done with the history object, which
has our accuracy as it evolved over time.
browser, but there is also a lot of models that have already been written in
Python and for that we have tools that
allow you to import them automatically.
Before we dive into the details, I want to go through and show you
a fun little game that our friends at Google brand studio built called emoji
scavenger hunt and it takes advantage of a pretrained model convolutional
neural network that can detect 400 items. I’m going to walk over to a
pixel phone and open up a browser just
to show you that TensorFlow.js can also run in a mobile browser
because we’re using standard WebGL. And I’m going to ask here,
Nikhil on my right, to help me out here because I’m going to
need some help. To give you some little details about the
game, it shows you an emoji, and then you have to go with your
camera, run around your house, and find the real version of that emoji
before time runs out and there is a neural
network that has to detect that. All right. Shall we start? Let
me see. We’re going to play it here live. All right. We have
to find a watch, 20 seconds.>>NIKHIL THORAT: A watch.
>>DANIEL SMILKOV: That’s great. Woo hoo! All right.
Let me see what’s next. We need a shoe.
>>NIKHIL THORAT: Shoe. >>DANIEL SMILKOV: Thanks,
buddy. Yay! Let’s see what the next item is. Banana. We have
30 seconds to find a banana.>>NIKHIL THORAT: Anyone have a
banana? Anyone? Oh, awesome. We got a banana over here.
>>DANIEL SMILKOV: Thanks, man.
>>NIKHIL THORAT: Here we go.
>>DANIEL SMILKOV: All right. Our high score is going
up. Let’s see what our next item is.
Beer. (laughter).
>>NIKHIL THORAT: Beer? Daniel it’s 12:30 in the middle
of I/O, let’s get back to the talk man.
>>DANIEL SMILKOV: All right. (Laughing). >>NIKHIL THORAT: All
right. So let’s talk a little bit about how we actually built
that game. Let’s switch back to the slides here. Okay. So what we did was we trained a
model in Python to predict from images
400 different classes that would be good for an emoji scavenger
hunt game. These are things like a banana, watch, and shoe.
The way we did this was we took a pretrained model called
MobileNet and if you don’t know what MobileNet is, it’s a state
of the art computer vision model that’s designed for Edge
devices, designed for mobile phones.
So what we did is took that model and we reused the features
that are learned there and we did a transfer learning task to
our 400 class classifier. So then once we do that we have
an object detector, so this object detector lives entirely
in the Python world, so the next step of this process is to
actually take that and convert it into a format that we’ll be
able to ingest on the web, and then we’ll skin the game and add
sound effects and that kind of thing.
So let’s talk a little bit about the details of actually
going through that process. In Python, when we’re checkpointing and training
our model, we have to save it to disk. There are a couple of ways to
do this; the common way with TensorFlow is to use what’s
called a saved model. Details are not important for this talk
here, but the idea here is that there are files on disk that you
need to write. Daniel also mentioned that we support importing from Keras,
that’s a high-level library that lives on top of TensorFlow and
that gives you a sort of higher level API to use these things.
Details are unimportant but there are also files on disk
that it uses to checkpoint. All right, so we have a set of
files and now next up is to actually convert them to a
format that we can ingest in a web page. So we have released a tool on
PIP called tensorflowjs, and inside of that tool we have some
conversion scripts. All you do is run the script, you point it
to those saved files that you had on disk, and you point it to
an output directory and you will get a set of static build
artifacts that we’ll be able to use on the web. The same flow holds for Keras models: you point it at the input HDF5
file and out pops a set of static
build artifacts. So you take the static build artifacts and
host them on your website, the same way you host images or anything of
that sort. Once you do that we provide APIs
in TensorFlow.js to load the static build artifacts, so it
looks something like this. For a TensorFlow SavedModel we
load the model up and get an object back. That model object
can actually make predictions with TensorFlow.js tensors right
in the browser. The same flow holds for Keras
models: we point to the static build artifacts and we get a
model that we can make predictions on.
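Roughly, the loading side looks like this; the URL and input shape are placeholders, and the function names shown are the current ones (tf.loadLayersModel / tf.loadGraphModel; at the time of this talk they were called tf.loadModel and tf.loadFrozenModel).

    import * as tf from '@tensorflow/tfjs';

    async function run() {
      // Converted Keras model hosted as static files (placeholder URL).
      const model = await tf.loadLayersModel('https://example.com/model/model.json');
      // Make a prediction with a TensorFlow.js tensor right in the browser.
      const input = tf.zeros([1, 224, 224, 3]); // placeholder image-shaped input
      const prediction = model.predict(input) as tf.Tensor;
      prediction.print();
    }

    run();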
Okay. So under the covers there is actually a lot going on. When we convert
these files to a format that we can ingest in the web, we
actually are pruning nodes off of that graph that aren’t needed to make
that prediction, and this makes the network transfer much
smaller and our predictions much faster. We’re also taking those weights
and sharding and packing them into 4 megabyte chunks and that
means the next time the browser loads the page
it will be cached so it’s super snappy.
We also support about 90 of the most commonly used
TensorFlow ops today and are working hard to continually
support more. On the Keras side we support 32 of the most commonly
used Keras layers, and in addition to importing them you can
also evaluate and retrain those models — computing accuracy
once you get them in — and of course you can also make
predictions as well. All right. So I want to show
you a demo before I bore you anymore. This demo is built by our
friends at Creative Lab, as a collaboration between them and a few researchers on Google Brain,
so I’m going to go back over here to this laptop. Okay, so the idea of this model
is it takes a 2D image of a human being and it estimates a
set of keypoints that relate to their skeleton, so things like
your wrist point, the centers of your eyes, your shoulders, and that
kind of thing. So I’m just going to turn a demo on here.
When I do that the webcam will turn on and start predicting
some key points for me. I’m going to step back here so you
can actually see the full thing, and as you move around you’ll
see the skeleton change and make some predictions
about me. All right. So there is
obviously a lot you can do with this. We were really excited to
show you a fun little demo. It’s very thin, and what’s going
to happen is when I click this slider we’re going to move to a
separate mode where it’s going to look for another image on the
Internet that has a person with the same pose as me. Okay? Let’s try that. Is it going to
work here? Of course it’s not working now.
Okay. We have a physical installation of this which you
can go check out at the experiments tent on H and it’s
really fun, it’s a full screen version of that where you can
see another version of you. We have released this model on
NPM so you can use this and you need no Machine Learning
experience to do it. The API lets you point to an image and
out pops an array of key points, it’s that easy, so we’re really
excited to see what you do with that. Okay. So there is a lot
that you can do with just porting the models to the
browser for inference. Since the beginning of deeplearn.js
and TensorFlow.js we’ve made it a high priority to actually
train directly in the browser and this opens up the door for
education, interactivity as well as retraining with data
that never leaves the client. So I’m going to actually
show you another demo of that back on the laptop over here. Okay. Daniel, do you want to
come help me? Here we go. Cool.
So the game is in three phases. In phase one we’re going to
actually collect frames from the webcam and what we’re going to
do is use those frames to actually play a game of Packman.
So Daniel, why don’t you start collecting frames and what he’s
going to do is going to collect frames to up, down, left, and
right, and those are going to be associated with the poses with
the four controls for the Packman game itself.
So as he’s collecting those we’re saving some of the images
locally and we’re not actually training them all yet. So once
he’s done actually collecting those frames, we’re going to
train the model and again, this is going to be trained entirely
in the browser with no request to a server anywhere.
Okay. So when we actually train that model, what’s going
to happen is we’re actually going to take a pre-trained
MobileNet model that’s actually in the page right now and we’re
going to do a little retraining phase with data that’s he’s just
collected. So why don’t you press that train model.
Awesome. Our loss value is going down and it looks like
we’ve learned something. Okay. So the phase three of this game
is to actually play, and so when he presses that button, what’s
going to happen is we’re going to take frames from that webcam
and we’re going to make predictions given that model
that we just trained. Why don’t you press that play
button and we’ll see how it goes. If you look at the bottom
right of the screen we see the predictions happening. It’s
highlighting the control that it thinks it is and you’ll see him
actually playing the Packman game now.
So obviously this is just a game, but we’re really excited
about opportunities for accessibility, you can imagine
the chrome extension that lets you train a model that lets you
scroll the page and click. Now all of this code is online
and available for you to go in and build your own applications
with and we’re really excited to see what you do with it. All
right, man, back to the talk. >>DANIEL SMILKOV: All
right. (Applause). >>NIKHIL THORAT: Okay, so
let’s chat a little bit about performance. What we’re looking
at here is a benchmark of MobileNet 1.0 running with
TensorFlow Python, this is classic TensorFlow, not running
with TensorFlow.js. We’re thinking about this in the
context of a batch size of one and the reason that we want to
think like that is because we’re thinking of an interactive
application like Packman where you can only read one sensor
frame at a time so you can’t really batch that data.
On the first row we’re looking at TensorFlow running with CUDA on
a 1080 GTX, a beefy machine, at 3 milliseconds per frame — and I want to mention
that the smaller the bar, the faster it is.
The second row is TensorFlow CPU running with AVX2 instructions
on one of these MacBook Pros here; we’re
getting about 60 milliseconds a frame. Where does TensorFlow.js fit in?
Well, it depends. Running on the 1080 GTX, that
beefy machine, we’re getting about 11 milliseconds per frame.
With TensorFlow.js running on integrated
graphics we’re getting about 100 milliseconds per frame. I want
to point out that 100 milliseconds is not so bad, that
Packman game was running with this model and you can really
build something interactive with that.
The web is only going to get
faster and faster. There is a new set of standards,
like WebGL compute shaders and WebGPU,
that give you much closer access to the GPU, but the web
has its limitations: you live in a sandboxed environment and you
can only really get access to the GPU through these APIs. So
how do we scale beyond those limitations?
With that I’m going to hand it off to Nick who is going to
talk about how we’ll scale.>>NICK KREEGER: Thanks,
Nikhil. Today we’re launching TensorFlow support for Node.js.
(Applause). We’re really excited to
bring an easy-to-use, high-performance machine learning library to JavaScript
developers. The Open Source community around Node.js and NPM
is really awesome, there is incredible movement in the
space, a ton of libraries and packages for
developers to use today and now we’re bringing ML to this front.
The engine that runs Node.js,
V8, is super fast; it’s had
seen the interpreter be up to 10 times as fast as
Python. Lots of room for performance improvements.
Also, using TensorFlow gives us access to really high end
Machine Learning hardware like GPU devices and TPUs in the
Cloud. Look for support for that soon.
Let’s step back and look at the architecture we highlighted
earlier. We have a Layers API and, at a
little bit lower level, a Core API that
has our ops. This whole runtime is powered by WebGL in the browser, but today
through NPM we’re shipping a package that gives you
TensorFlow, with access to the GPU and the CPU. All of this is through our NPM
package. To show you how easy it is to
use our Node bindings I want to show
you a code snippet. This application function right here is a very common server side
request response handler for an endpoint. Those who have worked
with the Express framework know exactly what’s going on here. Our endpoint listens on
/model and takes input from the request, which we pass into a
TensorFlow.js model, and the output is pushed out into the
response. Now, to turn on high-performance
TensorFlow we only need two lines of code: an import
that loads our binding, and then an existing API call to
set the backend to TensorFlow, and now this model is running
with the performance of TensorFlow.
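A sketch of that handler; the endpoint, model path, and input shape are hypothetical, and in current releases simply importing @tensorflow/tfjs-node selects the native TensorFlow backend (the explicit setBackend call from the talk is no longer needed).

    import * as tf from '@tensorflow/tfjs-node'; // binds to the native TensorFlow CPU build
    import express from 'express';

    const app = express();
    app.use(express.json());

    async function main() {
      // Hypothetical converted model shipped with the app.
      const model = await tf.loadLayersModel('file://./model/model.json');

      app.post('/model', (req, res) => {
        // req.body.input is assumed to be a flat numeric array of the right size.
        const input = tf.tensor(req.body.input, [1, req.body.input.length]);
        const output = model.predict(input) as tf.Tensor;
        res.json({ prediction: Array.from(output.dataSync()) });
      });

      app.listen(3000);
    }

    main();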
What works today out of the box? We can take
pre-existing Python models and run those natively in Node.js.
The models we’ve kind of showed off today, all of those were run in
our Node runtime. There is no need to bring a Python stack to
your Node infrastructure, just run it in JavaScript. Our NPM package today ships an off-the-shelf CPU build, no
driver, just install the package and you’re up and running.
Our whole API that we ship in TensorFlow.js will work with
our Node runtime. Every API that we’ve showcased will work
today out of the box. Now, we’ve built a little demo using Major League Baseball
data in Node.js to show what you can do with Machine Learning,
Node.js and JavaScript. We’ve used Major League
Baseball advanced media pitch effects dataset to do some
Machine Learning. The pitch effects dataset is a
large library of sensor data about pitches that baseball
players have thrown in actual baseball games.
For those that aren’t super familiar with baseball, we’ve
given a little context here. A pitcher will throw a different
type of pitch to fool the player who is trying to hit this ball.
There are pitches that have higher velocities and pitches that
are a little slower but have more movement, and in this
example we’ve highlighted the fastball, the
changeup, and the curveball — those are all types of pitches.
Don’t get too hung up on the details of baseball, but
what we’re really trying to solve here is a very classic
Machine Learning problem of taking sensor information and
drawing a classification to that.
So for that I’m going to actually showcase this demo. All right. So on one side of my screen I
have a terminal that I’m going to start my Node application
with, and on the left I have a web browser. We built this with
a very simple UI that listens to what our server is doing over
sockets, and those who have used the socket I/O library in Node
know what this interaction is doing. I’m just going to type Node,
start my server, and now all through Node my model is up and
running and training. Every time we take a pass through our
dataset we report that over the socket to our client, and you can
see the blue bar is moving a little bit closer to 100%.
That’s our model understanding how to tell the difference
between a curveball or fastball, and as you can kind
of see, every step it moves a little bit differently. And our
model is having a little bit of trouble at the moment with the fastball sinker, but it has only looked
at the data for a few passes. The longer the server runs, the better
it gets at training. So all of this training data
we’ve shown is historical baseball
data from 2015 to 2017. I’m going to hit this test live
button and this is going to use the
Node framework to go out to Major
League baseball, pull in some newer live data and run the
evaluation. Once this data comes in, we’re
going to see the orange bars, and the orange bars show us how
much better we are at predicting data that our model has never
seen before. These are live pitches, and we’re really good
at identifying the curveball and not so great at that
fastball sinker still. Let’s jump back and continue to
look at our next slide. This is the architecture of
what we were doing in that demo. We’ve built a very simple Node
server that hosts our JavaScript models, and when we
hit that live button, it reaches out to Major League Baseball,
pulls in the live data and runs inference on the new pitches.
We continue to report that to any connected clients through
the browser. How does the performance stack up?
In that training set we were looking at
7,000 pitches, and we were training on those 7,000 pitches
every couple of seconds, so that’s an interesting benchmark, but let’s
compare it against the MobileNet
benchmark we showcased earlier. So these are the numbers for
Python TensorFlow, the GPU and CPU inference time. We’re just
getting started, we just launched an NPM package, we have
a lot of ways to go, but we have some promising early numbers to
showcase. TensorFlow with Node.js is
exactly as fast as Python TensorFlow, and so with that I’m
going to hand it off to Nikhil to wrap
up.>>NIKHIL THORAT: Thanks, Nick.
Exciting stuff. Okay, so let’s recap some of the
APIs and libraries and tools that we have supported today
with TensorFlow.js. We have a low-level API called the Core
API which is a set of accelerated linear algebra kernels and a
layer for automatic differentiation, we saw an
example of with that with the polynomial regression demo. We also have a high-level Layers
API that encodes Machine Learning best practices into an API that’s
much easier to use, and we saw an example of that with the
addition RNN demo.
We also showed you a couple of ways that you can take
pretrained models from Python, via SavedModel or
via Keras models, and import them into the browser for inference or
doing retraining or computing accuracy.
And of course, we also showed you the new Node.js
support for TensorFlow. js today, and we’re really
excited about that. Okay. So this project,
TensorFlow.js was not just the three of us on the stage here.
It was a cross-team collaboration between many amazing people at
Google, and we also have some amazing Open Source contributors
that I want to send a shout out to. This project literally
would not have been possible without them, so thank you.
All of the things that we talked about here today, the demos, all
of the code is Open Sourced on our
website, js.tensorflow.org, and all of the code is also Open Sourced on
GitHub at github.com/tensorflow/tfjs. We also have a community mailing list,
a place where people can go and ask questions, post their demos
and that kind of thing. We have some office hours today
at 3:30 and some office hours tomorrow at 9:30, and I invite
you to come and talk to us in person. We also have some
tents: the Experiments Tent at H is where the full-screen
version of the Move Mirror demo will be,
and we’ll also be at the AI Tent as well.
So we are really excited to build the next chapter of
Machine Learning and JavaScript with you. Thank you.
(Applause). >>Thank you for joining
this session. Grand ambassadors will assist
with directing you through the designated exits.
