Livestream Day 2: Stage 6 (Google I/O ’18)

August 16, 2019


… >>Welcome to Google I/O 2018. By now, you've checked into registration to receive your badge. It must be visibly worn at all times. And don't forget, you'll need it to enter the after-hours activities at the end of days one and two. Throughout the event, visit google.com/io or download the I/O mobile app. For questions or assistance, stop by the information desk located across from the Codelabs building or chat with any staff member wearing a yellow
badge. If you haven’t already, reserve
a seat for your favorite session on the I/O website or mobile
app. If you miss a session, don’t worry, recordings will be
available online shortly after they end.
Be sure to visit the Codelabs building for hands-on experience at ready-to-code kiosks. Google staff will be on hand to provide helpful advice and direction if you need support. Make time to visit office hours
to meet one-on-one with Googlers and get
feedback on your projects. If you’re looking to network
with Googlers and fellow developers, join one of the
several meetups hosted in the lounge. Finally, we’d like to
invite you to visit the Sandbox domes where
you can play with our latest demos, physical installations
and more. After sessions are done for the day, stick around
for food, drinks, music and fun. And be prepared for surprises
along the way. We’d like to take this
opportunity to remind you that we are
dedicated to providing an inclusive event experience for
everyone and that by attending Google I/O, you agree to our
code of conduct posted throughout the venue. Your opinion is valuable to us. After the event, look out for a feedback form to share your experience. Thanks for attending and have a wonderful time exploring Google I/O.
>>ALESHA UNPINGCO: Hi, everyone, I'm Alesha Unpingco. We're here to talk to you about designing AR applications. Google has a long history in designing for AR. We've been
doing mobile AR for the past four years and working on other
augmented reality projects before that. What’s changed most recently is
that mobile AR is really starting to take off.
And Google's ARCore makes it possible for anybody to create quality AR content. So I'll share some fun facts about ARCore. We released 1.0 at the end of February, and that
made AR content available to more than 100 million devices. We’re already seeing rapid
growth with more than 300 apps available in
the Play Store. You’re here because you want to
learn how to design for AR. And what we found is once you
understand your users and the type of experience you’re trying to
create, design principles for getting started fall into five
different categories, which we call the Pillars of AR
Design. First, you want to understand
your user’s environment. Where will the users be experiencing
your app? Think about the surfaces that are available and how your app can
adapt to different environmental constraints. Then, you want to consider the
user’s movement and how much space the user will need to
experience your app. And when it comes to
initialization and onboarding, you want to make the onboarding
process as clear as possible so that users understand exactly
what to do in this entirely new medium.
When it comes to object interactions, design natural
object interactions that convey the feedback so that
users understand how these digital objects fit in the
context of your real, physical space.
And when you're thinking about user interfaces, balance on-screen UI with volumetric interface design so you're able to create an experience
that’s meaningful and usable for your users.
So we have some examples that showcase the different
guidelines within each of these pillars. And the thing that we
want to point out is that this framework can help anybody get
started with AR Content Creation.
So throughout our talk, we’re going to show you some
different demos that you'll be able to play with very soon through an app we're launching called ARCore Elements. And many of the core interaction patterns in the talk are available in Sceneform and will be available later this summer.
>>ALEX FAABORG: Let's start
out talking about the user environment. ARCore is the underlying technology, so let's talk about what ARCore can do. It does lots of things. I think the first thing everyone's familiar with is surface detection. It can understand surfaces: tables, floors, those types of things. It can also do walls, vertical surfaces.
And what’s better than horizontal and vertical
surfaces? It can also do angled surfaces with oriented points,
being able to place an object on any angle.
ARCore does light estimation. This is really
important for having objects sort of look realistic in a
scene. And there’s also some other fun things you can do with that that we’ll
get into. And announced at I/O yesterday, Cloud Anchors, both on Android
and iOS. Two people viewing the same
thing. And also announced yesterday, Augmented Images: the ability to recognize an image, and not just recognize it, but use that image
to get 3D pose data off of it so you know where it is in space. That’s the current set of ARCore
capabilities. And this is growing over time. And the more
your app integrates with the user’s environment, really the
more magical your app is going to feel. Let’s look through a
few examples here. First, for a surface plane detection, a lot
of AR apps currently only use one surface. Say you’re playing
a game, a game board will appear on a surface.
What we found is that there's really no reason that you have to use only one surface; you could use all of the detected surfaces. At moments when you have your game interacting with multiple surfaces, these can be breakout moments in your game.
It feels magical. Even something, like, say you’re
playing a physics-based game, you destroy your opponent’s
castle and the bricks fall on to the floor. Or that moment when
you see the items on the floor. It can really be quite stunning.
All right. Light estimation. This is critical for making
objects look realistic in the scene. Here's an example that we're working on to sort of test out some different techniques.
Here we have three fake plants and one real plant. I think it’s
actually a real fake plant. And we have unlit, which is the
most basic lighting you can do, really not very realistic. And
dynamic and a combination of dynamic and baked. And what’s great with dynamic,
you get realtime shadows. And then, when you combine Dynamic
with Baked, you see things like self-shadowing, where the leaves
of the plant are a little bit darker
and it’s picking that up. You can see how this looks with
movement. And lighting is — there’s definitely a lot of
innovation that’s going to be occurring in this space as we
try to get more and more realistic lighting. You can see where we are right
now. And especially, like, as the
scene gets darker, you start to see the unlit object doesn’t
perform as well. So it’s really important that you’re using the
realtime lighting APIs that are in ARCore.
The other thing you can do is you can actually use lighting
as a trigger to change something. So here, in this
example, when you turn the light switch off in the room, the city
actually glows. And it responds to that change.
And these types of moments can really feel great and
magical for users where imagine you’re playing a city simulation
game. And it's having these kinds of, you know, significant, meaningful changes based off of the environmental light.
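As a rough illustration of that trigger idea, here is a minimal, platform-agnostic sketch in JavaScript. The frame.lightEstimate value, the threshold, and the setGlow helper are all assumed names for this example; ARCore itself exposes light estimation through its native SDKs rather than this exact API.

```javascript
// Hypothetical per-frame hook: react to the light estimate from the AR session.
// `frame.lightEstimate` and `city.setGlow` are illustrative names, not real ARCore APIs.
const DARK_THRESHOLD = 0.3; // normalized ambient intensity below which we treat the room as "dark"

function onFrame(frame, city) {
  const ambient = frame.lightEstimate; // e.g. 0.0 (pitch black) to 1.0 (bright daylight)
  const isDark = ambient < DARK_THRESHOLD;

  // Toggle emissive "night mode" materials only when the state actually changes,
  // so materials are not churned every frame.
  if (isDark !== city.nightMode) {
    city.nightMode = isDark;
    city.setGlow(isDark); // light up windows and streetlights when the room goes dark
  }
}
```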
Oriented points. This is, actually, a very new feature. So
we don’t have a whole lot of examples here. But here’s one of
the more basic ones. I filmed this when I was out skiing. And
here, I’m just attaching androids to the side of a tree.
And you can see, you know, as I placed them, they stick to
exactly that point on the tree at the angle of where the
branches were. Cloud Anchors, announced yesterday: two people can see the exact same game board in exactly the same place and play that game together. And this is
tremendously fun. You can actually try it out in
the sandbox if you want to stop by later today. This works with Android and iOS.
All right. Augmented Images. There’s a lot of different ways
you can use this. One demo we have in the Sandbox
you can check out is actually an art
exhibit built using Augmented Images. We’re really excited about what
you can do with Augmented Images.
There are a lot of possibilities, from artwork to something like a toy coming to life, or the surface of a product box, where you can see 3D models of what you're about to play with. Now that we've got the sort of
basics on the capabilities of ARCore,
let's talk about how you design an app for AR. One of the first things you're thinking is, okay, where do I actually start? You have a blank page and you're ready to start brainstorming new ideas for AR.
And one of the first things to focus on, AR exists outside
of the phone. Your design work should exist outside of the
phone, as well. So something I found is that a lot of people who have done tremendous amounts of mobile design tend to be very attached to the phone frame and flows of screens, and they've been doing that for so long that one of the first things to do when you're starting to think about AR is to actually, you know, put away all of the stencils and not think about the phone at all.
Instead, what you want to do is sketch the actual environment
that the user is in. You should be sketching living rooms and
tables and outdoor spaces. And then, as you sketch the user’s
environment, you start to sketch the various AR objects they’re
going to be interacting with in that environment.
You can sort of think of AR as having the same challenges as responsive design in terms of different window sizes. But it's more complicated. You have responsive design for 3D spaces that are the user's actual living room.
So you want to sketch the user for scale to get a sense of
how you’re going to start crafting this experience. The
user could be very large relative to the AR objects or
very small. And then, you want to start thinking about how that
user’s going to move around in that environment. >>ALESHA UNPINGCO: And that
brings us to user movement. And now that we understand how to design for the environment,
let’s understand how to design for user movement. It’s okay to
design beyond the bounds of the screen.
And what we found is that, in many ways, this can make the experience feel more delightful and more immersive. When you have an object that begins on screen and extends beyond the boundaries, it can make the user feel like the object is really there. And beyond that, it can also motivate users to organically begin moving the phone around the environment to appreciate the full scale of the digital objects in the physical space.
And that brings us to our next observation. Because users are more familiar
with 2D mobile applications that don’t typically require user
movement as a form of interaction, it can be very
challenging to help convey to users that they’re able to move
around. So many users don't, because it doesn't feel natural based on how they've used 2D apps in the past. What we realized is that
characters, animations or objects that convey visual
interest on screen and then move off screen can be a natural way
to motivate users to move. And so, here we have a bird and
it appears in the middle of the screen. And when it flies off
screen, it’s replaced with a marker that moves around and
slides around the edge to help users understand the bird’s
location in relation to the user.
Another major thing you want to think about is whenever you
have an experience that requires a user to move, you also want to
think about how much space a user needs.
So we found that experiences fall into three different sizes.
There’s table scale, there’s room scale and there’s also
world scale. And when it comes to table
scale, what we found is that your experience is able to scale
to the smallest of surfaces so that many users are
able to enjoy your experience. And with room scale, it expands
the impact of AR so that content will start to feel life-sized
and you're able to do a lot more with the space that's available. And world scale has no limits. This is an area we're particularly excited about because of what it means for procedurally generated content at world scale. So no matter what size your experience ends up being, just remember to set the right expectation for users so they have an understanding of how much space they'll need. Because it can be
a very frustrating part of the experience if the user’s playing
a game. And in the middle of the game, they realize they don’t
have enough space to enjoy it. And when it comes to how much
movement your experience requires, there’s no one-size-fits-all
solution, it depends on the experience that you’re trying to
create. For example, if you have a game
that requires user movement as a core part of interaction, that
can be a very delightful experience. You can use proximity or distance to trigger different actions, so that as a user gets closer to the frog, it can leap behind the mushroom, or the mushroom can disappear.
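As an illustration, here is a minimal sketch of that proximity-trigger idea in JavaScript. The camera and object poses, the distance threshold, and the playLeapAnimation helper are hypothetical names chosen for this example, not part of any specific AR SDK.

```javascript
// Trigger an action when the user's device gets close to a virtual object.
const TRIGGER_DISTANCE_METERS = 0.5; // assumed threshold for "close"

function distance(a, b) {
  // Euclidean distance between two world-space positions {x, y, z}.
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function onFrame(cameraPose, frog) {
  const d = distance(cameraPose.position, frog.position);

  // Fire once when the user crosses the threshold, not on every frame.
  if (d < TRIGGER_DISTANCE_METERS && !frog.hasLeapt) {
    frog.hasLeapt = true;
    frog.playLeapAnimation(); // e.g. leap behind the mushroom
  }
}
```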
However, if you have a utility app where the core
purpose of the app is to help users understand complex data
and information, then requiring users to move might be a really
bad experience because what it means is that users who have
different movement or environment limitations won’t be
able to get the complete app experience. So allowing users to manipulate
the object, to rotate it, move it around in a space that’s more
appropriate will ensure that all users have easy
access to the data that they seek.
>>ALEX FAABORG: Because AR’s relatively new, the process for
users to flow from 2D to 3D, at times,
can be awkward. And we’re starting to create some
standards around that. So we’ll talk about initializing into AR.
One of the first things you can do, you can leverage standard
view in AR material icon. So users, when they see that,
they know when they hit the icon, they’re going to be going
into AR. You can use this in all of the normal places that icons
appear, like, you know, floating action or on top
of cards as the indicator to view this object in 3D in your
environment. One over the next things you’ll
see playing with AR apps is
something you might not initially understand. How understanding depth requires
some movement. You’ll see these types of
animationsaction or on top of cards as the indicator to view
this object in 3D in your environment. One over the next
things you’ll see playing with AR apps is something you might
not initially understand. How understanding depth requires
some movement. You’ll see these types of
animations where I was trying to get the user to move their phone
around. Why is that actually happening?
Basically, we perceive depth because we have two eyes, but we get a lot
by moving our head around and being in the scene. And for the case of ARCore, most current phones on the market only have a single camera on the back, so the device only has one eye. If it hasn't moved yet, it
doesn’t necessarily know what’s going on. This is the first
thing the phone sees. It’s going to say, all right, that’s
interesting. But I don’t totally have a sense of, you know, where
these objects are yet. And once you move a little bit,
then it becomes clear. As soon as you bring in a little bit of
movement, then you have enough data from different angles on the scene, and it starts to build up a model of what it's seeing. That's why we have the animations: to start to get the
movement, to have enough information to recognize the
scene. Next thing you want to think
about is deciding if users are able to
easily move the objects after they’ve been placed or if these
are more permanent objects. And, again, there’s no right
answer here. So, you know, a more persistent object might be, like, a game board or something that itself takes input. We recommend you use standard icons to set
expectations for users so they know as they’re placing that
object if that object is going to move around later on as they
swipe on it. Examples of that. Here, as
you’re swiping on the city, you’re going to be interacting
with the game itself. So we recommend using an Anchor
icon for the more specific object placements. And you want
to enable the user to move the game board later, perhaps,
through a menu screen. But to set expectations ahead of
time, the city, actually, is going to be stuck to the ground
there for a while as you interact with the game. Versus,
you know, something like say you’re shopping for furniture
and placing a chair in the scene. Here, the chair itself is interactive, so you can map swipe gestures onto the chair
and move it around. Using the plus icon to set expectations
ahead of time that you’re not really committing to exactly
where you’re placing this object.
So now that we're talking about object interactions, there are quite a few details there.
>>ALESHA UNPINGCO: Now let's
start thinking about how users can interact with objects in
their space. One of the things we challenge
you to think about as designers and developers in the community
is thinking about how to problem solve for user behavior even
when it’s unintentional. One of the things we recommend is
giving users feedback on object collisions. And this solves a
huge problem that we see in mobile AR where a user will be
moving the device around and once the device collides with an
object in AR, that object might disappear and the user has no
feedback in terms of how to fix it. So what we recommend is providing feedback in the form of camera filters or
special effects that helps users understand when object collision
is not an intended interaction. And this tends to work really
well. The other thing you want to think about is how to give
users the right type of feedback on object placement.
And it's really important, in this case, to think of each stage of the user journey, even as it relates to surface
feedback. So surface feedback in AR is
very important because it helps users understand how ARCore
understands the environment. It gives a sense of the surfaces
available, the range of the surfaces that are available.
So we recommend including feedback on the surfaces when
the user’s placing objects in the scene.
The other thing that we recommend is maintaining the
height of the tallest surface as a user drags an
object from one surface to another. And once an object is suspended
in the air, make sure you're always communicating visual feedback on the drop point, that way it's
very clear to the user at all times where the object is going
to land. And once an object is placed
into the scene, we also recommend providing feedback in
the form of visual feedback on the surface or even on the
object itself just to communicate the object’s entry
into the physical environment. So now that we know how to
play with objects in your scene, let’s think about how an object
might get there. We recommend using gallery
interfaces in order to communicate to users how they can take objects that live on screen and drag them out into
their real world. So here, you see we have a gallery strip at
the bottom bar. And as a user selects an object, they’re able
to drag it on to their space. And not only that, we’re
able to support both selection states and also very familiar gestures that
allow users to manipulate the objects. So you can use pinch to scale,
twist to rotate and even drag to move.
And you’ve seen many examples in our talk of how dragging objects is
a very common and expected behavior. But another
alternative for object selection and object movement is
through a reticle, which allows objects to be manipulated in the scene without covering too much of the view. We have an example where reticle selection is being used to select a rock, and that's triggered in the bottom right. It allows users to see the many surfaces that are available. And
as you can imagine, if the user is selecting an object with
their finger and dragging it across the screen, you don’t
have as much screen real estate to see all of the surfaces that
the user might want to place the object on. So Reticle Selection is very
impactful here. The other thing you get is Raycasts. So
Raycasts are very effective in helping the user get a sense of
the virtual weights applied to each of these objects.
Here, we have another example. Where the user is able to pick
up a feather. And once the feather is picked up, you’ll notice that the Raycast
has very little movement and little bend on it. And for the
most part, it remains straight. However, when the user picks
up the rock, you’re able to see a more dramatic bend applied to
the Raycast that signifies the larger amount of
mass, the heavier weight in relation to the feather.
>>ALEX FAABORG: Let's move on
to the final pillar. One of the first things you want to
consider here is that the phone is the user's viewport. They're actually using the phone to look out into the scene and see the application. And because of that, you don't actually want to place a lot of 2D UI on the screen that's going to obscure the view of the application. I'll show you a quick example.
It’s obviously a lot nicer to have a limited set of controls.
As soon as you start to clutter the screen, it gets in the way
of the user's ability to enjoy the AR. And the counterintuitive thing is that users are so focused on the app out in the world that often designers will place a control at the screen level because they want to draw attention to that control, but users are more focused out in the scene, so they miss controls on the screen. So really, you want to be very mindful of where you're making decisions on whether you're going to put a control up on the screen versus out in the scene itself, not just for obscuring the view but for discoverability, and whether users will find the control at all.
It’s not to say that you should never put a control up on
to the screen. But you want to be considering a few different
metrics on it. Our recommendation is that you only really leverage on-screen UI
for things, like, controls that have a high frequency of use or
controls that require very fast access. So, like, a camera shutter
button is a perfect example of something that hits both
criteria where, in a camera, you’re taking lots of pictures.
And also, you want to take pictures very quickly. Or imagine if you're, like, playing a game and there's some ability to fire or something, that would be a good candidate for an on-screen control. You need to get to that button very quickly. So we talked about using the View in AR icon to transition from 2D to AR. But you also want to be careful of the opposite, when
they’re transitioning back to a 2D experience. And one thing we found is if the
user’s not initiating that action to go back into a 2D
experience, it can be pretty obnoxious because they’re so
focused out on the scene. The user's using the application and suddenly a 2D UI shows up and blocks the viewport. That can be annoying, even if the user is just exiting or customizing an item in the scene. You want that flow
back to 2D screen level UI covering most of the screen to
be something that the user’s actively doing and not something
that happens by surprise. So common thing with mobile
application design, you want to maintain touch targets about the
size of the user’s finger. For 3D, this is of course, a bit
harder because the object could be any distance away from the
user. Quick example of some things you can do here. Here we
have two tennis balls, and when you tap on the tennis ball,
confetti fires out of it because in AR, you can do whatever you
want. And we’re showing the touch target size with the
dotted line. One of the tennis balls is maintaining a reasonable touch target size as it gets farther away, whereas the other is not; it's mapping to the virtual size of the object. And of course, it's easier to interact with the one with the large target size.
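Here is a rough sketch of that idea in JavaScript: clamp the projected, screen-space touch radius to a minimum finger-sized value. The projection math is simplified and the minimum radius is an assumed value, so treat it as an illustration rather than a drop-in implementation.

```javascript
// Keep a tap target at least finger-sized, no matter how far away the object is.
const MIN_TOUCH_RADIUS_PX = 48; // assumed minimum, roughly a fingertip

function touchRadiusPx(objectRadiusMeters, distanceMeters, screenHeightPx, verticalFovRadians) {
  // Approximate how many pixels the object's radius covers on screen
  // using a simple pinhole-camera projection.
  const metersPerPixel =
    (2 * distanceMeters * Math.tan(verticalFovRadians / 2)) / screenHeightPx;
  const projectedRadiusPx = objectRadiusMeters / metersPerPixel;

  // Never let the tappable area shrink below the finger-sized minimum.
  return Math.max(projectedRadiusPx, MIN_TOUCH_RADIUS_PX);
}

// Example: a 3 cm ball, 2 m away, on a 2280 px tall screen with a 60 degree vertical FOV.
console.log(touchRadiusPx(0.03, 2.0, 2280, Math.PI / 3)); // clamps to 48
```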
We've also found that when manipulating objects, if you're not doing tricks to maintain
target size, you get problems where you swipe the object very far away and it's hard to bring the
object back. So you have to walk over to the object to get it,
which is a little bit frustrating. Maybe, you could
say it’s very immersive, but it’s nicer to be able to bring
the objects back, as well. You want to be thinking
about what controls are on the screen versus out in the scene. And a mantra that the team has had is to say scene over screen. Obviously, we talked about
boundary cases of when you’d want to put something on the
screen level. I found, it’s many people’s initial reaction to
design everything for the screen level because that’s the
type of design work we’ve been doing for 2D applications. You want to start thinking about
volumetric UI. To give a quick example of this, this is one of the demos. It's a
solar system simulator. Loads of fun. Also, it’s missing a planet
right now. We did fix that for the public release, in case you
notice that in the video. But imagine — now, you need to
design the UI for this. A lot of people initially think, oh, a
gear menu up in the corner. That will throw something up on the
screen. The problem there is, you're not going to be able to be as immersed while you're interacting with it. An alternative way is
to leverage these objects, you know, on the scene itself. As
you’re tapping on planets, you’ll get feedback on what
planet that is, which is nice for educational use cases. And
then, in this particular demo, when you tap on the sun, that’s
how you start to control the entire solar system. Here, the user’s tapping on
the sun. And that brings up a panel. This is actually an
Android view. So in Sceneform, you can map standard Android
views into AR. And here, you have controls like, you know,
changing the orbit speed or the rotational speed of the planets
themselves. And it’s really nice to be able to interact with
these objects in the scene and not to have that sort of
sudden loss of being able to see things and be taken out of the
experience.
And that kind of brings me to the final point, which is this idea of AR presence. So we'd
actually seen this coming up in user research studies where
people would be looking through a phone and then, they would
kind of step outside of the — look outside of the phone to see
something was placed correctly. And then, of course, we’re
recording it and they laugh and they’re like, yeah, right, you
can only see it through the phone.
And you know, we always laughed when we saw this happen. And then, I was testing out an
app. It was sort of, you know, plastic interlocking bricks and
had instructions of what I was building and I was playing it
for a long time. And one moment, I looked over
for the instruction book, and it wasn’t there.
I had the reaction that you normally have when, like, an object disappears in real life. And then immediately, I'm like,
that’s silly, yeah, it’s AR. But I was so immersed in the
experience in the application, I’d been playing it for so long, I was no longer
mentally tracking what was real and what was virtual. And I was
sort of buying that the experience was happening. And
you're going to start to have that experience, as well,
when you’re interacting with the applications. That’s the moment
that the application is performing really, really well.
It means the user is completely immersed and
engrossed in the application. If you have these moments where
people are, you know, looking at a vase through their phone and
look down and it disappears and they react,
that’s good. The app is performing great.
>>ALESHA UNPINGCO: We’ve gone through the five pillars of
AR design, which include understanding the user’s
environment, planning for user’s movement, onboarding users by
initializing smoothly, designing natural object interactions and
balancing on screen and volumetric interface
design. This framework, we believe, will help anybody get
started with creating apps that everybody can enjoy.
We have a quick video for you. Some amazing content that many
designers and developers like yourselves from the community
have created. We hope you enjoy it. With that, we're happy to share everything we did today and happy to see what you create. Please fill out our survey and
check out resources online. >>ALEX FAABORG: Thanks a lot.
Nice job. >>Thank you for joining this
session, brand ambassadors will assist with directing you
through the designated exits. We’ll be making room for those
registered for the next session. If you’ve registered for the
next session in this room, we ask that you clear the room and
return via the registration line outside. Thank you.
… the size of the result set determines the performance. So
what does that mean? Say you have 50 documents in a
database. That’s not a hard problem to solve.
But the performance is based on the 50 that you query for. Now what if you have 100,000
items in your database. The performance is exactly the same.
And what about 100 million? Well, still with Firestore, if
you're querying, you're getting the same performance. It doesn't scale with the size of the entire data set, but with the size of the result set. And it's effortless. There's no sharding, no maintenance; you populate the database and write the queries on top of that.
… With thousands of users, they support
a lot of infrastructure. They were looking for a way to
increase the velocity, lower their ops
burden and deliver new features to the customers all at the same
time. They mentioned some of the
primary reasons for choosing Firestore. These aren't the same ones
everyone picks and I thought this was insightful. They are
really excited about the offline support. So the client
SDKs allow for writing when you’re disconnected and syncing
with the server when you get back online. And the consistent APIs across
the client SDKs really, really speed
development and they’re super excited about that. Evernote usersen often use
multiple devices. They might use the desktop app
or the mobile app. And then at home, they use the tablet. This app, everybody’s keeping
the really important, personal data and observations on the
world. I wanted to take a moment and
tell you about Firebase Rules and this is a key
part that makes the Firebase data
storage options work well. You can't have the data open to the world. You have to do authorization, you need to do validation, the things you normally have to do. They're not fancy.
They’re just stuff that is important. And so, with Firebase, Firebase
Rules is a simple declarative syntax that you just upload to
the server and that says which of your data paths are
accessible. You can also do validation, where you limit the length of a string or, you know, make sure integers are within a valid range.
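As a rough illustration, here is a minimal sketch of what Firestore security rules can look like for a data model like the ones described in this talk. The collection and field names (users, items, name, price) are assumptions for the example, not the exact rules the speakers deployed.

```
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user can only read and write their own profile document.
    match /users/{userId} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }

    // Posters are publicly readable, but writes are validated.
    match /items/{itemId} {
      allow read: if true;
      allow write: if request.resource.data.name is string
                   && request.resource.data.name.size() < 200
                   && request.resource.data.price is number
                   && request.resource.data.price >= 0;
    }
  }
}
```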
Doug, tell us more about Firestore.
>>DOUG STEVENSON: I would love to. For those of you who
have already used that, you know it's a cloud-hosted NoSQL database. We observed what developers were doing with it. We found some patterns and best practices and tried to encode that into the structure of Firestore. It mirrors the most common use cases. At the very top level of the database, you have collections, and these are containers for
documents. You have to organize your
documents into collections. A document is the lowest-level unit of data. There's nothing really smaller than a document. But a document can have fields in it. And those fields can have typed values. You're getting all of the
fields, all the data types in one query. Now, within a
collection, the documents can be ordered and they can be filtered
based on the content. If you have fields in there you want to
use to narrow down your search or order by, say, date or count
or something like that, you can perform queries like that in,
like, I said, that scales with the size of the result set. >>SARAH ALLEN: So I’m
excited that Evernote decided to share about how their data is
modelled. And they’re doing some common
patterns — this can give you a sense of
how the data modelling works. You have a collection of all of
the users. When somebody logs in, the client will write whatever user data they signed up with, and then you have your basic profile information. From then on, that's cached
locally on the device. So if you connected on a
different device and changed your profile, it would
automatically sync. So then, they organize data in
your notes, in notebooks and spaces. And those you also listen to from the client side. And there's another top-level collection of all of your notes. This is a common pattern for how Firestore organizes this data. You have the top-level
collections, which can have a huge number of
documents, but the individual user is only getting a small
subset of them. And therefore, you can have that result set be small and queries be
fast. So the client listens to the
notes they have access to. And then, when they change, it
automatically is synced across devices and between users. And then Firebase rules works on
the server side, keeping user data safe.
>>DOUG STEVENSON: So what did we build? Our use case is a little bit different than Evernote. We're calling it Friendly Shop,
and the idea here is, this is totally fictional, of course,
the companies all over the world can upload images with
inspirational quotes. So you can think of it kind of
like a shop. And the idea is you can go and browse them and maybe
buy them. What's common between the Evernote case and our case is that
we don’t need to show all of the documents. We don’t need to show
all of the items together. We only need to show what can fit
on the screen or in this case, we’re showing 50 items per
query. We wanted to put in a million documents. How many did we get to?
>>SARAH ALLEN: About 260,000.
>>DOUG STEVENSON: We’ll demo about 260,000 items today. We didn’t
quite get to a million. >>SARAH ALLEN: I’ll get into a little bit of details about some
of the gotchas we got into. Not the limitation of the scale of
Google storage or anything like that.
>>DOUG STEVENSON: Yeah, that was our lack of planning.
>>SARAH ALLEN: Our excitement and enthusiasm for
adding features at the last minute.
>>DOUG STEVENSON: And Sarah will talk about how we populate the
database. We used Firestore, as mentioned earlier, plus a variety
of cloud-based products to build this out. And what we didn’t
have to do is stand up a server, configure it, log into it, we
didn't have to scale it. It happened automatically.
>>SARAH ALLEN: Firebase
Hosting, for a web app, is of course essential. But whenever you have a mobile app, typically you'll have at least a few screens that you want to deliver on the web. And you want to be able to easily serve up your static assets: HTML,
JavaScript and images. And then, you also want to be
able to do this globally. And have those accessible, close
to the edge. And Firebase hosting is a global content
delivery network with physical edges all over the world. So no matter where users are,
they get low latency, which turns into speed for the user
experience. >>DOUG STEVENSON: So planetary
by definition, right? >>SARAH ALLEN: Exactly. >>DOUG STEVENSON: The other is
Firebase Authentication. It helps you get your users logged
into your mobile application. Supports Google log-in,
Facebook, Twitter. If that's not enough to suit your own planetary needs, you can write your own plug-in, your own users, your own auth system; you can plug that in yourself and that's fine.
>>SARAH ALLEN: Cloud Storage:
each poster has its own image, and we
need a massively scalable data storage place to put them. We don't need something as massive as Cloud Storage, but it's good to know it's there; the images can get pretty big. And there's room for extra thumbnails and variants across devices.
And let me tell you a little bit about how big Google Cloud
Storage is. It's exabytes of data. An exabyte is a number of bytes with 18 zeros after it, roughly a million terabytes.
And I don’t know if anybody’s old enough to remember when we
had to worry about physical disks.
Like, we get bigger and bigger disks and that was, like,
really exciting. And then, if your stuff had to go on multiple
disks, that’s a lot of work. It’s lovely not to have to worry
about that and not to have to plan for that to be in the
future. And so, with our poster app,
even if we were storing a really high
res image that was, like, ten megabytes, I could store trillions of those. And that's really delightful.
And it’ll work for our friendly shop even if we get to planet
scale with our user base. >>DOUG STEVENSON: That’s right. And the last component we used
to wrap these products up is called
Cloud Functions. A serverless back end. It’s a
managed environment. You write Node.js JavaScript, deploy it to Cloud Functions, and it runs in response to events that happen
in your system. What kind of events? Well, for example, what we could do in the app: when a new user signs up, we can turn around and create a user record for them. Or anytime an image is uploaded, we can automatically kick off a process that generates thumbnails. Or if... oh, I don't have a
third case. The point is, all you do is write and deploy that
code, you don’t manage any servers, don’t log into
anything. And you don’t scale it up or
down. It scales up and scales down to
zero if all of your demand goes away. You only pay for what you
use. You don’t have hundreds of servers fired up. You only pay
for the compute resources that you actually use with cloud
functions. And in this talk, we'll add functionality to the live app without deploying a new version of the web app.
>>SARAH ALLEN: This is a serverless experience, which
significantly simplifies both the development and the
operations. This illustrates how to power the friendly shop
app. You can do capacity planning on a napkin. Having this fully managed
infrastructure means we can focus more on our application
features and that is really what drives the business
value for our friendly shop. One place where we do need to do careful thinking, as we mentioned before, is how our data is modelled. It's not automagic. You have to think about
your use case and how you’re going to put that data together. And Firestore is structured to
organize your data for scale. So I want to dive into the
Firestore experience. This is an image of our friendly shop data structure from the data console in the Firebase UI. And I'll just walk you through the interface, which
also walks you through our data structure. On the first panel on the left, you'll see items. If the app were more real, you could have more collections, so that panel would normally show a list of collections: items, users, whatnot.
And you can also add collections directly in the interface. Being able to do that directly in the interface sometimes means that you can get started on the view of your app before you build all of the interaction into your app. The next column is a list of
documents for the selected collection. And here, we have 260,000-
plus documents and you can actually scroll through all of
those and snap to a particular ID and pick a particular
document. And when you select the
document, its contents show up on the right. And here is some of the data, or all of the data for an item: all of the metadata for each of the posters in our shop. And you can also,
our app doesn’t do this, but if you want to
structure your data more and deeper, you can
control how much you access at a time by creating subcollections.
You can see that you can add a collection, as well.
Although, our app has a fairly simple but flexible data model with a top-level collection.
>>DOUG STEVENSON: You hold on
to that one. Here is how you query the database. It’s a
simple few lines of JavaScript. If you go and view the source,
you’ll get something like this, although, it’s a little bit
broken out. What we do here is we’re using
the Firebase SDK, JavaScript SDK, reaching into the Firestore
product for that. Firebase has a bunch of different products in
it, we’re just interested in the Firestore here and reach into
the items collection and then, we’re going to order all of the
items, all 260,000 of them by price, descending. We’ll get the
most expensive ones first and the least expensive ones last.
And then, we're going to limit to the first 50, the top 50, again, based on the ordering, and we're going to get that asynchronously. This is how we populate the home screen for our apps.
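Based on that description, the query probably looks roughly like the sketch below. The collection and field names (items, price) come from the talk; the rest is a hedged reconstruction using the standard Firebase JavaScript SDK rather than the speakers' exact source.

```javascript
// Hedged reconstruction of the home-screen query described above.
// Assumes the Firebase JS SDK has already been initialized for this project.
const db = firebase.firestore();

db.collection('items')
  .orderBy('price', 'desc') // most expensive first
  .limit(50)                // only the top 50; performance scales with this result set
  .get()
  .then(snapshot => {
    snapshot.forEach(doc => {
      console.log(doc.id, doc.data()); // render each poster on the home screen
    });
  })
  .catch(err => console.error('Query failed:', err));
```

The key point from the talk holds in this form: the cost of the query tracks the 50 documents returned, not the 260,000-plus documents sitting in the collection.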
Let's jump into the demo. If you want to view this demo live, see that short link, go ahead and
copy that into your mobile browser or your laptop or
whatever and you can follow along with me.
What I’ll do is copy this and paste it into the browser. And it loads up. And we have
results. So here’s 50 something, the top
50 by price. We have all of the expensive ones here. Everything is 25.99. I can reverse that sort if we
want. I’m a budget shopper, so I’m going to keep it on the
cheapest ones first. So that’s how we populate the
home screen. Could we go back to the slides, please? >>SARAH ALLEN: When we want to
add an additional query, when we want to add filters, we can specify another where clause. We want to filter by topic. So we can say where topic
equals whatever the user’s picked in the interface. And then, the prior query is in
gray and the new part is highlighted.
And this is just another option in the query builder that lets us filter the results.
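In code, adding that filter is probably just one more chained call, along the lines of this sketch. The topic field name is taken from the talk; the selectedTopic variable is an assumed placeholder for whatever the user picked in the interface.

```javascript
// Same query as before, narrowed to a single topic chosen in the UI.
const selectedTopic = 'e-business'; // assumed example value from the demo

firebase.firestore()
  .collection('items')
  .where('topic', '==', selectedTopic) // the new filter
  .orderBy('price', 'desc')
  .limit(50)
  .get()
  .then(snapshot => snapshot.forEach(doc => console.log(doc.data())));
```

Combining an equality filter with an orderBy on a different field like this is exactly the kind of query that needs the composite index discussed a little later in this session.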
Let's go back to the demo and see what that looks like in the app.
>>DOUG STEVENSON: Let's choose
a topic. And these are randomly generated. We used a node module called
faker, and faker generated these business-y
sounding things. Tongue in cheek, kind of humorous. I’m
going to filter on a topic. I'm totally into e-business. I'm
going to do e-business. And the results come in. Notice that the
queries are just as fast as all of the others. Adding a where clause, now we can see we have all e-business posters. And I can go in there
and change that, again. I’m also totally into platforms. How
about platforms? Want to do that?
>>SARAH ALLEN: Inspirational posters for businesses around
the world. >>DOUG STEVENSON: And the
query’s the same no matter how many where clauses we add. So
I’m going to go back to the slides.
>>SARAH ALLEN: Now, we want to add another filter. We do it
in an intuitive way by adding another where clause. So these particular poster
images have a range of colors. And so, we can choose the top 50
posters, sorted by price, starting with the most
expensive. And then, for a particular topic
and a particular color, we can see the result set.
>>DOUG STEVENSON: Let’s go to the demo, again. So you saw
on the last page, that bit of code, they've been ANDed together. If I want to look at the posters on the topic of platforms and in orange, that's the closest thing to Firebase. Let's look at the ones
that are orange. And again, we get — that’s actually really
attractive. Again, you notice we made the
query more complex by adding another where clause, but the
query took the same amount of time because it’s the same 50
size results. Queries scale based on the size of the result
set, not on the source data set. And I should have added — I
should have mentioned earlier, we actually have a script over here that’s
constantly adding new — oh, now you can see it added new ones.
This is simulating what users might be doing. Imagine there’s users all over
adding new documents. That’s what this script is doing. The result set you see on screen
or your mobile app could change over time. This isn’t just a
static result set. This is a constantly changing result set.
And this becomes significant later on.
Let’s switch back to the slides. >>SARAH ALLEN: You can keep
doing this. And we could go through another one and another
one and another one and keep making up variations of our
posters. But we think you get the idea by now. So the only requirement is that when you're doing these composite queries, you create a composite index.
And so that allows you to support all of the combinations
of filters and ordering that you need. So, we have another demo?
>>DOUG STEVENSON: No, we’re talking about indexes now.
>>SARAH ALLEN: Okay. >>DOUG STEVENSON: Yeah. You can keep adding clauses to
your heart’s content, but your only constraint is you have to
add indexes for them. Here’s a screenshot of what it looks in
the Firebase console. We’ve got all of the combinations of
things we want to filter and order by. As long as the indexes
exist, your queries will be fast.
And there’s nothing you have to do. When you create an index,
it’ll go off and do it. You can also put these in a
configuration file and deploy them with your app so you don’t
have to manually create any one of these.
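For reference, that configuration file is typically a firestore.indexes.json deployed with the Firebase CLI. The sketch below shows what an entry for the topic-plus-price query might look like; the exact field set is an assumption based on the queries in this talk, and the schema shown is the current CLI format rather than a copy of the speakers' file.

```json
{
  "indexes": [
    {
      "collectionGroup": "items",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "topic", "order": "ASCENDING" },
        { "fieldPath": "price", "order": "DESCENDING" }
      ]
    }
  ]
}
```

With the CLI, something like firebase deploy --only firestore:indexes pushes those definitions along with the rest of your app.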
But the interesting thing here is that if you forget to create an index and you try to query on it, Firebase, or sorry, Firestore, will say, well, I'm not going to do that query, it doesn't scale. I only do scalable things. It'll give you an interesting error message. This is an error message from the Firestore SDK. You can create the index right from here: it gives you a link to go create that index. You don't have to, like, know any sort of magical incantations; you click the link and it makes that index for you.
During development, if you forget to think of the different
combinations of things you want to query on, you can create one
ad hoc. This is maybe the most
actionable error message in an SDK. It’s literally telling you
how to fix — thank you. We hope Firebase
is easy to use. >>SARAH ALLEN: So just wrapping
up the scaScalable Queries,
indexing made easy, all of your individual
fields are indexed by default. So if you’re just doing a very
simple query on a single property, you don’t have to do
anything. And then, for compound queries, we have a little helper
there. So I think you’ve seen lots of
different demonstrations of how the performance is based on the
result set. And we have a lot of great documentation on how to
structure your data. We’ve gone through some really common
patterns today. And once you get a feel for it, then you can put together these really Scalable apps. So we can also expand our app on
the server side with cloud functions. We talked about two
different data storage ways to scale your app
with Firestore and with Cloud
Storage. And with Cloud functions, you can provide
server side code. You can do a lot with Firebase,
only with client side code, but server side code can be helpful for three
things on this slide: data wrangling, when you're creating your profile and want to get some stuff from, like, social sharing
sites or figure out who people’s friends are.
You might have some business logic to drive user growth with a
shopping frenzy. The other kinds of things people
do with Cloud Functions is put code on the server rather than the client in order to make fair decisions: to be really careful with that business logic and protect it so it's not, kind of, in the wild in an untrusted client, but secure on the server side. And then, it's also a great way
to do multiparty interactions to
create a bot, to coordinate many users at the same time.
>>DOUG STEVENSON: So Cloud Functions has support for a bunch of different event providers within Google infrastructure. We've already mentioned Firestore and the Realtime Database: if a document is created, changed or deleted, you can write code to respond to that.
So this is good for doing things like data sanitization.
If you want to change a value that’s invalid at the time of
insertion, you can do that with a cloud function. With Firebase Authentication triggers, if
there’s a new account created, you could create a new profile, or if that is
deleted, you can clean up the data so you’re not paying for
extra storage for a user that no longer exists. You can write code in response
to conversion events. A typical event is if someone buys
something or performs a high-profile action.
You can write code to respond to that and keep track
of it on the back end. Cloud Storage triggers fire if something is changed within your Cloud Storage bucket. In our app, if a
poster was uploaded, we could write some code that creates the
thumbnails of various sizes that are appropriate. We don’t have
to know anything else. It happens automatically. You can also use Cloud Pub/Sub triggers. Also, you can use Crashlytics.
So you can get what’s called Velocity Alerts to know if an
error happens and how severe it is among all of the users for
the apps. And lastly, HTTP triggers, a webhook of some sort. So you can use this to create bots, as Sarah said, REST APIs for your app, or even to do something like populate a database, which we'll
talk about later. So let’s switch to the demo,
again. Remember before, there was that
script that was populating the database. This has been going on
about every minute or so, it adds a new poster. Now, this is
simulating what users might do. It’s not the real thing.
Let’s say I want to drive traffic to my website by
creating a trigger that responds to when a new poster is
uploaded. What I want to do is Tweet that
out to everyone. I can do that without changing any of the web
app code. What I'll do is fire up my editor here. And this is
my cloud functions code. And I have in my copy buffer a — oh,
that’s not it. How about this one?
This is our new function, I’ll walk through a redacted
version of it. If we could — oh, actually, let’s not switch
just yet. What I’m going to do is deploy it. And while that’s deploying,
maybe it will pick up one of the posters that’s being created by
the script. If we could switch back to the slides now. This is a stripped-down version of what this function is basically doing. There are three SDKs in this: the Functions SDK, and we're using a Twitter SDK as well. We're going to define a new function, call it Tweet New Items, and say, hey, Functions SDK, I want a Firestore trigger, and pay attention to documents in the items collection with a wildcard ID. And whenever a document that matches that pattern is created, my anonymous function here is going to get invoked with a snapshot and the context surrounding the event. Reach into the snapshot, get the
raw data. I’m going to build a reference
to a path — a reference to location
in cloud storage where the poster was uploaded.
So this is now, for those of you who know Cloud Storage, a path
within the bucket. We’re going to generate a URL that’s going
to — you want to go back? Just that one — yeah, so we’re going
to generate the actual URL to that poster in the web shop. So
what I want to do now is advance the slide. And what we’ll do is
download that file from storage. We’ll download it locally in the
instance it’s running in cloud functions.
And we're going to use the Twitter API to upload that image once it's downloaded. And once we have the media ID
string, we’ll turn around and build a message, Tweet a message
that says take a look at our latest poster. The name of the
poster and the URL so the user can click on it and go
to the store. You’ll get to see the poster along with the Tweet.
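Putting those pieces together, the function probably looks something like this sketch. It assumes the firebase-functions, firebase-admin and twitter npm packages, plus hypothetical names for the storage path, hosting URL and credentials; it is a reconstruction of the flow described above, not the speakers' exact code.

```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const Twitter = require('twitter');
const fs = require('fs');
const os = require('os');
const path = require('path');

admin.initializeApp();
// Assumption: Twitter credentials are stored in functions config under the "twitter" key.
const twitterClient = new Twitter(functions.config().twitter);

exports.tweetNewItem = functions.firestore
  .document('items/{itemId}')          // wildcard ID, as described in the talk
  .onCreate(async (snapshot, context) => {
    const item = snapshot.data();
    const itemId = context.params.itemId;

    // Download the poster image from Cloud Storage onto the local instance.
    const localPath = path.join(os.tmpdir(), `${itemId}.png`);
    await admin.storage().bucket()
      .file(`posters/${itemId}.png`)   // assumed storage path convention
      .download({ destination: localPath });

    // Upload the image to Twitter to get a media ID, then tweet it with a store link.
    const media = fs.readFileSync(localPath);
    const upload = await twitterClient.post('media/upload', { media });
    await twitterClient.post('statuses/update', {
      status: `Take a look at our latest poster: ${item.name} https://example-shop.web.app/item/${itemId}`, // assumed URL
      media_ids: upload.media_id_string,
    });

    fs.unlinkSync(localPath);          // clean up the in-memory /tmp filesystem
  });
```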
Let's go back to the demo and see what happened here. Here we are in Twitter. So we had unleash granular web
readiness, which is something I’ve always wanted to do. If we
click the link, we go to the store and, and we see the poster
there. So it works. So we had a Firestore trigger
that triggered in response to a new poster being added and
automatically generates social media to drive traffic to our
store. I think that’s pretty cool.
>>SARAH ALLEN: Very cool. >>DOUG STEVENSON: Can we go
back to the slides, please? Thank you.
>>SARAH ALLEN: Now, I’m going to give you a whirlwind
tour of how we generated the images. So this is kind of you can
imagine lots of batch jobs where this would work. We — I use this amazing, fun, library called fractastic. Turns
out to be one of the great ways to create a lot of random
images. This is an http function. It created a function so that we
can curl that URL. And I created a little class
that makes the — generates the fractal image. And then, optionally, the script
takes a parameter, or the lack of a parameter will tinge this a whole bunch of colors. That ships with Cloud Functions in the environment you've got there. So then, it creates a random image using the faker module. So we decided to use the company noun as a topic. And we get a verb and an adjective to put those together into an inspirational message. And we take the images we've
created earlier and we generate a unique path based on a
document ID. Because then, we can make sure that each of these posters has a
unique storage path, and we upload a poster to a bucket. This happens multiple times, once for each color. And we finally get a download
URL. It’s really important for the Tweeting.
And then, we put it all together. It shows you, you
know, we're putting together a document and we call
set on our document reference. And that changes the data in
Firestore. We’re updating Firestore and storage in this
Google Cloud Function. And then, you end up with a JSON
blob like this. And there’s a couple of
configuration options. It can take two minutes or so, so we
extended the timeout. I actually increased the memory
because I found out that sometimes it ended up using a
lot of memory and then, that was causing things to fail
occasionally. And you can also adjust your
quota.
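Those knobs are set on the function itself. Here is a hedged sketch of what that can look like with the firebase-functions runWith option; the specific values and the function body are assumptions for illustration, not the ones used in the talk.

```javascript
const functions = require('firebase-functions');

// Give the image-generation HTTP function more time and memory than the defaults.
exports.generatePosters = functions
  .runWith({ timeoutSeconds: 300, memory: '1GB' }) // assumed values for illustration
  .https.onRequest(async (req, res) => {
    // ... generate fractal posters, upload them, write Firestore documents ...
    res.status(200).send('done');
  });
```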
So it turned out, one of the challenges we ran into in the
last 24 hours, we decided that we wanted to have a nice URL for
our friendly shop. I recreated the app and forgot to turn on billing. You get a lot of free executions without even
setting up billing. But if you want to make a million images,
in fact, if you want to make 42,000, in a very short amount of time you'll get a quota-exceeded error. I had a bug there where I forgot
something to do with my file and didn't have Twitter set up. This is the health tab for
functions, it’s really super handy for figuring out what’s
going on. So also, the local disk turned
out to be one of the reasons I was
having things fail because of memory. Doug, actually, pointed out in
our documentation something I hadn’t practiced. It
turns out to be handy but can also be a gotcha. You can put
whatever you want in your functions directory and when you
deploy your functions, all of those files are available to
you. You can access them locally. But if you want to
write, you can write in the temp directory. But you want to use
platform independent paths so you can test locally and on the
server even if you have windows locally. Normally, you create a temp file
like this and then also need to be
careful to know that the local files might be there next time.
Or they might not. They’re good for, like, caching
something. But if you don’t need them, you should delete them. Because it’s an in-memory file
system, it’ll chew up your memory. It’s super fast and
that’s great. But you want to clean them up. So you want to
make sure after whatever you do, you delete the file. Unless you
really want to keep it there for, you know, you’re going to
reuse it next time.
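Here is a small sketch of that temp-file hygiene in a Cloud Function, using Node's os and path modules so the paths stay platform-independent when you test locally. The file name and the processing step are placeholders.

```javascript
const fs = require('fs');
const os = require('os');
const path = require('path');

async function withTempFile(work) {
  // os.tmpdir() resolves correctly whether you run locally (even on Windows)
  // or inside the Cloud Functions environment, where /tmp is an in-memory filesystem.
  const tmpPath = path.join(os.tmpdir(), `poster-${Date.now()}.png`);
  try {
    await work(tmpPath); // e.g. download an image here, resize it, re-upload it
  } finally {
    // Always clean up: files left in /tmp keep consuming the function's memory.
    if (fs.existsSync(tmpPath)) {
      fs.unlinkSync(tmpPath);
    }
  }
}
```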
So in summary, Cloud Functions is server-side code without servers; we love it for that. Most people use it for core app
logic, or most of the things we've talked about today. It's also, like our driving-user-growth example, a way you can easily add features while your
app’s in motion. >>DOUG STEVENSON: Without
publishing your app, again. >>SARAH ALLEN: And also great
for back-end jobs. >>DOUG STEVENSON: That’s how we
built our app. A total of five Firebase and
Cloud features. Authentication for log-in,
Firestore, cloud storage for our images and Cloud functions to do
things on the backend in response to events that happen
in our system. This is pretty easy, and pretty standard application architecture for what I would consider development of a mobile app in 2018. This is, I think, the way things will go for a lot of developers. If you're building a mobile app or web app, I would urge you to consider it. You don't stand up servers, don't log into servers, don't have passwords, don't have configurations. That's all you have to do to build an app these days. That's all the content we have. We will be in the tent across the way if you want to ask questions, and we'll see you over there soon. Thanks. >>Thank you for joining this session. Brand ambassadors will
assist with directing you through the
designated exits. We’ll be making room for those registered
for the next session. If you’ve registered for the next session
in this room, we ask that you please clear the room and return
via the registration line outside. Thank you. Please fill in the seats near the front of the room. Thank you. >>EITAN MARDER-EPPSTEIN: All right, good morning, everybody, how's everyone doing today? Yeah? Good. All right. Well, welcome to Google I/O, my name is Eitan Marder-Eppstein. How many of you are familiar with Augmented Reality in general? Okay. I'll give a quick refresher about augmented reality for those of you who maybe aren't quite familiar with it, and especially how augmented reality relates to smartphones, which is something we're excited about. I need my clicker. I'm going to go over here to get the presentation started. But off we go. So, smartphone AR stems from this observation that over the
last decade our phones have gotten immensely more powerful.
CPUs and GPUs have improved a lot. But until very recently, what your phone could understand about the world around it was largely unchanged and limited. If you point your phone at this
table, it will allow you to take a picture of the table or even a
video of your friend climbing over the table. But your phone
wouldn’t really have an understanding of the geometry of
the table, or of its own position relative to the table as it moves through space. And so, what augmented reality seeks to do on smartphones is take those advances and leverage them to bring new capabilities to your phone. And
to take your phone from beyond just the screen, beyond its own
little box to expand it to understanding the world around
it. Now, when my phone looks at the
table, it can see there’s a surface there, there are chairs
next to it. And as I move through the environment, my
phone can actually track its position as it moves. And we
think at Google that augmented reality is really exciting and
we’ve been excited to see some of the stuff that you’ve built. And we’ve kind of categorized it
into two main buckets where we think augmented reality can be
really, really great for applications. So the first bucket is we think
that augmented reality can be useful on smartphones. So
recently, I was remodelling my kitchen. Another poll, how many of you
have remodelled anything in a house?
If you’ve done that, you know that measurements is a real
pain. And what I needed to do was to measure for a back splash, we were
buying subway tile for our kitchen. I pulled out my phone,
went to my counter and measured from point A to B to C and I did
all of that without moving any of my appliances where I would
have normally had to move in order to get an accurate
measurement with my tape measure.
AR can be useful in providing a better, geometric
understanding about your environment. It can also be useful for
shopping applications. Recently, we had some very old chairs at
my house. And my partner and I were looking to replace them,
kind of like the chairs here, and we were getting into a
debate over which chairs we liked more. And so, with
augmented reality, we were able to take a 3D model of a chair,
place it in the environment, see the exact size and scale and
color. And we could have our arguments
about inevitably what kind of chair we would have at home
rather than exposing everyone to it at the store and be more
targeted about how we made our purchase and even buy this furniture
online and be more comfortable with it. That’s how AR can
provide more utility in your life. But AR can also be fun. Imagine
a character running across the floor, jumping on to this chair
and table. Or, me sitting in one of these chairs and having the
floor drop out from under me to create an ice fishing game.
Ice fishing sounds a little bit boring, but I can tell you
that in this game, it’s actually a lot of fun. And AR can also be used for
creative expression. So here, now in your pocket, you
have a lot of ability to go out and create new things that could previously only be created by professionals. You can generate computer-generated content on the go, on the fly.
You can take your favorite character and put them into your
scene and have your friend pose next to them. Or you can take pizza or hot
dogs or your favorite food items as we show here and put them on
the table in front of you. But now, you have this amazing video editing capability in your pocket. And for those of you who have seen our AR Stickers application on the Google Pixel phone, you know what I'm talking about. And for those who
haven’t, check it out. It’s cool to have this creation power in
your pocket. So that’s great. AR can be useful, AR can be fun,
how do you actually build applications for AR? How do you
get involved as developers? This is the developer conference. So how many of you are familiar
with ARCore? All right. About half of you. So ARCore is Google’s
development platform for augmented reality. We want to
make it easy for you to build applications that take advantage
of the new capabilities that phones provide. Of the ability
of phones to see and understand their environments and to build
applications that actually react to this understanding. ARCore was launched a few months ago, and it provides three core capabilities. The first is what we call motion tracking. Imagine taking the Scarecrow from the Wizard of Oz, placing him in front of a taco stand, and making it seem like he's waiting in line for tacos. If I look at the scarecrow with my phone, ARCore understands its position relative to a virtual object I've placed in space. As I move a meter forward, the phone knows I've moved a meter in this direction. And as I turn
left, the phone also knows that. It’s able to track its motion as
I move through space. And now, if I combine that with
my desire to place the scarecrow a meter in front of me, I can change where I'm rendering him so as to register virtual objects to a physical scene in a natural and intuitive way.
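For reference, here is a rough sketch of what reading that tracked motion looks like with the ARCore Java API; it assumes you already have a configured session and run this once per rendered frame, and the variable names are illustrative rather than taken from the talk.

```java
// Hedged sketch: per-frame motion tracking with ARCore (Java).
// "session" is assumed to be an already-configured com.google.ar.core.Session.
Frame frame = session.update();          // may throw CameraNotAvailableException
Camera camera = frame.getCamera();
if (camera.getTrackingState() == TrackingState.TRACKING) {
  Pose cameraPose = camera.getPose();    // device position and orientation in world space
  // Combine this with your anchor's pose to keep the scarecrow a meter in front of you.
}
```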
ARCore provides is lighting estimation. Continuing our Wizard of Oz
theme. We want to make the lion afraid
because it’s cowardly. Here, the AR core is looking at
the camera feed, estimating the real world lighting of your
environments. And with that estimate, ARcore
can light characteristics in a realistic fashion. The virtual
objects that you’re putting in your scene look correct. You’ll
see the tone on the lion change when it goes from light to dark.
And you can even script interactions for your characters, in this
case making the lion afraid when the lights go off.
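As a hedged sketch of what that lighting estimate looks like in the ARCore Java API (assuming you already have the current frame):

```java
// Hedged sketch: reading ARCore's per-frame lighting estimate (Java).
// "frame" is assumed to be the current com.google.ar.core.Frame.
LightEstimate estimate = frame.getLightEstimate();
if (estimate.getState() == LightEstimate.State.VALID) {
  float intensity = estimate.getPixelIntensity();  // rough brightness of the real scene
  // Scale your character's material or scene lighting by this value,
  // so the lion visibly dims when the room lights go off.
}
```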
And the third capability that ARCore provides is
environmental understanding. So here, as ARCore moves around the world, it tracks its motion and estimates the lighting of the environment, and it's also recognizing surfaces. It might recognize this plane below me, which is the ground, or this surface here, which is the table, or maybe this vertical surface behind me, and it allows you to place objects that are grounded to reality. If we want to place the Android
character on this table, I can detect the surface and place my
virtual character on a physical object in the world.
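A minimal sketch of that placement flow with the ARCore Java API — the current frame and the user's tap event are assumed inputs, not something from the slides:

```java
// Hedged sketch: anchoring content to a detected plane via a hit test (ARCore Java API).
// "frame" is the current Frame and "tap" is the user's MotionEvent; both are assumptions.
for (HitResult hit : frame.hitTest(tap)) {
  Trackable trackable = hit.getTrackable();
  if (trackable instanceof Plane
      && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
    Anchor anchor = hit.createAnchor();  // attach the Android character's renderable here
    break;
  }
}
```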
Those are three capabilities, motion tracking,
lighting estimation and environment understanding. And when you combine them
together, it allows you to build experiences that were previously impossible — experiences that bring the virtual and physical worlds together and meld them into a new reality that enables people to see the world in a new and different light. We have worked really, really hard, and with help from our partners in our Android OEM ecosystem, today ARCore is supported on 100 million devices, and we're working to increase that number every single day. We believe that augmented reality is the next shift in computing, and that soon everyone will take for granted that this power is in their devices. That's our scale, but we're also interested in scaling the capabilities of ARCore. We want to teach ARCore to do new and interesting things, and that's what the rest of the talk is going to be: today, we're introducing new things into ARCore.
And the first is we’re announcing some new capabilities
for AR Core, improving what the devices can do. And those are Augmented Images
and Cloud Anchors and we’ll talk about them in the talk today.
And then, we’re also announcing new tools for ARCore.
How you can use augmented reality on the web, which we
think is really exciting and you can check out a talk to that
later today at 12:30 p.m. And another is how you can more
easily write 3D applications for
Android and AR specifically. We’ve introduced our seam form
library, which is for 3D rendering on
Android and encourage you to check out that talk at 5:30
today. So enough about the preamble.
We’re going to get into the meat of it and talk about what’s new in
ARCore and kick it off with our first
feature, Augmented Images. They stem from your feedback. We heard you, as you develop augmented reality applications, ask us: hey, AR is great.
Wouldn’t it be better if we could also trigger augmented
reality experiences off of 2D images in our environment, like,
movie posters or textbooks? And so augmented images seek to
do just that. Augmented Images provide a mechanism to take a 2D texture in the world and make it more engaging by expanding it into a 3D interactive object. And to show a concrete example of this, consider the case where we have a new children's toy called Castle Toy, I think. And we have told ARCore, hey, we want you to recognize the surface of this Castle Toy box. Now, as part of the
product, you can hold up your phone to it and you can actually have an immersive
experience come out of that box, a more engaging experience for
your product. Augmented images allow you to
detect these kinds of textures and then scripts, behaviors and
then take this 2D flat surface and turn it into 3D, which we
think is really exciting. And it’s based on your feedback. You
told us that you wanted this feature and now we have it.
So that’s the feature in a nutshell. But I want to tell you
about how it works and also, how you can use it in your
applications. So augmented images
fundamentally work in three major steps. The first step is you need to
tell ARCore what images you’re
interested in. And there are two ways you can do this. The first
way is to tell ARCore that you want to detect certain kinds
of images in real time. You could download them from a server or bundle them into your application. And you tell ARCore at run time
that, hey, please load this image, learn how to detect it in
the scene and tell me when you do. The second option is to tell
ARCore in advance, where we've provided tools on your desktop computer that can take up to a thousand images and train ARCore on them in an offline fashion, saying: I would like you to be able to recognize any of these thousand images when I run my application on device. The next step: we've trained ARCore to recognize these images; now we want to show it a scene and have it detect the images that we've trained it on. So now, when ARCore moves around
the environment with your phone, ARCore will also look for
textures in the environment and match those to the textures you
trained on. And when it finds a match,
ARCore provides you information on that match through the third step, which is a tracked object. Up to this point, ARCore has given you planes, both horizontal and vertical, but it can also give you points of interest you can attach to. And now, an augmented image is another tracked object. You use it just like you would use any plane or any point, and you can attach your virtual content
to the detection of the physical object in the world.
So that’s it. Really simple. Three simple steps. Number one, tell ARCore what
you’re looking for. Number two, have ARCore detect
objects in the scene. And number three, attach your virtual
content to these physical objects.
And because this is a developer conference, I want to
show you the same steps in code. We’re going to go through them
in Java really quick. But this is the same for Unity
and Unreal. We’ll go through the same exact steps, again.
Step number one, you need to add images to AR Core’s memory.
You need to tell it what images it’s interested in. And so, here we’re creating image base and adding
an one, you need to add images to AR Core’s memory. You need to
tell it what images it’s interested in. And so, here
we’re creating image base and adding an image to it. And doing
it realtime on the phone. This is a little bit expensive,
you have to pay a — I’ll show you how to create
it with alternate flow on the computer. Once AR Core has a database of
images, we go to the second step. The second step is always
looking for those images for you. And you can get it from the
AR frame. Each and every frame that AR
sees or ARCore sees in the world.
Now, you’ve got a list in the scene. And you want to
attach virtual content to it. So that brings me to the third
step. So for step number three, you just take the augmented
images that the augmented image that you want. And you create an anchor off of
it and then, you can attach virtual content to the anchor
and the same you would for any kind of plane detection or point
detection that you’ve been used to in the past.
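Putting those three steps together, here is a hedged sketch using the ARCore Java API; the database name, bitmap and class names are illustrative assumptions, not code from the slides.

```java
import android.graphics.Bitmap;
import com.google.ar.core.*;
import com.google.ar.core.exceptions.ImageInsufficientQualityException;

/** Hedged sketch of the three steps with the ARCore Java API; names are illustrative. */
class AugmentedImagesSketch {
  // Step 1: tell ARCore which images you're interested in (run-time variant).
  void configureImages(Session session, Bitmap posterBitmap)
      throws ImageInsufficientQualityException {
    AugmentedImageDatabase db = new AugmentedImageDatabase(session);
    db.addImage("castle_toy_box", posterBitmap);   // a Bitmap you load yourself
    Config config = new Config(session);
    config.setAugmentedImageDatabase(db);
    session.configure(config);
  }

  // Steps 2 and 3: each frame, take the images ARCore has matched and anchor content to them.
  void onFrame(Frame frame) {
    for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
      if (image.getTrackingState() == TrackingState.TRACKING) {
        Anchor anchor = image.createAnchor(image.getCenterPose());
        // Attach your virtual content (the castle, a renderable...) to this anchor.
      }
    }
  }
}
```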
So that's it. Three simple steps. And if you want to do the precomputation on your computer instead, this is what you run. It's a command called build-db, and you can pass up to a thousand images into the command; it'll build a database in advance that you can then load in ARCore using this code.
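A hedged sketch of that flow is below; the tool name and flags in the comment come from the publicly documented arcoreimg utility and may not match the exact slide, and "context" and the file name are assumptions.

```java
// Build the database ahead of time with the desktop tool (command shown as an assumption):
//   arcoreimg build-db --input_image_list_path=images.txt --output_db_path=myimages.imgdb
// Then at run time, load the prebuilt database instead of adding images one by one.
try (InputStream stream = context.getAssets().open("myimages.imgdb")) {   // "context" assumed
  AugmentedImageDatabase db = AugmentedImageDatabase.deserialize(session, stream);
  Config config = new Config(session);
  config.setAugmentedImageDatabase(db);
  session.configure(config);
} catch (IOException e) {
  // handle a missing or unreadable database file
}
```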
This loads the database from file, pulls it in. It’s
computationally efficient. ARCore has done the work it
needs to recognize the images later. And now, you can go off
and running with the other two steps we showed before: detecting the image and placing content relative to it. Pretty simple. Now, I want to show you a demo of this in action. So we're going to switch to the Pixel phone here. And we're going to run this
augmented images demo. So here, we’ve actually trained
ARCore to recognize this poster on the wall. And so, when I look
at the poster, you can see that it fades out and goes from 2D
into 3D. And now, as I move, the perspective that I see changes. I can make it more engaging and
immersive. That’s the demo. Pretty simple. We think that augmented images
have a lot more potential, as well. So the first use case is
education. Imagine a textbook coming to life in front of you
or going into a museum tour where artwork on the wall jumps
out at you and gives you more information about the artists or
maybe their progression as they were sketching a painting.
We think augmented images are useful for advertising.
Advertising is all about engagement. Imagine being at a
movie theater and holding your phone up to a movie poster and
having content come out or telling you showtimes. Imagine
being at a bus stop with a little bit of time to kill and
engaging with the ad you have on the side of the bus stop
station. We think augmented images can
also be useful for the products that you’re advertising. You can build products that meld
the physical and digital worlds that
bring both together. It could be a how-to guide for
your coffee machine as you try to make coffee for the first time with your
expensive espresso machine and have no idea what to do. So that's Augmented Images; we're excited to see what you build with them. And we are not done,
yet. We’re going to talk about one more feature today, and for
that, I’m going to bring up James Birney and
he’s going to talk to you about Cloud
Anchors, and I think you’ll really enjoy it. Come on up,
James. [ Applause ]
>>JAMES BIRNEY: Real quick, before we get started, you’ve
been sitting for a while. We’re going to do the wave real quick
going across the room. All right? You ready? Laptops ready? Three, two, one, ARCore. There you go, guys. Like Eitan mentioned, I'm
talking about Cloud Anchors. Tell me if you saw the
presentation on Cloud Anchors yesterday. You’re going to want
to immediately start building. So before we hop into Cloud
Anchors, it’s important to start with where AR is today.
Could I get a quick hand if you've built an AR app before? That's roughly about half of you. For the other half: let's say we're going to build an app to place some dinosaurs, and they're going to interact. The way that we would do that in an AR app today is to plant an anchor, place the T-rex relative to it, and that anchor becomes your reference frame. Now let's say Eitan is going to come back up on stage — he's not going to, that's a long walk — and he goes ahead and creates a separate dinosaur app and places a bunch of pterodactyls. His app is running a different reality, a different augmented reality than the app we have over here. And the reason why is that the anchors can't talk to each other. This is what Cloud Anchors solve. We give you the ability to create a shared reference frame: with a common anchor in the middle and offsets from it for all of the AR content, everything is able to interact and play. And you can create these fun experiences where not only is my
content interacting with Eitan’s
content, but he can control mine and I can control his. That’s an abstract thing where
I’m moving my hands around on stage. A more concrete example would be
our app Just a Line, which is an experimental app that we at Google built. It draws a single line in space. And what we added to it is the ability to have not just one artist, but multiple artists. You can see multiple artists drawing together, and you can see the powerful experience you get out of this: you're able to draw together, and where one person draws a line, you can build on top of that. I'll give it a second for the
video to finish and you to absorb what’s going on. That’s a
new concept. Okay. So let’s talk a little bit
about how we create these Cloud Anchors. We’ve done an awful lot
of work to make it simple. It’s only a few steps. Let me walk
you through them. Step one: let's take, in this example, our stick woman — her name is going to be Alice — and Alice is going to place a Cloud Anchor. Now, the verb we use to create a Cloud Anchor is called hosting. The reason why is that we're going to host that native anchor up to the cloud. So when we host that Cloud Anchor, we upload the features — the visual features — in the environment. Let's say that Alice is standing here, and as Alice is looking at the table, she places a Cloud Anchor, or the app will place a Cloud Anchor for her. Do you like it? Thank you. Appreciate the one person. Okay. What the
phone is going to extract from the environment is all the
points where these leaves come to what the phone will see as contrast points — where the colors change, where the lighting changes. So the edge of this table, the edge of this tablecloth, every point where the leaves kind of change: those are the visual features that get extracted and then uploaded to the cloud. That then gets saved and processed. And what Alice gets back in a couple of seconds is that Cloud Anchor. In that Cloud Anchor is
a really important attribute. That attribute is the Cloud
Anchor ID. You can kind of think about the Cloud Anchor ID as — you can
think about Cloud Anchors the same way you think about a file.
Say you’re going to save a file to Google Drive. And when you
save it, you need to create a file name. We’re going to
create, essentially, that file name or that ID for you. And
that ID is the way that you’re going to reference it later. It
would be hard to find the file without knowing the name, right? The Cloud Anchor ID is the same
concept. How this comes into play: all Alice needs to do to get Bob, our stick man over there, to connect to Alice's Cloud Anchor is to send over that Cloud Anchor ID to Bob. And that's all she needs to send over.
Once Bob has the Cloud Anchor ID, he then uses the ID to — and our
verb here is resolve. And resolving will add the Cloud
Anchor ID to Bob’s reference frame. Let’s say that Bob is
standing right here, as well, he looks at the same area, the
visual features that we’ll get uploaded to the Cloud, in the
Cloud will match those visual features against the visual
features that Alice had previously uploaded. And we will give Bob back a
Cloud Anchor that will be relative to where his device is.
Even though both devices are in different locations, we’ll
create the Cloud Anchor in a consistent
physical location. You then have a shared reference frame. And
then, at that point, we can place, again, use dinosaurs because
everybody loves dinosaurs, right? We can place our dinosaurs
relative to the Cloud Anchor. Hopefully that
makes sense, Cloud Anchor comes back.
And we created a fancy visualization. The orange dots
come up, those are the visual features, they go up to the cloud, Bob uploads his up to the
cloud, they get matched and the two create the same shared
reference frame. And once that is created, wait a second for the gif to loop around,
you’ll see that spaceship show up and the two of them can
follow the spaceship around the room. Once they’re paired, the devices
can interact together and go anywhere in the room. All right.
So let’s keep on going one level deeper, like inception with some
sample code. Okay. So, same format as before, but before we get to the two methods of hosting and resolving, it's really important that we enable the feature. When you're working with ARCore, you interact with the session config and turn on the Cloud Anchors feature there. Hopefully, this is straightforward.
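As a minimal sketch of that configuration step with the ARCore Java API (assuming an existing session):

```java
// Hedged sketch: enabling Cloud Anchors on the ARCore session (Java API).
Config config = new Config(session);
config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
session.configure(config);
```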
Then, on the first device — this is Alice's device, the one that creates the Cloud Anchor — the main method is hostCloudAnchor. With hostCloudAnchor, you can feed in any existing anchor. As mentioned before, normally this is created from a horizontal plane or a vertical plane, and you can pass that into hostCloudAnchor. Asynchronously, that will create the Cloud Anchor in a couple of seconds. And what's the important thing that comes back from the Cloud Anchor? Right — the Cloud Anchor ID, thank you. And then, it is completely up to you what device-to-device communication you want to use. The demo we're going to show you in a second uses Firebase, and there are two other demos that also use Firebase as well. It's a great means to communicate between devices, but you can use any means you want.
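Here is a hedged sketch of that hosting side with the ARCore Java API; "localAnchor" is assumed to come from a plane tap, and sendCloudAnchorId() is a hypothetical helper for whatever channel (such as Firebase) you choose.

```java
// Hedged sketch of the hosting side (ARCore Java API); names are illustrative.
Anchor hostedAnchor = session.hostCloudAnchor(localAnchor);

// Hosting is asynchronous, so check the state on later frames.
Anchor.CloudAnchorState state = hostedAnchor.getCloudAnchorState();
if (state == Anchor.CloudAnchorState.SUCCESS) {
  String cloudAnchorId = hostedAnchor.getCloudAnchorId();
  sendCloudAnchorId(cloudAnchorId);   // hypothetical helper: share the ID with other devices
} else if (state.isError()) {
  // hosting failed; surface the error to the user
}
```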
So then, on Bob's device — and there's a really important point here: we can also have Bob, Jerry, Johnny, Eitan, as many users as we want — all they need to do to join that Cloud Anchor is receive the Cloud Anchor ID, the one that Alice just sent over. And then, we need to resolve the Cloud Anchor. Dead simple. All you need to do is pass in the Cloud Anchor ID; in the back end, under the hood, we'll take the visual features and match them. It's important that the user is, again, looking where Alice was looking. Then you're good to go, and you can start placing assets relative to that Cloud Anchor.
Now — what operating system were those devices in that code example running on? Anyone? All right. So here's the really important point: that same flow also runs on any iOS, ARKit-enabled device, which today is going to be iPhones. And we believe this is really important — incredibly important to making shared AR a reality. So now I'm going to invite Eitan up on the stage, and we're going to give you a live demo. It's one thing to say that everything works cross-platform, but it's another thing to show it with a live demo.
>>EITAN MARDER-EPPSTEIN: Who thinks I’m going to win? Raise
your hand? Oh, it’s tough. Who thinks James is going to win?
That’s the rest of you, right? >>JAMES BIRNEY: Okay. You are getting to know Eitan
better every minute. He sandbags a lot. >>EITAN MARDER-EPPSTEIN:
I’ve got to join this room. And now, I’m going to set up my
board and James, you set up yours.
>>JAMES BIRNEY: Yep. >>EITAN MARDER-EPPSTEIN: You
want to get close to me. >>JAMES BIRNEY: The state is
being reflected in both. I’m going to press that. Here’s our futuristic-looking
light boards. >>EITAN MARDER-EPPSTEIN: Here
we go. >>JAMES BIRNEY: And we have
people in the back organizing bets in case anybody wants to
make money off of this. >>EITAN MARDER-EPPSTEIN: I
feel like James has been sandbagging me in all of our
practice sessions because he’s doing much better than he has in
the past. Let’s see. >>JAMES BIRNEY: Oh, no. >>EITAN MARDER-EPPSTEIN: Oh,
so close. >>JAMES BIRNEY: All right. Did
I get him? Hey, it worked.
[ Applause ] And just to reiterate, that was an iPhone that Eitan was using, but on my side this could have been any ARCore-enabled Android device. And there's my clicker. Okay. Let's talk about use cases. That was gaming — that was an example of gaming working really well. But shared AR does not need to stop at gaming. Shared AR can make a big difference in
the world. Oops. Lance help me going back a
slide, please. Pretty please. Thank you. Okay. So four categories that briefly
let’s talk about. One is in the education space.
This is an example of — well, let me phrase it as a question, and raise your hand after I say two options. Option B: you can learn from an interactive 3D model that you and your friends can play with. All for Option A? Option B? All right. See, we're making improvements in how people learn. And the demo we're showing you here is an example that NASA built for us. This doesn't need to stop at space exploration; you can do this as well for any sort of visual area, such as biology — there are a couple of cool demos where you can explore the human body together. I'll leave it at that. Let's hop on down to
creative expression. We drew the white line in space, but we can go beyond that. Take, for example, this block-building app that was built by Euphoria. You can build a full block structure and 3D print it later. It's very, very cool. And you can imagine what this would look like, as well, with the AR Stickers — imagine placing Stormtroopers, or placing a Demogorgon, and having the fight between different phones. Now, you can do ice fishing
with your friends — looking at the sidewalk and turning it into an ice fishing pool. Beyond ice fishing, you can imagine playing laser tag with your friends, now with just your phones. You don't need to buy special gear. Two people quickly pair, host and resolve, and then you're off and playing laser tag — and Cloud Anchors
are not limited to two devices. And then, shopping. How many of
you guys have bought something, and then when it actually showed up your partner vetoed it and you had to return it? Show of hands? Yeah, that's like
a big pain, right? And then, you have to go through, find the UPS
store, FedEx store, mail it back, that’s not a good
experience. It’s a lot better if you can preview it with your
partners. So now, with Cloud Anchors, if
I’m placing a speaker system here, I
can have my wife also look at that speaker system from her
phone. There’s a feeling of consistency
and that feeling of trust that you built if you’re the
advertiser or e commerce site. If you have two users looking at
it and it shows up consistently for both of them, it builds this
trust. The product I’m buying, when I’m
previewing it, it’s looking that way. It’s showing up on multiple
devices. So that’s everything for Cloud
Anchors. Now, let’s talk about getting
started. So ARCore already supports Unity and Android Studio for Android native development. As well, since Cloud Anchors are cross-platform, we provide an SDK so you can do your development in Xcode, too. All four of the SDKs are live as of yesterday at 1:00 p.m. [ Applause ]
Thank you. So for the folks here at I/O,
you have a bunch of resources at your disposal. Please take
advantage of them. There are three awesome demos in
the Sandbox. If you like playing the light board game and want to play Eitan, he's over there in the AR Sandbox; he'll be there until somebody beats him. We also have the Just a Line demo over in the Experiments sandbox. And then, the demo that Eitan
showed with this picture frame, as well as two others, are available in the AR Sandbox. It's a really, really fun exhibit. Please go ahead and play around with it; I suspect it'll give you a bunch of cool ideas for what you can build. For code labs, we have over 80 workstations set up. Please play around with them. Every workstation is paired with an Android device. Not only can you go through the code, you can
compile it on to the phone and then, you can play with — you
can see what the code you just built actually works like on a
phone. And then, we also have office
hours, please take advantage of that. We have incredibly intelligent
staff to answer questions. And the ARCore team is incredibly busy giving talks this week; please take advantage of those. We've done an awful lot of work to give you a concise explanation. There are two more today and two more tomorrow. And then, after I/O, or for the folks online, developers.google.com/ar has all of the
resources plus the code labs are available on there. And again,
all four of our SDKs are available as of yesterday. Thank
you very much. Appreciate your time. [ Applause ] Thank you for joining this
session. Brand ambassadors will assist with
ushering through the exits. If you’ve registered for the next
session in this room, please clear the room and return via
the registration line outside. Thank you. >>Hello, good morning, and
welcome to today's session, Frictionless Android Testing. It's so great to see such a great turnout. Thank you so much for coming. My name's Jonathan, and I work on the Mobile Ninjas, a team within Google passionate about testing. It's likely you've used some of our products, both within and outside of Google. There's a general consensus on the virtues of writing tests. There's a cost to writing tests, but it's one that quickly pays
dividends throughout the life cycle of your project. A bug caught early on is far
cheaper to fix than one found after the application is deployed. Tests give you a safety net for making changes to your code: you're free to refactor, clean up and optimize. What's more, a suite of readable tests provides a living, breathing specification of your application's behavior. There exists the concept of the
testing pyramid made up of three layers, unit, integration and
end-to-end tests. As you move up the pyramid, tests become more realistic, but it also gets harder to maintain and debug these kinds of tests. You're leveraging the advantages of one layer to compensate for tradeoffs in others, to produce a holistic automated testing environment. We recommend a 70/20/10 split as a general healthy guideline. And now, while the same rules apply, the unique characteristics of Android development have introduced difficulties along the way. So, unit tests, due to the need to be fast, run on the local workstation; integration tests run on a real or virtual device. And so, separate tools have evolved at each layer of the pyramid: Robolectric, or the mockable Android framework, for the local unit tests; Espresso and the Android Testing Support Library for the on-device tests. Now, Android's got really
familiar core concepts, such as getting a
handle to your application context or maybe driving your
activity life cycle. And each of these tools has its own distinct APIs and ways of achieving the exact same tasks. This has led to confusion when writing tests: as a developer, it's hard to know what tools are available for use and which of those are recommended. Having multiple tools at each level has led to an explosion of styles, each with their own distinct patterns and APIs. And this, in turn, leads to a lack of mobility between layers: tests can't be easily refactored or reused between layers in the pyramid without being completely rewritten for a new tool. To discuss this further, let's orient ourselves around what constitutes a well-structured test.
There are some common patterns that define a well-structured test. We generally break tests down into three clear sections, and I like to separate them with a blank line to demarcate them. So: given some predetermined state of the system; when you execute an action; then verify the resulting state of the system or the behavior that occurred. Be sure to name the test after the condition you're testing and the expected outcome. Keep each test focused on one very specific behavior, and test all of the behaviors independently — these guidelines will help you keep each test understandable in isolation. Use common setup methods for scaffolding; this might be creating the object under test and wiring up some dependencies.
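As a minimal, self-contained illustration of that given/when/then shape (a plain JUnit example, not code from the slides):

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class WellStructuredTestExample {
  // The name states the condition under test and the expected outcome.
  @Test
  public void addingAnItem_increasesSizeToOne() {
    // given: some predetermined state of the system
    List<String> items = new ArrayList<>();

    // when: execute a single action
    items.add("badge");

    // then: verify the resulting state or behavior
    assertEquals(1, items.size());
  }
}
```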
Let's take a look at the problem of the explosion of styles and highlight it using a simple test case. We'll have a single activity with one button that responds to a click by sending an intent to the Android system. And I'm going to walk you through this test case comparing the different styles of Mockito, Robolectric and Espresso. First up, let's consider
Mockito. Sometimes it’s used
inappropriately or overused. Mocking your own classes is great, but while mocking the Android framework may seem like a good idea at first, it can lead down a path of difficult problems. Many classes are stateful, they have complex contracts, and these are difficult or impossible to satisfy with mocks. So even though we don't recommend it, let's just start by taking a look at one of these tests and walk through it using the Mockito framework. In the given section, we first of all have to stub out the framework behavior in the superclass activity so that it responds as we expect in our tests. Now, this introduces some problems. First of all, we're partially stubbing the object under test; we're not testing its true behavior. And furthermore, it brings with it excessive stubbing, which introduces all of this undesirable boilerplate. For the when section, to get the test to execute the code under test, you have to register an argument captor earlier to get a handle on the click listener, which you then invoke manually to call the code you're wishing to test. You can descend into argument captors, stubbed calls and answer invocations. Finally, you're going to have to use another argument captor. Mocking the Android framework in
this way tends to force you into
testing implementation details when you want to be testing
behavior instead. And furthermore, these drawbacks
have tended to lead developers to build their own abstractions to isolate Android. This leads to its own set of problems. Firstly, you're introducing another layer of cruft into your application. And secondly, you're introducing testing gaps where bugs can hide. And we believe that while you should architect your application very thoughtfully, the limitations of the tools shouldn't dictate your
application architecture either. So let's see how this looks with Robolectric, the popular open-source testing framework. You're able to use real Android objects in your
tests rather than having to program your own stubbing
behavior in each test. It runs on your local host, which means,
it's very fast, making it ideal for unit tests. Robolectric tends to create tests that read a lot cleaner. Let's walk through each section in turn. In the given section, we can simply bring up an activity just by calling the setupActivity API. In the when section, we're able to use real Android SDK APIs to get ahold of the views and Robolectric's click APIs to invoke the code we wish to test. Finally, we use Robolectric's testing APIs to check that the intent was sent to the system. See how much cleaner this version is — we're focusing on the items that really matter in the test, free from extra distraction.
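Putting that walkthrough together, a hedged sketch of the Robolectric version might look like this; LauncherActivity and R.id.send are illustrative names rather than the actual sample app.

```java
import static org.junit.Assert.assertEquals;

import android.content.Intent;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.Shadows;

@RunWith(RobolectricTestRunner.class)
public class LauncherActivityRobolectricTest {
  @Test
  public void clickingSend_sendsIntentToSystem() {
    // given: bring up a real activity instance on the local JVM
    LauncherActivity activity = Robolectric.setupActivity(LauncherActivity.class);

    // when: use real Android APIs to find the view and click it
    activity.findViewById(R.id.send).performClick();

    // then: use Robolectric's shadows to check what was sent to the system
    Intent started = Shadows.shadowOf(activity).getNextStartedActivity();
    assertEquals(Intent.ACTION_SEND, started.getAction());
  }
}
```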
Now, espresso is a UI testing framework and runs on a
real or virtual device. It provides you with a really
realistic environment. The tradeoff here is a much lower
execution speed. You’re building up your entire
APK, deploying it to the device, and
instantiating the test run, waiting for the results and collecting those back on your local workstation. All of this costs you valuable development cycles. Now, the exact same Android
concepts exist here, we’re just getting ahold of an activity,
clicking a button and then verifying the intent sent to the
system. Let’s step through this one together.
In the given section, we’ll use the activity test rule,
which comes from the Android testing support library.
This can be used to start an activity and provide us with a handle to
it within our tests. For the when section, we can use the Espresso view APIs to find the view in question and safely click on it to invoke the code that we wish to test. And finally, we use the Espresso intents library to capture the intent and verify it was the one we wanted sent to the system.
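A hedged sketch of that Espresso version is below; the package names are the current AndroidX Test ones (an assumption, since the talk predates the rename), and LauncherActivity and R.id.send are again illustrative.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.intent.Intents.intended;
import static androidx.test.espresso.intent.matcher.IntentMatchers.hasAction;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import android.content.Intent;
import androidx.test.espresso.intent.rule.IntentsTestRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LauncherActivityEspressoTest {
  // given: the rule launches the activity (and records outgoing intents) before each test
  @Rule public IntentsTestRule<LauncherActivity> rule =
      new IntentsTestRule<>(LauncherActivity.class);

  @Test
  public void clickingSend_sendsIntentToSystem() {
    // when: find the view in question and safely click it
    onView(withId(R.id.send)).perform(click());

    // then: verify the intent that left the app
    intended(hasAction(Intent.ACTION_SEND));
  }
}
```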
Notice that while this test has a very similar structure to the Robolectric test, the syntax is very, very different. Each has contrasting strengths and weaknesses, and it's this explosion of styles that's become the big problem for writing tests. Who has been using Robolectric for writing tests? Raise your hands. And what about Espresso? And who is using both? Wouldn't you rather just think about writing an Android test instead? We feel that as a developer,
no matter what kind of test you’re writing, you shouldn’t
first have to think about environments and tools and
libraries that you’ll need. We believe that you shouldn’t
have to suffer the mental load of having to learn multiple sets of APIs for
exactly the same thing. Use your code no matter where
you choose to run it. What if there was only one set
of APIs that you needed to learn? And now, imagine, also,
being free to focus on writing on your test rather than
considering those tools and libraries and environments. We’re unifying the experience
around a canonical, high-quality set of
APIs that will reduce the boilerplate and eliminate the
number of tools you need to learn. Naturally, support is included allowing you
to write beautifully concise tests and all of this will be
open sourced, we love contributions from the community
members. We’re going to satisfy
developers’ needs in each of the four key sections of the test. Remember, scaffolding given when
and then. Scaffolding encompasses the
configuration and control APIs. Getting ahold of that
application context. Today, we’re excited to announce
for the first time, you can now use
these APIs for your on and off device tests. The next section of a test
is the given section and here we’re going to provide two key
categories of APIs for you. Firstly, the Android rules from the testing support library will become part of JetPack, and we'll soon be adding more to help drive the lifecycle for you in tests. As you've seen previously, the ActivityTestRule is used to start your activity and make it available to the test in the resumed state. You've probably used this API running
on a device many times before. Well, today, this API, too, will
be available for tests that run off device, as well.
Secondly, we’ll be providing you with a set of Android test data
builders. These will help you construct Android objects that
your code and the test will interact with. Many of the Android framework
classes that you need for setting up your test data are difficult to create. Often, there's no constructor. Perhaps they're final, so mocking's out of the question. And sometimes, they're plain clumsy to instantiate with any degree of brevity at all. So we're including Android test
builders to give you a concise way to set up your test
environment. They produce readable code. A fluent way to create the
Android components you need to interact with.
And, of course, they’re portable. Both your on and off
device use cases. Next in the structure of a test is the when section. Usually, this is a case of calling your own code directly, but when you're writing a UI test, it's likely you would reach for the Espresso APIs. Today, we're happy to announce that Espresso, too, is joining JetPack. The APIs read beautifully. You've used them for your on-device tests for a while now, and today, we're providing preliminary support for these tests in the off-device use case
also. The final part of a
well-structured test is the then section. This is where you make
assertions on the state of the system in response to an action. So firstly, Espresso intents is joining JetPack — the APIs you've used for your on-device testing. Great news: today they, too, will run in your off-device tests. And finally, we're also releasing an assertions library to help reduce the boilerplate
in your tests. With the standard JUnit assertions, see how easy it is to get the arguments mixed up; this makes it difficult to comprehend the error messages in tests. At Google, we love to use Truth, our own open-source, fluent testing assertions library. Using a fluent assertions library is a great step toward producing readable code, and you can lean on the built-in support of your IDE's autocompletion feature. To help you write concise tests against Android code, we'll be releasing a set of Truth extensions for Android, which reduce the boilerplate and give meaningful error messages. Of course, these will work across all environments, for both your on- and off-device tests.
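To illustrate the point (a small, self-contained example, not code from the slides):

```java
import static com.google.common.truth.Truth.assertThat;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertionStyleExample {
  @Test
  public void fluentAssertionsReadLeftToRight() {
    String action = "android.intent.action.SEND";

    // Classic JUnit: easy to swap "expected" and "actual", which garbles the failure message.
    assertEquals("android.intent.action.SEND", action);

    // Truth: reads left to right and produces a clear, descriptive failure message.
    assertThat(action).isEqualTo("android.intent.action.SEND");
  }
}
```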
We'll be bringing you the tools you need so you can concentrate on writing beautifully concise, easy-to-read tests: a single set of canonical APIs for common tasks that will reduce the boilerplate, leaving your tests clear, readable and environment-agnostic, able to run on your local
workstation or perhaps in a cloud test lab. And now that
I’ve shown you how to use the unified set of APIs that
decouple the act of writing a test from where it’s going to run, I’ll hand you
over to my colleague Stefan who is going to show you
how to run the tests in a new,
simplified world. [ Applause ]
>>STEFAN RAMSAUER: Thank you, Jonathan. Welcome,
everyone. Quite amazing. I want to say a
big welcome to everybody joining us at the live stream today. At Google, we believe that
testing should be a fundamental part of
your app development strategy. Let’s bring back the pyramid. As we can see, our friend with
the JetPack has solved the API dilemma. Now that we have one API to rule
them all, it’s easy to start writing tests for Android.
No more excuses. We can start with a simple test
for our business logic. Usually, a unit test. And over time, we can add more
and more tests. Once we start implementing the UI for our
application, it might be worth adding an integration test. This just became a fluent
experience, since both layers support the same APIs. And that's not the only use case where the new API comes in handy. Let's
imagine we have an integration test that got too large and
complicated. I have seen many of those tests.
It’s just too convenient to test business logic and UI flow in
one single test. At Google, we don’t like those
tests. They are hard to read and they tend to become flaky. Small, well-focused tests are
much better. So let’s refactor this large,
complicated test. Refactoring a large test can be
a very painful task. Who in the
audience has experienced this recently? Raise your hand.
Here’s some good news for you. If the original test was
written with the new API, this task
becomes much simpler. We still have to test our UI.
And we’ll keep this part in the integration test. This layer provides high
fidelity. And we don’t want to lose that
for our UI tests. The rest can directly go into
the unit test layer. Here, we gain speed. Since we can run off device. Let’s verify the refactoring
together. We still have high fidelity, that’s good. We gain speed, even better. And
the tests are decoupled and less complex. But wait, EF to we have to
choose an environment. It’s not always trivial to pick the right
one. We also have to work in multiple
source sets . The combination of run time environments, plus source sets
will lead to an explosion of test configurations. Just imagine how hard it will
get to choose the right configuration. And don’t we need to run the
entire test on our continuous integration server before we
submit? And you all know the feeling
after we kicked off the test run. Waiting eagerly for the results. Oh, no, one of the tests is
flaky. I'm pretty sure many of you have been in this situation. Flakiness is one of the biggest pain points for developer productivity. So what if there was a better
way to set up your test harness,
execute your tests in a reliable environment
with unified test results. Today, we are proud to announce
Project Nitrogen. The new single entry point for all Android
tests. Nitrogen is fuel for your
JetPacks. It helps you build the fastest, most reliable tests, and it covers everything from setup to test execution and reporting. Nitrogen tests behavior consistently across different systems, and it will be fully integrated in Android Studio. At Google, we already use Nitrogen to test our own apps, such as Gmail, Google Maps, Photos, YouTube and many more. Nitrogen is highly extensible: you can customize the test invocation at any point. Nitrogen will be fully open
sourced later this year. Let me give you an overview. Starting with setup, Nitrogen
connects your test to any execution environment. On device, off device or in the
cloud. It installs test artifacts, and if necessary, it can run custom fixture scripts for you. The target can be a real device. Once the device is ready, Nitrogen installs the application and stages any test dependencies. It can all be done here — let me give you some examples: setting up a network tunnel, and much more. And finally, it prepares the device
for the next step, test execution. Here we have sprints, mid
distance and long distance runs. We also have different
environments. Trek running, road running,
cross country running. A unit test on your workstation
is a fast sprint, as fast as possible to the finish line. The test on your continuous
integration server is the marathon. The goal is to get to the finish
line and not to fail along the way. These tests are executed in
different ways: from the command line, from Android Studio, or triggered automatically when you submit. With Project Nitrogen, this will become consistent. We provide a well-defined protocol, unifying all the Android test offerings. Nitrogen uses our own device
infrastructure to run your tests: the Android Test Orchestrator and the AndroidJUnitRunner. You can enable this infrastructure on device today; it has been available since Android Studio 3.0. The Orchestrator collects your tests and kicks off test execution. By running each test in a separate process, shared state is minimized and crashes are isolated. Moreover, your tests are executed in a familiar JUnit environment, provided by the AndroidJUnitRunner. The Orchestrator collects all
test results, additional artifacts and streams it back to
Nitrogen. Nitrogen provides a unified
reporting format for these test results. In addition, it provides a huge selection of test output data,
such as screenshots, profiling data, performance and much more. All artifacts are scoped per
test. This means, for example, a logcat snippet is reduced to just the test method. No more hundreds of lines of logcat. So let me show you how Nitrogen improves the entire
testing flow on Android. First, Nitrogen finds the device
and configures it for the test run. Second, it runs tests in
isolation using the Orchestrator. And finally, while tests are running, Nitrogen's host infrastructure will be streaming the test results from the device, pulling
down all test output data and feeding it back to you. With Nitrogen, all of this
complexity is hidden from you. Whether you’re running a
test on Android Studio or on the CI
server. Nitrogen is the single entry
point for all Android tests. It works from a sprint to a
marathon. Nitrogen can request a device in a test lab, reliably running your tests and returning unified test results. Nitrogen supports Google Cloud: you can deploy and run tests on a real device or a virtual device. Nitrogen works seamlessly with Robolectric; we have made big improvements in start-up time and memory consumption, and it's released today: Robolectric 4.0. And if this is not enough, Nitrogen supports your custom needs — for example, if you have an in-house device lab. Now, allow me to summarize. Previously, you had to learn
multiple approaches for doing the same thing. Tools lack the mobility to move
between the layers of the pyramid, and you had to choose wisely. We have reduced the cognitive load by providing you a single set of APIs that work across environments, for both on- and off-device scenarios, and a single entry point for Android tests with customizability at any point. JetPack is a giant leap forward in test automation for Android: write your test once, run it everywhere. [ Applause ] This is just the beginning. Now you have both the tools and the knowledge to accelerate your testing experience. I strongly encourage you to check out our codelabs, especially our latest addition, allowing you to build large Android apps at Google scale. If you have further questions or
would like to discuss your testing
strategy, come find us tomorrow morning at
morning at 11:00 a.m. at office hours in Section 8. We hope you enjoyed our session. We would love to hear from you. With that, happy testing. Thank you for joining this
session. Brand ambassadors will assist
with directing you through the designated exits. We’ll be
making room for those registered for the next session. If you’ve
registered for the next session in this room, we ask that you
clear the room and return via the registration line outside.
Thank you.
>>SUMIER PHALAKE: All right. Hello, everyone, and welcome to
the Value of Immersive Design
Sprints. Have you designed and developed products for users
very different from you? Well, today, we’re going to go
over tips and strategies to deeply understand your user and
to create effective products for them at speed using
immersive design sprints. I'm Sumier Phalake, and I work on building public Wi-Fi for users in next billion user markets. >>BURGAN SHEALY: And I'm Burgan
on the Chrome team focusing on the
next billion users. Let’s jump right in and talk
about — oh, what is a design sprint? We define a design sprint as a framework for answering critical business questions through design, prototyping and testing our ideas with users to gain actionable feedback and insights. Design sprints were created at
Google and they have evolved over time. There are a mix of
methods from design thinking, business strategy, psychology
and user research that were all selected and ordered specifically to
support divergent and convergent thinking and to drive toward a
targeted outcome. What’s different about a design sprint is that we set clear
goals and deliverables up front. We time box all of the
activities so that we not only speed up the learning and
development process but we also drive the right behaviors from
our users. And then, finally, we enable a wide range of
disciplines and stakeholders to participate in the design and
development process. Now, design is not an individual
sport. And design sprints are a highly collaborative process. We
want to be able to include multiple perspectives and points
of views. And to that end, we leverage sprints to include all participants’
voices. But the design sprint is a
highly flexible framework and you’ll want to flex it and adapt
it to your particular challenge or scenario. You can reduce the risk of
downstream mistakes by getting to insights that lead to a
better understanding of your user and your problem space
quicker. So today, we’re going to talk
about a way that we have flexed the design sprint to really get
a better understanding of users that are very different from
ourselves. Through immersion, intercepts
and cocreation, we’re bringing
actual users instead of representatives of those users
into the center of the design and development process. There’s a phenomenon known as
the co-creation effect, which shows that when organizations and
consumers create something together, both parties are more
invested in the outcome. What does this look like in
practice? We're going to start by bringing together a team from very different disciplines to create a powerful team. This
is going to allow us to dramatically reduce the amount
of time that we spend communicating and increase the
number of viable ideas. We’re going to walk through the
five sprint phases. Understand, sketch, decide,
prototype and validate. Now, we have a mix of structured
individual work and planned group discussions that support a clear diverge-and-converge process, which will guide the team to get on the same page around the artifact it's going to produce to test with users. Now,
the ideal design sprint is going to provide time for team members
to reflect on the challenge, and also, opportunities to validate
or disrupt those ideas. So the goal for the first phase,
the understand phase is to gather all of the relevant data
and information at hand. We really want to create a shared
brain and align around a new world together and a new
understanding of the problem space.
Then, we’re going to move on to the sketch phase. And this is
when we’re going to generate tons and tons of ideas, tons of
potential solutions. And we’re going to use a couple of
exercises that are designed to get the ideas out of your
participant’s head on to paper and to really push them to go
beyond their first idea. They get better and more innovative
ideas. Now, once we have all of these
ideas, we’re in the middle of a sprint, we have tons and tons of
solutions, it’s time to start narrowing down and making hard
choices. So the decide phase is all about
those hard decisions and selecting what you’re going to
prototype with users. You can test your hypothesis and
validate your assumptions. Maybe you have a day to build
that prototype, it should only be as high fidelity as needed to
answer your questions and nothing more. Finally, we’re going to
validate. The goal of the test phase is to confirm the product
you prototyped is good and you prototyped the right thing. We’re going to discuss two case
studies to show you how we’ve used this framework to give
users the tools to engage in the ideation and creation process. Really, you can sprint at any point in the life cycle of your product. You might be at the beginning and want to use a sprint for visioning or creating a road map. Whatever point in the life cycle
of your product that you run a sprint, it’s important to
remember that involving users early and often can help
validate the problem space and create better ideas. And there are a couple of
outcomes from a sprint. I like to say, there’s never a
failed sprint. Because you always learn something. We have defined three potential
outcomes and the first is an official failure. It can be a little dispiriting that something didn’t work, but it’s useful to know. You’ve saved all of the time, money and resources you would have spent building out the product and then learning it didn’t work. Now, most sprints fit in this
middle bucket, what we called flawed success.
Some things worked, some things didn’t. But you have a
path forward. You know what you need to do next. And sometimes,
you have an epic win. You’ll need to set expectations
at the beginning of the sprint so everyone is aware of this and
know what the outcomes might be. Let’s talk a little bit about
immersion. >>SUMIER PHALAKE: And the
reason that we rushed through that is
because we’re assuming if you’re here, then you’ve heard of a
design sprint before. You have most likely taken part in a
design sprint before and maybe some of you have even run your
own design sprints. So we’re going to jump into
immersion. Now, before we talk about
immersive sprints, let’s understand what immersion
actually means. To think about immersion, I went
to Google and looked up the dictionary. And here’s a
definition of the term immersion. Immersion is instruction based
on extensive exposure to surroundings or conditions that
are native or pertinent to the object of study.
This is a lot to take in. So I kept Googling. Here’s another
definition, much simpler. Immersion is a deep, mental
involvement. Here’s the third one. Immersion
is the state of being deeply engaged or involved, absorption.
All right, there’s common themes emerging from all of
these definitions. Let’s bring this back to UX. How can
immersion guide your interaction with users? Especially, users
that you don’t know very well? I like to think of it in terms of
three steps. Immersion is about
awareness. So, first, do the background research and study that it actually takes before you go into a new environment and meet new users. That research provides the context you need to process what you observe correctly, observing without bias or judgment. That’s awareness. Step two, immersion is about
listening. So you’re going to hear a lot of
new sounds, process a lot of new sights, you’re going to talk to
a lot of users you haven’t interacted with before. And
these users are going to tell you about their lives, about
their problems. Problems that you haven’t faced before. And so, it’s really important to
listen attentively. And this is actually pretty hard to do. If
you tried yourself right now, when you try to listen to
someone speaking, it’s very easy to let your mind wander and
think about what you want to say next. It’s very hard to
mindfully listen. I hope that’s not what you all are doing right
now, by the way. It’s really important to listen
attentively. That’s step two, and step three
is taking awareness and taking listening and bringing it to
empathy. This is when you’re actually
stepping into someone’s shoes and experiencing what they
experience, understanding their needs, pain points and hopes and
dreams. When you have awareness, listening and empathy, all three things together, that’s when we consider it immersion. How does this relate to design sprints and design? You’re making a call and
designing solutions for users based on
what you think is going to work well for them. Where does this
come from? It comes from intuition, right? And how does
it get built? It comes from a lifetime of
experience. It comes from knowledge and study, trial and error, like launching a lot of designs and seeing what works and learning from success, learning from failure — it’s a constant loop of learning and tweaking. But when you’re designing for users
that you’ve never interacted with before, you don’t have this
intuition. What happens in the context of a
sprint? In a sprint, you’re time boxed,
doing things really fast, it’s a high pressure situation. We’re often placed in
situations like this. Because we both work on products where
we’re distant from our users. We don’t understand our users very
well that often. What happens? You’ve seen that in sprints like
this, there’s analysis paralysis.
Because your sprinters are not confident about deciding how
to move forward with your design. You end up making safe choices.
So your design solutions are not strong, sprint outcomes suffer.
You just don’t see that big leap that you want to see at the end
of a sprint. So we need to help you and your
sprinters build intuition. How do we do this? So we think immersion is a great
solution to doing this. Good design and sprints is about
making educated and informed decisions about how to solve a problem that your
users are facing. With immersion, you’re helping your
sprinters gain the foundation to confidently design solutions. Let’s bring it all together: immersion helps you guide your sprinters, drive great outcomes and validate those outcomes.
If you don’t understand your users really well and you want
to design solutions for them in a fast-paced sprint environment,
then immersion is a great technique to set you up for
success. How do we do this? We’re going over a few techniques today. One is the participatory design sprint, where you actually ask your users to come to the sprint with you and take part in it as a sprinter. And then, last, instead of
saving validation for the last stage,
incorporate validation at every stage of the sprint. Make sure
you check in with your users at the end of each day so you know
you’re on the right path. We’re walking through two case
studies. First, I’m going to hand it over to Burgan to talk about an
interesting sprint she ran in Kenya. >>BURGAN SHEALY: We were
kicking off this app to help herders survive ongoing drought. Pastoralists move their livestock between pastures depending on the season. And rainfall patterns have shifted, forcing herders to find new routes, pastures and sources of water. Word of mouth, scouting and ancestral knowledge have proven insufficient to maintain healthy herds in this changing landscape. An organization called Project Concern International, a nonprofit based in San Diego and a Google.org grantee, created a program that relayed satellite data to
pastoralists that showed current vegetative conditions. The proof of concept was
highly successful; we saw an 80% adoption rate in the local populations it was tested with. But the delivery system was cumbersome and inefficient: district livestock officers would then pass the maps on to the locals. So this process was slow
and difficult to scale. So our question was, would it be
possible for us to cut out those middle men and simply deliver
the maps from the satellite straight to
smartphones? As we planned the sprint, we
recognized the difficulty in properly understanding the
cultural, societal and usage scenarios encountered by
a population so different from ourselves.
But we knew that the community as experts of their
own experience would have the historical context and
the knowledge of what was and what wasn’t important to the
community. Rather than consult the community, we decided to
involve it. Now, as designers and
developers, we often create products for people. Co-design is the act of creating
with stakeholders and users, ensuring that the people most affected by the
solutions are part of the development process. And this is
really based on the idea and the belief that all people are
creative. The design sprint gives us a
framework to put this into practice. And through the sprint
process, we can guide users to the ideation and creation
process to really give voice to their ideas. As you can imagine, we had a lot
of different perspectives around this challenge. Had to build the
right sprint team and quite a large team with a number of
diverse stakeholders. We had our geographic
information system specialist who was our mapping specialist.
We had local developers and a UX researcher from ihub, which is
an incubation space in Nairobi. And we had UX designers from
Google. But most importantly, we had two
pastoralists from different districts in Tanzania. Now, this team required the
integration of experts and users really working closely together.
So empathy between our participants was essential and
face-to-face communication was best. So we all gathered in ihub’s
office in Nairobi. Now co-creation can be applied
at any stage of the design development process through
different activities. And as we created the schedule,
we thought deeply about how to include our users at each stage
of this process. And by incorporating user feedback throughout the process, we counter the human tendency to rely on that first piece of information to make subsequent judgments. Our users helped us understand
the problem space through lightning
talks and documenting. You can see some of the topics we
discussed were technical infrastructure, offline
experience, battery life, traditional decision making. These aren’t things we would have talked about if we ran this sprint in San Diego or here in Mountain View. We learned the elders make a lot
of the decisions of where to move the
herds. But the younger generation are the ones with the
smartphones. We had to take those types of
things into consideration as we considered the design. To make the pastoralist
experience resonate with all of the team members, we broke into groups and created
user journey maps. These are really important; people relate well to stories about people. We used Crazy Eights to generate
a ton of different ideas based on everything we learned in the
understand phase and to help our users with this activity. We
really gave some structure to it. We did this in four phases. And we
gave our users more power in the decisionmaking process with
extra votes and reminded our other
participants to evaluate each idea considering
what our audience experienced and what
they wanted. Depending on the level of fidelity you’re aiming
for, one of the most difficult phases to incorporate users can
be the prototyping phase. We considered letting our users
go during this phase. But they wanted to stay. And their participation actually
proved crucial. When the designers were designing this
middle screen, they were struggling to come up with
iconography that would really represent some of the diverse
ideas that we were trying to get across. Like, this location has
no water. Or, scary animals were sighted
in this location, maybe you shouldn’t take your herd there. We were able to rapidly iterate
and show the designs to the users
and get immediate feedback so we were able to confirm which icons
resonated and which ones didn’t. So for user testing, it was
important to solicit feedback from a larger and more diverse group
than was just the sprint team. If you’ll remember, we had
sprinters from Ethiopia and Tanzania. But we were in Kenya.
We were able to go about an hour south of Nairobi and test our
prototype. And the approach we chose would
transcend tribal boundaries. The sprint process allowed us to
find common ground and develop a common language and work
together to fully understand the user needs from the outset of
the project. >>SUMIER PHALAKE: What an
impactful sprint. So cool. I’m going to switch gears a bit and
talk about the product I work on and that’s Google station. Now,
just as a recap, Google station is a product that’s bringing
high-quality public Wi-Fi to user markets
worldwide. A little bit of a caveat, I can’t really talk
about the actual product that we were designing in the sprint,
but I can talk about the process. And that’s what I’m
going to do here. So a while ago, someone on our team had an idea; we talked to this other team and that team got super excited.
We were all, like, ready to have this happen. But we didn’t
know how. We didn’t know what this experience would look like.
Great reason to have a sprint. We also had never worked
together as a team. So there was, like, no working
relationship. Another great reason to have a
sprint to bring teams together so they can establish trust.
Now, here’s another challenge. The Google Station team is based largely in Mountain View. This other product is based in Asia, and at the time, the user we were designing for was in India. So three different countries.
Really challenging. And going back to the slide that
Burgan showed earlier, we had an answer from the business side
how this product would work. And we also knew from an engineering
standpoint we could most likely make it work. But what we didn’t
know is this something that our users even care about?
We didn’t know if this was desirable for our users. So this big gap was something we
needed to address. So we decided that we should run
an immersive sprint. So the sprint challenge was
design an experience aligned with user
needs that would combine functionality of two products in
a very seamless way. But before we could even talk
about seamless and the interaction design and all of
that, I really wanted us to figure out does this make sense
for our users. Is there a strong user focus
strategy for this product? And this was critical. We didn’t
know if this would work for our users. And this product strategy
needed to come from actual user insights. We didn’t know our
user that well. So we added two more phases to
the sprint. Before the sprint, we added research. After the
sprint, we added research. Different kinds of research,
though. Before the sprint, we added foundational research. #1K7 And we were going to
validate the design solutions we come up with with the research
to validate it. So that was also a different style of
research. This extended the duration of
the sprint. And we invited all of our participants to take part
in the research. We thought this was really important because
that would be an immersive experience for them and lay the
foundation for them to effectively participate in the
sprint and come up with great design solutions. So this was also a lot of overhead in terms of planning. That meant travel, going out
into the field. Understand the user context during foundational
research. Use the user insights to drive the entire sprint.
And then, when you have design solutions as outcomes,
take them back into the field to validate them with your users.
So that was the process that we followed. How did we do the research? For both the pre and post sprint
research, we used a technique known as intercepts. By intercepts, I mean going out
into the field, finding a user, asking them if they can spend a
few minutes with you and then doing an interview to gather a lot of foundational information about them, or showing them a prototype. This could be
a clickable prototype, it could be a live product. You can ask them to use this
prototype to do certain tasks and observe
how they do it. And this would be evaluative. It’s also in a user’s natural
environment. So instead of bringing a user to you and
asking them to come to a lab, you’re going out to a user.
You’re putting them at ease. And prototypes are totally okay.
One of the things I really love about intercepts is that they’re
really scrappy. You can put them together in a short time. And also they’re cross
functional; this gives the rest of your team beyond UX research and design, your PMs and engineers, a chance to take part in research and come face to face with the users. That was the research part.
Let’s talk about that strategy piece.
To figure out if we had the right user focus product
strategy, use a technique known as Value Prop
Canvas. We started with users who were using Google Station. We made a list of all of the features in that product, and of what the pain points of users were when using Google Station. So we had all of the user pains. Then, we made a list of
all of the gains that users are seeing by using that product. So we knew the user gains and
then went to the sprint, been through a lot of sketching,
diverged, voted on ideas, we converged, with came up with a
few solutions. We did the same with the
solutions. Are any of the features in the solution going to relieve the pains that the users are facing, or boost the gains that users are seeing? If both of these map, you know what you’re designing is great for users. Then you have to decide whether this makes sense to move forward with. This is what we used for exploring product strategy. If I hadn’t taken the time to
add a strategy piece, we wouldn’t have been successful.
Think about the sequence of methods, do a dry run, try to see if one
method feeds into the other, does the output of one method
feed into the input of the other. Is there a gap? Then address it.
Make stakeholders your partners, especially a broad sprint like
this, you want them to be your partners, want them to agree
with you, invest the time and money and resources and you want
to do daily check-ins with them. Get feedback, work on it. I’ve
never run a sprint that’s gone entirely according to plan. Things go wrong, people have questions, they question the very premise of your sprint. Things take too long to run,
methods don’t go as you plan. Logistical issues. So be nimble, be agile, be
prepared to change things around on the fly.
One of the great side benefits of a sprint is that
you’re bringing two teams together who have not worked
together and you’re going to drive a lot of alignment between
them and strong relationship between them. No matter what
happens, you’re going to get that and we got that.
And then, the last thing I wanted to leave you with for
this case study is that this was a super successful sprint for us, not because the idea worked out — we actually learned we did
not have a strong value proposition with this new idea. We decided we didn’t want to
move forward with this. The fact that the sprint helped
us understand very quickly if the time, effort and resources
required to build this was worth it makes it super powerful for
us. So you might be thinking, I’m
not going to Kenya and I’m not going to India. Why does any of this matter to
me? Have you designed for users in
regions different from where you live? I bet you have. Have you ever designed or developed for kids, the elderly, minorities, people less tech savvy than you? I bet you have. Have you designed for use cases
you aren’t familiar with? Have you made a product for the automotive industry or made a
fitness app and never been to the gym? You know? We do this
all the time. This is our job. Immersive sprints are perfect
for these use cases. And why they’re powerful is you now have the opportunity to bring your team with you to learn. Instead of
doing research in a silo and bringing
a research report back, you can bring them with you. This is
where the power of sprints really shines through. They’re cross-functional,
participatory and democratic. So now, I’m going to present
some tips to run your own immersive
sprints with Burgan. So the very first one I went over a little bit in advance: intercepts. A great research
method. They’re very scrappy and fast. They don’t work well when
you need to recreate a lab setting. So if you’re doing something where you need hardware, or you want to recreate a home, intercepts don’t work well. If you have an app or prototypes and want to
talk to users and learn about them, intercepts are great. Now, it’s important to know if
you go in a large group of ten people trying to talk to one
user, you’re going to scare them off. Don’t do that. Try to split
up into smaller groups. And then, split up and talk to
people on your own, come back, regroup and share your notes. There are a couple of other things to
be aware of. Be aware of cultural nuances. How to talk to people, address
them, do some research beforehand. Get good interpreters or
translators. There might be important nuances
that you would miss. If it’s possible and if your
user is okay with it, ask them beforehand if you can take a
video. If they say, yes, take a video of your interaction with
them. Because when you take a video
back to your team, it is super
powerful. Way more powerful than a research report can be. Paper prototypes and focus
groups are okay for this. Because you’re talking to so many users, you
correct for that. And then, watch out for selection bias. We have an idea in our head of
who our user is. So there’s, like, a bias to go and talk to
users that you think would be likely users of your app. Don’t
do that. You’re outside in the field. You have access to a
diverse range of users. So we try not to tell them
that, hey, I designed this, give me your feedback. They’ll be
nice to you and tell you this is amazing even if they don’t like
it. We try to save that information for after the
interview. >>BURGAN SHEALY: Those are
great tips and they’ll apply to co-design, as well. It’s a
fundamental change in the relationship between product
creators and users. Traditionally, experts have gone
out and observed, done interviews,
with mostly passive users. In co-design,
users are involved in the ideation and the creation
process. You really want to prepare your users in advance, let them know what
to expect in the design sprint, talk about
the different activities they might go through and help them reflect on their experiences that they normally
perceive as routine. So, perhaps, you ask them to do
a diary study beforehand or have them talk to other members of
their community to get a wider perspective. Logistics are also
quite important. You’re going to want to allow more time for each of the activities and
more time to explain them, as well.
Do you need a translator? Similar to intercepts, it might
be useful. Are you in the natural context of the
experience? Or are you asking the users to come into your
space? What would make them more comfortable?
What materials do you need to allow your users to fully express
themselves and to take their ideas down? Are they comfortable with
drawing? Would they be more comfortable if you gave them
something to build with? Different components they could
put together? Would they be interested in role-playing or story telling?
Think through all of the contingencies and the schedule
of your sprint. Now, facilitation is going to be
key here and you’re going to be
switching roles from translator to facilitating the users and
guiding them through the creative process. Finally, make sure you include a
diversive — sorry, a diverse and inclusive user set. And why do
we do this? This sounds like a lot of work. What are the
benefits of codesign? You’re going to get an increased knowledge and empathy for the
user for the full team. And that’s key. You’re going to have
all of your engineers, PMs face-to-face with
the users. You’re going to build better and more innovative
ideas. You’re going to have lots and lots more and better ideas.
Your decisionmaking is going to be more efficient and your
sprint participants are going to be more confident in the decisions that
they make. You’re going to have immediate validations of
concepts and ideas just as I discussed with iconography in
our app. We were able to immediately
validate what worked and what didn’t. You’ll be able to establish a
deeper and long-term relationship with your users. What if you can’t bring users
in for the full sprint? What if they’re not able to take that
much time off and participate in your sprint? We also often do something called extended user participation.
So this is an alternative where we bring users in at key milestones within the sprint. We’ll recruit users to do
check-ins at critical milestones at the end of each phase. We’ll do cognitive walkthroughs, look at sketches. You’ll want to plan time to
act on the feedback they give you. What if you can’t bring your
users to you at all or can’t go to them?
What do we do then? >>SUMIER PHALAKE: That’s when
remote sprinting comes in. And we’ll be honest, this has not been as successful for us. But sometimes it’s unavoidable. Your users are far away; you can use things like Slides or Docs to collaborate remotely, or Google Forms to send out a survey.
One of the things we’ve done, especially when you’re
doing research from many time zones away is
overnight research. An example was we were doing a sprint here
for a product that was based in Indonesia. So we did a lot of
work all day, packaged it up, sent it over
there. A researcher ran research for us overnight. And we had
feedback waiting for us the next morning. That was great.
Obviously, this is very situation-specific. So if you can’t do this, you might want to consider taking a break; instead of doing all five days, do two days at a time, break off, do research and then regroup the next week.
These are some techniques for doing remote sprinting. All
right. So the recap. We talked about intercepts, talked about co-design, talked about
extended user participation. And remote sprinting. With that,
we hope we’ve shown you the value of immersive design
sprints and given you the right tools to run your own successful
immersive sprint. We’d love to hear from you. Please provide feedback for the session by signing in on google.com/io/schedule. And if you want more helpful tips, go to designsprint.com. That’s it from us. Thank you.
>>BURGAN SHEALY: Thank you. May 9, 2018.
Google IO. Total mobile development made
fun with Flutter and Firebase. >>Welcome. Please fill in
the seats near the front of the room. Thank you. (applause).
>>Hello and welcome to Total Mobile Development Made Fun with Flutter and Firebase. I’m Emily Shack.
>>EMILY FORTUNA: I’m Emily Fortuna.
>>EMILY SHACK: Today we’ll
show you how fun and easy it is to build your very own mobile
app using Flutter live here on stage. But to kick things off,
we’ll tell you a little story about our friend,
Francine.>>EMILY FORTUNA: Francine was
the proud owner of a pet store and she realized the one thing that she needed for her business was a mobile app that would allow people to see her inventory and reserve fish if they were interested, so when they came into the store, the fish would be on hold and they could adopt them. Now, I know that Google I/O is
the worldwide gathering of aquarium enthusiasts.
>>EMILY SHACK: I’m not sure that’s quite right, but I know
we have a lot of mobile developers in the audience, so I think
they’ll still appreciate it.>>EMILY FORTUNA: It’s still
relevant. Today we’ll be live coding a pet adoption app for fish for
Francine on stage. So let’s switch over to the devices, and
we can see the app that we’ll be live coding today. As you see, we have these nice
cards with information about the fish.
They’re nice animations as you can slide between the
information about the fish. Some of the fish are even
animated. And if you decide that a fish is the right one for you, you can hit
that catch button and it is reserved.
Then you can hop over and you can see that it has disappeared from the
view of the other phone, so no one else can reserve that same
fish. Now, if you have a reserve view, which Emily has,
you can see that the fish she’s reserved is available, and if
she decides that Dart is not actually the fish that’s right
for her, then she can swipe up and it has been cleared
from your reservation. And if you decide that you have
a regret, if you let loose the wrong fish, you can shake and
your fish is back reserved. And as you noticed, there are all
these great sound effects to go along with all of your actions.
So let’s hop back to the slides. So whether you are creating
an app to create a home for unloved marine animals or just
coding up that app that you’ve been talking about for years,
what we hope you’ll find today is just how easy it is with
Flutter to make your vision a reality. One of the great
things about Flutter is you can get a native look and feel for both iOS and Android all
from a single code base, so you don’t have to learn multiple new
technologies just to get an app that can serve all of your users
and not only is it a native look and feel, Flutter actually compiles down to native code, so you don’t have to sacrifice quality to get a great-looking app. One of the reasons that I always look forward to developing in Flutter is
hot reload. So all I have to do is type my code into the IDE, hit save, and then
it’s instantaneously on my phone. The only downside is I no longer
have time to make coffee because my
code is already running.>>EMILY SHACK: Flutter also
has many powerful components you can integrate with that go
beyond the scope of Flutter itself. In this talk we’ll be
showing you how to integrate Cloud Firestore, which is an option for the server-side component of your
app. You can also integrate the accelerometer, that’s how we’re
showing you shake to undo and we’re also
going to integrate those crucial sound effects for the flushing
of that fish. All of this is done in the same code base so
that you don’t have to deal with the hairy details of those
device platform-specific API’s. So I’m going to jump on over
to the code base and we’re going to give a little tour of what
we’re going to be coding before we go ahead and dive into the coding portion. >>EMILY FORTUNA: So we went
ahead and typed some class declarations for you; typing is a little boring. At the top we have our main function, our entry point; that’s where we’re going to be building our top-level widget or component that we’re drawing in our app. Our top-level widget is very creatively called MyApp. MyApp is in charge of getting the information from Cloud Firestore, AKA Francine’s Fish Database, and then telling Flutter to draw that app to the screen. MyApp has a fish page and a
corresponding fish page state. Now, you’ll notice that fish
page is a stateful widget, whereas all the other widgets
you see are stateless widgets. If you remember when we demoed,
we had that undo functionality, so that’s some information that we need to store: a long-running, persistent record of what we last did to the fish, so if we decide that we need to undo that, we have it. So for this reason, we’re storing that in fish page so we can have that long-running persistent state. For everything else, we have
stateless widgets, so that it can just draw on the screen
whatever information we provide it. Fish page has a fish options,
that is that center section that allows you to scroll through all
of the available fish that you are able to adopt and
then fish options is made up of many profile cards and that’s
the individual cards that you have for your fish. Now, there’s probably a few more
class declarations in here than you would want to have if you were coding this for real, but we’re putting it all in one file so it’s easy for us to jump around and for you to see what’s going on. We have one more file, utils, with two classes in it. FishData is just a convenience class: when we’re writing information to Cloud Firestore, it will be in the form of a map, so FishData takes that map and converts it into a Dart object, and then when we want to write it back to Cloud Firestore, we convert it back to a map. LocalAudioTools is just a wrapper around the audio tools plug-in, which allows us to play audio files locally from our devices.
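As a rough sketch of that convenience class (the field names here are assumptions based on the card shown in the demo, not the app’s exact code):

```dart
// Sketch of a FishData-style convenience class: it converts between the map
// that Cloud Firestore stores and a plain Dart object. Field names are assumed.
class FishData {
  FishData({this.name, this.music, this.pH, this.reservedBy});

  final String name;
  final String music;
  final String pH;
  String reservedBy; // null when no one has reserved this fish

  // Build a FishData from the map Firestore hands back.
  factory FishData.fromMap(Map<String, dynamic> map) => FishData(
        name: map['name'],
        music: map['music'],
        pH: map['pH'],
        reservedBy: map['reservedBy'],
      );

  // Convert back to a map so the object can be written to Firestore.
  Map<String, dynamic> toMap() => {
        'name': name,
        'music': music,
        'pH': pH,
        'reservedBy': reservedBy,
      };
}
```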
So let’s go back to main. Emily has our app running up on
the devices, so let’s switch over to the device view and see
what we’ve got. Okay. I think this is a pretty good app to
start out with. Obviously we have a long way to go here. One
thing I’m noticing right off the bat though is that the text in
that app bar and in that button at the bottom is a little bit
larger than I might have on apps on my phone. Why is that?
>>EMILY FORTUNA: That’s a good question. Actually this is
a good illustration of Flutter’s accessibility features. So if
you are vision-impaired or if you are giving a presentation
and you want to make sure the people in the back rows are able to see the text,
you can set default text size on your
phone to be larger and Flutter will respond to this and then increase all the
text size in the app so we increase the text size without
changing anything in our code itself.
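Under the hood this is just Flutter’s Text widgets respecting the device’s text scale factor; a minimal illustration (not from the app) of reading that value:

```dart
import 'package:flutter/material.dart';

// Illustration only: Text scales with the device text size automatically,
// via MediaQuery's textScaleFactor. You can also read the value directly.
Widget buildScaleLabel(BuildContext context) {
  final double scale = MediaQuery.of(context).textScaleFactor; // e.g. 1.3
  return Text('Current text scale factor: $scale');
}
```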
>>EMILY SHACK: Okay. So now I can read this app, but
there’s not really much here that I’m reading yet. Why don’t
we go ahead and get some content in there. I think the most
important thing to start out with is that we have
nautical theming in the background to really show we’re
shopping for a fish.>>EMILY FORTUNA: Sounds good.
We can do that by setting the color of that section. We’re in
fish page state, there’s a build method; this is the method that gets called when Flutter wants to draw this widget to the screen. It exists for every single widget that you have.
>>Here we construct a Scaffold, which provides the outline for its children widgets and where they should be displayed on the screen. Scaffold has an app bar, that title bar at the top giving the name of our app; then we have the bottom navigation bar that allows us to toggle between the two individual views; and then we have this fish options widget as that main body. So to set the color, Emily will wrap this in a container and set the color of the container. Why are we wrapping this in another widget rather than just setting the color of fish options directly? This is actually a common design pattern in Flutter to favor composition over inheritance. If you can imagine, we could potentially have parameters in fish options to set the color and width and padding, but you could then maybe have 100 parameters and it could get pretty overwhelming. Having composition in the way we build our widgets keeps our APIs much leaner and it allows us to build combinations of widgets in ways that the API designer maybe originally had never envisioned. Emily specified indigo, hit save, and let’s switch over to the devices. There it is. Stare at that forever.
>>EMILY SHACK: That was pretty
fast, sub-second I would say. So we didn’t have to rebuild our
entire app to get that to show up. So now I like the color but
I think it would be nice if we had some content on top of that.
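For reference, a rough sketch of the page just described, including the Container wrapping used to set that indigo color (FishOptions and the page structure are assumptions based on the talk, not the app’s exact code):

```dart
import 'package:flutter/material.dart';

// Sketch only: the general shape of the page described above. FishOptions is
// a stand-in for the app's real scrolling card section.
class FishOptions extends StatelessWidget {
  @override
  Widget build(BuildContext context) => Center(child: Text('fish cards go here'));
}

class FishPage extends StatefulWidget {
  @override
  FishPageState createState() => FishPageState();
}

class FishPageState extends State<FishPage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Fish Adoption')), // the title bar at the top
      // A BottomNavigationBar would go here to toggle between the available
      // and reserved views; omitted to keep the sketch short.
      // Wrapping the body in a Container sets the color without adding a
      // color parameter to FishOptions itself: composition over inheritance.
      body: Container(color: Colors.indigo, child: FishOptions()),
    );
  }
}
```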
>>EMILY FORTUNA: All right. All right. So we can do that by
adding that profile card. So in our fish options right now
let’s go to that build function, and right now we have a
placeholder container, so we get rid of that and then we’ll
construct a profile card. Emily is also constructing a default
fish data because right now we’re not connected to our fish
database on Cloud Firestore, so fish data just has some placeholder images, a placeholder name and information, so that we can see what it looks like before we connect to Cloud Firestore. Now, if you’re familiar with Dart, which is what we are using to code, you might be noticing, hey, Emily is constructing this profile card but she’s not using new anywhere. What’s with that? Well, this is a good example of how the Dart and Flutter teams have collaborated. Flutter was saying, hey, we have a lot of Flutter users noticing that new is cluttering up their code because they’re constructing all these widgets. What do we do? The Dart team made new optional as part of Dart 2.0, so you can just write the profile card constructor with no new keyword.
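In other words (a tiny, self-contained illustration; ProfileCard here is a dummy stand-in, not the app’s class):

```dart
// Tiny illustration of optional new in Dart 2; ProfileCard is a dummy class.
class ProfileCard {
  ProfileCard(this.fishName);
  final String fishName;
}

void main() {
  final before = new ProfileCard('Frank'); // Dart 1 style: new everywhere
  final after = ProfileCard('Frank');      // Dart 2 style: new is optional
  print('${before.fishName}, ${after.fishName}');
}
```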
Emily has it all written up; we’ll hit save, and it’s going to hot reload. Switch over to devices.
>>EMILY SHACK: All right.
We’ve got a pretty empty card on here. I think what I want most
in my viewing of a fish is to see the big picture of the fish.
>>EMILY FORTUNA: Okay, okay. So let us modify the profile
card. So if we swap back over to the
code, inside profile card let’s look at the profile card’s build function. That’s pretty simple: we’re just calling get card contents and wrapping it in a Card, that’s a Material widget. Get card contents does three things. It’s calling show profile picture, which is clearly falling down on the job, it is calling show data, and then lastly show button. Let’s go into show profile
picture and make it actually display a picture. Take out that container and
let’s call in FadeInImage. Usually we’re going to be getting an image from Cloud Firestore, so that’s a call to the server and potentially asynchronous data; when we build, we might not have that data at the moment of building. Now, you could just pass an Image, but once it has that image data, it might pop in, and that could be visually jarring. So instead Emily is making a fade-in image that, once we have the image, will fade into view very nicely. Lastly, we’re specifying the fit: images have a width and height, and in saying BoxFit.cover we’re saying expand to fill all the available size even if it means cutting off some part of the image.
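Roughly, that call might look like this sketch (the placeholder asset and the imageUrl field are assumptions, not the app’s exact code):

```dart
import 'package:flutter/material.dart';

// Sketch of the fade-in image described above. The placeholder asset and the
// imageUrl field on fishData are assumptions.
Widget showProfilePicture(dynamic fishData) {
  return FadeInImage.assetNetwork(
    placeholder: 'assets/placeholder.png', // shown until the network image arrives
    image: fishData.imageUrl,              // the picture stored for this fish
    fit: BoxFit.cover, // expand to fill the space, cropping the image if needed
  );
}
```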
So we’ll hit save, which hot reloads, and we’ll switch over to the devices. It already faded in; you didn’t get to see that. So fast. All right. We’ve got our big picture of a fish, but I need to know more about that fish before I decide to adopt it. I want to know the fish’s name, its personality.
>>EMILY FORTUNA: Yes, that is very important for human fish
bonding. So let’s go back to the code. And we’ll modify show data and
we have declared three text widgets, first is the name
widget, second is the music widget, the fish’s preferred music, and lastly preferred pH. And to display those, we’ll wrap
those three in a column widget which is going to stack them
from top to bottom. Emily will hit save, it will hot reload, go
over the devices, do we see it? Yes, there it is.
>>EMILY SHACK: All right. So I can understand now this is Frank, his favorite pH is 7, but
the information is a little cluttered right now and I feel
like the name is probably the most important piece of this.
How could I set that apart?>>EMILY FORTUNA: All right.
We can do that by setting the text style. So instead how about we make it
bold and we could double the font size. That would help it
stand out a little bit more. That way people can look up
Frank by name. Let’s look at the devices again. Oh, much
better.>>EMILY SHACK: Okay, I think
we’re getting there, but I still think the Frank should be a
little bit apart from the rest of the information.
>>EMILY FORTUNA: Okay. Back at the code, how about we wrap that text widget in some padding. Now, there’s a really handy feature with Visual Studio Code, which is what we are currently using; this also works in Android Studio and IntelliJ. You can wrap a widget inside another widget and it takes care of all the indentation. Wrapping it in 8 pixels of padding is good enough for us; we’ll hit save, hot reloading, and look at our devices. Much better.
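Putting those steps together, showData might now look something like this sketch (field names follow the FishData sketch above; the exact font size is an assumption):

```dart
import 'package:flutter/material.dart';

// Sketch of the data section: a Column of three Text widgets, with the name
// set apart by a bolder, larger style and a little padding.
Widget showData(dynamic fishData) {
  final nameWidget = Padding(
    padding: const EdgeInsets.all(8.0),
    child: Text(
      fishData.name,
      style: TextStyle(fontWeight: FontWeight.bold, fontSize: 32.0), // roughly double
    ),
  );
  final musicWidget = Text('Favorite music: ${fishData.music}');
  final phWidget = Text('Favorite pH: ${fishData.pH}');
  return Column(children: [nameWidget, musicWidget, phWidget]); // stacked top to bottom
}
```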
>>EMILY SHACK: Okay. So I think the card is in a pretty good place, but I’m a little tired of looking at Frank. What if we got some real data in there?
>>EMILY FORTUNA: I like Frank.
But okay. Let’s connect to Cloud Firestore. So at the top, in MyApp, right now we are passing an empty list because we’re not getting any data from Cloud Firestore; instead, we’ll be making our call to Cloud Firestore. There are two different ways you could deal with asynchronous data, because we’re making a call to a database, or Cloud Firestore, to get our information. The first way to deal with asynchronous data is futures, or promises depending on where you’re coming from; this is if you want one bit of data asynchronously. Streams are your other option. Now, streams are if you need multiple pieces of data. In this case we want a snapshot of all the information on Cloud Firestore, and then when something changes on that database we want to get a new snapshot of all the fish that are still there. So in this case a stream is the correct option, and then there’s this convenient StreamBuilder which allows us to build widgets from the stream. So Emily has specified the source of the stream, this is from Cloud Firestore, we are getting a collection of fish profiles, and then we are calling the builder to build widgets based on that fish information. You might notice on line 32 we have this interesting question mark syntax. What’s going on there? We’re saying, is data null? If it is, don’t call dot documents; then you can assign that empty list to the documents variable. Otherwise, get the documents from Cloud Firestore. Then we’re taking that map and converting it to a FishData object, so we have a list of FishData, and we’re passing it into fish page.
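A sketch of that wiring with the 2018-era cloud_firestore API (the collection name and the FishPage constructor taking a list are assumptions based on the talk; FishData follows the earlier sketch):

```dart
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';

// Sketch only: build widgets from a Firestore stream with StreamBuilder.
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: StreamBuilder<QuerySnapshot>(
        // A new QuerySnapshot arrives every time the collection changes.
        stream: Firestore.instance.collection('fishprofiles').snapshots(),
        builder: (context, snapshot) {
          // The ?. guards against data being null before the first snapshot
          // arrives; fall back to an empty list of documents until then.
          final documents = snapshot.data?.documents ?? [];
          final allFish =
              documents.map((doc) => FishData.fromMap(doc.data)).toList();
          return FishPage(allFish); // assumed constructor taking the fish list
        },
      ),
    );
  }
}
```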
Now, you might be wondering, how are we accessing this Firestore variable? If you scroll up to the imports, you see we have these package imports, particularly cloud_firestore. Where does that come from? Well, if you go over to the pubspec, this is just a YAML file that allows us to specify all of our dependencies for Dart, and then pub, which is our package manager, is able to calculate all the versioning, pulls the packages down from online, sets up all the paths correctly, and then you’re good to go, you can just import them. So back to our main file. We have one more change we need
to do because if you recall, Emily constructed that fish data
object that was default, it had no information. Now we want to
take that stream that we just got from Cloud Firestore and display it. So here she’s saying if we have data, just take the first one, because we’re only displaying one card right now. So now Emily saved, it’s hot reloaded, we switch over to devices, we’ve got some good data. Okay. I’m looking at Flo, feeling pretty good, I think I might want to adopt her, but I’m not sure how I’d do that from this view.
>>Yeah, catch does not seem to
be implemented.>>EMILY FORTUNA: I think it’s
time to implement that show button. Right now show button’s onPressed is not doing anything. For onPressed we’ll pass in a function that’s going to be called when the user presses it. In this case we want the button to function as a toggle between reserved and unreserved, so we’ll be passing this: if it’s already reserved, unreserve it; otherwise, reserve it. These callbacks haven’t been implemented yet because we’re not writing to Cloud Firestore, so let’s implement those. Those
are in this reserve fish and remove fish. So we’re up in the
fish page state class right now, if you recall,
it’s a stateful widget. So in this case we’re going to be calling setState. That tells the widget, hey, there’s something that might have changed in how I’m displayed, so Flutter, you might want to redraw me. Emily is setting the reserved-by field to this user ID; that’s a unique ID from when we sign into Firebase, and it’s unique per device. Then remove fish is just the same, except we set the reserved-by field to null. So we have to implement our save method; that’s what’s actually writing to Cloud Firestore. That’s inside fish data. And it’s pretty easy: it’s going to take our document reference to that individual fish, and we’re going to say set data, and then we mapify our object.
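A sketch of those pieces inside the page’s State class (the collection name, the documentId field and the FishData mutations are assumptions built on the earlier sketches):

```dart
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';

// Sketch of the callbacks and the Firestore write described above. FishData,
// FishPage, the collection name and the documentId field are assumptions.
class FishPageState extends State<FishPage> {
  String userId; // set from the anonymous Firebase sign-in (unique per device)

  void reserveFish(FishData fish) {
    setState(() => fish.reservedBy = userId);
    _save(fish);
  }

  void removeFish(FishData fish) {
    setState(() => fish.reservedBy = null); // no longer reserved by anyone
    _save(fish);
  }

  void _save(FishData fish) {
    // Take the document reference for this fish, call setData, and write the
    // "mapified" object back to Cloud Firestore.
    Firestore.instance
        .collection('fishprofiles')
        .document(fish.documentId)
        .setData(fish.toMap());
  }

  @override
  Widget build(BuildContext context) => Container(); // body omitted in this sketch
}
```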
So now when we hit save, our button should be implemented. Let’s hop over to the devices, hot reload should be in action, we’ve got a button, go reserve it, Emily.
>>EMILY SHACK: Okay, it’s
time. Hmm. It looks like I reserved Flo on the one device
but she’s still available to be reserved on the other, so I’m
a bit worried that when we come into the store, everyone is going to think they’ve reserved Flo.
>>EMILY FORTUNA: Yeah, we could get into a fight.
Everybody wants Flo. I will hop onto the computer and you can
talk us through the rest.>>EMILY SHACK: All right. So
we have one fish showing, we can reserve it on one phone but it’s
not really showing a different fish on the other phone, so what
we really want to do is pass in the list of fish that a user can reserve, or that a user already has reserved, to that particular user. Rather than passing in all of our fish blindly, we
want to filter that list. So to do that, we’re going to
use the dot where operator. This is an example of functional
operators in dart. You can use the dot where operator or if you
were paying close attention when I was implementing the Cloud
fire store piece of this, you would have seen the dot map
operator, both of these are examples of functional
operators, and in this case we are iterating over every item in the all fish list and
only returning a list of fish that meet the qualifications
that the user has either reserved the fish or the fish is
not reserved by anyone. Now, if we’re in that reserved view,
which is our shopping cart view, then we only want to display the
fish that are reserved by the current user.
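The filtering described here might look roughly like this (field names follow the earlier FishData assumptions; note the .toList() call, which also matters for the error discussed next):

```dart
// Sketch of the filter described above: keep fish the current user could
// reserve or already has reserved; in the reserved (cart) view, keep only
// the user's own fish.
List<FishData> visibleFish(List<FishData> allFish, String userId, bool reservedView) {
  final filtered = reservedView
      ? allFish.where((fish) => fish.reservedBy == userId)
      : allFish.where((fish) =>
          fish.reservedBy == null || fish.reservedBy == userId);
  // where() returns a lazy Iterable (a WhereIterable), so convert it
  // explicitly to a List with toList().
  return filtered.toList();
}
```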
So now when we hot reload and switch over to the devices, we should be able to see a new fish displayed.
>>EMILY FORTUNA: Emily, I broke it.
>>EMILY SHACK: It’s not looking good. We have the red screen of doom. It seems we’ve encountered an error, so
let’s try to debug this here. When you hit an error in Flutter, it will show up on your devices in this red screen with a very helpful error message, which will be a little more concise than the more verbose version you would see in your debug console, but let’s see if we can debug it from here. It looks like from the phones it said something like type WhereIterable is not a subtype of type List. Probably what was happening is that when we converted using the where operator, it was converting to an iterable, not converting to a list until it had to. We want
to explicitly convert to a list. So let’s go ahead and do that.
>>EMILY FORTUNA: I accidentally closed out of the
file. I’m rewriting it.
>>EMILY SHACK: All right. This is an example of why hot reload is so handy. We appear to have accidentally stopped the app and it’s going to take a long time to restart it. With hot reload, you don’t have to wait all of that time. So I believe if we switch back to the code, we’re still going to need to implement that dot toList. I see, you’ve already done that. All right. How are the phones doing?
>>EMILY FORTUNA: We’re waiting for the iPhone.
>>EMILY SHACK: It takes so long. Really a good selling point for Flutter. I bet all of you mobile developers in the audience can relate to this problem. This is why Emily is drinking so much less coffee and tea after
switching to Flutter and using hot reload. All right. Why don’t we switch to slide 19,
which is our backup.>>EMILY FORTUNA: We’re back.
>>EMILY SHACK: Nevermind. We’ll go over to the devices and
see it’s working so fast. Hot reload really saved us there.
(applause). All right. Now, I know that you
didn’t quite get to see this because we had a little bit of a
technical difficulty, but hot reload can actually hot reload
past errors. So if the devices had stayed connected and we had
been able to hot reload, it would have shown up just like
this instead of us having to quit out of the app after we hit
an error, which can be really handy. In this case, when we reserve a fish, the fish has disappeared from the other phone. It appears we’re back to Frank; Firebase probably needed a little time to reserve the fish. Now when we reserve Flo, Flo is
no longer available on the other phone. Perfect. All right. Something doesn’t seem right.
>>EMILY FORTUNA: Emily, I’m not really digging Marlon Brando. I would like to have some other options to adopt.
>>EMILY SHACK: True, we’re only seeing one fish right now but as
we all know, there are plenty of fish in the sea and we would
like to give you the option to adopt all of them. Let’s go ahead and switch back to the code and we’ll try and implement a CoverFlow widget, which is a widget that we are getting with another plug-in, just like we got the Firestore plug-in; that’s the simple coverflow plug-in. Now, CoverFlow takes in two arguments: it takes in an item builder and an item count. Item builder is going to take in just basically the same code that we had before; we’re just going to change that index in fish to make sure that when we’re displaying multiple cards we’re not displaying Flo every single time. Then we’re passing in the length of the fish list to item count so that we’re not going to scroll infinitely
through cover flow. Now we should be able to hot reload and switch over to the phones.
>>EMILY FORTUNA: This is so much better. Look at all these
fish I could adopt. I could adopt Gail the whale
except I don’t think I have space for
Gail the whale.>>EMILY SHACK: I think you
might have gone a little overboard there. Why don’t we
try and see how we can remove a fish from that reserved view.
That seems like a pretty handy case for our users. Back to the
code, we’re going to implement two more arguments on that cover flow, we’re going to add a
dismissible items argument which will tell cover flow whether it
should be able to dismiss cards by dragging up or down and
having those disappear from the list. We’re also adding in a callback, a dismissible callback, that will be triggered whenever we dismiss a card from the list. Let’s go back to the devices, hot reload and see that in action. And I think I saw...
>>EMILY FORTUNA: Yeah, Gail is
available for anyone else to reserve. I don’t have space for
Gail the whale but I think my parents have space. I do want to adopt her.
>>EMILY SHACK: We don’t want to dismiss a fish and
instantly regret it. We want to give the user the option to get the fish back, and this is where we’ll implement the accelerometer for a simple undo functionality. We need first of all our fish
data to store the fish that has last been undone or the fish
that has last been removed from the list so that we can undo
that later on. We’re going to keep track of that variable and
then go ahead and set it whenever we remove a fish. This
is a pretty simple version of undo, we’re only going to be
able to remove one fish at a time but you can imagine some
more complex version where we’re removing multiple fish. For
our purposes, this will work just fine.
Now we’re going to need to actually trigger an undo with the accelerometer. Here we’re implementing initState. All stateful widgets have this method, but a lot of times it defaults to the default implementation. It will be called whenever we start up that state. In our case we want to introduce our own logic, so we’ll write it out. Here we’ll initialize a listener onto the accelerometer events stream; that stream is something we’re getting from the sensors plug-in. Now, we’re going to listen to these events in the background and it’s going to throw us back any accelerometer events it finds where the phone has moved in some way.
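A sketch of that listener, incorporating the threshold and conditions described next (the sensors stream and the threshold reflect the talk; undoData, inReservedView and the undo method are assumed stand-ins):

```dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:sensors/sensors.dart';

// Sketch of the shake-to-undo listener described above, inside a State class.
// undoData and _undoRemove() are assumed stand-ins for the app's undo logic.
class ShakeToUndoState extends State<StatefulWidget> {
  StreamSubscription<AccelerometerEvent> _shakeSubscription;
  Object undoData; // the last removed fish, if any
  bool inReservedView = true;

  @override
  void initState() {
    super.initState();
    // Listen in the background for accelerometer events from the sensors plug-in.
    _shakeSubscription = accelerometerEvents.listen((AccelerometerEvent event) {
      // A strong movement along the x axis (at least 2) counts as a shake;
      // only undo when something was removed and we're in the reserved view.
      if (event.x.abs() >= 2.0 && undoData != null && inReservedView) {
        _undoRemove();
      }
    });
  }

  void _undoRemove() {
    setState(() => undoData = null); // the real app would re-reserve the fish here
  }

  @override
  void dispose() {
    _shakeSubscription?.cancel(); // stop listening when the widget goes away
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => Container();
}
```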
In our case, we want to trigger a shake whenever the
phone has accelerated in the X axis by at least 2 meters
per second and we also want to
trigger an undo if we recognize the undo data is not null and we’re
in that reserved view. So I believe, yes, we have to
impose our less than or equal to or greater than or equal to 2 to
make sure that we have some threshold there and then when we
go ahead and you think I’m about to say hot reload but I’m
actually not, there’s another function here that we can use called hot restart. Hot
reload is actually going to be stateful and bring you back to your current state. Hot restart is a little bit different. Because we have these initState calls and maybe some calls in our main function before we build the widgets, those aren’t going to get rerun when we do a stateful hot reload. We want to fully restart the app to display our new changes. But a hot restart is better
than a full restart in that it’s a lot faster, almost as fast as
a hot reload. We’ve hot restarted and we can jump on
over to the phones.>>EMILY FORTUNA: So reserving
Gail, remove, and she’s back. >>EMILY SHACK: Perfect and
she’s disappeared from the other phone. (applause).
>>EMILY FORTUNA: You know what would really be icing on the cake
though? If we had sound effects.
>>EMILY SHACK: Okay. Yeah. Those are pretty important in
any fully featured app that’s trying to be a little bit
whimsical. So we want to incorporate sound effect using
our local audio tools class, which is calling the audio tools
plug-in. So here we are going to at the top of our main
function, we’re going to load a couple audio files, our
background audio and our removed audio. We’re also going to initialize
an audio loop on that background audio so that when we
start up the background audio, it keeps playing until we maybe interrupt it with a different
audio file. So this would be a good time to
pause and talk a little bit about asynchronous functions in
Dart. You’ll notice that the main function is marked with the async keyword, meaning it’s asynchronous and will return a future, or a promise of a value that’s going to come a little bit later on. It’s also calling two more asynchronous functions. We have this Firebase sign-in-anonymously call as well as our audio tools load file call; both are asynchronous, but we’re handling them in two different ways. In the first case, we’re using the await keyword. That’s saying we’re dealing with an asynchronous function but we really don’t want the rest of our main function to continue until this one has completed, until the future has completed. So we’re going to call await, and that will pause execution on everything after that await call in our main function. For audio tools we’re using the dot then operator. That’s going to pause execution of everything inside of the body of the operator, but it’s not going to pause execution on anything else. So in our case, we want to wait for the file to be loaded before we can play it, but we don’t care if the rest of the app builds a little bit before we’ve loaded our file.
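A sketch of the two styles side by side in main (LocalAudioTools, its method names, the asset paths and MyApp’s constructor are assumptions; the await versus .then split is the part that reflects the talk):

```dart
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';

// Sketch of the async pattern described above, using the 2018-era firebase_auth API.
Future<void> main() async {
  // await: block main here, because we need the anonymous user before the
  // rest of the app can tag reservations with a user ID.
  final user = await FirebaseAuth.instance.signInAnonymously();

  final audioTools = LocalAudioTools(); // assumed wrapper around the audio plug-in
  // .then: don't block startup on audio; start the background loop only
  // once its file has finished loading.
  audioTools.loadFile('background.mp3').then((_) {
    audioTools.playAudioLoop('background.mp3');
  });

  runApp(MyApp(userId: user.uid));
}
```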
We have one more case to incorporate audio which is that
whenever we remove a fish, of course we want to have a nice
sound effect there as well. So we’re just going to call play
audio and then we should be ready to switch back to the apps
— on the devices. So we can do another hot restart
this time, and then switch over, and we should be able to hear
some ocean breeze sounds in the background. (Flush sound) as well as that
crucial flushing noise when we dismiss a fish.
(Flushing sound) what do you think?
>>EMILY FORTUNA: Looks good.
>>EMILY SHACK: I think our app is pretty good, so why don’t we
go ahead and deliver this to Francine. Let’s go back over to
the slides. And this app is a huge success
with Francine! She’s so excited. She delivers it to her
customers, and it drives business by over 300%,
all because of Flutter. But how did we get here? Well,
using Flutter, we were able to achieve a native, customizable
look and feel for both iOS and Android
from a single code base, but not only was it a native look and
feel, we also did really compile down to native code under the
hood, so we didn’t have to make any sacrifices in quality or
customizability. We also integrated with Cloud Firestore, which gave us all of that fish data that updated in
realtime and finally we integrated our device level
controls, the accelerometer and sound to get that app really
fully featured. And all of this, all of that
functionality, hard to believe, but it was done in 310 lines of code.
That’s everything we wrote in advance of the talk and
everything we showed you today. So that’s pretty low. (applause) .
>>EMILY FORTUNA: Flutter is currently in beta. What does
that mean? Well, it means we support right to left layouts in
case you’re writing a fish adoption app around the Dead Sea, it means we support internationalization, also relevant for that Dead Sea app, and it means we support accessibility. Flutter has apps in all sorts of industries: we have finance, health, music, travel. Here
are some of the brands and companies that are writing
Flutter apps. I want to highlight three of them for you.
First one is Hamilton. If you haven’t heard of Hamilton, I’m
sorry, you’re probably not — you probably don’t live in the
U.S. This is the name of the hit
Broadway musical that is about Alexander Hamilton and their
official app is written using Flutter, it has over a million
downloads and over a half a million monthly active users. On the left you see AdWords
which is the multimillion dollar product
by Google that helps keep the lights on in Google and their
new app is also written in Flutter and will be released
soon. On the right we have Alibaba,
which is a top-10 internet company, and their new mobile app
is written in Flutter and they’re in the process of
releasing it to users now. If you want to learn more about
Flutter, we have so many great resources for you. First check
out Flutter.io where you have information about how to
download it, set it up on your computer, there’s documentation,
we have a Udacity Flutter course. If you want to learn
about it right now, run over to the codelabs, we have a whopping
nine codelabs about Flutter and Dart, and at various points
between today and tomorrow you might see
someone that looks kind of like this and someone that looks kind
of like this helping out. Also we have a sandbox, there’s the
Flutter, Firebase and Cloud sandbox, we have a great section
of goodies there, there's this hot reload game, which is pretty
fun. I think one of you played it.
>>EMILY SHACK: I have actually played the hot reload
game, also a great example of how you can get really good looking 2D animations working in
Flutter.
>>EMILY FORTUNA: And there is
one more talk tomorrow that is about how you can set up good
design patterns for your Flutter apps so say you’re like I have
so many widgets, how do I manage them and how do they talk to
each other? You should go to this talk tomorrow. There was
one more talk that happened earlier today, and if your time
machine is not in working order, you should check it out on YouTube pretty soon.
Here is the code that we wrote today, this link will also be out in
two slides. We would love to hear your feedback. Go to the I/O web page, then the
link that's still there. Thank you so much, it was great to
share Flutter with you. We’ll take questions in the Flutter
sandbox, and adopt a lobster. (applause). (End of the session.)
>>Brand ambassadors will
assist with directing you through the designated exits.
We’ll be making room for those who have registered for the next
session. If you’ve registered for the next session in this
room, we ask that you please clear the room and return via the
registration line outside. Thank you.
May 9, 2018. Stage 6. Build Effective OEM-Level Apps on Android Things.
>>Welcome. Please fill in the seats near the front of the
room. Thank you. >>DAVE SMITH: Good
afternoon, everybody. Come on, I got nothing? Good afternoon,
everybody. There we go. (applause).
There we go. There we go. Well, thank you so much for
spending your time with me today. My name is Dave Smith, I’m a
developer advocate here at Google working on the Android
Things platform, I’m here to talk to you a little bit about
building apps for the Android Things platform and how you can
be more effective in the apps that you build using the Android
SDK. Whether you’re new to Android or whether you’ve been
building Android apps since the beginning, targeting Android
Things devices has some subtle differences from what you may or
may not be used to in working with Android. Understanding
these differences is what will ensure that you can build better
apps on the platform. Before we jump into too much of
that, let me do a quick overview of
what Android Things is maybe for the uninitiated. Android Things
is a fully managed platform for building connected devices at
scale. It's a variant of the Android platform that is
optimized for use in embedded devices, it enables you to build
apps for embedded and IoT devices using the
same Android SDK and Google Play Services that you use to build
for Android mobile. You can develop apps using the same tools such as Android Studio to
deploy and debug your apps to devices as well. It includes
the Android Things Developer Console, this is a place where
you can securely manage your software stability and security
updates for your devices. You simply upload your apps, choose
the OS version that you want to run on your device and then
deploy those updates to those devices over
the air. Security updates are even deployed automatically to
those devices for you. Android Things also supports powerful hardware that’s suitable for
edge computing and production, capable of driving artificial
intelligence and machine learning out to the edge.
This hardware is packaged into system on modules that make
it easy for you to integrate into your final production
designs. When you look at all these
things together, the process is a little bit different than when
you were building apps on a mobile device. Building a
typical app for Android devices means distributing a single app
binary through the Google Play Store. Typically, apps have to
work on multiple devices made by multiple OEMs
targeting multiple versions of the Android operating system.
Typically requiring you to do various compatibility checks and
other things like that to make sure that your app runs well
across that entire breadth of the ecosystem.
With Android Things, you are the device OEM, you control when the
OS on your device gets upgraded and the various apps that are
bundled into that system image along with it and you do all of
this through the Android Things Console instead of through the
Google Play Store. This can greatly simplify your
code because you don’t need to incorporate a lot of those same
compatibility checks, but there are some things to consider that
are going to be a little bit different. Let’s start with
displays. In Android Things, displays are
optional. They’re supported and you can use the full Android UI
toolkit to build applications that have a
graphical user interface, whether it’s touch-enabled or
not, but we’ve removed a lot of the default system UI and
disabled or reworked some of the APIs that assume graphical
displays are in place, because many IoT devices will not have
these pieces and we don’t want to place those requirements in
there. The best example of this in
practice is app permissions. So in Android Things, permissions
are not granted at run time by the
end user because we can’t assume that there’s a graphical display
to show things like this dialog and we can't even
really assume that a user granting specific types of permissions is appropriate for
an IoT device. So instead, these permissions
are actually granted by you the developer using the Android
Things Console. Okay? So as the owner of your device,
you’re responsible for taking control of the apps that run on
this device and the permissions that those particular
applications have. Okay? Now, because of this,
permissions may not be granted by end users, so
that means you don’t necessarily have to check whether or not
those permissions were granted — or you don’t have to request
for those permissions to be granted at run time, but since
they are still granted dynamically, the best practice
is still for your code to verify that you have that permission.
Okay? Because that permission could have been revoked by one
of the console users and you don’t want your
application code to behave improperly in those cases, so
you’ll still want to have checks like this one in your code when
you are accessing dangerous permissions that could be
granted or revoked by the console but you won’t have to
include the code that requests those permissions upfront from
the end user. Not doing this will result in
the same security exception that you would otherwise see by
trying to access those protected permissions if in fact that
permission is disabled. Additionally, in Android Things
1.0, permissions are no longer granted automatically on reboot.
This is something that we did in some of the earlier previews and is
no longer the case, so that means that as a developer, you
can’t simply just reboot your device to try and get all those
permissions brought into your app automatically. You have to
actually use the tooling to make that happen.
So during development, what you’re going to want to do is
provide the -g flag when installing the applications on
your device. This will grant all the permissions requested by your app by
default. Android Studio actually does this
for you automatically, so whenever you click build and run
out of the IDE, this process is taken care of for you. But if
you want to do this from the command line, you’re going to
have to add that flag yourself. Another option is to use the
pm grant command to individually grant or revoke permissions
inside of your application. You can do this during development
or maybe just to test what the individual behavior is of a
certain permission if you deny that inside of your
application. If you prefer to use the Gradle
command line or perhaps you're running automated tests or other
things where the IDE is not involved, you can actually add
this to your build.gradle file using an adbOptions
block to apply that -g flag anytime
your application is installed.
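As a quick sketch of the tooling just described (the package, APK, and permission names here are only placeholders; the commands and the adbOptions block are the standard Android ones):

    # Grant all requested permissions at install time (what Android Studio does for you):
    adb install -g my-things-app.apk

    # Grant or revoke a single permission to test a specific behavior:
    adb shell pm grant com.example.thingsapp android.permission.ACCESS_FINE_LOCATION
    adb shell pm revoke com.example.thingsapp android.permission.ACCESS_FINE_LOCATION

And the build.gradle option, so that Gradle installs pass the flag as well:

    android {
        adbOptions {
            installOptions '-g'
        }
    }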
Speaking of UI, we should talk about activities. Most developers think that
activities are essentially screens. So if we remove displays, why do
we need to keep them around? Now, it turns out that
activities are a little bit more than that, and an activity
represents a component of user focus in Android. For devices
with graphical displays, that does mean that it will
render the contents of the view onto the window, but even for
devices without displays, activity also handle all of the
user input events, whether that’s coming from a touchscreen
input or maybe it's a game controller or a keyboard or any
other external input device that you may have connected, all of
those events are going to be delivered to the foreground
activity. So even without a graphical
display, activities are still a very important portion of Android user
interface, even though the user interface may
not actually include the graphical UI.
It's important to know, when talking about activities, that
activities are still vulnerable to configuration changes the
same way they are on Android. As an Android developer, you’re
probably used to at least at some point having to deal with
an orientation change of a device and having that destroy
your activity and re-create a new instance of it. That's
effectively a very common configuration change on Android
mobile devices. While on Android Things that
specific instance probably is not very common, if it would
happen at all, there are still a number of other configuration
changes that might still happen on Android Things devices,
things like changing the default locale, or connecting or
disconnect a keyboard, a physical keyboard from the
device. All of these events have the
same net effect in that that activity will be destroyed and
re-created if it happens to be in the foreground. So generally
speaking, when you're working with activities
on Android Things, the same rules apply to activities in
terms of the logic that you put into those components. They’re
effectively just as fragile in terms of their life cycle, so
you're only going to want to have view-based logic or user
interface based logic inside of these activities. Try not to
put too much additional state into these components. You’re
going to want to push that out into other parts of your
application. Android Things even uses
activities to launch your primary application as part of
the boot process. We do this using the home intent, which is
the same intent used to trigger the app launcher on an Android
mobile device. This intent starts your app automatically on
boot and specifically the activity inside of that
application, starting it automatically on boot, and in
addition to that, if that application crashes or
terminates for any reason, Android is going to restart that
application automatically. So this becomes the main entry
point into your application that is automatically managed by the
Android Things platform. So we don’t want to forget about
activities just yet. A couple other things about
Android Things devices. Android Things devices are also
relatively memory constrained when you compare them to an
Android phone. A typical Android Things device may have
512 megabytes
of RAM or so, compare that with the multiple gigabytes of RAM
you would have on an Android phone like say a Pixel
or Pixel 2. For you the developer, there’s actually a
much lower per process heap size for your individual application.
So if you’re not familiar with this idea, Android sets a
fixed heap level on every application running on that
device, and it’s significantly lower than the total available
memory on that device. And since the Android Things devices
are relatively memory constrained, that per process
limit is significantly lower than it would be on an Android
phone. Because of that, if you’re
porting code from an Android mobile device over to Android
Things, you just have to realize that if you’re using the same
amount of memory in your app, there’s going to be a lot less
free memory in that same process available to you. Okay? So you
have to keep that in mind, and you also want to realize that
this can also translate into a
significantly larger amount of garbage collection events
happening as you allocate new objects. Okay? So you want to
keep a close eye on object allocations, how often you’re
doing allocations, because you may run into that ceiling much
more quickly than you otherwise would on an Android device or
you might see the garbage collector kicking in quite a bit
more. The memory profiler in Android
Studio is a really great resource to help you keep an eye
on what’s going on inside your memory, it will allow you to
track those allocations over time as well as see overlaid
into it the individual garbage collection events, so you can
get a really good idea of whether or not your application
is allocating too much memory and causing trouble.
Some of the things you can do to help understand your device
a little bit better is use some of the ActivityManager methods to
do some inspection on the memory capabilities of your particular
device. So for example, you can use the memory class attribute
on activity manager, this will give you the exact heap size
that’s available to your application, the value that’s
returned is the value in megabytes, that is, how much
memory you have. Large class is how much your application would
have if you added the large heap attribute to your manifest.
I would caution you against doing this on Android Things.
Generally speaking, because Android Things devices are memory
constrained, the memory class and the large memory class of
these devices are generally configured to be the same value.
So adding this attribute to your manifest is essentially not
going to do anything. Okay? You also want to inspect the low
memory threshold of this device to get a sense for what that
actually looks like. When the available memory on the device
falls below that memory threshold, the device is in a
state that we call memory pressure, and we’ll talk a
little bit more about what that means and why it’s important in
a little bit. But just keep it in mind for now.
So I want you to notice something else about this
diagram that I had up before. Because of this per process heap
limit that’s a fixed value for a single application, if you try to put
all of your application code into a
single process or a single APK, you're going to be severely
limited in your ability to fully utilize the memory that is available on this device.
Now, keep in mind, with Android Things, the only apps on
the device are your apps, so you should be able
to take full advantage of those memory resources as much as you
possibly can. The way to do that is to split
your application into multiple processes, because that limit
essentially will apply to each one of those processes
individually, so if you can federate the design of your
application out into multiple components that are actually
running in separate APKs, you'll have a much better ability to fully
utilize the memory available on whatever device you're
running. To make the most effective use
of our device, we’re going to break this app up into multiple
APKs, with the primary activity running in the foreground and
additional apps running in the background with support services
running inside. The additional benefit of
running this architecture is that it actually insulates these
various components from one another, so in this
scenario, if a crash happens in one of these components, it’s
localized just to that element. And it won’t bring down your
entire application and have to restart
all of that from the beginning. So you can manage those
individual issues just within that component and leave the
rest of the applications or components running on your
device to be unaffected. It also means that you can
launch or relaunch these components individually as
needed by your application. So if you don’t need to load
everything at once at boot, you can launch various services and
components just as you need them.
Now, it turns out the decision to put some of your
components into background apps has consequences as well.
Android treats foreground and background processes a little
bit differently and we need to be aware of what’s going on
under the hood here. Android marks application
processes by priority based on how closely they are related to the foreground
application, and this is very important because of a system
process known as the Low Memory Killer. The Low Memory Killer is a
process constantly prowling in the background looking for new
processes to devour, its job is to ensure that the free memory
on the system is available to the foreground app at any
given time. So if the foreground app needs new memory
and the device happens to be in a state of memory pressure, the Low
Memory Killer will go around looking for processes it can
terminate to allocate that memory back to the
foreground. On an Android device like a
typical user-driven Android device, this can be somewhat of
a nuisance to developers because their app may get terminated
from the background but at some point the user will relaunch it
later and everything will be fine.
On Android Things, the Low Memory Killer could mean that
you have critical device functionality that is being
terminated out from underneath you and you didn’t even know it.
Perhaps there's a device driver running in that service, and Android
killed that because it thought it was low enough priority in
the background. Okay? So something to keep in mind as
you’re moving through this. In addition, Android Oreo
introduced execution limits for background apps, so applications
can no longer be started into a background state, they must
either be launched from the foreground app or bound to it in
some way. So because of these two things,
there’s a number of different common ways that you may or may
not have used in the past to launch components into the
background. We’re going to kind of walk through those a little
bit. The first that you might be familiar with is using the
boot completed broadcast to listen for a final boot message
coming from the Android framework saying that the system
is up and running, you can launch other apps if you would
like. Do not use this on Android Things.
The primary reason is because of those background execution
limits, your background services actually can’t be properly
started into that state. In a lot of cases, it won’t even
work. Okay? In addition to that, this background broadcast,
this boot completed broadcast is very unpredictable in terms of
its timing. In a lot of cases, this boot completed broadcast
actually triggers much, much later than when the home intent
and the home activity are fully up and running in the
foreground. Okay? So if you’re trying to synchronize between
these two things, it's not a very good mechanism to rely on.
In addition, I would recommend you don’t use start
service. For a similar reason, start service is limited by
those same background execution limits unless you are starting a
service in foreground mode. Now, foreground services require
you as a developer to actually build
in a notification that would typically display to the user
when that service is running. Well, we took away the system UI
where that notification would display, so you end up doing a
bunch of work for displaying the service that doesn't actually gain you
anything. In addition to that, there are
some difficulties with started services when it comes to
managing their life cycle. A started service, if that
crashes for some reason, you don’t have a direct connection
to understand that that occurred and that you need to restart
that service so that you can manage that process a little bit
better. Now, Android does have this
thing that services can return this start sticky attribute and
that’s a way for applications to tell Android that this service
is important and if it crashes or terminates for some reason, I
need it to be restarted. However, Android usually only
does this about once or twice for a given service. Before
they just sort of give up and realize that at some point the
user will launch this app again, maybe this will start again and
everything will be fine. That type of thinking doesn’t go well
for those background services that have critical functionality
in them like a device driver. Okay? So we recommend you use bind
service instead. By using bound services, this gives the
background processes an active connection to that foreground
app, so you’ll have a good indication of when that service
is running and when that service has died for some reason so that
you can manage that, relaunch it if you need to, do any of those
things. This also has the added
benefit of a built-in communication channel between
the applications that are bound, so you can do some more direct
communication with that service without having to use intents or other
mechanisms like that to pass data back and forth.
So looking at this diagram again, one of the other
important reasons to use bound services is that pure
background applications like those that would have been
started by boot completed or just start service on its own
are very low priority on the scale. Okay? Whereas bound
service applications are almost as high priority as the
foreground app. They are literally the highest priority
you can get without being the foreground app.
So this ensures that those background processes stay safe from
something like Low Memory Killer if the device ever does get into
a memory pressure situation. Okay? So you get better
management of those services and you get better protection from a
memory management perspective. All right. Let’s take a look
at what this would actually look like in code. So if I have just
a basic example of a service here that has a device driver
inside of it, in this case this is just a device driver to take
some button inputs and convert them into key events like they
were coming off of the keyboard. All of this logic can be fully
encapsulated into this external service and can run on its own,
so we can build this service component and then from the
foreground app we can construct an intent to that service
component and we can bind to it. Notice that I’m doing this
from the application class and not from the primary activity.
Remember our discussion from activities before and the life
cycle associated with those. If this is a service that needs to
be as persistent as possible, we want to bind to it from a
component that is expected to be around just as long. Okay? So
that you don’t end up with life cycle issues where your activity
gets destroyed, re-created and you’re rebinding to that service
unnecessarily. It doesn’t necessarily cause a major
problem, but it’s not the best idea.
In addition with bound services, this also means that
you get this feedback mechanism coming through the service
connection call-back. So when you bind to a service, you
provide this call back as a service connection and when the
service is up and running, you will be notified through the on
service connected method, so you know exactly when this is now
something you can interact with or communicate with if you need
to. In addition, on service
disconnected tells us anytime that service
stops unexpectedly, maybe because it has crashed or
something else has occurred, and at that point we probably need
to take a look at restarting
this, especially if it's running some critical functionality on
our device. We now have the information we need to properly
manage this functionality from within our applications, which
we wouldn’t get with started services or other of these more
independent mechanisms. Here is kind of a final picture
of that architecture again. Android is going to automatically
manage that foreground app for us using the home
intent, launch it automatically on boot and relaunch it if that
application crashes. Then our application code can manage
these additional background support services through the
bound service mechanism. The last thing I want to
share with you today are a couple quick tips on doing this
type of development from within Android Studio or within the
development tools. So you can manage multiple APK’s
from within a single Android Studio project by adding each
additional package as a new module. You can add multiple
modules to the same project and all those modules can represent
an APK or an individual app process. This allows you to
manage all of your code in one place, even though they’re
technically separate apps. Now, by default, Android Studio
does not allow you to deploy an app
module that does not contain a launcher activity. Okay? An
activity that has that main launcher intent filter on it.
This doesn’t work so well for background service apps that
don’t have any activities at all in them in some cases, but you
can modify this behavior. For a background services app, you can
edit the run configuration option and simply adjust the
launch options for that particular module, set that launch
target to Nothing instead of Default Activity.
This will enable Android Studio to deploy that service-only app to
your device and it won’t complain.
You can also do this from the command line, and one of the
advantages of doing it this way is that Android Studio
does require that you deploy only one module at a time by
selecting that module from the run configuration listed in the
UI. So if you have an application that’s constructed
of four or five different modules all as individual APKs,
it can be a bit cumbersome if you have to deploy them all
individually all the time. One of the advantages of using the
Gradle command line is that by default, when you run a
command like install debug with no other modifiers, it builds
and installs every module in that project. So with one
command, you can deploy everything on the latest version
to that device. And you can still do individual
modules if you would prefer to do that by just adding the
module name to the command as well.
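For example (the module name here is just a placeholder):

    # Build and install the debug APK of every module in the project:
    ./gradlew installDebug

    # Or install a single module's APK:
    ./gradlew :driverservice:installDebug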
Once you’ve got the modules on the device, the other thing
you can do directly from the command line that isn’t really
supported in Android Studio today is the ability to start
those individual components whether they’re activities or
services, so using the am shell commands, you can trigger
those services manually if you want to test out some of that
behavior sort of independently from the rest of the system even
though they may be managed by the foreground app in
production.
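A sketch of what that looks like (the component names are placeholders, and the exact am sub-commands vary a little between Android releases):

    # Launch the foreground activity explicitly:
    adb shell am start -n com.example.thingsapp/.MainActivity

    # Start a background service module directly to test it in isolation:
    adb shell am startservice -n com.example.driverservice/.ButtonDriverService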
All right. So let's quickly review some of the tips we've gone through here today.
Don’t assume a graphical UI. Design for your memory
constraints on these devices. Break your app up into
modules. Bind your background services to
the foreground app. Don’t use started services.
And use the Gradle command line if you want to have more
control over deploying your modules to the device.
Now, if you're just as excited about Android Things as
we are, I want to remind everyone that we're doing a scavenger
hunt here at Google I/O. If you visit the link here or use the
app, you can follow the instructions to find various
items around the conference, and once you’ve completed those
challenges, you can then receive a free Android Things developer
kit to take home. To learn more about Android
Things, visit the developer site and make sure to visit the codelabs, office
hours, and other demos that we have here in the sandbox.
Also be sure to visit androidthings.withgoogle.com to find
featured community projects and additional sample code. You’ll
also find a lot of the sample code available for some of the
demos that we have here at the conference on androidthings.withgoogle.com
as well. So thank you everyone for your
time today, and I’m really excited to see the apps that you
build with Android Things. (applause). (The session has concluded, 3:57
p.m.)
>>Thank you for joining this
session. Brand ambassadors will assist with directing you through
the designated exits. We’ll be making room for those who have
registered for the next session. If you’ve registered for the
next session in this room, we ask that you please clear the
room and return via the registration line outside.
Thank you.
May 9, 2018. TensorFlow in Production: TF Hub, TF Serving, and TensorFlow Extended.
>>Welcome. Please fill in the seats near
the front of the room. Thank you.
>>Welcome, everyone. I am Jeremiah, and this is
TensorFlow in Production. I'm excited that you're all here
because that means you’re excited about production, and
that means you’re building things that people
actually use. So our talk today has three
parts. I want to start by quickly drawing a thread that
kind of connects all of them. The first thread is the origin
of these projects. These projects really come from our teams that are on the frontline
of machine learning, so these are real problems that we’ve come across
doing machine learning at Google scale and these are the real
solutions that let us do machine learning at Google.
The second thing I want to talk about is this observation,
if we look at software engineering over the years, we
see this growth. As we discover new tools, as we discover best
practices, we’re really getting more effective at doing
software engineering and we’re getting more efficient.
We’re seeing the same kind of growth on the machine learning
side. Right? We’re discovering new best practices and new
tools. The catch is that this growth is
maybe ten or fifteen years behind
software engineering. And we’re also rediscovering a lot of the
same things that exist in software engineering, but in a
machine learning context. So we’re doing things like
discovering version control for machine learning or continuous integration for
machine learning. So I think it’s worth keeping
that in mind as we move through the talks. The first one up
will be TensorFlow Hub and this is something that
lets you share reusable pieces of machine learning much the
same way we share code, then we'll talk a little bit about
deploying machine learning models with TensorFlow Serving and
we'll finish up with TensorFlow Extended
which wraps a lot of these things together in a platform to
increase your velocity as a machine learning practitioner. So with that, I would like to
hand it over to Andrew to talk about
TensorFlow Hub.
>>ANDREW GASPAROVIC: Thanks,
Jeremiah. I would like to talk to you a little bit about TensorFlow Hub, which
is a new library designed to bring reusability to machine
learning. Software repositories have been
a real benefit to developer productivity over the past ten
or fifteen years and they’re great first of all
because when you’re writing something new, if you have a
repository, you think oh, maybe I’ll check whether there’s
something that already exists and reuse
that rather than starting from scratch, but a second thing that happens is you
start thinking maybe I'll write my code in a way that's
specifically designed for reuse, which is great because it makes your code more modular, but it
also has a potential to benefit a whole community, if you share
that code. What we are doing with
TensorFlow Hub is bringing that idea of a
repository to machine learning. In this case,
TensorFlow Hub is designed so that you can create, share and reuse components of ML
models. And if you think about it, it’s
even more important to have a
repository for machine learning, even more so than software
development, because in the case of machine learning, not only are
you reusing the algorithm and the
expertise, but you’re also reusing potentially enormous
amount of compute power that went into training the model and
all of the training data as well.
So all four of those, the algorithm, the training data,
the compute and the expertise, all go
into a module which is shareable
with TensorFlow Hub, and then you can import those into your
model. Those modules are pretrained, so
they have the weights and the TensorFlow graph inside. And unlike a
model, they’re designed to be composable, which means you can
put them together like building blocks and add your own stuff on top,
they’re reusable which means they have common signatures so
that you can swap one for another, and they’re
retrainable, which means that you can actually back-propagate
through a module you’ve inserted into your graph.
So let’s take a quick look at an example. In this case we’ll
do a little bit of image classification. Let’s say we
want to make an app to classify rabbit breeds from
photos, but we only have a few hundred example
photos. Probably not enough to build a whole image classifier
from scratch. But what we could do is start
from a general purpose model and we
could take the reusable part of it, architecture on the weights
there, take off the classification and then add our
own classifier on top and train it with our own examples. We’ll keep that reawsable part
fixed and train our own classifier on
tovment. So if you’re using TensorFlow
Hub, you start at TensorFlow.org/hub where you can find a
whole bunch of newly released, state-of-the-art,
research-oriented and well-known
image modules. Some of them include
classification and some of them chop off the classification
layers and just output feature vectors, that's what we want in
our own case, in this case, because we’re going
to add classification on top so maybe
we’ll choose NASNet which is an image module created by a neural
architecture search. So we’ll choose NASNet-A the
large version with the feature vectors. We just paste the URL for the
module into our TF Hub code, then we’re
ready to use that module just like a
function. In between, the module gets downloaded and
instantiated into your graph so all you have to do is get those
feature vectors, add your own classification on top, and
output the new categories. So specifically we're training
just the classification part while keeping all of the module's
weights fixed. But the great thing about reusing a module is that you get all of
the training and compute that’s gone into that reusable portion.
So in the case of NASNet, it was over 62,000 GPU hours that went
into finding the architecture and training the model, plus all of the expertise, the
testing, and the research that went into NASNet. You’re
reusing all of that in that one line of code.
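As a rough sketch of what that one line plus a classifier head looks like with the TF Hub library (the module URL, input size, and number of classes below are illustrative, not the exact code from the talk):

    import tensorflow as tf
    import tensorflow_hub as hub

    # Pull a pretrained feature-vector module straight from its URL.
    module = hub.Module(
        "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")

    images = tf.placeholder(tf.float32, shape=[None, 331, 331, 3])
    features = module(images)              # use the module just like a function
    logits = tf.layers.dense(features, 5)  # our own classifier head, e.g. 5 breeds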
And as I mentioned before, those modules are trainable, so
if you have enough data, you can do fine-tuning with the module.
If you set that trainable parameter to true and you select
that you want to use the training graph, what you’ll
end up doing is training the entire thing along with your
classification. The caveat being that of course you have to
lower the learning rate so that you don’t ruin the weights
inside the module. But if you have enough training data, it’s
something that you can do to get even better accuracy. And in general, we have lots of
image modules on TF Hub, we have ones that are straight out of
research papers like NASNet, we have ones that are great for
production, even ones made for on-device usage like MobileNet,
plus all the industry standard ones that people are familiar with like Inception
and ResNet. So let’s look at one more
example, in this case doing a little bit of text
classification. We’ll look at some restaurant
reviews and decide whether they’re positive or negative
sentiment. And one of the great things
about TF Hub is that all of those modules, because they’re
TensorFlow graphs, you can include things like
preprocessing. So the text modules that are available on TF Hub take whole sentences
and phrases, not just individual words, because they have all of
the tokennization and preprocessing
stored in the graph itself. So we’ll use one of those, and
basically the same idea, we’re going to select a sentence-embedding
module, we’ll add our own classification on top and we’ll
train it with our own data, but we’ll keep the module itself
fixed. And just like before, we’ll
start by going to TensorFlow.org/hub and take a look at the
text modules that are available. In this case maybe we’ll choose
the Universal Sentence Encoder which is just recently released based on a
research paper from last month and the idea is that it was
trained on a variety of tasks and it’s specifically meant to
support using it with a variety of tasks and it also takes just
a very small amount of training data to use
it in your model, which is perfect for
our example case, so we’ll use that Universal Sentence Encoder
and just like before we’ll paste the URL into our code.
The difference here is we’re using it with a text embedding column.
That way, we can feed it into one of the high level TensorFlow
estimators in this case the DNN classifier, but you could also
use that module like I showed in the earlier example, calling
it just as a function. If you are using the text
embedding column, just like in the other example, that also can be trained as well
and just like in the other example, it’s something that you
can do with a lower learning rate if you have a lot of training data, and it may give
you better accuracy.
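Roughly, that setup with a text embedding column might look like the sketch below (the module URL, feature name, and training data are placeholders, not the talk's actual code):

    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub

    # Sentence-level embedding column backed by the Universal Sentence Encoder.
    review_column = hub.text_embedding_column(
        key="review",
        module_spec="https://tfhub.dev/google/universal-sentence-encoder/1",
        trainable=False)  # set True to fine-tune, with a lower learning rate

    estimator = tf.estimator.DNNClassifier(
        hidden_units=[64, 16],
        feature_columns=[review_column],
        n_classes=2)      # positive vs. negative sentiment

    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"review": np.array(["great tacos", "cold soup, slow service"])},
        np.array([1, 0]), shuffle=True, num_epochs=None)
    estimator.train(train_input_fn, steps=100)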
So we have a lot of text modules available on TF Hub. We actually just added three new languages to the NNLM modules,
Chinese, Korean and Indonesian, those are all trained on Google News
training data. We also have a really great module called ELMo
from some recent research, which understands words in
context, and of course the Universal Sentence Encoder, as I
talked about. So just to show you for a minute
some of those URLs that we've been looking at, maybe we'll take apart the
pieces here. tfhub.dev is our new source for selected partner modules, Google
is the publisher, Universal Sentence Encoder is the name of
the module. The one at the end is a version number, so TensorFlow Hub
considers modules to be immutable and so
the version number is there so that
if you're doing one training run and then another, you don't end up in
a situation where the module changes
unexpectedly. So all modules on tfhub.dev are
versioned that way. And one of the nice things about
those URLs, if you paste them into a browser, you get the
module documentation, the idea being that maybe you
read a new paper and you see oh there's a URL for a TF
Hub module, paste it into your browser, see the documentation,
you paste it into some code and in one line you’re able to use that module and try
out the new research. And speaking of the Universal Sentence
Encoder, the team just released a new lite version, which is a much
smaller size, it's about 25 megabytes and it's specifically designed for cases
where the full text module wouldn't work, for doing things
like on-device classification. Also today we released a new
module from DeepMind, this one you can feed in video and it
will classify and detect actions in that video, so in this
case it correctly guesses the video is of people playing cricket.
Other modules, there's a generative module trained on
CelebA, and also a deep local features module which can
identify the key points of landmark images, those are all
available now on TF Hub. And last but not least, I wanted
to mention that we just announced
our support for TensorFlow.js. So using the TensorFlow.js
converter, you can directly convert a TF Hub module into a
format that can be used on the web. It's a really simple
integration to be able to take a module and use it
in the web browser with TensorFlow.js and we're really
excited to see what you build with it.
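As a sketch, the conversion is a single command line (the module URL and output directory are placeholders):

    # Convert a TF Hub module into a TensorFlow.js web-friendly format:
    tensorflowjs_converter --input_format=tf_hub \
        'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \
        ./web_model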
So just to summarize, TensorFlow Hub is designed to be a starting point for reusable
machine learning, and the idea is just like with a software
repository, before you start from scratch,
check out what’s available on TensorFlow Hub and you may find that it’s better to
start with a module and import that
into your model rather than starting the task completely
from scratch. We have a lot of modules available and we’re
adding more all the time, and we're really excited to see what
you build. So thanks. Next up is Jeremiah to talk
about TF Serving. (applause).
>>JEREMIAH HARMSEN: All right. Thank you, Andrew. So
next, TensorFlow Serving, this is going to be how we deploy
modules or deploy models, just to get a sense for where this
falls in the machine learning process, we start with our data,
we use TensorFlow to train a model and
output our artifact there, these are saved models, it's a
graphical representation of the data flow,
and once we have those, we want to share them with the
world, that’s where this TensorFlow Serving comes in,
it’s this big orange box, this takes our models and exposes
them to the world through a service so clients can make
requests, TensorFlow takes them and runs
the inference, run the model, come up with an answer and
return that in a response. So TensorFlow Serving is
actually the libraries and binaries you need to
do production-grade inference over trained TensorFlow
models. It's written in C++ and supports things like gRPC and plays
nicely with Kubernetes. So to do this well, it has a couple of
features. The first and most important is
it supports multiple models. So on one TensorFlow model
server, you can load multiple models. Right? Just like most
folks probably wouldn’t push a new binary right to production,
you don’t want to push a new model right to production
either. So having these multiple models in memory lets you be serving one
model on production traffic and load a new one and maybe send it
some canary requests, send it some QA requests, make sure
everything is all right and then move the traffic over to that
new model. This supports doing things like
reloading if you have a stream of models you’re producing,
TensorFlow Serving will transparently load the new ones
and unload the old ones.
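For reference, serving several models from one server is driven by a model config file passed with the --model_config_file flag; a minimal sketch (the model names and paths are placeholders) looks like:

    model_config_list {
      config {
        name: "rabbit"
        base_path: "/models/rabbit"
        model_platform: "tensorflow"
      }
      config {
        name: "cat"
        base_path: "/models/cat"
        model_platform: "tensorflow"
      }
    }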
We've built in a lot of isolation. If you have a model that's serving a lot of traffic in one thread
and it’s time to load a new model, we make sure to do that
in a separate thread, that way we don’t cause any hiccups in
the thread that’s serving production traffic.
And again, this entire system has been built from the ground
up to be very high throughput, things like
selecting those different models based on the name or selecting
different versions, that’s very, very efficient. Similarly, it has some advanced
bachg, this way we can make use of accelerators and also see
improvements on standard CPU’s with this batching. Then
lots of other enhancements, everything from protocol
buffer magic to lots more. This is really what we use
inside Google to serve TensorFlow,
there's over 1,500 projects that use it, it serves somewhere in
the neighborhood of 10 million QPS, which ends up being about
100 million items there predicted per second. We’re
also seeing some adoption outside of Google. A new thing I would like to
share today is distributed severing. Looking inside Google
we’ve seen a couple trends, one is that models are getting
bigger and bigger, some of the ones inside Google are over a
terabyte in size. The other thing we're seeing is this
sharing of sub graphs, TF Hub is producing these common pieces of
models and we’re also seeing more and more specialization in
these models as they get bigger and bigger. If you look at some
of these model structures, they look less like a model that
would belong on one machine and more like an entire system, so
this is exactly what distributed serving is meant for, lets us
take a single model and basically break it up into
microservices. So to get a better feel for
that, we’ll say that Andrew has taken his rabbit classifier and is serving
it on a model server and say I want to create a similar system
to classify cat breeds, so I’ve done the same thing, started from TensorFlow Hub so you can
see I have the TensorFlow Hub module
in the center there. You notice since we both started from the
same module, we have the same bits of code, we have the same
core to our machine learned model so we can
start a third server and put the TensorFlow Hub module on that
server and we can remove it from the servers on the outside and
leave in its place this placeholder we call a remote op.
You can think of this as a portal, kind of a forwarding op, so that
when we run the inference it forwards, at the appropriate
point in the processing, to the model server. There the
computation is done and the result gets sent back, and the
computation continues on our classifiers on the outside.
So there are a few reasons we might want to do this, right?
We can get rid of some duplication, now we only have
one model server loading all these weights. We also get the benefit that
that can batch requests that are coming from both sides and also
we can set up different configurations. You can imagine
we might have this model server just loaded with
TPUs, tensor processing units, so that it can do what are most likely
convolutional models and things like that very efficiently. Another place we use this is
with large sharded models.
There's this technique of embedding things like words or YouTube
video IDs as a string of numbers, we
represent them as this vector of numbers.
And if you have a lot of words or you have a lot of
YouTube videos, you'll have a lot of data.
So much that it won’t fit on one machine. So we use a system like this to
split up those embeddings for the words into these shards and
we can distribute there. The main model when it needs
something can reach out, get it, and then do the computation.
Another example is what we call triggering models. We’ll
say we are building a Spam detector and we have a full
model which is a very, very powerful Spam detector, maybe it
looks at the words, understands the context, it’s very powerful
but it’s very expensive and we can’t afford to run it on every single e-mail
message we get. So instead we put this
triggering model in front of it. As you can imagine, there’s a
lot of cases where we’re in a position to very quickly say
yes, this is Spam or no it's not. So for instance, if we get an
e-mail that’s from within our own domain, maybe we can just
say that’s not Spam and the triggering model can quickly
return that. If it's something that's difficult, it can go ahead
and forward that on to the full model where it will process
it. So a similar concept is this
mixture of experts. So in this case, let’s say we want to build
a system where we’re going to classify the breed of either a
rabbit or a cat. So we’re going to have two models we’re going
to call expert models. So we have one that’s an expert at
rabbits, and another that’s an expert at cats. So here we’re
going to use a gating model to get a picture of either a rabbit
or cat and the thing it's going to do is decide if it's a rabbit
or a cat and forward it on to the
appropriate expert who will process it and
will send that result back. All right. There’s lots of use
cases we’re excited to see what people start to build with these
remote ops. The next thing I’ll quickly
mention is a rest API. This was one of the top requests
on GitHub so we’re happy to be releasing this soon, this will
make it much easier to integrate things with existing services.
And it's nice because you don't actually have to choose, on one model
server, one TensorFlow model server, you can serve both the RESTful endpoint and
gRPC. There are three APIs, there's
some higher level ones like for classification and regression,
there's also a lower level predict, and this is more of a
tensor-in, tensor-out API for the things that don't fit in the
classify and regress ones.
Looking at this quickly, you can see the URI here, we can
specify the model, this may be rabbit or cat. We can
optionally specify a version and our verbs are the classify,
regress and predict. We have two examples. The first one you can see we're
asking the iris model to classify something in here. We aren't
giving it a version, a model version, so it will just use the
most recent or the highest version automatically. The
bottom example is one where we’re using the MNIST model and
specifying the version to be 314 and asking it to do a
prediction. This lets you easily integrate
things and easily version models and switch between them.
I'll quickly mention the API, if you're familiar with
TensorFlow Example, you know that representing it in
JSON is a little cumbersome, it's pretty verbose here as you can
see, there are some other warts like needing to encode things in
base64. Instead with TensorFlow Serving, the RESTful API uses a more
idiomatic JSON which is much more pleasant, much more
succinct. Here this last example just kind
of pulls it all together where you
can use curl to actually make predictions from the command
line.
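As a rough sketch of that end to end (host, ports, model names, and paths below are placeholders; the URI shape follows the TensorFlow Serving REST API):

    # Start a model server with the REST endpoint enabled:
    tensorflow_model_server --rest_api_port=8501 \
        --model_name=mnist --model_base_path=/models/mnist

    # Predict against a specific version of the model:
    curl -d '{"instances": [{"image": [0.0, 0.1, 0.2]}]}' \
        -X POST http://localhost:8501/v1/models/mnist/versions/314:predict

    # Classify with the latest version of another model:
    curl -d '{"examples": [{"sepal_length": 5.1, "sepal_width": 3.5}]}' \
        -X POST http://localhost:8501/v1/models/iris:classify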
So I encourage you to check out the project at TensorFlow Serving, there's lots of great
documentation and things like that, we also welcome
contributions and code discussion, ideas on our GitHub
project page. So I would like to finish with James to talk about TensorFlow
Extended. (applause).
>>JAMES PINE: Thanks. All right. So I’m going to start
with a single noncontroversial statement. This has been shown true many
times by many people. In short, TFX is our answer to
that statement. I’ll start with a simple diagram. This core box
represents your machine learning code, the magic bits of
algorithms that actually take the data in and produce reasonable
results. The blue boxes represent
everything else you need to actually use machine learning
reliably and scalably in an
actual real production setting. Blue boxes are where you'll be
spending most of your time, it comprises most of the lines of
code and also the source of things setting off your pagers
in the middle of the night. In our case if we squint at this
just about correctly, the core ML box
looks like TensorFlow and all the blue boxes together comprise
TFX. We’ll quickly run through four of the key principles that
TFX was built on. First is flexibility, and TFX is
going to be flexible in three ways. First of all, we’re going
to take advantage of the flexibility built into
TensorFlow using it as our trainer means that we can do
anything TensorFlow can do at the model level, which means you can have wide models, deep
models, supervised models,
unsupervisorred, from tree models, anything we can whip up
together. Second, we’re flexible with
regards to input data. We can handle images, text,
sparse data, multimodal models where you might want to train
images and surrounding text or something like videos plus captions. Third, there are
multiple ways you might go about actually training a model. If
your goal is to build a kitten detector, you may have
all your data up front and your goal may be to build one model
of sufficiently high quality, then you're done. In contrast to
that, if your goal is to build a kitten video
detector or a personalized kitten recommender, you're not
going to have all your data upfront, typically you train a
model, get it into production, then as data comes in you'll
throw away that model and train a new model, and then
throw away that model and train a new model. We're
actually throwing out some good data along with these models,
though, so we can try a warm-starting
strategy instead where we'll continuously train the same model, but as data comes
in we'll warm-start based on the previous state of the model and
just add the additional new data. This will result in
higher-quality models with faster convergence. Next let's talk about
portability. The TFX modules
represented by the blue boxes don't need to do all the
heavy lifting themselves. They’re part of an open source
ecosystem which means we can lean on things like TensorFlow
and take advantage of its native portability, which means we can
run locally, we can scale up and run in a Cloud environment, we
can scale to devices that you’re thinking about today and to
devices that you might be thinking about
tomorrow. A large portion of machine
learning is data processing, so we rely on
Apache Beam which is built for this task, and again, we can take advantage of
Beam's portability as our own, which
means we can use the direct runner to start with a
small piece of data, building small models to confirm your
approaches are correct, and scale up into the Cloud with the Dataflow
runner, also utilize something like the Flink runner
or things that are in progress right now like a Spark runner.
We'll see the same story again with Kubernetes where we
can start with Minikube running locally, scale up into the Cloud
or to clusters that we have for other purposes and
eventually scale to things that don’t yet exist but are still in
progress.
So portability is only part of the scalability story.
Traditional we’ve seen two very different roles involved in
machine learning. You’ll have the data scientists on one side
and the production infrastructure engineers on the
other side. The differences between these
are not just amounts of data but there
are key concerns that each has as they go about their daily
business. With TFX we can specifically target use cases
that are in common between the two as well as things that are
specific to the two. So this will allow you to have one
unified system that can scale up to the Cloud and down to smaller
environments and actually unlock collaboration between these two
roles. Finally, we believe heavily
in interactivity, being able to get quick iterative results with
responsive tooling and faster debugging, and this interactivity
should remain such even at scale with large sets of data or large
models. So this is a fairly ambitious
goal, so where are we now? Today we’ve open sourced a few
key areas of responsibility. We have Transform, Model Analysis,
Serving and Facets, each one of these is useful on its own but
is much more so when used in concert with the others. Let's
walk through what this might look like in practice.
Our goal here is to take a bunch of data we’ve accumulated
and do something useful for our users of our product.
These are the steps we want to take along the way. Let’s
start with step one with the data. We’re going to pull this up in
Facets and use it to actually analyze what features might be
useful predictors, look for any anomalies, so outliers in our
data or missing features, to try to avoid the classic garbage in,
garbage out problem, and to try on inform what data we’re going
to need to further process before it’s useful for our ML
training. Which leads into our next step,
which is to actually use Transform to transform our
features. So TF Transform lets you do full-pass
analysis of data, attached to
the TF graph itself, which will ensure that you're applying the
same transforms in training as in serving.
From the code you can see that we're taking advantage of a
few ops built into Transform and we can do things like scale, generate vocabularies or bucketize our base data, this
code will look the same regardless of our execution
environment. Of course if you need to define your own
operations, you can do so.
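A minimal sketch of such a preprocessing function with the tf.Transform library (the feature names are made up; tft.compute_and_apply_vocabulary is the current name of the vocabulary op, which appears as string_to_int in older releases):

    import tensorflow_transform as tft

    def preprocessing_fn(inputs):
        # Full-pass transforms that also get attached to the serving graph.
        return {
            'age_scaled': tft.scale_to_z_score(inputs['age']),            # scale
            'city_id': tft.compute_and_apply_vocabulary(inputs['city']),  # vocabulary
            'income_bucket': tft.bucketize(inputs['income'], num_buckets=10),
        }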
where we’re strongly suspicious that we have data we can
actually use to generate a model. So let’s look at doing
that. We’re going to use a TensorFlow estimator, which is a high level
API that will let us quickly define,
train and export our model. This is a small set of
estimators that are present in core TensorFlow; there are a lot more available, and
you can also create your own. We'll look ahead to future
steps and purposely export two graphs into our saved model:
one specific to serving, one specific to model evaluation.
Again, from the code you can see that in this case we're
going to use a wide and deep model; we'll define it, train it and do our exports.
So now we have a model. We could just push this directly to
production but that would probably be a very bad idea so
let’s try to gain a little more confidence in what would happen
if we actually did so for our end users. So we'll step into TF Model
Analysis. We'll utilize this to evaluate our model over a large dataset; then
we're going to define, in this case, one
slice of the data (though you could define many) that we want
to analyze independently from the others.
This will allow us to actually look at subsets of our
data that may be representative of subsets of our users and see how
our metrics actually track between these groups. For
example, you may have sets of users in different languages,
maybe accessing different devices, or maybe you have a
very small but passionate community of rabbit
aficionados mixed in with your larger community of
kitten fanatics, and you want to make sure your model gives
positive experiences to both groups equally.
Now we have a model we’re confident in and we want to push
it to serving. Let's get this up in TensorFlow Serving and throw some queries
at it. This is quick. Now we have a
model up and a server listening on port 9000.
Now we'll back out into our actual product code, where we can
assemble individual prediction requests and send them out to
our server.
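As a rough, hedged sketch only: the talk shows a gRPC client against port 9000, but a similar request can be made against TensorFlow Serving's REST endpoint, assuming it is enabled; the host, port, model name and feature names below are placeholders, not values from the talk.

    // Hedged sketch: send one prediction request to a TensorFlow Serving
    // REST endpoint (assumes the REST API is enabled, e.g. on port 8501).
    // The host, model name and feature names are placeholders.
    async function predict(features: Record<string, unknown>): Promise<unknown> {
      const response = await fetch("http://localhost:8501/v1/models/my_model:predict", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ instances: [features] }),
      });
      if (!response.ok) {
        throw new Error(`Serving returned ${response.status}`);
      }
      return (await response.json()).predictions;
    }

    predict({ trip_miles: 3.2, payment_type: "credit_card" }).then(console.log);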
If this slide doesn't look like your actual code, and this one looks more similar, then you'll
be happy to see that this is coming soon. I'm cheating a
little by showing you this now as current state, but we're
super-excited about this, and it is one of those "real soon
now" scenarios. What's coming next? So first, please contribute and
join the TensorFlow.org community;
we don't want the only time we're talking back and forth
to be at summits and conferences. Secondly, some of you may have
seen the TFX paper last year at KDD, this specifies what we
believe an end-to-end platform actually looks like. Here it
is. And because we believe this is
what it looks like, it's actually
what's powering some of the pretty awesome first-party products
you've been seeing at I/O and that you've probably been using
yourselves. But again, this is where we are
in open source right now. This is not the full platform, but you can
see what we’re aiming for and we’ll get there eventually. So again, please download this
software, use it to make good things, and send us feedback. And thank you from all of us for
being current and future users and for choosing to spend your
time with us today. (applause). (The session concluded at 5:06
p.m.) >>Thank you for joining this
session. Brand ambassadors will assist with directing you
through the designated exits. We’ll be making room for those
who have registered for the next session. If you’ve registered
for the next session in this room, we ask that you please
clear the room and return via the registration line outside.
Thank you.
>>May 9, 2018.
5:30 p.m. Change the way you work with
Analytics.
>>Welcome. Please fill in the seats near the front of the
room. Thank you.
>>RUSS KETCHUM: Hello and welcome, I’m Russ Ketchum, the product
lead for Google Analytics for Firebase and it’s so great to be
back here with all of you at IO, it was two years ago that
we launched Google Analytics for Firebase and in the time since
it’s been awesome to partner with so many of you across so many apps to harness the power
of app first analytics that’s tightly integrated into a mobile
developer platform. You’ve used Firebase and
Google Analytics for Firebase specifically to create so many magical,
intuitive, fun, useful, creative, I can’t
imagine my life before I had them apps. The range that
you’ve created is truly incredible. It’s also
incredible to have had so many conversations with you all about
the range of business goals that you have and life cycle
challenges that you face. Some of you are just starting out.
Others are established players. Some of you are focused on
growth. Others of you are expanding your focus to include
reengagement. For some, monetization is a goal
of the hopefully not-so-distant future, and yet others, you’re
onto your second or even your third simultaneous monetization
strategy. Now, while you have a lot of
diversity to the challenges and goals you have, there’s also a
lot of things that you have in common, and one of the big ones
remains. Let’s face it, building a successful app is still really hard, but
it’s no surprise when you ask the Google Analytics team, we
think that having good data is often the key to making the
process achievable and repeatable. But of course
having good data isn’t always easy in the first place. So mobile apps generate an
incredibly large amount of data, even small apps do, and that’s
why when we were building Google Analytics for Firebase, we built
it to scale, and that’s how we’re able to offer free and
unlimited event reporting even for the largest of apps. But
then once you collect all that data, that creates another set
of challenges. You shouldn’t have to be a forensic scientist
sifting through your data to try to glean some sort of
understanding, and that’s why Google Analytics for Firebase
surfaces key themes and insights and brings them right to the
forefront. But you can have all the understanding in the world
but if you can’t take action on it, then really what’s the
point? And that’s why Google Analytics is immediately
actionable inside of Firebase and across our other
app focused products at Google. Now, we believe these three
pieces are a good starting point for
getting your data right, but really
they’re that, a starting point. Today
we’re really excited to share with you how some of our latest
features we think will truly change the way that you work
with analytics. So the first is truly a
foundational change. So up until this point, if you had two
apps in a Firebase project, regardless of whether they were the same platform
or a mix of iOS and Android, you had
to analyze them individually. We’ve heard from you all loud
and clear that this forced isolation doesn’t really map to
how you think about understanding in-app activity.
You think about what goes on in your project more holistically. So today, I’m really excited to announce project level reporting
in Google Analytics for Firebase.
So with it, the event data for all of your apps in a given project is
collected together to give
you a single and comprehensive view of what's happening. We're
excited about the possibilities this will unlock.
These possibilities are not limited to one moment in time. Being
able to take this holistic perspective has exciting
implications across your journey as an app developer. Across Firebase and Google
Analytics for Firebase we’ve developed a slew of capabilities
to help you grow your app. Now all those features become even
more powerful but also now that we have a core foundation that’s
inherently multiplatform, this is going to unlock entirely new
types of capabilities that we’re really excited to begin to
explore. We’re going to look at a lot
more of these today. Specifically, we’re going to
show you what this new holistic perspective looks like in Google
Analytics for Firebase. Then we're going to pivot and show
you how you can use it to build stable, well-functioning apps, how to
monetize more effectively and how to grow and retain your
high-value users. So without further delay, I would like to
invite Steve and Mai up on stage to show you what this looks like
in action. (applause).
>>STEVE GANEM: Thanks, Russ. Hi, everyone, I am Steve Ganem.
>>MAI LOWE: I'm Mai Lowe; we're product managers for
Google Analytics for Firebase.
>>MAI LOWE: Today we’ll start with what Russ has already
mentioned, gaining a holistic view of your app.
So let's dive in. For many of you, your Firebase
project represents your product and your business, and seeing the
analytics for each of your apps independently can be useful and
can inspire meaningful actions, but you really want to know how
your product is being used and how your business is performing
as a whole. And up until this point, the
only solution had been to integrate with BigQuery and
then to do some kind of data manipulation elsewhere, possibly
in Data Studio or in a tool that's internal to your company. Yet over 90% of the top 100 apps
on Google Play are also available in the App Store.
This is just one of many statistics that highlights
how important it is to see and understand
your app data across platforms, because many of you in fact have
apps across platforms. So I'm excited to announce that within
the next few weeks, you can view all of the analytics for your
app at the project level. So instead of seeing your data separated by platform, you can
now see your event activity combined. You will see things like first
opens, purchases, revenue, across your
Android and iOS app. And project level analytics as a
concept also applies to the other places
in Analytics that you've become used to, such
as audiences, funnels and retention cohorts.
Of course you’ll still need to be able to focus in on one
app in your project to make sure there are no issues with key
metrics or efforts that you have that are specific to a platform.
And you can still do that. By applying a stream filter.
So a stream is just a source of analytics data, and it maps
one to one with an app in Firebase. It’s
not an all or nothing situation. So you can filter on other
dimensions in addition to a stream. So maybe sometimes you
want to only see production versions of your app and not the noise of the test
versions. Our new flexible filter controls allow you to do
just that. The use cases just build from there, right? You
can see the concept applied to age or user properties, OS versions,
et cetera. Another place where you'll find it
particularly powerful to have project-level insights is
realtime. So here the changes in the way you work with
analytics are twofold. One is around what we’ve been talking
about, which is project level data. And two is around where
you’re seeing the data. Some of you may remember last year we
launched Stream View, which helps you understand what your
users are doing inside your product right now. It was a big
and exciting launch for Analytics. And at the time it
was its own dedicated section in our UI and it lived separately,
which made it hard to incorporate the insights from
that section within the rest of the context of your app.
So I’m excited to let you know that now as of today
throughout the whole product you see realtime cards to give you
that missing connection to your users, so you know what they’re
doing right at this moment in time. This gives you an idea
of how your business is performing right now
within the context of all the other app data that may not be
realtime in nature, and of course if you want to dive
deeper into realtime data, you can still do that, click through
and you’ll arrive in our Stream View
section. Okay, time to look at some of this stuff live. Steve
is going to do just that.>>STEVE GANEM: Today we'll
look at Flood-It!, which is a simple fun addictive puzzle game
available on Google Play and on the Apple App Store, and it
leverages a number of different Firebase features. It's also the public Firebase demo
project, so it’s a place you can go to check out all the latest
Firebase features, see what Firebase has to offer. To
access the demo project, just go to the Firebase console and look
for the prompt that says explore a demo project and click on it.
And you’ll be able to take a look at all the features, the
analytics features we’re talking about today, which will be
rolling out to your projects in the coming weeks.
Now, for a special treat for those of you who are attending
IO, if you’re interested in getting priority access to these
features, just head over to the Firebase sandbox at any time,
look for somebody wearing that bright yellow Firebase shirt and
let them know that you’re interested in that and they can
get you signed up for it. So without further ado,
let’s switch to the demo, please.
The demo starts here on the project overview for Flood-It!.
This is also being revamped and will be rolling out soon. Among
other changes, there’s a focus on a lot more analytics data
being shown here so that at a glance you can check the pulse
of your project. Previously there was one day worth of
analytics data here; now that's been expanded to show two weeks'
worth and it focuses on daily active users, day one retention
and revenue. Now, this is across your project but also
broken down by app in your project. Also as Mai mentioned before,
we’re spreading realtime data out throughout the product
showing it in context in other places so you can see even here
in the project overview we’re
showing a count of players playing Flood-It! and actually
participating in my demo right now, 79 of them. The project overview acts as a
good springboard into analytics if that’s your destination. You
can click on any of these cards to jump right into the Analytics
dashboard. Now I've landed on the dashboard, and
I'll take a moment to point something out. Some of you who
are familiar with the dashboard might have noticed
there’s a small change in our user
interface, the app selector is gone. There used to be a
drop-down next to the word dashboard that would require you
to select the app that you want to analyze and by removing that
we’re actually showing together all the data from all of the
apps in the project together. This is project-level reporting.
Now, it may seem like a small change in UI but actually this
signals a fundamental shift in how Analytics is going to work
going forward. From now on, the events being logged from any of the apps in your
project are actually being collected in a single Analytics property, and the way
that you configure your Analytics is going to change as
well. It used to be that you configured each app
independently. So for Flood-It! I registered a whole set of
audiences, user properties and funnels for the iOS version and
then had to go and do the exact same setup for the Android
version, and any changes I made thereafter I would have to go
and mirror on the other one. Going forward, we'll
configure Analytics at the project level so you can
configure it just once for your project and it applies to all
apps in the project going forward. What we’ve done also is combined
the configuration that you have at the app level into a single
configuration and promoted that to the project level, and our
engineers have worked extremely hard, and I have to give them a
shout out, really, really hard, to ensure that your historical
data and your configuration are migrating seamlessly into
this new world to make sure that you don’t miss a beat.
The result is an Analytics product which is simpler to
set up, easier to maintain and more powerful in its reporting
capabilities. It makes it easy to answer
important questions like how much money is my app business
making or how much time are users spending in my app across
platforms? The answer to those questions starts in this
dashboard. If you haven’t seen the dashboard in a little while,
it’s actually gone through a huge overhaul, complete
facelift. We center the reporting around questions that
people in your organization need to ask and answer on a daily
basis to do their jobs. Questions related to
conversions, engagement, revenue, stability,
your latest release, acquisition and
so forth. And we have realtime cards everywhere as Mai
mentioned, showing us that now it's up to 86 users playing
Flood-It! since I started. That’s 84 users across the apps
on the project. I recognize this is a developers conference
and there's bound to be some of you in the audience going, this
project-level stuff sounds great for someone but I work on just
the iOS version of the app. Are you telling me that I have to
look at all the data together and I can’t just isolate the iOS
version? And of course that’s not what we’re saying. We just
changed the way suddenly that you’re going to look at data
from one specific app. Going forward, you’re going to
apply a stream filter if that’s what you want to do. A stream is just a source of
Analytics data, and in Firebase that maps one to one with an app in
your project. So if I wanted to look at data just for the iOS version of Flood-It!, I could click
that, or just the Android version, or the whole project. Flood-It! is set up pretty simply but I've
seen a lot of projects set up with many
more apps. For example, some projects have one app for every
stage: alpha, beta, production. If you're set up like that, now
you’ll have the ability to actually just filter in the
release version of your app or just beta or just alpha so you
have more flexibility than you had before actually, you haven’t
lost any. So let’s look at how this works.
We put it together to answer some real questions. As a
product manager on Flood-It!, I want to know how much time people are
playing Flood-It! each day. I'll go to the user engagement card, which shows me daily
user engagement is eleven minutes
and 45 seconds for the average user playing each day. If I'm curious about iOS, I can
apply that as a filter and see that it goes down to 11
minutes, 39 seconds, which is remarkably consistent, not too
shocking because there’s feature parity among the apps.
But I can go further than that. There’s new flexibility
here which will allow me to also apply filters beyond just the
stream, and if I’m curious how this metric holds up with people
on new devices, maybe I want to filter for the iPhone 8 and the
iPhone X as well, and I can apply all those filters at the
same time. You can see that engagement
actually goes up by a little bit. Maybe this is a small sample
size or maybe there’s some correlation between people on these latest phones,
and Flood-It! or maybe the experience is just better. It’s
hard to tell without diving deeper. But you can do that
with these calls to action at the bottom of each of the cards
letting you go deeper along those same lines of
investigation. So I could go into more and more
examples, but now that you know how to access the demo project
yourself, I encourage you to check out these features for
yourselves and get a sense of how these new features in Analytics
will open up new ways for you to analyze your own app business in
the coming weeks. Can we switch to the slides,
please? >>MAI LOWE: We worked
really hard to meet your needs with the reports and dashboards
we're building, but we know sometimes you just have to take
the data and manipulate it yourself, maybe it’s an
additional analysis or joining the data with outside sources.
Up until this point, the only way to do that is to integrate
with BigQuery, and we love BigQuery, but it may be more than
what you need. So I’m happy to let you know that we’ve built CSV export into our
UI and now you can download almost any report that you see
with the click of a button in the overflow menu.
But CSV export is aggregates only, so some of you may be
interested in the raw underlying data and of course some of you
are already integrated with BigQuery and are loving using it in
collaboration with Data Studio, so you're wondering what
project level means for you. I want to show you we've got you
covered: project-level data is also available in BigQuery
and Data Studio, so you'll be able to see and visualize
project-level data without any toiling of any kind. You can
just focus on customizing your dashboard. So let's take a look.
Please switch to the demo.
>>STEVE GANEM: We'll pick up where we left off, and start with
access to your reporting data. For access to reporting data, you
download a CSV. First, keep in mind that with CSV download, what
you see is what you get, so the CSV that you download will reflect
the report you’re looking at, including the date range and the
filters that you have applied to it. So in this case, if I want
to download the metrics for my dashboard for project level
Analytics I’ll remove the filters that I had applied for
the earlier part of the demo and now just choose download
CSV. That finishes downloading, and we take a
look at the data and it looks like this.
So we have the data you see on the dashboard: a section for
each of the cards and the metrics for every day. You
can copy this into a spreadsheet to continue your analysis. I've
done that ahead of time because I'm crafty like that, and you
can see that my daily, weekly and monthly active users are
shown per day here in the sheet, where I can do a one-off
analysis. I’ll do one for example here today. Sometimes people want to compute
DAU/MAU, which is the ratio of daily active users to monthly
active users; this is a common proxy for
stickiness in an app. Maybe we want to do a one-off here. We'll define DAU/MAU, say it's equal to daily
divided by monthly, apply that down and then graph it. I've done a
one-off analysis where I can give an answer to someone asking
for it or check the pulse on some metric that I care about.
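For illustration only, the same one-off DAU/MAU calculation could be scripted instead of done in a spreadsheet; the row shape below is a placeholder, not the exact CSV column layout.

    // Illustration: compute the DAU/MAU "stickiness" ratio from rows
    // copied out of the exported CSV. The field names are placeholders.
    interface DailyRow { date: string; dau: number; mau: number; }

    function stickiness(rows: DailyRow[]): Array<{ date: string; ratio: number }> {
      return rows.map(({ date, dau, mau }) => ({
        date,
        ratio: mau > 0 ? dau / mau : 0,  // guard against division by zero
      }));
    }

    console.log(stickiness([{ date: "2018-05-08", dau: 12000, mau: 85000 }]));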
This is your reporting data, and it's aggregate, as Mai said. There are some
tasks that need more than aggregates: if you wanted to combine your Analytics
user data with data you collect independently, like from
a CRM database, you'll need access to the raw data. We
make that possible through our integration with BigQuery. The
way you access BigQuery: first you have to link your Firebase
project to BigQuery. To do that, start with this gear, go to
your project settings, and in the integrations section you'll find
BigQuery and you can link to that product there.
Once you do that, I'll head over to BigQuery to show you
the result. This is the BigQuery user interface, for those of
you unfamiliar with it. And BigQuery, for those of you
unfamiliar with the product, is Google's tool for running
super-fast queries over massive datasets. When you linked
a Firebase project to BigQuery, previously what that
would do is actually create a dataset for each app, so I have
highlighted the dataset for my iOS app and the dataset for my
Android app. Going forward, and keeping with the theme of project
level analytics, Firebase will export its analytics data to a
single project-level dataset, and the data for all
of your apps will be exported into that single dataset. This
will make it much simpler to run queries across your apps and to
join data from your project with external data. I’ll run a quick query here just
to show you a simple example of how it works. This is a
count of events broken down by event name and platform, and
I'll run the query. It usually just takes a second, and there
you go. It's a count of events for Android, and if I
scroll through you'll see the iOS events as well.
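As a hedged sketch of what a query like that could look like from code, using the BigQuery Node.js client; the project ID, dataset name and table wildcard are placeholders, not the demo project's real names.

    // Hedged sketch: count events by name and platform from the project-level
    // Firebase Analytics export. Project, dataset and table names are placeholders.
    import { BigQuery } from "@google-cloud/bigquery";

    async function countEventsByPlatform(): Promise<void> {
      const bigquery = new BigQuery();
      const query = `
        SELECT event_name, platform, COUNT(*) AS event_count
        FROM \`my-project.analytics_123456789.events_*\`
        GROUP BY event_name, platform
        ORDER BY event_count DESC`;
      const [rows] = await bigquery.query({ query });
      for (const row of rows) {
        console.log(`${row.platform}  ${row.event_name}: ${row.event_count}`);
      }
    }

    countEventsByPlatform().catch(console.error);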
Some of you may not be looking for a technical interface; you would
rather have something more visual to work with, but you
still want to access your raw data. That's where our
integration with Data Studio really shines. It's additive on
top of this BigQuery export. Product managers from Data
Studio and Google Analytics for
Firebase have created a set of two
templates to give you a quick head start to analyzing your raw
data in a visual way. Nothing technical is required here; our
help center can walk you through just a series of clicks
to get you up and running and see these visualizations for
yourself. This is what our public demo
project reporting looks like. And there’s three pages full of
curated reports that give you insights into your users and their location,
broken down by city, language, et cetera, filterable by
platform, app, et cetera, and it goes much deeper than this, and you can just make it
your own. You click the Edit button and you can start editing
the graphics, give it your own branding, remove metrics or add
your own; you can click on any control, change the dimension or
change the metric. When you have it looking just the way you
want it, you click share and you can share the report with any
party that’s interested in seeing it, and it keeps up to date as new data pipes into
big query. So now we’ve talked about how
project-level Analytics will change your reporting, and we've
talked about how it also changes the way you get access to the
underlying data. Now we'll pivot and talk about
some features which will change the way developers work with
Analytics. Can we switch back to the slides, please? >>MAI LOWE: We all know
it’s critical to have insights into your user base, revenue,
growth and so on, and it's foundational of course to have an app that's
stable and actually works as you intend it to, and we all know you
guys love Crashlytics. That's why we're delighted Crashlytics is
now part of Firebase and is the crash reporter for Firebase.
But if you liked Crashlytics before, just wait, it gets even better
with Google Analytics for Firebase. We also know many of you guys
love our breadcrumbs feature, which you can now
access in the Analytics section of
the console. We want to make sure you have all the data you
need at your fingertips when you need to troubleshoot. Steve
will show you in a second. Beyond understanding your
crashes, you also want to know how your releases are landing with
your users. You want to know the answers to questions like
how is my latest release affecting the user experience,
or how am I trending release over release? Thus far we didn't have that
information for you. So I'm happy to show you that we have a
latest release report, which was inspired by the Fabric report of
the same name, and here we show you how your latest builds are
performing. And to make things easy for you,
if you have any issues around stability in your latest
release, we deep link you into Crashlytics or crash reporting
depending on what you’re using so you can be in the right place
to troubleshoot right away. Coming up we have a case study
to bring these features to life and center stage is actually an
app that Steve built. Let's take a look.
>>STEVE GANEM: Before coming to Google I was a
lifelong game developer, and one of the last games I released
was Transworld Endless Skater. Skate, as we call it, is a
skateboarding take on the endless runner
genre; I used Analytics for tracking my KPIs and Crashlytics
to track my stability. I want to reiterate what Mai said:
we are so stoked to have Crashlytics as part of Firebase
now, there’s not a better tool on the market to help monitor
the stability of your app. Most crash reporting tools at
least give you a list of issues and number of users it’s
affecting and also will give you a call stack related to each
issue. But as we all know, call stacks aren’t always enough to
understand where a crash is occurring or how to reproduce
it. For example, take a look at this crash report, this issue shown in
Crashlytics for my own app Skate. The issue I’m looking at
is identified as an OpenGL rendering crash;
the crash occurs in an OpenGL driver, basically some triangle somewhere caused a
crash. Good luck trying to fix that. If my own QA department
had reported this, I would say give me more info: what skater,
what level, what screens the user used before the crash
occurred and then maybe you can actually get started fixing it,
and that’s why I’m so excited about the bread crumbs
integration because that’s exactly the sort of data you’ll get, context
you’ll get from having logged analytics events already. So
you can see in this example, after the session start, this user
logged a select_content event, identifying the skater the user
chose, and then a level_start event at the start of the level,
which identifies that the user started at the school level and
that they were not using the gamepad or logged in; it basically gives me
all this context about the answers to the questions I might
otherwise ask.
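To make the breadcrumbs idea concrete, here is a hedged sketch of logging that kind of contextual event. The game in the talk is a native mobile app, but the same pattern is shown here with the Firebase JavaScript SDK for consistency with the other sketches in this document; the config values and event parameters are placeholders, not the app's real instrumentation.

    // Hedged sketch of breadcrumb-style Analytics events via the Firebase
    // JavaScript SDK. Config values and parameters are placeholders.
    import { initializeApp } from "firebase/app";
    import { getAnalytics, logEvent } from "firebase/analytics";

    const app = initializeApp({
      apiKey: "YOUR_API_KEY",
      projectId: "your-project-id",
      appId: "YOUR_APP_ID",
      measurementId: "G-XXXXXXX",
    });
    const analytics = getAnalytics(app);

    // Which skater the user picked.
    logEvent(analytics, "select_content", {
      content_type: "skater",
      item_id: "skater_03",
    });

    // Where the level began and how the user was playing.
    logEvent(analytics, "level_start", {
      level_name: "school",
      using_gamepad: false,
      signed_in: false,
    });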
And so this changes how developers think about analytics: now they
want to log events because it will help them do their job
better on a daily basis. If I'm an app developer, I'm putting these in
there not necessarily because it will help the business person
track success, but because it will help me fix bugs
when they occur. It's really exciting to me to think it will change
the way developers work with Analytics. Crash reporting is one
way to examine the health of your app, but the latest release
report we launched approaches the problem from another
direction. This is a screenshot of the
latest release report for Skate and it compares the stability
and engagement of your latest release compared to previous
versions to make sure everything is looking good. It might be
that your app is just as stable but people are spending a lot
less time in it because you messed up and this is a case
study of when that recently happened to me.
Looking at the adoption chart, an area looks fishy: a 7-day period where an app version was rolled
out and subsequently rolled back and a new one pushed out.
That time period corresponds to a vacation I went on right after
releasing a new version of my app, and I forgot to bring my
work laptop, so I couldn't fix it until I got back
from vacation. It really stunk to
do this to my users; I wanted nothing
more than to fix it, but at least I became aware of the problem
right away, and I discovered it actually using our own tool,
the latest release report, no joke. How did I discover it? The previous version of my app
had users spending an average of almost 9½ minutes playing the
game each day, and in the new version of my app that went down
to 27 seconds. For those of you interested in the gory details,
I had forgotten to associate my expansion file with my APK, and it has all my game data in
it, so basically users were updating to a new version of my
app that had no game data, at which point they would leave and
often uninstall. I fixed it, and I can see that the
user engagement went up to almost 14 minutes a day. So I'm
happy to report that Skate is stable again and ready to make
money. Thank you. (laughter).
So like with many games, Skate monetizes using both in
app purchases and in app advertisements. Next we’ll talk
about some new features that actually will change the way
that you monetize your apps as well.
>>MAI LOWE: Now you have an idea of how to get both
broader and deeper insights into your app, and you know how to build
something that's stable and functions, so let's talk about
the next step in some of your journeys, which is how to make
some revenue with your app. One great way to do that is through our AdMob integration. We included ads revenue when we
launched, as well as lifetime value and campaign reporting in the
acquisitions card on the dashboard, but at the time
mediated revenue was missing and we know how helpful it is for
you all to have it included. Maybe some of you already heard
this announced yesterday, the launch of open bidding, but
nonetheless I’m excited to let you know that as of today you
can see mediated revenue for participating ad networks as part of both
revenue and lifetime value and of course like everything else,
also at the project level. And once you’ve begun monetizing
your app, you can really lean on your data to help you build the
best possible ads experience for your users. What does that
mean? Well, we’ve all been there in an app that has ads that are irrelevant
and annoying and they distract you
from what you’ve come to do, it’s really not good for
anybody. We want to help you build the best possible ads
experience for your users because we want your users to
discover great new content that delights them, and we want the
discovery to be seamless, natural and organic and to make
sense. Firebase A/B testing launched
last year, and this is the data-driven
way for you to make decisions around what kind of
ads experience is best for your users. You can optimize on AdMob revenue
or on total revenue, meaning in-app purchase plus AdMob revenue. The
beauty is it’s a win-win situation, your users have a
better experience, they discover content that delights them and
you make more revenue in the process.
But A/B testing isn’t the only way you can maximize your earning
potential. Predictions is in beta and it’s
one of the ways Google applies machine learning to the wealth
of your app data to bring you meaningful insights and
actionable calls so you can grow your business.
Your app event data is analyzed to discover
statistically significant correlations between something
like spend and other characteristics of your users
and your app. Machine learning is applied, and Firebase can target an audience
deemed most likely to spend, either with
Remote Config or a push notification. Take note that
machine learning is most powerful when you have a high
volume and high variety of events that are important to
your app. So the more meaningfully and
contextually you instrument your app, the better machine
learning can train the models and the more helpful the results
will be for you, in this case maximizing revenue.
All right. Let's look at a real-life case study to
visualize what all this means as you're trying to build the
best possible ads experience and effectively monetize your app.
Let’s take a look.>>STEVE GANEM: So Rockbite
makes the hit game Deep Town which I’m sure all of you have
heard of already and in this game you mine for metals and
gems using robots with skills and abilities. It has a deep skill tree and
robust in-game economy which includes both items that you can
purchase with real money and items that you can earn over
time. Now, Rockbite wanted to optimize
their app for revenue like we all do and this is a study of
what they did and how it all worked out. For some background, Rockbite’s
Deep Town involves two types of in-app
currency, crystals shown at the top that you can buy with real
money and there are chests which you can purchase with crystals.
Previously the team at Rockbite was doing experiments based on
in-house heuristics for how to organize this shop for each
user, but all their experiments were inconclusive. Using a
combination of Firebase Predictions and Remote
Config, however, they were able to dynamically adjust the
layout in the shop based on users' predicted behaviors. They
tried and tested multiple options, but ultimately here is
what worked for them: if a user was predicted not to
spend by Firebase Predictions, they would show crystals at the
top of their shop, which again you can buy with real
money. The result was a 24% increase in revenue among non-spending users.
Conversely, if a user was predicted to spend, they would
move chests which you can’t buy with real money to the top of
the shop and put crystals which you can buy with real money
underneath those. That increased revenue among this
group by 25%. Now, at first glance, this may not seem
intuitive to you; if the user is predicted to spend, you might
think that you're going to put the items that they can
purchase at the top of the shop, but they tried and tested, and
this is ultimately what worked. It just goes to show you how
important it is that you have the ability to experiment and measure because intuition is
great, but data is king. The CEO from Rockbite was impressed, saying:
Firebase Predictions with Remote Config gave us the power to
target users based on their predicted behavior and rapidly
test different purchase flows in our beta app. Without
Firebase we would not have become profitable in today's
competitive mobile gaming industry. Sounds like Deep Town
is in great shape now, in the enviable position
where their engagement and conversion metrics are looking
good and they can focus on acquiring the best users.
>>MAI LOWE: All right. Let’s move on to the last part
of the journey we'll cover today, which is to grow your
user base with high-value users and to make sure they keep coming
back to your app. First, let's tackle the problem of finding
your high value users. We all know this is really difficult to
do. Many businesses have multiple efforts going on on
this front and they often increase in volume as your
business matures. Generally speaking when you try to find your high value users, you
start by identifying an event or action that’s important to you
and mark that as a conversion. This unlocks attribution
reporting and shows you what drove that user to perform the action you've deemed
as high value. Thus far, attribution reporting
in Analytics for Firebase has only
included ad clicks as input. It's hard to justify that only a
click will credit an ad network or
campaign with a conversion, especially in our age of rich
media, where there are plenty of mediums that may not neatly fall
into a click-only model. I'm excited to announce that in the next
few weeks our view-through attribution feature will be made
publicly available; it will not only look at click data to credit a
conversion to source, medium and campaign, but also impression
data, so you can have multiple lenses to judge your success by. Building a holistic, unbiased
and accurate attribution model is one of the cornerstones of
our Analytics product. We understand that you
want all your advertising efforts evaluated for
effectiveness by one engine and to see the results side by side.
To address this need, we’re integrated with many different
ad networks that we think you would tap into as you continue to build on and expand your
growth efforts. We’ve been integrated with AdWords since
our launch at IO two years ago and a couple months ago we went
to the public with our integration with DoubleClick,
and in addition to that, we’re also integrated with all of the
major third party ad networks that you
all use. All right. So the last step once you’ve found
these high value users is figuring out how to get them to
continually come back in your app. So how do you do that?
As some of you know, if you have already linked Firebase and AdWords together,
you can create campaigns that
target Firebase audiences. Late last year we expanded on this
integration. We wanted to leverage the rich audience
builder functionality you’ve come to love in our AdWords
front end. We publicly launched dynamic remarketing
late last year, this sends raw conversion and parameter data to
AdWords so that you can customize and configure your
audiences in the AdWords front end for your marketing campaign
purposes. We’re really excited about this launch because I
think it really showcases the flexibility of our data and for
us to have the ability to meet you wherever you happen to be.
All right. That’s it for Steve and I. Russ is going to
come back and close us out. (applause) .
>>RUSS KETCHUM: Thanks, Mai, thanks, Steve. You guys
covered a ton of ground today. We announced a fundamental
change to Google Analytics for Firebase
that will let all of you take a new
approach to Google Analytics. We saw it across our reporting, our data export to BigQuery, Crashlytics
now enhanced with breadcrumbs, and how you can leverage A/B testing
and Predictions, all in service of monetization and across growth
and retention. With that, I want to thank you
all for staying late, for coming out or for tuning into the live
stream, we definitely want to keep the conversation going. To
do that, we would love for you all to stop by the Firebase
sandbox, just look for anyone in a yellow shirt, we also have a
bunch of related sessions for you to check out both the live
one tomorrow and others on YouTube now and we would of
course love to hear your feedback. So with that, thanks
again for coming and enjoy the rest of IO.
(applause). (The session concluded at 6:09.)
>>Thank you for joining this session. Brand ambassadors
will assist with directing you through the designated exits.
We’ll be making room for those who have registered for the next
session. If you’ve registered for the next session in this
room, we ask that you please clear the room and return via
the registration line outside. Thank you.
>>Welcome. Please fill in
the seats near the front of the room. Thank you. >>Good evening, everyone,
welcome, I hope you're having a great time at Google I/O so far. I'm Avnish Miduthuri, I'm a
product manager at Google.>>TONY CHEN: I'm Tony Chen, a software engineer also
working on Google Play.>>AVNISH MIDUTHURI: I'm sure a
lot of you here are already excited about the potential for
e-commerce, and rightly so. In the U.S. alone in 2017, retail sales neared
half a trillion dollars, with the highest year-on-year growth in
the last six years. We’re seeing similar growth trends
globally. But this growth is happening
despite the fact that online conversions haven't really
improved; conversions across desktop and mobile devices have
been largely flat over the last couple of years. It isn't hard to see that cart
abandonment during checkout is one of the leading causes: when
users go to check out online, they see long checkout forms, 15
plus form fields to enter their card information, billing
address, their shipping address, log-ins, passwords, and the list
goes on. When users see all these forms,
it's not surprising that they drop off, especially when
they’re in a low speed data connection on mobile devices.
So we think there’s a huge opportunity here to really
improve online checkout. That’s what we hope to do with
Google pay. Thereto start of this year we
uniif — the start of this year we unified our checkout process
across all of Google under a single brand, Google
pay. Vision for Google pay is to enable users everywhere to
pay with their Google account anywhere they’re signed in.
Google Pay is more than just an app on a device or a button on a
website. We’re building it as a platform integrated as a core
part of a user’s Google account and available to them everywhere
they’re signed in to Google. In this session we’ll spend some
time to share some of the changes we’ve made over the last
year as we've launched Google Pay and talk about how you can
implement the Google Pay API and share product enhancements coming up.
A few years ago we launched Android Pay and started our
journey with enabling easy in-app
payments using cards users provisioned on the device. We
launched the feature in 12 countries and saw great adoption
from thousands of developers but we heard from you our developers
that you wanted more, you wanted access to many more users paying with more payment
methods and in several more countries.
So we went back to the strengths of Google, our billions of users
across multiple popular products around
the world. Our users have saved hundreds of millions of cards to
their Google accounts, making purchases on apps like the Play Store,
YouTube, or even shopping on the web using Chrome. Google Pay users can make
payments with payment methods stored in their Google account
anywhere they’re signed in, a consistent and seamless
experience. Google Pay is already the way
users pay across Google; users see
Google Pay today when they check out on the Play Store buying an
app, renting a movie, or just on Chrome autofilling a
checkout form. Not only is Google Pay present
when users buy from Google, but it’s
also being incorporated into several
new services for transactions that users can make with third
party merchants. For instance, you'll see Google Pay powering
checkout and promotion experiences within Android Messages for
merchants that have used rich business messaging, and also enabling transactions with the Google
Assistant or on web actions. We've extended Google Pay APIs
to Google developers so users can pay with their cards on
their Google account when they’re on your sites or apps
without any additional setup or having to download a new app. We launched earlier this year
with payments enabled on Android devices, both native apps and on
the mobile web using Chrome’s implementation of the payment
request standard. In the last few months, we’ve already seen
great adoption from several developers around the world.
Take a look. Users can already use Google Pay on a range of merchant
sites: retail sites, travel, transportation, ridesharing
and many more. We have merchants live across the world,
from the U.S., Brazil, Russia, the UK,
Australia, Japan and many more. Merchants with a global user
base can enable Google Pay for all their users with a single
API integration. For instance, a leading order-ahead
app launched Google Pay in over 12 markets,
and Uber is rolling out Google Pay for all their users globally.
Early developers have seen great
results from their integrations. For example, HotelTonight
improved conversion by 5% thanks to the easy payment experience. As I mentioned earlier, our
newer Google Pay APIs enable many more users to pay with all
the cards in their Google account, so when StubHub
updated from Android Pay they saw a 7X increase in the number
of unique users paying with Google Pay. When Airbnb updated, they saw
an 11X increase in daily transactions from Google Pay. Our APIs also enable merchants
to build streamlined onboarding experiences for their users. For example, new Airbnb users
when they set up the app can completely skip the payment
method setup flow and complete a booking with Google Pay without
ever having to enter any payment information or billing
addresses. We’re now taking a huge step to
make Google Pay available to many more users on the web. Last week
we announced it's available across the web, from Android to
desktop and yes, even on iOS devices. Through one integration with our
JavaScript libraries, merchants can enable Google Pay globally,
available to all your users, regardless of what device or
browser they're shopping on. We've also focused on making our
APIs as simple as possible to integrate; one merchant
reported it took one of their developers a week to
enable and launch Google Pay. In keeping with our mission to
make integrations as simple as possible, we've also
partnered with leading e-commerce platforms to make
accepting Google Pay on the web even simpler. For instance, if
you're a Shopify merchant, all you have to do is go to your
merchant setting page in the payment provider section and
with a single click you can enable Google Pay. Shopify
launched a few weeks ago, in that time we’ve already seen
tens of thousands of merchants enable
Google Pay. Finally we partnered with several leading
payment providers, gateways and processors around the world to
enhance security and make it easy for their merchants to add
Google Pay. These providers have already
enabled Google Pay in their SDKs and merchant solutions, and
we have many more coming soon. To show you now how Google Pay
works on the web and how you can integrate the Google Pay APIs,
I’ll hand it over to Tony.>>TONY CHEN: Thank you,
Avnish. Can you switch to my iPhone please? So I can’t
really walk very far with this. So I'm going to quickly walk you
through what the web experience is going to be like. I’m on a
merchant site right now, I’ll buy something and check out with
Google Pay. As you all know, Mother's Day is
coming up and I haven't bought anything for my mom yet. As a
father of two, I would normally try to buy something for my wife
first but every time I try to pick out clothes for her,
somehow they always end up getting returned.
So I think my mom will appreciate this nice green
sweater, so I'll go ahead and add it to the cart. I'll click checkout and up pops
a Buy with Google Pay button. I'll open this, and this shows our
payment sheet; the payment sheet shows the available cards and
shipping addresses associated with the particular Google
account. If you happen to be logged into more than one
account, you can easily toggle between them like so. As you
can see, this is a test account, so there aren’t any cards set
up. I’ll go ahead and switch it back, so I can easily check out.
I’ll ship this to my mom, so I need to add her shipping
address. Sometimes I really have a tough
time remembering exactly what my mom’s shipping address is. It’s
a good thing that Google Pay shows autofill suggestions from
the Google Maps API to make even filling out this form
super-simple. Now that I’m done, I’m just going to go ahead
and click the continue button and now we’ve just made a
purchase. Google Pay also remembers the
last selected card and shipping address to make your next
checkout easier. For example, if I open up the
payments sheet again, you’ll now see that my mom’s shipping
address is now the default. Users will see the same
consistent user experience across all major browser platforms, such as Safari,
Firefox and coming soon Microsoft Edge.
One other thing to note is that this merchant page even
though we built this just for IO, we’ll actually release this
to the public and to all of you developers so you can play
around with our APIs without first having to integrate. We built this nifty little
developer console on the top right. There's a little bug
icon, and you can easily manipulate the request. So for example, if I wanted to
not collect shipping address for whatever reason, I can change this to
false, and if I for some reason can't support all available
card networks, I'll easily remove one like so, and then I'll
click on the Google Pay button, and
now you’ll see that the shipping address is no longer available
and my previously selected Visa card is no longer accepted. Can you switch back to the
slides, please. For that test site you just saw,
go to g.co/demo. Now let me walk you through how easy it is
to integrate with the Google Pay API. With these four simple
steps, you can enable the same payment experience I just showed
you for all Google users. The four steps are as follows:
you'll download some JavaScript, check if the user is
on a supported browser, add the Google Pay button, and then open up our payments sheet.
First you’ll add the script tag to your site, as soon as the
script is loaded, you construct the payments client object by
passing it in an environment field. We support two
environment fields, test and production.
In environment test, you don’t have to register with us.
You can actually play around with the APIs yourself and
integrate into your site. In environment test we do show the users real data. However,
whenever they make a selection, we actually return you
a fake token. If you’re integrating with one of our
supported processors, we’ll actually return you a token that
you can actually charge in their test environment. Once you
complete the integration and you're ready to handle real
payment, come register with us through our self-service portal
and then flip the environment to production. Now that you have the payments
client constructed, the first API you'll
call is isReadyToPay, which tells you whether
the user is on a supported browser or not. At Google we
really focus on optimizing for conversion, so rather than
showing yet another payment option, you should only show
Google Pay if you think it will help improve your checkout
experience. So if isReadyToPay returns
false, don't render the Google Pay button.
In the near future, we're going to make an enhancement to
the isReadyToPay API so you can ask whether the Google
user is ready to pay with an existing payment method.
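A hedged sketch of that check, again following the currently documented request shape rather than the exact slides.

    // Step 2 sketch: only render the button if isReadyToPay succeeds.
    const baseRequest = { apiVersion: 2, apiVersionMinor: 0 };

    const cardPaymentMethod = {
      type: "CARD",
      parameters: {
        allowedAuthMethods: ["PAN_ONLY", "CRYPTOGRAM_3DS"],
        allowedCardNetworks: ["MASTERCARD", "VISA"],
      },
    };

    paymentsClient
      .isReadyToPay({ ...baseRequest, allowedPaymentMethods: [cardPaymentMethod] })
      .then((response: { result: boolean }) => {
        if (response.result) {
          addGooglePayButton(); // defined in the next step's sketch
        }
      })
      .catch(console.error);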
Now that you know that the user is ready for making a
payment, you're going to call our second API, createButton. createButton returns a simple
HTML element that you can append to
the DOM of your site. We recommend you call this
API over constructing your own button with static assets, so you
can take advantage of the improvements we make to the
button over time. For example, in the near future we're going
to automatically translate the button to the user's locale in
order to improve click-through rate. Another thing we recommend is
that you should use our default color which is black. However,
if you happen to have a site with a dark theme, we provide an
alternate white color that you can use. The minimum that you need to
pass to this createButton API is a
click event listener function.
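A minimal sketch of the button step; the container element id and handler name are placeholders.

    // Step 3 sketch: ask the API for a button element and append it to the page.
    // The "gpay-container" element id and handler name are placeholders.
    function addGooglePayButton(): void {
      const button = paymentsClient.createButton({
        onClick: onGooglePayClicked, // defined in the next step's sketch
        buttonColor: "black",        // or "white" for dark-themed sites
        buttonType: "long",          // or "short" for narrow layouts
      });
      document.getElementById("gpay-container")?.appendChild(button);
    }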
So once you add the button to your site, when the user clicks on the
button, you're going to call
loadPaymentData to actually open up the payments sheet. The
first thing you're going to do is construct this payment data
request object, which is just a set of payment configuration
used for this particular transaction. For example, if
you need to collect a full billing address because you want
to do an AVS check, or to collect a phone number so that you can
contact a user, you can easily configure it like so.
We highly recommend that you collect as little information as
necessary from the user as we’ve seen conversion rates tend to
drop whenever you ask the user for additional information.
One other thing to call out in this request object is this
payment method tokenization parameters. This object may
have a long name, but all it is is a set of key-value pairs that
we’re going to forward to your processor, so be sure to check
your processor’s integration
guidelines to find out what they need from us.
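As a sketch of what that request object might look like; baseRequest and cardPaymentMethod come from the earlier sketches, and the gateway, merchant details and price are placeholders, with your processor's documentation defining the real tokenization parameters.

    // Step 4a sketch: describe this transaction. Gateway, merchant details
    // and price are placeholders.
    const paymentDataRequest = {
      ...baseRequest,
      allowedPaymentMethods: [{
        ...cardPaymentMethod,
        tokenizationSpecification: {
          type: "PAYMENT_GATEWAY",
          parameters: {
            gateway: "example",
            gatewayMerchantId: "exampleGatewayMerchantId",
          },
        },
      }],
      transactionInfo: {
        totalPriceStatus: "FINAL",
        totalPrice: "29.99",
        currencyCode: "USD",
      },
      merchantInfo: { merchantName: "Example Merchant" },
      shippingAddressRequired: true, // only request what you really need
    };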
Now that you've constructed the request object, you're going to
pass it to loadPaymentData; that's going to open up our
payment sheet, the user is going to make a selection, and we're
going to return to you a payment data object. The payment data object consists
of metadata about the user selection, so that you can use it
to render the order confirmation screen. Also within the payment
data object is the payment method token. The payment
method token is what you're going to actually use to
complete the transaction by passing it to the processor.
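And a sketch of the click handler that opens the sheet and forwards the token; the "/charge" endpoint is a placeholder for wherever your server-side charge call lives.

    // Step 4b sketch: open the sheet on click, then forward the returned token
    // to your server. The "/charge" endpoint is a placeholder.
    function onGooglePayClicked(): void {
      paymentsClient
        .loadPaymentData(paymentDataRequest)
        .then((paymentData: any) => {
          const token = paymentData.paymentMethodData.tokenizationData.token;
          return fetch("/charge", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ token }),
          });
        })
        .catch(console.error); // also fires if the user closes the sheet
    }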
One thing to know is that security is already baked into our product. Our payments sheet
is always opened within a separate popup window to eliminate user clickjacking or
any other security vulnerabilities. What that
means is if the site happens to be compromised, no malicious
content can be overlaid onto our payments sheet so it won’t
confuse the user by obscuring their data; what that means is that
Google users' data is always kept in their control.
So what that means for you as a developer is that when the click
event happens, you must call loadPaymentData
synchronously. Don't make network requests to the server
or any other asynchronous calls in order to avoid having the
popup window blocked by the user’s browser.
And that’s our entire integration.
So it's super-simple. I'm going to prove to you how simple
it is by doing a live implementation from scratch, and
all I'm going to do is basically copy and paste from the deck
that I've already presented to you to show the entire
integration. Can we
switch to my Pixelbook, please? Remember, folks, this is a live
implementation, so there is a possibility that things may not
work. Just in case, I have this cheat
sheet that may help me out, but hopefully I won’t need it.
All right. So I’ll do this implementation
on JSFiddle, which allows me to enter JavaScript on the fly and
print it out onto this result page on the bottom right here. On the left we have HTML,
JavaScript in the middle, CSS on the right. We've already
added the script tag for pay.js and already
constructed the payments client with environment test, so I'm
just going to expand the JavaScript section a little bit so you have more room to
see. The first thing we'll call is isReadyToPay.
It returns a simple boolean that
tells you whether the user is on a supported browser or not.
So I'll print out the response of isReadyToPay and update the site.
It's taking a little time to think about it; I'll try it again.
Thank you for catching that, I changed this based on someone
else's suggestion. Let's print that out again. All right. Run it again. The result shows you that we're
on a supported browser. Next thing we’ll do is it call create
button. Create button returns a simple
HTML element that we’ll uses to embed into our site so I’ll ald the append a button
into the site and then need to fill in this click list
function which I’ll leave blank for now, updates again, and here
you go, here is our black Google Pay button. By default as I
mentioned, we always use the black color. However, if your
site happens to have a dark theme, you can easily change the button color to White like so,
then you’ll automatically get this White button. If you’re having to develop a
responsive page or if you just don’t have real estate and you need a
narrower button, you can easily change the button type to short
to get a shorter but tofnl as I mentioned earlier, you don’t
need to fill in any of this at all, by default, we’ll always
use the black button and standard size.
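A sketch of that createButton call with the optional styling parameters just mentioned; the click handler name and the container element are placeholders:

    // createButton returns an HTML element that you append wherever the button
    // should appear. Color and type are optional; black and the standard (long)
    // size are the defaults.
    var button = paymentsClient.createButton({
      onClick: onGooglePayButtonClicked,  // placeholder click handler
      buttonColor: 'white',               // optional: for dark-themed sites
      buttonType: 'short'                 // optional: narrower button
    });
    document.getElementById('container').appendChild(button);  // placeholder container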
Now we’ll call loadPaymentData by
constructing this payment data request. So I’ll copy the request object
from these two slides, then I’ll
merge them together, and then we’re going to fill in the click handler I left empty so that it calls loadPaymentData on the button click event. Then, to show that everything is wired together properly, I’m going to fill in this processPayment function. The processPayment function is going to take in our payment method token, and the token itself is a JSON object, so I’m going to call JSON.stringify so you can see the content. All right, here we go. One click, wait a little bit, click on the button, up pops the payment sheet, you make a selection, and this always seems to happen. All right, update again; it’s going to work the second time. There we go. Here is the exact token on the result page. (Applause.)
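For reference, the wiring shown in that demo looks roughly like the sketch below. The request object is illustrative only, in the 2018-era (v1) shape, with placeholder values rather than the exact contents of the slides.

    // Illustrative v1-style request with placeholder values; merge in the
    // tokenization parameters your processor requires (see the earlier sketch).
    var paymentDataRequest = {
      allowedPaymentMethods: ['CARD', 'TOKENIZED_CARD'],
      cardRequirements: { allowedCardNetworks: ['MASTERCARD', 'VISA'] },
      transactionInfo: {
        totalPriceStatus: 'FINAL',
        totalPrice: '10.00',
        currencyCode: 'USD'
      },
      paymentMethodTokenizationParameters: paymentMethodTokenizationParameters
    };

    // The demo's processPayment just pretty-prints the payment method token so
    // it shows up on the JSFiddle result pane; a real integration would send it
    // to the processor instead. The #result element is a placeholder.
    function processPayment(paymentMethodToken) {
      document.getElementById('result').textContent =
          JSON.stringify(paymentMethodToken, null, 2);
    }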
Thank you. Can we switch back to the slides, please? As you can see, it’s super simple; you can complete the entire implementation within a matter of minutes. I just shared with you what
we’ve been working on for the past few months leading up to
this launch. I’ll share with you a couple things we’ll be
launching soon to enhance our offering both for users and for you, the developer. Let’s start with the user
experience first. As I mentioned before, we want
to have the same consistent payment experience for all major
browser platforms. However, we’re going to try and optimize
wherever we can. So for the browsers that support native UI,
such as Chrome with payment request, we can offer this nice
native payment sheet experience on mobile websites. Similar to a popup window, this
native sheet is impossible to clickjack because the sheet
itself is completely separated and detached from the Chrome app
itself. But unlike a popup window, this payment sheet is a
streamlined purchase experience because it allows the user to
stay in context of the payment within your site.
Soon we’re going to be expanding native support from
mobile onto the desktop with the payment handler spec.
So payment handler works by installing service workers into
the users’ browser. A service
worker is just a JavaScript interface between our payments
sheet and the native UI in the browser.
You don’t really have to understand what a service worker
is, nor do you really have to understand what payment handler
does because when you integrate with the Google Pay API, all
this will happen automatically. We’re going to do all the heavy
lifting for you. So let me walk you through a
quick preview of what that’s going to look like on Chrome
desktop. Can you switch me to my Pixelbook, please? We’re on the merchant page
again. This time when I click on the Google Pay button, up
pops our payment sheet within this nice secure native frame. Very similar to the mobile
experience, you cannot clickjack this. Also, rather than being embedded into a popup window, you can tell it’s the same sheet, native and secure, and it’s not part of the website, because you can see that the native sheet itself is overlaid right on top of my toolbars. So I’m going to click continue just to show
you that it works, and there we go, we just completed payment
with payment handler. Let’s switch back to the slides,
please. So as you can see, we’re going
to always look for new ways to improve our payment experience. Now, for developers, this
payment handler example is a great way to showcase what our mantra is,
which is we want to ensure that you only have to integrate with
the Google Pay API once and after that you’re going to be
able to continuously benefit from all the improvements that
we make without you having to change any parts of your
integration. When you integrate on Android,
for example, we can make updates
through Google Play services to the 2 billion Android devices, but this has always limited us to client releases. Now that we launched on the web and switched to this new JSON interface, we’re
able to take advantage of the continuous server pushes so that
we can bring new features and improvements on a daily
basis. As we look to expand Google Pay to additional
surfaces, we want to bring the same consistent unified
integration experience for all developers, so I’m happy to
announce that we’re going to be extending the same JSON interface, bringing it from the web to Android, AMP, Actions on Google, and more.
So let me walk you through a couple of these examples by
starting with accelerated mobile pages.
Now we’ve seen tremendous adoption of AMP from around the
world. AMP by design is a restricted
set of HTML with limited JavaScript support. By working
closely with the AMP team, you’ll soon be able to enable Google Pay by using standard AMP components. All you need to do is drop in the same JSON payload as you would for loadPaymentData and easily render a Google Pay button within your AMP page. The button itself automatically opens the payment sheet and returns the payment token. As we all know, payments are no longer limited to just websites and
native apps. If you happen to be collecting payments through
Android Messages or through the Google Assistant, you’ll soon be
able to also enable the same Google Pay experience with the
same JSON interface. Consistent and easy to integrate on all surfaces, we’re making
Google Pay as easy to integrate everywhere as it is for users to pay everywhere.
Now, if you’re interested in learning more about how to
collect payments on Android Messages or through the Google
Assistant, go find my buddy Shawn who will be at the Google
Assistant sandbox from 12:00 to 2:00 tomorrow. All right,
Avnish, take us home.>>AVNISH MIDUTHURI: All right.
Thank you, Tony. So we’ve shared a lot of details
with you today about how you can add Google Pay to your site to
improve conversion, and we’re excited to see how you’ll
integrate. We know there’s a lot of information out there,
so we’ve nicely integrated it into two websites. There’s one where you can learn about how Google Pay can improve things for your business overall, and that’s g.co/pay/business; if you’re ready to start integrating, head into the developer docs there. Tony
and I will be available in our sandbox after the session
located in Dome G, so please stop by and ask questions. We’ll also have members from the Google Pay team available tomorrow if you can’t make it today. So stop by, ask questions, try a
few demos out. Also, if you want to learn more about Google Pay, we have two more sessions tomorrow. The first is led by our payments UX team and walks through design principles to optimize conversion using Google Pay. And to learn how you can enable payments via the Assistant, check out the session on adding transactional capabilities to your Actions tomorrow afternoon, hosted by our Assistant team. Finally, we’re holding a hackathon next month in June in San Francisco; you can sign up at the link there or stop by our sandbox if you want more information.
We host developer events globally so if you would like to
see an event near you or if you want to stay tuned for upcoming
events, sign up through the developer documentation and we’ll keep you posted. And that’s it. Thank you so much for attending. Enjoy the rest of Google I/O.
(The session concluded at 6:59 p.m.)
