Livestream Day 2: Stage 3 (Google I/O ’18)

August 16, 2019


>>Mary: Hi, I'm Mary, a software engineer.
>>And I'm Will. Flutter is Google's UI toolkit for building native mobile apps. Working with Flutter is expressive and flexible: you can build high quality user experiences and iterate quickly. You can use Flutter today, and it's already being used by developers and organizations around the world. It comes with a library of widgets such as containers and list views, with efficient scrolling, gestures and more. Material Design is an adaptable design system, backed by open source code, that helps designers and developers make beautiful, usable products faster. It's used in millions of apps and sites all around the world. What's new this year is Material Theming. Material Theming helps you systematically apply your brand throughout your designs. Material is more flexible than ever: the shape, the typography, the animation curves are all part of your brand. You can even choose Material palettes from our online color tool for your color and typography themes, or build a theme of your own. We're also excited that Flutter is now a first class citizen in Material Design. From today forward, you will find Flutter documentation throughout the Material guidelines: Flutter and Material have embraced each other.
You can do everything we're about to do at home today. This is Shrine, a retail app. I love the clothes, I love the home goods, but...
>>Mary: Who else has seen an app like this? Yeah, it looks a lot like Google Docs. Many apps faithfully follow the Material guidelines, but that look is Google's brand. We want your apps to look like your brand. Today we'll apply Material Theming to Shrine and show how customizable its widgets, layouts and themes are, including components like text fields and the backdrop. We also want to show how delightful the tools make it to work with these components. In Flutter, a widget is pretty much everything: your cards, your text fields, all your views are widgets. You can build your own widgets too, and we'll show you how to do that today. Material Components is a library of widgets that are expressive, customizable and delightful to work with.
We have already built this app; now we'll enhance it with the theme our designer gave us. Great.
>>Will: Will you code, Mary? I will stand here and talk. We've already imported the material Flutter package in all of our app's files. Here is Shrine's login page, and here's its home page. Both of those pages are routed to the screen by the Shrine app widget, which is what gets passed into the main function that runs when we start our app. It returns a MaterialApp widget in its build function.
function. Let’s take a look at that widget. set to a home page widget which
we built just for shrine. There’s also a theme property
and you passe theme dat in the apps
constructor, all the decendents in the app will have to them.
Let’s have the color theme our designer gave us. you need in that like primary
and secondary colors, background and a modules. It also has the colors you need
that are drawn on the primary, secondary surface colors. There
is a primary varyiant for any time the primary color is
against and that combination is
inaccessible. We’ve already copied the shrine
color copies.little swatches of the colors on the left side next
to the line number. will also appear in this way.
The material package already has red, but shrine’s designer is
using custom branded colors like
millennial Mary started a theme already. She will add those
She will add those colors to it at the global level. She's starting from the base theme and changing some of the values. She's adding a primary text theme from a function she built herself: it uses the existing primary text theme as a base and then changes one of the values to be brown. Now she's adding an icon theme, again starting with the base theme and changing only the values she wants to be different. copyWith is a common method in Flutter; it lets you copy an instance while changing just the values you pass in. Now she will set the button color used for raised, or contained, buttons.
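As a rough sketch, a global theme built this way might look like the following in Dart. The color values and the specific overrides here are illustrative stand-ins, not the exact Shrine values:

```dart
import 'package:flutter/material.dart';

// Illustrative brand colors; the real Shrine palette differs.
const Color kBrandPink = Color(0xFFFEDBD0);
const Color kBrandBrown = Color(0xFF442B2D);

ThemeData buildBrandTheme() {
  final ThemeData base = ThemeData.light();
  return base.copyWith(
    // Swap the primary color for the brand color at the global level.
    colorScheme: base.colorScheme.copyWith(primary: kBrandPink),
    primaryColor: kBrandPink,
    // Start from the base text and icon themes and change only one value.
    primaryTextTheme: base.primaryTextTheme.apply(bodyColor: kBrandBrown),
    primaryIconTheme: base.primaryIconTheme.copyWith(color: kBrandBrown),
  );
}
```

Passing this to the app's constructor, for example MaterialApp(theme: buildBrandTheme(), ...), makes it available to every descendant through Theme.of(context).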
Reload.
>>Mary: What do you think?
>>Will: Let's work on the font of those titles, please. She's setting the font for large type like titles and headlines. I always thought that font family sounded so wholesome. The type matches our logo now.
It's no longer the default Roboto for larger type. Now to the login screen. Like every widget, it has a build method, and in it you construct your UI. The login page's build function has a SafeArea, which can account for the top notch on the phone, and it lays the logo and fields out in a list view. We could have used a column widget, but a list view scrolls on small screens and when the keyboard appears. Note that our buildTextField function returns a widget and is called from our build function. We can even pass our own parameters into this function; it is not a static layout.
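Roughly, the login page's build method has this shape; the helper names, labels and padding below are placeholders rather than the real Shrine code:

```dart
import 'package:flutter/material.dart';

class LoginPage extends StatelessWidget {
  const LoginPage({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: SafeArea(
        // SafeArea keeps content clear of the notch and system bars.
        child: ListView(
          // A ListView scrolls on small screens and when the keyboard appears.
          padding: const EdgeInsets.symmetric(horizontal: 24.0),
          children: <Widget>[
            const FlutterLogo(size: 72.0), // stand-in for the brand logo
            buildTextField(label: 'Username'),
            buildTextField(label: 'Password', obscure: true),
          ],
        ),
      ),
    );
  }

  // Not a static layout: callers can pass their own parameters.
  Widget buildTextField({required String label, bool obscure = false}) {
    return TextField(
      obscureText: obscure,
      decoration: InputDecoration(labelText: label),
    );
  }
}
```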
Right now, the text field is using the global color theme's primary color as its active color. You can see when she's typing that the placeholder and underline decoration isn't very accessible against this background. We could change the global primary color, or just change the primary color of this text field to the primary variant. We can override the ancestor's theme by wrapping the text field in a new Theme widget whose data is a copy of the theme with the primary variant as its primary color.
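A minimal sketch of that wrapping, with a hypothetical primary-variant color standing in for Shrine's:

```dart
import 'package:flutter/material.dart';

// Hypothetical darker variant of the brand color.
const Color kBrandVariant = Color(0xFF883B3B);

Widget buildVariantTextField(BuildContext context) {
  final ThemeData ambient = Theme.of(context);
  return Theme(
    // Copy the ambient theme and override only the primary color, so just
    // this text field picks up the darker, more accessible variant.
    data: ambient.copyWith(
      primaryColor: kBrandVariant,
      colorScheme: ambient.colorScheme.copyWith(primary: kBrandVariant),
    ),
    child: TextField(
      decoration: const InputDecoration(labelText: 'Username'),
    ),
  );
}
```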
color. you’re doing. Our material
theme has a shape story that’s angled based on our logo. the shape of the next button to
the beveled border widget. provides out of the box. It takes the size you want the
Now the button is eight sided. If you're working without a designer, or you're a designer yourself, we now have a cool tool that can help you generate palettes. You can go to the Material website, pick a color, and get a full palette to go with that color. Here's how Shrine would look if it were built from charcoal. We defined those colors earlier in our file, and Mary is switching to the alternate theme colors now. See? A very different alternative theme, very easily. So themes can express your brand, but what about layout?
Our designer has read the new guidelines and decided to make an asymmetric layout part of our Material theme. At the moment, our products are displayed in a simple grid of cards with images. We want to group them into threes, with asymmetric alignment and aspect ratios inspired by the angles in our logo, custom designed just for our app. We will use a completely custom widget we made earlier, based on Flutter's list view components. By the way, you can find the source for everything we are doing in the Material codelabs. Mary removed the grid widget we had and added the asymmetric view. It is built on top of a Flutter list view, so items are recycled as we scroll. List views let you specify the scroll direction, and we set it to horizontal. The whole thing is about 100 lines, and it's a very expensive looking component.
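The asymmetric view itself lives in the codelab source, but the underlying idea, a lazily built list view with a horizontal scroll direction, looks roughly like this (the function and its parameter are illustrative):

```dart
import 'package:flutter/material.dart';

// Items are built on demand and recycled as they scroll out of view.
Widget buildProductStrip(List<Widget> productCards) {
  return ListView.builder(
    scrollDirection: Axis.horizontal,
    itemCount: productCards.length,
    itemBuilder: (BuildContext context, int index) => productCards[index],
  );
}
```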
Oh, what's going on? I want that scarf. Thank you, Mary. Anyway, Flutter lets you develop once and deploy on both Android and iOS.
>>Mary: I want to see how it looks on iOS.
>>Will: We will show how easy it is to switch. She's launching the iOS simulator. Okay. Great.
>>Mary: Will has shown us how you can express your brand. Now I'm going to talk about how enhancements to Material Components can help make it easy for you to use existing components, and to use the guidance to build components that, well, also express your brand.
Let's start with the text field border. Our text fields just have a line on the bottom, so it's not obvious that this is a text field. The new outlined style wraps the whole text field with a border, making it obvious that this is something a user can fill in. Now, you might be wondering how we know this is better design, or that Material Design is a great design system at all. Research. It turns out that these outlined text fields perform much better: users are more likely to use them because they know it's a call to action. That's why we're using them here. You can also watch the talk Measuring Material to learn more about Material research.
>>Will: That looks great, but it doesn't look as edgy as the Shrine brand really is.
>>Mary: You're right, and that's not possible with Material Design unless you use a cut corners border. So let's cut the corners of the text field outline so that it matches the button and our Shrine logo. There.
>>Will: Great. Well, we can see our password. Now everyone knows my password.
>>Mary: Let's obscure our own password field so that it hides what we type. Remember, we built our text fields using our own function and it returns a widget, called from our build function. Great.
Now that we've spruced up our login page, let's go to the home page. Our home page currently is one page with a menu bar. We're going to add a backdrop. A backdrop allows you to have two layers in your app: a back layer with the menu and a front layer with the content, rather than having a menu pop in from the side. Will is going to remove the app bar from our current version, because the backdrop comes with its own app bar. The backdrop comes straight from the Material guidance about how layouts and navigation can exist. When he's done, he's going to remove the home page from the home property in our app and replace it with the backdrop. The backdrop is essentially just a stack, layering various components on top of each other: a front panel and a back panel. The front panel is the home page, and our back panel will be the menu page. It has a Shrine edge; we added that as well.
>>Will: I'm tapping the menu icon to see whether or not the panel goes up and down.
>>Mary: No, because we have not implemented that yet.
Okay. So let’s do that now. And to do this, we need to use a
transition or animation.a positioned transition.
Essentially, this is just a way for you’re — my child widget, you
start off at this size and location and you will and
location. For this case, we want to animate from closed,
which is the bottom to open, with the whole home screen open.
So let’s start with the panel animation. This is the way you tell your to start off with certain
location and size and where it should end. and a controller. So Will is
typing that out now. We have predefined what the
panel the height of the app bar should
be and also we need a pass and a controller. responsible for orchestrating
the actual animation. Every frame that is drawn is
determined by it. We now pass the panel animation
into a position transition widget.different transitions.
For example, fading, rotation, scaling and much more. of the box so all you have to do
is use them. It takes in a rectangle
property, animation and it takes in the child, which is the child
we want to animate up and down. try animating. There we go. controller. for drawing the new frames.
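A minimal sketch of the idea, with illustrative panel sizes and durations (the real Shrine backdrop in the codelab is more involved):

```dart
import 'package:flutter/material.dart';

// Animates a front panel between closed (only a bar showing) and open
// (covering the whole area) with a PositionedTransition.
class Backdrop extends StatefulWidget {
  const Backdrop({super.key, required this.frontPanel});
  final Widget frontPanel;

  @override
  State<Backdrop> createState() => _BackdropState();
}

class _BackdropState extends State<Backdrop>
    with SingleTickerProviderStateMixin {
  static const double _appBarHeight = 56.0;

  // The controller orchestrates the animation, producing values from 0 to 1.
  late final AnimationController _controller = AnimationController(
    duration: const Duration(milliseconds: 300),
    vsync: this,
  );

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return LayoutBuilder(builder: (context, constraints) {
      // Start and end rectangles for the panel; values are illustrative.
      final Animation<RelativeRect> panelAnimation = RelativeRectTween(
        begin: RelativeRect.fromLTRB(
            0.0, constraints.maxHeight - _appBarHeight, 0.0, 0.0),
        end: const RelativeRect.fromLTRB(0.0, 0.0, 0.0, 0.0),
      ).animate(_controller.view);

      return Stack(children: <Widget>[
        PositionedTransition(rect: panelAnimation, child: widget.frontPanel),
      ]);
    });
  }
}
```

Calling _controller.forward() or _controller.reverse(), for example from the menu icon's onPressed, drives the panel between the two positions.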
Well, let’s take a look at it. and this is the code that is
used. It is open source.can see that it animates from the value
0 and 1. You know how I said that Nutter
comes of — NutFlutter comes with a
lot of animation, well, they animate. Okay.material
guidelines also suggest that for a back drop users would like to
tap on the top.switch to the Android phone.button. I’m
sorry. It’s live. simulator. There we go. Back to the back drop version. the material guidelines say that
you want to tap on the top of your back drop and down as well as tapping on
the app bar itself. Currently tapping on the top doesn’t do
Flutter comes with a lot of gesture detectors, and they're built to be extensible. They take in a function. Functions are first class objects in Dart, so you can pass in any function and have your tap or gesture do anything you want: toggle the panel, change a variable, things like that. Will is setting that up now.
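A sketch of wiring that up, reusing the controller from the previous sketch (names are illustrative):

```dart
import 'package:flutter/material.dart';

// Tapping the bar toggles the backdrop: forward() opens, reverse() closes.
Widget buildTapToToggleBar(AnimationController controller, Widget bar) {
  return GestureDetector(
    behavior: HitTestBehavior.opaque, // catch taps on empty space too
    onTap: () {
      if (controller.isCompleted) {
        controller.reverse();
      } else {
        controller.forward();
      }
    },
    child: bar,
  );
}
```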
There we go. Now when you tap on the top of the backdrop...
>>Will: Mary, I did it wrong.
>>Mary: Classic Will. That's okay.
>>Will: Oh, there we go.
Cool. We’re all set. Well, now you have seen how all
the enhanced new things for material have an app that is much more
enhanced and, well, kind of cool and much more
unique. Now we’re going to talk about
how delightful the experience is. you want to make changes to that
Right now our image and our text are very close together. We want to put some space between the two, so that you can look at the image and admire the goods, and then look down and see the price tag. How do we do that? Well, we can use the Flutter inspector to find the exact line of code where that widget was created. Now Will is going to demo the inspector.
>>Will: You just tap on the inspect button, and then if you tap anywhere on your app, the inspector will take you to the exact widget for that part of the app.
>>Mary: I think your inspector is running for the other device.
>>Will: We have the widget tree here. It shows everything that is being drawn: not only the widgets you have drawn, but widgets that are being drawn as part of the system. Your own widgets are drawn in a bold color. Here we can find the widget which is responsible for drawing the card with the image and text, the product card widget. Remember, this widget was given to us; we didn't build it live, but the inspector still takes us straight to its code. All right. Inside the product card, there are two functions, build image and build text.
They are arranged in a column, and we want a space between the two. We look at our column and note that its cross axis alignment is center; there is no main axis alignment set. Through reading, trying stuff out, or checking the properties shown in the inspector, you will find that our main axis alignment defaults to start. What does that mean? The main axis is the direction the children are laid out in, so they start at the top of the space allotted to our product card. Let's try center: the content centers itself, and there's a little bit of space, but that's not enough. Let's change that to end. We want more space, so let's use space between. There, that looks much better. Let's also change your image aspect ratio.
>>Will: All the images should share an aspect ratio, and I want it to be one. I want them to be square.
>>Mary: Okay, great.
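Put together, the card's column might end up looking something like this sketch (the widget parameters are placeholders):

```dart
import 'package:flutter/material.dart';

// Square image on top, text pushed toward the opposite end of the space
// the card is given, so there is room between image and price.
Widget buildProductCardContents(Widget image, Widget text) {
  return Column(
    mainAxisAlignment: MainAxisAlignment.spaceBetween,
    crossAxisAlignment: CrossAxisAlignment.center,
    children: <Widget>[
      AspectRatio(aspectRatio: 1.0, child: image), // 1.0 == square
      text,
    ],
  );
}
```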
>>Will: But who here has had a cat walk across their keyboard? There are more of you than that, I'm sure. Well, cats are the number one cause of typos in code. It's a proven fact. I never actually made a mistake in code, even today. It was a cat.
>>Mary: What's your cat's name?
>>Will: He's my cat Nate. You loved it. So the cat negated the aspect ratio, and now it is negative one. What happens? Your app is going to crash, right? Actually, no. It is negative one, and the compiler is fine with that because it is still a float. Only the widget that got the negative value crashes; everything else still works, your app still scrolls, and the menu still opens. Let's change that back, hot reload, and see what happens. There. All right, all fixed.
>>Mary: This is one of the cool things about Flutter. We have been making changes and hot reloading, and only the widgets being changed are redrawn. You saw the colors show up immediately, and the asymmetric layout. You can rebuild UI and try things out, and if something doesn't work, that's okay; you don't waste much time, and your app's state is preserved during hot reload. We have also been reformatting our code, which adds closing labels that tell you which widget each closing brace has just finished. Let's see what that looks like. There, see? Now you know that brace closes a Text widget; you can tell what the braces mean. Great.
>>Mary: Will, I think that looks pretty good. What do you think?
>>Will: I think that looks lovely. It doesn't look anything like the generic blue app we started with.
>>Mary: That shows how expressive Flutter and our app can be, using the enhancements to Material Components you can use today in Flutter.
>>Will: And all the Flutter tooling made it a delightful experience for all of us. There's more we'd love to show, but it would take too much time. With Material Components, your designs and code live in the same world. With Material Theming, it's no big deal if your designer says let's change everything that's light blue to dark blue; you're not afraid of your designer anymore. We want you to spend less time on boilerplate and more time building the things that make your apps special. You can use the Flutter material package today, and there's a codelab where you can code everything we talked about in this talk; you can also do the codelab here at I/O. Also check out the talk Build reactive mobile apps with Flutter. We'll be around for your questions, and in the Sandbox today.
>>Will: Everything else you need is on flutter.io and material.io. Thank you!

Analyze your audience and benchmark metrics to grow on Google Play
>>Tom: Turning amazing data into metrics and insights is a really big job. I will give you a good idea about how we think about it. There are loads and loads of ways you can go about it, and I'm sure all companies think about this kind of challenge every day; you'll have your own unique ways to do it. The way that Tamzin, Dario and I think about it is around the mental model of a user life cycle, which is probably quite familiar to a few of you. The user life cycle starts with acquisition, the first phase of building a business: getting a user to your app or game.
Discovery and acquisition is about getting in front of the user, persuading them that you have a really interesting game or app, a really interesting product, and eventually persuading them to tap on that button and install it. That's where the fun begins. After the install you get engagement and monetization: users open your app, love all the work you and your teams are doing, and pay you as well. It would be great if this were the end of the story, if everybody you acquired just kept coming back again and again and again. But unfortunately it's not. At some point there's also churn, the point where someone, for whatever reason, decides to remove you from their device, and then they become a lost user. It would be really sad if this were the end of the story as well. But increasingly, the user life cycle ends with winback: persuading people to come back again, give you a second chance, and tell you about the reasons they left.
This is a sketch of how our teams think about giving you the information you need to optimize each part of this. For every person who comes across you, there's another who doesn't find you. For every person who finds you, there's another who, for whatever reason, doesn't choose to install. For every person that becomes engaged, there's another that installs but doesn't use you all the time, and for every user who monetizes there's someone who doesn't. And lastly, for everybody you persuade to come back, there's another one who is not convinced to come back and give you a second chance.
and parts. And that’s what our teams do. We build tools and insights to
help maximize the good parts and
minimize the bad parts. Now the reason the three of us are
excited to be here today and to talk to all of you is because
over the last year teams have been building
successively more tools, more insights and more successful businesses on play
and on Android and that’s what we will talk to you today.
exciting launches we have made recently that we made this week
and that is optimize your user life cycles. start at the beginning of the
user life cycle. with user acquisition and I am
I am happy to welcome Tamzin.
>>Tamzin: My name is Tamzin Taylor. Sorry, guys. I think I am the luckiest person here, because I get paid to work with app and game developers every day. I get to help them understand their businesses better, and help them identify ways to improve their business. In a way, I am living vicariously through their success. One question that comes up with these partners again and again is the same question: How can I improve my discoverability? How can I get in front of more users? How can I stand out from the crowd? How can I get more installs? Does this sound like something you're interested in? Excellent, lots of nodding heads. You're in the right spot.
going to things that we’re doing to make or maximize your opportunities to
be found er to help you understand how
your different acquisitions channels
are Per Per Android users who downloaded a
game So what. What does this mean for you?
There’s a huge demand. did you get in front of those
users? So we’ve been working really hard to discoverable and access age.
Now, when we think about installs, it's not just about putting your app into the Play Store once it's in production and hoping for the best. What if users could download your app or game while you were still building it? What if they could tell you that they really wanted an app or game that doesn't currently exist in the Play Store? We're working on a couple of tools to answer these questions. The first is early access. Early access is a collection in the Play Store of brand new Android titles running open betas. Those billions of users I spoke about earlier can download games in this collection and give private, actionable feedback to the developer before you go live. That means you can tweak and optimize your app to really maximize your chance of success when you do go live.
With pre-registration, you can put your upcoming title on the Play Store with a pre-registration button. This is fantastic because we can tell those users when your game or app is ready and goes into production, and it helps you start marketing before you go live. It's been instrumental for top developers like Rovio, who help their fans get really excited about an upcoming release.
Last year, we launched instant apps, which help you get your app in front of more people, quickly and with less friction. At the games developer conference this year, we launched Google Play Instant. Users can launch your instant app via the Try Now button on the Play Store, through browsing, searching or... oh, thanks, Tom. With Google Play Instant, users can try your experience without installing the entire package. It is just a really great way to help users try your app or game without the commitment of downloading it. So in this way, we try to help your discoverability. And it really worked for Hothead Games, who saw a 19% uplift.
trying to That’s all well and good.right
room and we might be biased but you can manage what you measure,
which Google Play instant and to help
you understand the contribution that this happening to you
bottom line. We’re proud to announce brand
new success of your constant app.
Feedback from developers is being that we want things more
discoverable. Woo put them front and answer on
the console. center on — front and center on the
console. And will because the stats are
in the console, if you love playing
with this, it means you can slice and dice. other key metrics like installs
and revenue. So it’s pretty exciting. track the success of your
instant app on your overall growth. to announce brand new
capabilities in this reporting. You can now track which products
browse ares, search and Play Store are driving most of your instant app
success. you have the fantastic
So you have this fantastic reporting; go ahead and use it. There is so much more you can do to increase your discoverability on Google Play, and if you want to learn more, there's an AdWords session right after this one. Now, not all acquired users are the same.
business. The one that’s going to stick around the one who is going to spend a
lot of time in the app and maybe the
one who is your app with lots of friends
and spend money. So it’s really important to capture for your
business. But how do you know whether
you’re attracting the right users? whether you’re spending your
time, money and resources in the right places to attract those
users?And once you’ve got the users, how do you know that
you’re actually doing a good job?know you’re delivering them
value and where can you improve? Are these questions sounding
familiar? have some updates, the welcome help answer these questions that
will help you interrogate, understand
and acquisition strategies to help
you bottom line. So let’s take a look. updates to the acquisition
reports, core questions that both myself,
Tom and Dario hear from developers every day.you have
asked me these questions quite a lot. So the first one is: How
do people game on the Play Store? How valuable are the users that
are channels? And how am I doing?
Can I improve? So let’s tackle the first one. my app or game in the Play
This comes up with almost every developer I've worked with. One of the questions I get about the acquisition reports is: you see that line that says organic traffic, what is that? Who are these users and where are they coming from? Organic traffic is people who come to your store listing page not through deep links, but by browsing the Play Store, checking out collections, seeing new games and apps, or searching for a specific use case; Tom might be looking for apps with museum guides. We understand that this is a really important channel for you, and for most of you it makes up the lion's share of visitors to your store listing page. That's why store listing optimization is really important. But developers like you haven't had this information to date: it's nice to see the traffic, but if you don't really know where those users are coming from, you can't really optimize for it. Until now.
In response to all your requests, I'm pleased to announce the launch of organic breakdowns in the user acquisition report in the Play Console. Well done, team. Now you can see, for users who have never installed your app before, which ones are coming from browsing the store and which from searching. You can leverage this data to better present your app or game, and to focus on the acquisition areas that matter. What does that mean? Let me give you an example.
Let's say you all open your laptops after this (not now, I'm speaking), you go to the user acquisition report, you click on this breakdown, and you say: oh my God, most of my organic traffic is coming from browse. What should I be doing? Well, some of these people who are coming from browsing might not be familiar with your value proposition or what makes you different from your competition, so there's an opportunity for you to optimize your store listing. The other great news is that we've extended this data so you can track it across the whole funnel: across subscribers, buyers and retained installers. That gives you more insight into how those two channels are driving traffic and how valuable they are. And it gets even better.
know if you’re a marketing person will be super excited about
this. Click through and see which search visitors to your store listing
and which convert best. So, that is super exciting. It’s a view into the Play Store
which share before and it is something you asked us for at
least 3 and a half years with this team. Thank you to the
guys who built it and certainly our other pilot
partners have. What we try to do is make it insights that help
you make better decisions as Tom mentionedertish. feedback from fun games for free
about the relevance of the long tail searches hear from everyone is how they
found data really actionable. going to help you make better
decisions to grow your business on play. Play. they loved this new feature.
And because we tested it with quite a that you’re here today and maybe
able to come along, I share with you some of the going to use
this information for in case it sparks some ideas in your head.
developers we tested with said they would use this data to
evaluate the user A lot of them said we use them
to optimize the store listing. screen shots and some even said
they would redesign their onboarding
flow. and whatever else people are
searching for reflected in that first time
user even more relevant for that
person when they install the app. So we are really pleased
our found it useful and I hope you have as well and I hope that answers
your question. have found that to be super
exciting.my favorite. If you care go paying people on time,
If you care about paying people on time, if you care about running a good business, if you're a CFO or a product manager, you probably care about acquiring valuable users. So let's tackle the next question: How valuable are the users that come from different channels? Now, a lot of developers love the acquisition report, but installs are only part of the story. I am sure you agree. They tell me installs are leading indicators of the one metric you really care about: revenue. So we're launching a brand new metric in the buyer acquisition report: average revenue per user.
revenue per user you’re getting from
different channels. And that includes organic
traffic breakdown, Google search and third party traffic. user acquisition strategies in
your responsible for making sure you
have a profitable company, this information we right decisions.
So no matter whether you are running classical CPM model, costco in
store or driving down the funnel, we
really think this information will help you strategies and
make better decision moving forward. there’s a few is of you in the
Now, I know there are a few of you in the room who ask me: Tamzin, how am I performing against my competition? How can I do better? How can I improve? Benchmarks are really important, and we totally understand that: they help you identify where the opportunity is versus the market, and where you are knocking it out of the park and doing a great job. In other words, benchmarks are great for helping you focus your investment on the items which will give you the best return. So we were really happy a month ago to launch retained installer benchmarks in the acquisition funnel.
Now, for all organic traffic, you can easily see throughout the funnel, from visit-to-install conversion all the way to 30-day retention, exactly where you can improve, and where the efforts you have made have really paid off. Developers who have tested this with us, including Intuit and others, found that the efforts they put in have really paid off. One thing I'll mention about all these acquisition reports is that they're not just here to tell you where you're going wrong. They're here to give you a reference point and give you confidence in your decisions, which is why I was really happy to hear that what developers love is the extra validation for their own decision making.
So, the new tools we've launched help you get discovered, and if you talk to us, we'll say the most important thing is that we also launched reporting to help you optimize for it. The updates to the acquisition reports give you information you didn't have before, to help you make much better decisions around optimization and to help you drive and acquire more valuable users. Right. But how are you going to keep those users engaged? How are you going to monetize them?
For this exciting part of the talk, I'll hand over to my colleague Dario.
>>Dario: It's great to be here. So, new tools help you get discovered and help you find and acquire more valuable users. But what about the next stages of the user life cycle journey, engagement and monetization, helping your users engage more and helping you earn more money from them? I am glad to say we have some new tools to help with that.
First off, earlier this year we launched the events timeline. The events timeline shows information close to the charts on the statistics page and other chart surfaces in the Play Console, showing you key events that might affect your metrics, for example the rollout of a new release. You can spot correlations between changes in metrics and such events: whether a change in average rating correlates with the rollout of a new release, which change is affecting revenue per user and how, or whether something else is affecting your sessions positively or negatively. One developer told us they were able to quickly understand why their metrics spiked in a certain country. So the events timeline helps you explain changes without having to dig into other dashboards or other data.
Next, you might want to check how users are engaging with your app. Google Analytics for Firebase helps with this purpose. It can be enabled just by linking Analytics to your app and signing up to the service. Out of the box, it will already give you meaningful metrics about engagement and retention of your users. What is also interesting is that you can instrument your app, instrument your code, to track the events that matter the most to your app. Another really cool feature launched recently is Firebase Predictions, which uses our knowledge and tools to predict how your users are going to behave in your app. Out of the box you will get predictions such as likelihood to churn or to spend, and you can also instrument your app to get more personalized predictions. If you want to learn more about Google Analytics for Firebase, you can check the session later on stage 6; our friends Steve and Russ will be there. There is a whole track for Firebase tools, and the earlier sessions are available on YouTube in case you missed them. All of these give you events you can correlate with your metrics, and they give you insight into engagement.
Now, monetization. It is probably the most critical phase, because it is where you earn money and your business starts succeeding. We have done a lot of work recently, especially on subscriptions, so I will present the latest things that we launched there. First, the subscriptions dashboard. Loads of developers use the subscriptions dashboard on a regular basis, and loads of developers are earning from subscription businesses on a regular basis too. But we heard that you wanted more, so we launched additional features there. Let's take a quick look.
We decided to make it super easy to compare cohorts and evaluate the features that are important to you, like free trials and renewals. You can pick a group of subscribers and analyze their performance. For example, you can pick a SKU with a free trial and compare it with a SKU with an introductory price. You can also focus on last year's big sale and compare it with this year's, and see which one performed better for you. You can use comparison to explore all of these kinds of intricacies. That's not all, though. We surfaced data that we know you're interested in, to make it easier than ever to act on what matters to you, so success rates and performance information are right there on the subscriptions dashboard. And because subscriptions are repeated payments, we tell you, for each cohort you selected, which billing periods they reach.
We also made major updates around subscription churn. Users can cancel for many reasons. As we announced yesterday, we now show users a survey when they cancel, so they can explain why they're cancelling, and the results of this survey are available on the subscriptions dashboard. So you get insights directly from the users about why they ended their subscription to your service. There are also winback features like account hold and grace period, which you should check out if you haven't already, because we heard from developers that they really improve retention of subscribers. So these major updates help you better understand and retain your subscribers; I know some of you here in the audience are already using them. There was also a discussion yesterday about new advanced features for subscriptions specifically; if you are interested, you might want to check it out on YouTube.
As Tom mentioned at the beginning, the last two phases are uninstall and winback. The Play Console already provides you with reports about uninstalls, for example daily uninstall metrics, or the retained installers view in the acquisition report, in which you can check how long users keep your app before uninstalling. We heard you want more, and we understand, because this is such a critical phase as your business grows, and we don't have enough information for you at the moment for winback. We are working hard on this, and later this year we will introduce some more metrics for these steps of the user life cycle journey. For example, you will be able to see how many unique users are uninstalling your app over a selected period, and how many users installed your app again after having uninstalled it at some point in the past. So stay tuned for more updates. Thank you very much for your attention, and now back to Tom.
>>Tom: Hello again.
As Dario and Tamzin shared, the teams have been working really hard to build more tools and metrics to help you run better businesses and reach a larger number of users. It is fair to say we are extremely proud of the teams who delivered it. But all of this work also, in some ways, causes a problem. We know that all of you are really busy. You have so many tools, both provided by us and by others, lots of different places you can go, and loads and loads of calls on your time. You're busy, and we understand that you need it to be easy to see what's important and what can be passed over. We understand this problem, and I'm really happy to say that yesterday we launched the new app dashboard in the Google Play Console. Let's take a really quick look at what it's like.
The app dashboard is the landing page when you sign in to the console and select an app. It shows your key data. Previously, there wasn't quite enough data there, we heard that from all of you, and it wasn't really well organized, so we changed all of that. First, it's really easy to see trending information. If your installs are moving in the right direction, it is super easy to see that; if your crashes are reducing, you can see that easily as well. So trends are the first thing to glance at. Secondly, we understand, and we heard from you, that related metrics like installs and uninstalls should be displayed together; you ought to be able to look at them side by side. So we sectionalized the dashboard: we put a lot more information on it and put it into context. If any of the questions on the left side of the screen feel familiar, that's because the dashboard specifically asks and specifically answers those questions. Now, I'm sure many of you reading them are thinking: you know what, that one is really important to me, and that one less so. Maybe you want to understand what people think of your app through reviews and ratings, and you care slightly less about how your pre-registration campaign is going. We understand that having all of this on the dashboard can be overwhelming, and not all of it is relevant to all of you. That's why the new dashboard is also customizable. Every section you see on the dashboard can now be expanded or collapsed. If you are really interested in your revenue, you can have that section expanded; if you are less interested in pre-registration, you can have that section collapsed. We remember your preferences, so the dashboard stays exactly as you leave it. We can't build a dashboard for each and every one of you personally, but you can customize everything.
A lot of the developers we work with are happy with the changes. If I'm honest, as a PM I am not sure how to take this quote: the lead engineer from Blue Apron was really, really happy that there was a view simple enough for product managers, so they could see the answers to the questions they always ask really easily. And similarly, DocuSign, very busy people, were very happy with the overview it gives. It means the dashboard is focused on what you need right now. So that was a short overview of all of the changes we made to the app dashboard. When you see the new dashboard, you will be able to see all of that and how it helps your daily work, and frankly, how it helps you see the metrics and insights your console has, to really help you.
Dario, Tamzin and I spend our time thinking about tools we can build to help you optimize each phase of the user life cycle: the new acquisition reports, the events timeline, and even more information coming soon. We think that all of this, taken together, supports your success, and ultimately your success is what we work towards; that's what we're here to help with. And all of this, along with other things like technical performance, is wrapped up in the new dashboard as well. So we're almost at the end of our time, and I have just a couple more things to say. If you liked this session, or even if you didn't, please go to the URL and tell us what you think; it really helps us make the sessions better year after year. And lastly, thank you so much for listening. It is great to see so many of you here. We'll be around right after this session and in the Sandbox. Thank you.

10:30-11:15 a.m. Make your WordPress site progressive

>>ALBERTO MEDINA: Super cool to be here.
I bet that you have heard the term progressive. Today we're going to be talking about how to make WordPress sites progressive. To be more precise: you have probably heard the term progressive in the context of progressive web apps, applications developed using modern mobile web capabilities, or in the context of adding capabilities into your sites in a step by step way instead of all at once. I use the term progressive to capture the notion of progressive web development: the development of user-first web experiences using modern web technologies and performance best practices. And the key here is the word user: the core of progressive web development is user experience. So let's start at the core. What is user experience?
User experience refers to a person's emotions when they are using a product. In our case, that product is our website, our WordPress site. So it's all about how users feel when they are engaging with the content that we create and publish. In a nutshell, I want my users to have a delighted expression on their faces when they use my site. But what are the qualities in web experiences that actually bring joy to users? There are four that are commonly present in what we consider delightful UX. Users love applications that are fast and reliable: they load quickly and they behave well once they are running. They want to feel safe when they entrust their data and their transactions to the apps. Users love applications that are integrated, that can take advantage of what the device they are using has to offer. And users love applications that are engaging: they love that kick of wanting to go back to the application, and how easy it is to come back to it. For a long time, building applications that satisfied these four pillars of delightful UX was not easy, or not possible at all.
The reason was that for a long time, the main power of the web was the URL and linkability, not its capabilities. Compared to native platforms, the web was behind in terms of the things you could do with it. However, the good news is that the web platform has been evolving quickly in recent years, and now it is fully possible to build delightful experiences using only web capabilities. This here is a subset of the web APIs available in the web platform today, and you can do a lot of things with them: device features like vibration and the gyroscope, and many other kinds of features. Even augmented reality is a thing today on the web. We also have modern development workflows, a lot of tooling, and powerful JavaScript frameworks and libraries that allow us to build very, very complex and awesome experiences. So you have yourself, the web developer, eager to harness the power of these capabilities. The problem is that the complexity is such that truly taking advantage of what the web has to offer is difficult.
As the complexity of the web ecosystem has increased, we have also seen what I call the capabilities usage gap grow. This gap is simply the difference between what can be done on the web and what is actually done. For example, if we go back a couple of decades, the gap was not big at that time, because there was not much we could do on the web: there was HTML, CSS, and not much else to develop with. Today, the complexity has grown so much: there are so many tools, so many frameworks, so many things to learn, and we have to develop applications that have to run on an almost infinite number of form factors. The problem with this gap is that if we don't take advantage of what the web has to offer, it's very difficult to build applications that provide delightful UX.
We see evidence of this a lot in the wild, in the prevalence of applications that run very slowly. They exhibit very poor load time performance: for example, the average mobile page takes around 19 seconds to load on a 3G connection, and about 75% of mobile pages take 10 seconds or more to load. There are also a lot of applications, too many, that exhibit poor run time performance: things like unresponsive pages, or content shifting around in front of you while you interact with it. And the bad thing is that users hate this, and we see evidence of that as well, in the effect these problems have on revenue. More than half of users will abandon a page that takes more than 3 seconds to load; if 75% of pages take more than 10 seconds, you can figure out the consequences. There is a direct relationship between load time and conversions, and an inverse correlation between load time and revenue.
But for me, the worst collateral effect of not being able to take advantage of the web is what is known in sociology as accumulated advantage: the rich get richer. Given the complexity of the web platform, it seems that only the superstars, the ones that can afford large, highly capable engineering teams, are able to take full advantage of the web. That is not what the web is all about. And given the complexity I just described, even superstars have a hard time doing it. So we want to drive to close the capabilities usage gap.
There are mainly two ways that we can go about it. One is on us, the developers: given that the web capabilities are there for us to use, if we all do the right thing, things should be okay, right? We should not see those poor experiences. But that is easier said than done. The reality looks more like this: you have to master so many things to build applications that are delightful that it is very challenging. You have to deal with the critical rendering path, load resources efficiently, do code splitting, handle caching, and so many other things. The landscape is so complex that it is hard to do the right thing even once. Imagine doing the right thing across the whole application, in all circumstances, all the time. It's very hard. The second way is to think about a progressive web ecosystem: a platform, a web ecosystem, where doing the right thing is easy. That involves several factors. The first one is an evolving web platform, where new APIs are added and existing APIs are improved. Then there are tools that help developers automate complex tasks. We have frameworks that add capability layers to applications. And a key point in a progressive web ecosystem is that we take what we learn from using the tooling and the frameworks, and we feed what works back into the web platform. So this is a better option, and we want to pursue it.
So where should we focus our efforts to achieve this? To answer that question, I like to think about the web in terms of a very, very simple model that has three layers. At the top is the content we interact with every day. Then there is a second layer formed by content management systems, which are basically software platforms used to create and serve content. And the bottom layer is nothing else than the implementation of the web: the power of the web exposed by browsers and their APIs. You can see the content layer divided in two, and that's very important: nowadays, about 50% of the sites on the web are powered by some sort of content management system, and the rest are built by hand or with site builders, frameworks and so on. Closing the capabilities usage gap requires pursuing significant efforts across the whole web ecosystem. Our team is focusing on closing the usage gap in the area of content management systems. All CMSs are very important, and we want to help make them all progressive platforms; in this work, we are focusing specifically on WordPress. Making WordPress progressive will cause an impact not only on the WordPress platform, but on the web as a whole.
Historically, WordPress has been seen as a blogging platform, but it has been evolving steadily since it started back in 2003, and today it is used for news, shopping, you name it. If we close the capabilities usage gap there, we improve the experience for a huge number of users. That's very important. The second reason is that WordPress has a leading position among content management systems: it holds about 59% of the open source CMS market share, and, which for me is staggering, nowadays WordPress powers around 30% of the web. When I think about WordPress, I think about all the sites powered by it, and I would like all of them to be able to deliver a delightful user experience to the people consuming their content. So, going back to the definition of progressive web development, when I say progressive WordPress, what I mean is a WordPress platform where modern development workflows and coding and performance best practices are commonplace.
Given the size and scope of WordPress, how in the world can we make progress quickly towards a progressive WordPress? Fear not. I am sure you have heard about the AMP project, or AMP for short. It is an open source library that makes it easy to create web pages that are compelling and load near instantaneously. In a nutshell, what AMP does is give you, out of the box, a set of powerful capabilities and performance optimizations that address three things: load time performance; run time performance, because I don't want content shifting in front of my eyes; and usability, because I want a smooth transition to the action when I engage with the page. If we integrate AMP with the WordPress platform as a choice, we give all WordPress developers the capability of what AMP has to offer to build compelling sites in a very easy way. That is the idea in a nutshell, and we will see the integration in action.
Developing AMP pages entails three components. The first is AMP HTML. You can think of it both as a subset of HTML, because to guarantee reliable performance you have to impose certain constraints, and as a superset, because AMP is essentially a web components library that provides a lot of functionality you have at your fingertips in the form of custom elements and tags. The second component is the AMP runtime, a piece of JavaScript that has the logic to control the custom elements, a lot of performance scheduling, and so on. And then we have the AMP cache. It is not necessary in order to develop AMP pages, but it is a content distribution network that serves AMP pages and is available to and used by everyone. I said it is not necessary, but together with the AMP runtime it performs a set of optimizations that provide that snappy load time that users love. Finally, there is the use of AMP across the web ecosystem, which encompasses a variety of platforms that are seeking the same things we are seeking, and the experience those platforms get: things like, for example, the top news carousel and other product features. The integration of WordPress with what AMP has to offer happens in one of three places: things that have to do with the look of the site, things that have to do with the functionality the platform offers, and, when we refer to an AMP plugin, something that is integrated into the WordPress site along with all the other plugins that may be installed.
The AMP plugin for WordPress was pioneered as an open source project started at the very, very early days of AMP, and its authors were among the earliest adopters of the AMP framework. It worked via a paired mechanism: for each piece of content in a post, a parallel AMP version was generated, so for each piece of content you had two versions. It had one main limitation: unless you had enough engineering capability to work on the result, you usually got a lot of visual disparity between the original content and the AMP version. So after the early versions of the plugin, we joined forces with XWP, and we set our sights on what we call native AMP experiences in WordPress: a world where there is no need to have two versions of the same content, because there is no visual disparity; you have only one version, and you get all the benefits of AMP without sacrificing the flexibility of the WordPress platform. That was our goal.
We released a version that indeed enabled many native AMP experiences, which is good; however, it is still oriented to power users. It is not as easy as we would like in order to make it mainstream. We are here at I/O to tell you about the progress we have made in the area of tooling, and also in the editorial workflow and configuration in WordPress; we will talk about that in a second. Beyond the 1.0 plugin, which we have in alpha, there is a lot of work related to content creation that we will mention at the end. Before we do that, I would like to call Thierry Muller to tell you about the latest work on progressive WordPress.
>>THIERRY MULLER: The latest work on the plugin helps in two ways. The first is by providing a framework to empower developers to build native AMP experiences in the WordPress ecosystem. The second is by providing tools which integrate AMP components into the brand new content creation workflow in WordPress. To put this in context, and to better understand how we're closing the capabilities usage gap, we will look at some real world examples. Let's start with the 2015 theme, which comes bundled by default in WordPress and is active on about half a million sites today. What would it really take to convert this theme to provide a native AMP experience? Let's look at the theme first.
The 2015 theme is all about content. It is designed for clarity, with a mobile-first approach. One thing we want to pay specific attention to is the menu toggle, which uses custom JavaScript. On the other side, if we look closer at the archives widget, it uses inline JavaScript, which is not allowed in AMP. And if we look closer still, it uses an image tag rather than an AMP image tag. On the styling side, it loads three style sheets, and there are multiple JavaScript files loading. If you take a closer look at one of the functions, it controls the menu toggle and the widget area we saw earlier on mobile. So the challenges are: custom JavaScript, too much CSS, and HTML that is not valid AMP. Let's see how we can convert it.
The AMP plugin comes with a set of amazing features which can really help us overcome these challenges; it will do most of the heavy lifting for us. The first thing we need to do is declare the so-called AMP theme support. All it takes is to add the code snippet you see on the screen right now, in the plugin or in the child theme. Once AMP theme support is declared, the AMP plugin will consider the active theme a native AMP citizen and apply some really nifty features. For example, the AMP plugin is capable of converting the WordPress built-in components into AMP compatible ones; by that we mean widgets, embeds, literally all the WordPress components. Take the archives widget: before AMP conversion, it has inline JavaScript, which is not allowed in AMP. The plugin changes that to an AMP compatible action that is part of the AMP library, and that will pass validation.
If we compare with the non-AMP version, what we're after is exactly the same result, and that's what we see on the screens: on the right-hand side we can see it passes AMP validation. Next, CSS. Only 50 kilobytes of CSS are allowed in AMP, to ensure maximum page speed, and a lot of WordPress themes contain a lot more than that. To address this issue, the AMP plugin includes a tree shaking mechanism, which keeps only the CSS actually used on any given page and removes all the unnecessary CSS. If we look once again at the stripped-down version of the 2015 theme code, there are two style sheets, and that CSS alone is 96 kilobytes. After tree shaking, the remaining CSS is printed inline using the style tag prescribed by AMP, and that will pass validation. If we look once again at the non-AMP version, we can see that we achieve what we are after, exactly the same result, but we went from 114 kilobytes of CSS down to 9 kilobytes. That is over ten times smaller.
Predicting what thousands of plugins render on the page is extremely difficult. To address that, the plugin is capable of validating the page output and ensuring AMP compatibility wherever possible. But you may come across some functionality that cannot be converted automatically, especially when custom JavaScript is involved. Let's look at this in a little more detail. If we look at the HTML before conversion, there's an image tag. If we look at the bottom snippet, the plugin is able to convert this image using the AMP image tag; that is considered a safe change. The function that toggles the menu, on the other hand, would not pass AMP validation, so it would have to be removed, and the plugin is capable of doing that. But if we then compare the two versions, the non-AMP version still has a working menu toggle, while in the AMP version the menu doesn't work anymore. And then we have a problem, right?
Our goal is to make AMP integration as easy as possible and to improve the user experience, but not to break sites by any means. That's why we still need a validation workflow, which allows developers to benefit from all the features the plugin includes, but also to stay in control and be able to manage it. By default, the plugin will apply all the conversions which are considered safe, like the AMP HTML conversion, and hold back changes that could introduce a regression. Those go through a validation workflow, which looks like this. For example, for the CSS tree shaking, the developer can go ahead and view the front end as if the plugin were to apply this feature. As we can see, and we saw earlier, it works perfectly fine. At that point, we can go ahead and tell the plugin: please enable it, so that the page passes AMP validation. For the menu toggle, we saw earlier that removing the script actually breaks the menu, so the plugin will not apply that change, in order not to introduce a regression. That gives developers a chance to go and fix it, for example by using AMP state bindings to toggle the class, and by overriding the header template in PHP. Once it is proven to be working, the developer can go ahead and tell the plugin: now that we fixed it, you can remove the offending script, and the page will pass AMP validation.
To recap what we did with the 2015 theme, which is a popular theme: we added AMP theme support, we applied a fix using the AMP bind component, and we went through the validation workflow and told the plugin to apply the conversions after reviewing them. Now, the end result. What we're looking at right now is a native AMP version of the 2015 theme. It looks just as great as the non-AMP version, and if we look at it on mobile, it looks just as good as well. As we can see, because we applied AMP bind, the menu is still working. And if we look at the console, we can see it passes AMP validation successfully. On the performance side, let's run the Lighthouse audit without the AMP plugin: we get a score of 66, with a first meaningful paint of around 4 seconds. If we run the same Lighthouse test with the AMP conversion, we get an 88, with a first meaningful paint of 2.8 seconds. So we can see how the latest development work done on the plugin is helping convert existing themes and plugins used by real people.
plug ins used by people. content creation side. content creation spans in
WordPress. It replaces half a dozen
inconsistent WordPress bringing it with the
modern standards. And aligning with web
initiatives. the gap between the back end and
the front end offering an amazing visual So Gutenberg has the concept of
blocks and on the other side AMP has the components, micro libraries to
build pages. But what about combining the power of Gutenberg AMP blocks? This is
exactly what we did and it We worked hard and pushing the
boundaries to take the Native AMP further leveraging all the
latest work done on the plug in as well as a range and blocks. This is what it
looks like. What we’re looking at right now is a Gutenberg
powered theme. It’s inspired from the AMP start
functionalities that we see on the page is powered by WordPress and
If we look at a single adventure page, we are leveraging some of the AMP components, such as a carousel to slide through the images. We are also using the WordPress comments to let users submit reviews, and the WordPress search to find adventures. We can really see how the content management system shines at managing the content here. On mobile, it is the same version of the theme and it looks just as great out of the box. What we're looking at is native AMP, but it is seamlessly integrated with WordPress: the adventures are categorized, and we can see that it is pretty well organized. On an adventure page, we see other AMP components that make the user experience better. It is fast and seamlessly integrated, and it is pretty much perfectly aligned with a delightful user experience. But what about the content creation side of it? What about the content creation experience that we spoke about earlier?
Let's see how Gutenberg can be used to rebuild a different version of the home page. We give the page a title, and then we have access to a set of blocks, including custom blocks. Clicking on the little plus button opens a drop down. To build the home page, we just click on the hero block. As we can see, what we see in the editor immediately resembles the front end, and we have access to really cool things like inline text editing. This is new in WordPress and it is pretty amazing. Next we pick from the three suggested blocks, which are the most used. Right there we have a second block on the page, and we can continue editing, for example by changing the sub-title. Let's add one more, a third block: the featured destinations block. Once again, as we can see, it placed the content automatically. Click on the little icon on the left-hand side and boom, the block moves up and is now under the hero block. This is pretty amazing, from an editing perspective at least. Let's preview what we just built in 30 seconds: a different version of the home page, native AMP, with access to the amazing AMP components, and an amazing content creation experience powered by Gutenberg.
So where are we at today? 0. released last week. And 0.7 is
really the first step towards Native AMP. toward power themes built from
scratch. We have the AMP plug in 1.progress. That will
include validation workflow we have today. Gutenberg blocks and a range of
custom Gutenberg blocks which integrates the AMP components
But the future is also very, very invite Alberto back on stage to
>>ALBERTO MEDINA: Thank you, Thierry. Oh my God, that's so exciting. I have seen it before, but that's awesome. We have certainly come a long way on progressive WordPress, but we are not yet where we want to be. There is quite a bit of work still to do, and we are pursuing significant efforts in three main areas. The first is bringing progressive capabilities into core WordPress. You may have heard of the app shell architecture, which is a pattern where the parts of a page that do not change across navigations, like headers and fixed sections, can be cached easily, so that when you navigate, they load very quickly and give a nice onward experience for the user, while the parts of the page that do change are rendered on the client or on the server. What I am especially excited about in this area is the incorporation of the service workers API into the WordPress core platform. Another area has to do with lowering the barriers to progressive web development: we need tooling to help developers do complex things in an easy way, such as tooling to facilitate the development of progressive WordPress themes; I'll talk about this in a second. The last part, which is like an uber-area, has to do with ecosystem adoption. As we saw, WordPress has an ecosystem of thousands of plugins and thousands of themes, and we are engaging with the community to push WordPress towards the progressive goal. There are many efforts here, so stay tuned for them.
working on a progressive theme boilerplate. Essentially it is a theme development starter that makes it easier for WordPress developers to go from zero to a full progressive theme very quickly. The boilerplate comes with a modern development workflow; it uses tools like browser sync. It comes with a range of coding and performance best practices baked in, for example, asynchronous loading of resources and lazy loading of images, and it has AMP integration from the get-go. The main developer of this project is preparing an awesome course that will teach you everything about how to use this boilerplate, and that is going to be free. Okay? Another thing: if you can take
three things away, take the following. Success on the web is all about user experiences. To deliver those experiences, we need to start to close the capabilities and usage gaps and do the right thing on the web. And, the topic of today: it is possible to get great experiences without sacrificing the fidelity of your site or the content creation experience of the WordPress platform. Remember those three things. If you like to code and you are passionate about making the WordPress ecosystem better, you may consider working with us exactly on doing that. There are useful resources here, and my thanks to the whole team and to Automattic for all the awesome contributions to this effort. I am really looking forward to what we're going to achieve in the coming months. Thank you very much. What's new in web accessibility. >>Thank you for coming. I am a product manager on the Next Billion Users team at Google. We want to use this
session to tell you about the work we are doing for users in India, Indonesia, and places around the world. Anyone tuned in from India for I/O? Awesome. Okay, it's very late right now for you all, so we'll try to keep you up. Anybody from Indonesia who is here? Okay. A couple. Cool. Brazil? Great. Okay. I'm sure there are a lot of other countries too. Thank you all for coming. Before I interview some of the folks on stage, here is how this initiative and focus area for Google got started. Google has long been interested in building products for everyone. Looking around the world, there are literally new people coming online for the first time, and that changes how we think about our products and the ways we build them, some of the things we got right and some of the things we got wrong. Think about an existing product like Search or Maps or some of the others we have. Today in India, 28% of the searches we see are voice searches, which is
amazing. It's growing; voice is growing. It brings up interesting questions of how you interact with computing devices, among others. We have also seen in Google Maps our new two-wheeler mode feature become really popular. That's a little bit about how we started thinking about our existing products. Smartphone prices drop around the world, connectivity becomes more affordable, more people come online, and it opens up more use cases. That's what our three panelists have been doing. They will talk to you about products we have launched in recent months and a lot of what has gone into them from the engineering and design and research side. What I will do is introduce them briefly and then we'll get into some of the questions. There are mics in the room. If people want to come up, you can come up and ask questions. First, I want to introduce PANKASH, who works on payments. There's a whole lot of stories here he can share. Then we have DIVESH, who leads our Files Go team. It lets users free up space, find files, and also share files offline when you're nearby people. So there is very interesting work that went into that. And next we have NITHIA. She has worked on every one of our products in some fashion. She will share a lot of lessons from doing research in the field. We also have local teams based around the world that build a lot of these products for us. The next billion users group really thinks about how do we staff people in these markets as they're emerging and give a lot of the small teams a lot of latitude to go solve problems. We're an experimental group, and the three of them will talk about some of the things we have launched. Let's start with PANKASH. Give us an overview of what this
is. >>Tez means fast in Hindi and some other regional languages. It is built on a bank protocol that was introduced in India called UPI, the unified payments interface. Almost all Indian banks have implemented this. It is a 24 by 7, instant, bank-to-bank protocol. Tez puts a brand new, very fresh user experience on a variety of payment use cases using this protocol. So a Tez user links their bank account with their Google account and gets a payment address, which is called a virtual payment address. Any service or product that is using the protocol just needs to have this address. The neat thing is any payments that are done from this address or into this address are done from the bank account directly. You don't have to load up a digital wallet first and then transact from that wallet. The other neat thing is that the banks have made the transactions free of cost. There's a big push by the government toward digitization. What you have is instant, 24 by 7, free bank-to-bank payments at your finger tips on your smartphone. So why don't I show a demo of some of these
features. We were having some Wi-Fi issues earlier, as you usually do with demos. I have some phones here. If I could get the Wolf Vision, okay. So this Wolf Vision one is my phone. I'm going to get into Tez. It uses the OS lock. So we get in here. Sorry about the orientation. We will do this also on the phone on the left here, opening the app. Again, use the OS lock to open it. And now we have what you see, with this big button. Let me explain this. We call these cash mode payments. Imagine a user with their phone walking into a shop, and there is a (inaudible) in the shop. The phones are nearby, and one of my phones here is working on end-to-end international roaming, so I hope it works. Ah, let's put this one into receive mode and see if the phone discovers the other phone. So what should happen, if the Wi-Fi cooperates, is that one phone will be discovered by the other phone. Let's see. Boom. The phone on the left has discovered the phone on the right, which is my phone. That's how cash mode payments work. What you can also see now, just focusing on the Chromecast view of the phone on the left, you see these chat heads. These are profile pictures of the people I transact with. I can tap on someone, say this person, and it will give me the entire history of payment transactions that have happened between me and this person. So that's really convenient. Now if you go down further, you see this section called businesses: my phone provider, my internet provider, and we recently integrated bill payments. I can see what's due, I tap it, and it's done in an instant. Likewise, you can check your balance as well. So that's kind of like a quick overview of the app. >>Nice.
Can you tell a little bit more about cash mode and how you guys built it? What's the back story there? >>It started with user insights and the fact that paying by cash is still quite widespread in India and other markets. If you think about it, paying by cash has its benefits. It's pseudo-private and anonymous, and we wanted to do a digital mode of payment which is kind of like cash in many senses. And so that's why we call it cash mode. Now, there are actually various ways to make nearby proximity payments work. In the U.S. and various parts of the world, generally NFC has been the way to go. Many phones in these markets don't have NFC. Essentially you're trying to make two phones that have never seen each other, ever, exchange explicitly some addressing information. We exchange that information for pairing over audio, actually using ultrasound. One phone broadcasts a
short ID identifying itself, saying this is me, this is me. And this ID is broadcast using direct sequence spread spectrum, which is a spread spectrum technique. What you're doing is multiplying the data signal by a pseudorandom noise signal. It is broadcast on an inaudible frequency, so nobody can hear it, and spreading what's being transferred is important for security, because otherwise a snooper could get access to the signal. The receiver, however, can use the same pseudorandom noise code and multiply it with the signal that it hears to recover the data. So the receiver recovers the ID, does a lookup to see who this ID belongs to, and the payment information goes to the receiving phone, where just like on the other phone you get a "you're paying" tap. That's the user confirmation. That's how it works. I think it's been great for us.
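To make the spread spectrum idea above concrete, here is a minimal, illustrative sketch in Python with NumPy: a short ID is spread by a shared pseudorandom ±1 chip sequence and recovered by correlating with the same code. The chip length, noise level, and seed are arbitrary assumptions for illustration, not Tez's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Shared pseudorandom noise (PN) code: a sequence of +1/-1 "chips" that both
# phones agree on. Spreading by this code is what keeps the ID unreadable to
# a snooper who does not know the code.
CHIPS_PER_BIT = 64
pn_code = rng.choice([-1, 1], size=CHIPS_PER_BIT)

def spread(bits):
    """Multiply each data bit (mapped to +1/-1) by the PN code."""
    symbols = [1 if b else -1 for b in bits]
    return np.concatenate([s * pn_code for s in symbols])

def despread(signal):
    """Correlate each chip block with the same PN code to recover the bits."""
    blocks = signal.reshape(-1, CHIPS_PER_BIT)
    correlations = blocks @ pn_code      # strongly positive => 1, negative => 0
    return [bool(c > 0) for c in correlations]

device_id_bits = [1, 0, 1, 1, 0, 0, 1, 0]          # a short broadcast ID
tx = spread(device_id_bits)
rx = tx + rng.normal(scale=1.0, size=tx.shape)     # noisy, inaudible channel
assert despread(rx) == [bool(b) for b in device_id_bits]
```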
As you can imagine, there are lots of challenges in making this work in terms of tuning and configurations, the broadcast configuration, and various statistics about it. When Tez launched in September last year, we took devices and hand-tuned them in a lab, and since then we have evolved this process. It's a control-system-like feedback loop in which we adjust automatically in the field and observe how that goes. Today we have nearly 6,000 unique devices with successful cash mode transactions. So, you know, I think it would have been extremely hard to get the coverage that we eventually ended up getting any other way, and that's why we chose this approach. >>Interesting. Nice. Maybe one last question
for you: what have you learned working with a new app, a new protocol in UPI, and a lot of banks in India as our partners? I think there's a war story here. >>When you are building with many distributed components, you have to build some health monitoring and other high availability things. What we do is a couple of things; just to give a quick example of each: we monitor when an address has been created at one bank versus at the other bank. At payment time, what's important is avoiding a stuck state. We build ML prediction models to figure out whether any of the banks is likely to fail a transaction. Given the way the protocol is defined, if any of them looks almost down, then we bring up a message and say please try again, rather than taking the risk of a stuck state where nobody knows where the money has gone. >>PRESENTER: Nice. Great. I want to shift gears a little bit and talk about Files Go. Can you tell us just briefly what it is? >>It is
a storage management app built for India and similar markets, and it has peer-to-peer sharing using offline technology; it doesn't use the internet. The way it all started was we started looking at what the users in these markets needed, and we realized first that mobile data in these markets was very expensive. What that meant was users in these markets were not able to consume content on their primary, and most of the time only, device because of the expense. That's how nearby sharing was born. Then as we were researching more, we realized that when users had access to more content they could get from their friends, they ran into storage problems. They started running out of storage when they started getting content from friends. The other problem that we saw was users are not able to find content on their phones, really because most of the existing file managers out there show all the folders, which is complicated for new users who are coming to mobile for the first time and have never used a laptop or (inaudible) computer before. So we decided to build something that was simple for them to use, would allow them to keep their phones clean, and would allow them to share content with their friends. >>You want to show it? >>While we're pulling up the demo: when the team did field research, we found a stat that was shocking. One out of three smartphone users in these markets runs out of storage or sees a low storage warning every day. If you think about that, that's many millions of people constantly up against the boundary of running out of storage. >>Let's switch over to the Wolf Vision
and you can show them how it works. >>First, right on top, we show the storage available on the phone and what's the total storage. The cards that you see here are personalized for the users based on what is taking up their storage. Let's go and take a look at the last one. These are the large files on the phone; these are videos that users shoot or content they get from their friends. With one click, you can just get it off and give the user back (inaudible) data. Let's take a look at files. Users have a tough time trying to find the content on their phone. We came up with a scheme of categorizing the content based on how it is associated in the user's mind. For example, images: we don't show a folder view. You see that here, and then the other tabs show where users think the images came from. For videos, again, it's categorized in ways they can identify with. Right down here, you see two buttons for send and receive. This is the nearby sharing mode to send content to or receive it from their friends. The interesting thing is, doing a
nearby share looks simple enough, but there's a bunch of stuff involved there because of the sheer variety of phones out there on the market. We use a combination of Bluetooth and hotspot type technologies. Based on the phones, we are able to negotiate their capabilities and create the fastest connection possible. We connect on a (inaudible) network, which lets us transfer data at GB scale, which is very impressive. That wouldn't have happened if we just went down the path of plain Wi-Fi, which is the default path. The other thing I wanted to talk
about was memes. So one thing that we noticed in our research, again, was that users in these markets use chat apps quite a bit and receive a lot of content from their friends. Most of it is good morning messages, memes, jokes, which they don't keep because it fills up their phone. At the same time, they receive content which might be pictures of each other they took on a trip together. Going through all these chat messages and deleting stuff manually means figuring out what was useful and what was not. We came up with this (inaudible) library to do image-based detection and (inaudible) to detect memes or junk images, and these are some of the things that get detected. The user can delete them and keep the rest of their useful messages intact without having to go through them manually. For the average user, the first time they use this they end up freeing up over a gig of storage. So that little bubble boy is dancing after freeing up space. It's become a really interesting kind of daily and weekly habit users have gotten into.
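As a purely hypothetical sketch of the kind of on-device detection he describes, one could run a small TensorFlow Lite image classifier over media received from chat apps; the model file, label meaning, paths, and threshold below are assumptions, since Files Go's actual detector is not public.

```python
import numpy as np
from PIL import Image
import tensorflow as tf

# Hypothetical on-device model that scores an image as meme/junk vs. keepable.
interpreter = tf.lite.Interpreter(model_path="meme_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def meme_score(path):
    """Return the model's meme/junk probability for one image."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    img = Image.open(path).convert("RGB").resize((w, h))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0][0])

# Only *suggest* deletions; the user confirms, keeping useful photos intact.
received = ["chat_media/IMG_001.jpg", "chat_media/IMG_002.jpg"]  # placeholder paths
suggestions = [p for p in received if meme_score(p) > 0.8]
```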
>>PRESENTER: Maybe one last question for you before we move on to NITHIA. What have you learned so far? >>This app has only been out four months. It launched at the end of last year. One thing is the fact that this app is getting used a whole lot in the U.S. and Europe, which we did not expect when we launched it. It speaks to building products for these markets which you can then bring back to other markets which you did not build the product for. That is fulfilling. >>PRESENTER: Nice. NITHIA, you
can take us home with the last set of questions. Tell us a little bit about some of the work you have done around connectivity, and the research and design experience. >>NITHIA: As part of the mission of bringing the internet to everyone, I have worked on two main products. Google Station provides high-quality Wi-Fi in public places, and Datally is a mobile application which helps users get more value out of their data. So we'll talk about Station first. Station was launched in 2016 and is live in India, Indonesia, and Mexico in hundreds of hotspots like train stations, parks, malls, and public venues. Many users, people who come online for the first time, do so on the Station network. Datally makes constrained internet more manageable and accessible. Users can understand where the MBs and GBs are going, which helps users get more value and extend their data packs, so users can be online more often. The underlying insight for Datally is that data is money, because it is expensive, slow, and limited. As a result, users resort to practices like switching off mobile data when they're not using it to cut costs, or they hesitate to try new features in existing applications. So that limits their participation online. Datally provides functionality to help users ease into their experience of getting more value from the internet. >>PRESENTER: Can you talk about these
into their experience of getting more value from the internet. Can you talk about these
products and >>NITHIA. To simplify things, the people,
context and devices. internet access is growing all
over the world, technology is touching
new societies. the people that are coming
online are increasingly diverse. languages, income levels pro,
feagueses, geographic spread —
professions, first billion users need
rethinking. It may not hold true that users
are relatively wealthy or English
speaking even. Many people around the next billion necessarily English speaking
primarily. But may prefer to use phone UIs
in is seen as a language of upward
mobility and aspiration. design interfaces that help
users with the use of simplified English
and more interfaces. With regards to context, like I mentioned, internet is often I. mittent and constraint. (inaudible). How do we think
about designing for networks and treating offline is
not an return, but as a Enumative use case. about economic factors like
purchasing power and the prevalence of cash
and factorings and the role of religion in using technology. you can buy an Android phone
from $30 to $40. It is very likely to have a
small low processing power. But it is the first computing device
that many people have access to. many of these devices don’t get
replaced as often as they might here in
the west of frugality and gift giving.
How might we think about hardware and low end costs. your UX hat, if you had to pass
along lessons learned along the way over the last few years, what would you share with everyone? >>NITHIA: We have a collection of research and design methods on next billion users design. You can find it as part of the Google Design website, and we regularly share methods as well as product stories and behind-the-scenes looks. In terms of innovations, I'll talk about it through the lens of a product life cycle. Before building a product, we want to understand the problem space, and we want to create a foundation of rich insights: an understanding of people and the technology and the unmet needs. Here we have introduced techniques like ethnographic studies. These are done in the various contexts of people's lives, such as the way they live, where they work, where they have fun. And by doing several of these, we build a picture of people in their context and how they use the technology. For Station Wi-Fi in India, we spent a lot of time talking to passengers and riding trains ourselves to understand what the experience was like and why people were embarking on these journeys. And really what we wanted to understand was who is in the station and what is the context of using the Wi-Fi. These understandings led to principles such as: the service, and connecting to the network, has to be efficient and quick; the service has to be trustworthy and secure. We saw in our research that women were hesitant to give out phone numbers to log on to a Wi-Fi network. And because of the heterogeneity of the train station, which is really a cross-section of the entire country, the service was browser based rather than requiring any mobile application. Another set of design techniques comes later in product development, when the product is nearing launch. We created a technique called trusted studies. These are large panels of users recruited to be representative and provide feedback. We ran such studies for Datally, where we had panels of participants using Datally for three to four months. That gave us an understanding of what the core use cases and the value proposition are, and what issues we were running into that we needed to fix before we launched the product. >>PRESENTER: We have about five minutes left. What we wanted to do here is talk to you about a few
different projects and what we've been learning along the way. If you have any questions about Tez, Google Station, Datally, research, or UX, please come up to the mics so everyone can hear. Yes, go ahead. >>AUDIENCE: I'm sure a lot of regions have poor network connections. I wonder how Tez tackles that. >>Right now it requires both sides to have network connectivity, but we are exploring whether one side could use the other side's network connectivity. Right now, there is that limitation. The good thing is that we are continuously optimizing the amount of data exchanged between the device and the server. It's a very good point. Thank you. >>AUDIENCE: I've been using Files Go since it came out. One of the
challenges for me is how do I manage my downloaded music? I'd like it to understand Spotify and (inaudible) and be able to remove downloads without having the fear of, hey, I'm deleting a playlist, but there might be some songs on there that I really like. That's something that I would like to see in Files Go, and I'm wondering if that's being worked on. >>I think that would be something that Spotify and Google Music would have to be involved with. The primary focus for us early on was consumer media. We are starting to look at music more. Sometimes we report some large files to users that they actually want to keep, and we learn from that. So we're trying to work down that path (inaudible) based on that. So hopefully you should hear something in the future. Thank you. >>PRESENTER: Go ahead. >>AUDIENCE: Awesome job on trying
to solve a lot of problems, tackling them from a payments point of view, from a storage point of view, from a (inaudible) point of view. I am curious if there's an overarching (inaudible) that Google is trying to solve for the next billion users. >>PRESENTER: Great question. We as a company think about it in a few different layers. Fundamentally there is access; NITHIA talked a little bit about that. We want to make the internet useful and accessible to people. That's where access comes in. The second part is platforms. You can think of something like Tez as a platform: he showed how businesses and other entities can start to integrate into it. That is something we think about from a platform side. Then there are apps which are very targeted and focused. I mentioned at the very beginning big repeating problems that millions of users have in the world. That's where something like Files Go comes in, when your phone is filling up every two days or so like you said. Those are really what we try to do with apps. If you think about access, platforms, and apps, that's kind of a rough framework for how we think about what bets to take and what to experiment with. Great question. All right. >>AUDIENCE: This is about your point that for a
lot of people, the phone is the first computing device they're using. What (inaudible) that you see do not work for this audience? What things are complicated? >>NITHIA: There is (inaudible) space I see people struggle with, and there might be other things we take for granted that they don't understand. Assumptions around carrying mental models from the desktop through to the mobile phone fail, because mobile is now the gateway to computing for a lot of people. So that leads us to think about various factors, such as that some of the users may be lower literacy. How do we think about more browsable, visually rich interfaces? Google has made a lot of innovation for bilingual users, because people are hesitant to go to their phone settings and switch languages. How do we think about cultures and local notions of aesthetics and imagery? We have done a lot of work by working with local artists to commission stickers, education material, and collateral, so products feel local around the world in their local language, whatever that might be visually. And with connectivity, we can't ignore the infrastructure that's underneath it all, showing progress indicators and state transitions as we go across or switch networks. All right. Yes, sir. >>AUDIENCE: I am from the Facebook
(inaudible) team. Hi, guys. One thing that we learned is that design expectations change from country to country. I am very curious about material design. How do you find it working in India and the developing world? >>PRESENTER: Let me add some context first: we are still working to understand how material is being used in these markets. Today I would encourage you to stop by the next billion users user experience talk; I believe it's at 2:30 in the dome. And there are also several Sandbox talks about the latest innovations for accessibility as well as gender equity in global contexts. There is a design team there; you can ask them. >>NITHIA: Building off an earlier point, these are users, especially those online for the first time, who are really learning about all of these components and UI elements from scratch. The parts of material design that have done really well in our testing are the ones that are much more explicit, that have real-world, tangible aspects; the abstract ones are what users struggle with. We are working with the design team, and you can have more chats in detail with them. >>PRESENTER: Maybe one last question; we can slip it in. >>AUDIENCE: I am
also from India. I have one question about things like helping developers solve the next billion users problem. There is the internet provider Jio changing everything in India right now, allowing applications like (inaudible) to grow and maybe target the next billion users more easily. >>PRESENTER: If you want to talk to anyone else about this, they will be offstage. Alongside you guys, we're adapting to the Jio effect, as it is broadening access to large parts of India that previously couldn't afford it, or couldn't afford as much of the data that's now provided. So we're still learning a lot with Tez and the others. Find us offstage or on the side and we can tell you more. Cool. That's it. Thank you all. [ Applause ] This text, document, or file is
based on live transcription. Communication Access Realtime
Translation (CART), captioning, and/or live transcription are
provided in order to facilitate communication accessibility and
may not be a totally verbatim record of the proceedings. This
text, document, or file is not to be distributed or used in any
way that may violate copyright law.
*** We’re going to start the
presentation shortly. Well, as soon as we have everyone
settled. Good morning. My name is Brooke Thomas. I am a member of the DCBLN. I am a compliance consultant with Lockheed Martin. We are very happy to have everyone here
today for this presentation. Our presentation is focused on
really getting our corporate employees and members together
and also community leaders to talk about how we can expand our
disability efforts in the workplace. So we want to get
information from everyone, your feedback, best practices, some
of the challenges that you’re facing from your standpoint, and
how can we improve. We hope that today you make connections
that will last beyond 1:30, that you will exchange information so
that we can build from this day forward.
To start out, we will have some opening representatives: Sue Weber
will be here on behalf of DCBLN to give opening remarks. We
will have Rikki Epstein and Deborah Warren. She’s also the
executive director of Arlington community services board. Thank
you. And I’ll have these wonderful ladies come up.
>>SUE: Good morning. I want to welcome you. How many of you
are familiar with the DCBLN? For those of you who are not, just
quickly, the DC metro BLN is an affiliate of the U.S. business
leadership network, or the USBLN. And that organization,
we are one of the affiliates, and actually the affiliate of
the year in 2017. [ Applause ]
so we’re very excited that you’re here. Our mission is
basically to get people with disabilities employed and valued
and promoted and thrive in the workplace and that companies can
benefit from the diversity they bring to the table. Without
further ado, I want to invite Rikki to the podium.
[ Applause ] .
>>RIKKI EPSTEIN: Good morning, everyone. I’m the
executive director of The Arc of northern Virginia. For those
who may not be familiar with our organization, we are a
56-year-old nonprofit. We are also part of an affiliate
network from The Arc of the United States is a national
organization with 650 chapters around the country, and we’re
the chapter for northern Virginia. We provide direct
services, education, and advocacy supporting people with
intellectual and developmental disabilities. And one of the
areas of focus: we have spent a lot of time supporting people with disabilities and their families throughout their entire life span. And employment is a key focus; we have been really thrilled that for the past four or five years, we've been partnering each May with the DC
metro business leadership network, bringing together
employers, HR professionals, EEO professionals with nonprofit
employment provider organizations that provide
support to help onboard and maintain employment of people
with intellectual or developmental disabilities.
We’re really pleased to be here today, partnering with a number of ***
This text, document, or file is based on live transcription.
Communication Access Realtime Translation (CART), captioning,
and/or live transcription are provided in order to facilitate
communication accessibility and may not be a totally verbatim
record of the proceedings. This text, document, or file is not
to be distributed or used in any way that may violate copyright
law. ***>>GLEN SHIRES: Hello and thank
you for joining us today. I hope you had a great morning and
a good lunch. Hopefully no one’s in a food coma. >>CHRIS RAMSDALE: I’m Chris Ramsdale.
>>GLEN SHIRES: And I’m Glen Shires.
>>CHRIS RAMSDALE: Today we’re going to talk to you about
a couple of things. We’re going to talk about our
software development kit, and how to use the SDK to extend the assistant to work
with your devices. So just to calibrate a little bit: last year at Google I/O we introduced the Google Assistant SDK, which allows you to embed the assistant into whatever you're building. This is the technology you use to get the assistant's features and functionality into your hardware. To frame the conversation a little bit,
because it’s an eco-system, and there’s a platform and things
involved, so let’s talk a little bit
before we go deeper into how this relates to the other pieces
of technology that we’re building. So at the core is the assistant
service. This is the AI and machine learning that's powering the virtual assistant in your home. That's at the core, and that's what we have been building for years and years and years. Then we talk about ways of extending it. One way of extending is bringing in third party cloud services to the assistant. So you can do things like order a pizza, book a ride,
and extend to things that we're not necessarily doing. We might provide calendar and geo information, and others may do ordering a pizza or a ride. We're going to talk about
the Google Assistant SDK for devices.
Let’s talk about what people are using the SDK for. We’ll talk first about
commercial OEMs. Recently in January, we announced
integration with the LG TV. We brought the assistant to the
experience inside the TV. You're talking to the push button on the remote, and it helps you with what does my day look like and what's
on TV. As we go deeper into what the
SDK is, LG wrote a thin client on top of
web OS, and they were able to do an over
the air, OTA update to reach
millions of subscribers of their TVs. It gave them the ability
to take new technology, the Google Assistant, and work with
in-market devices that they already had. Then we announced back in
February, integration with the Nest
camera, Nest Cam IQ. We were finding that households that
have bought a camera end up buying multiple cameras. What
that does is give you this ubiquitous experience in your
house where you’re walking through, you ask for help, and your virtual
assistant helps you out wherever you are. Going from my living room, which has a Google Home, to a JBL speaker, I have the assistant streaming throughout my house. So those were commercial OEMs.
We have had fun experiments with our friends over at deep local.
This is an agency out of Pittsburgh that we have worked
with over the past year and a half to build really cool
experiments. If you were out at Google I/O last year, they built
a pop-up donut shop, which is really cool. You could walk up
to it and engage with the assistant. It would give you donuts, and at random, it would give you a
Google mini . At the consumer electronics
show in January, they did a giant gum
ball machine. And then over in the developer
sand box, we have a poster maker that you can go and interact
with the assistant and generate a unique poster that you can
then take home with you. Not commercial devices, but
still a lot of fun and it’s fun to
innovate with that theme and fun to innovate with the larger
maker community, which has been awesome.
This has been a long tail of developers taking our software
development kit and using it in ways that we would have never
thought of. I want to insert the YouTube
links if you want to check them out. We have had a candy dispenser
and a ton of robots. And then we had one maker embed
the assistant inside of Mac OS to
bring it to the actual laptop experience. So a lot of fun
there. >>GLEN SHIRES: That’s a bit of
an overview and framing, there.
>>GLEN SHIRES: As Chris was saying, you can build the Google
Assistant into all sorts of types of devices, and there’s
starter kits that you can get. For example, the i.MX7 is for Android Things, and we have the AIY voice kit
available at several retailers. You’ll see the URL to get that
at. What that is is this cardboard
box. If you look closely, you can see two microphones as well
as, of course, a speaker. So we’ve got the microphones up
here on top. And we’ve got the speaker here. I’ve got a big
battery here. You can plug it into the wall or whatever you
would like to do. Inside this cardboard box, the whole thing comes in the kit. Inside is a small little computer called the Raspberry Pi. How long is the Golden Gate Bridge? >>8,981 feet.
>>It’s doing better than I am in terms of speaking, I
presume. How long is that in meters? >>1 foot equals 0.3048 meter.
>>So you’ve got the Google Assistant inside an embedded
device. Turn on hot word. >>Accepting hot word.
>>So now I don’t have to trigger it. I can simply say hey Google,
pick a random number from one to 100.
>>Okay. [ Beeping and electronic sounds
]. 73.
>>Hey Google. How tall is Mount Kilimanjaro? >>19,341 feet tall.
>>Turn off hot word. >>Sorry. I’m not sure how to
do that. [ Applause ] This box is an AIY do it
yourself kit with everything you need. The kit connects to the
Google service via Wi-Fi. There’s two different types of
software you can run this. The SDK supports two ways, one
is a way to run it on almost any
platform, any operating system, any programming language. You
run all of your code directly on here, and we have sample code
that does exactly that. In this case, I’m actually running some
python sample code on the box that implements the entire
client in that Python code, so you’ve got the entire sample. The other way is when you say
hey Google, turn off hot word… I’m pressing my luck, I guess.
The other way, when it’s running that way is using a library that
runs directly on the assistant — I’m
sorry. Runs directly on the client . That runs on either Linux or
Android Things. Okay? >>CHRIS RAMSDALE: I think you
had some coding to do? >>GLEN SHIRES: Yeah. Let me
show you a little bit about when you’re running the library.
It's quite simple to use. You can see there are simple functions you can call to start it, or to turn the microphone mute on and off. You can also, rather than starting it with the hot word, start the interaction programmatically. You can see a lot of events that your code can handle, or there's no need to.
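As a rough sketch of the calls he's describing, here is how the Python flavor of the Google Assistant Library is typically wired up; the credentials path and device model ID are placeholders, and the event handling is heavily simplified.

```python
import json

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

# Placeholders: your own OAuth credentials file and the device model
# registered for your project.
with open('/path/to/credentials.json') as f:
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials, 'my-device-model-id') as assistant:
    for event in assistant.start():           # stream of events from the library
        if event.type == EventType.ON_START_FINISHED:
            assistant.set_mic_mute(False)     # microphone control
            assistant.start_conversation()    # begin a turn programmatically,
                                              # instead of waiting for the hot word
        elif event.type == EventType.ON_CONVERSATION_TURN_STARTED:
            print('Listening...')
        elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
            print('Turn finished.')
```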
Another thing I mentioned, there are two microphones. And
you may notice that also on Google home, there’s only two
microphones on Google home, yet it does a wonderful job in
background noise, with people speaking to it from a good
distance away. And the way that we do that is a
technique we call neural beam forming. What it does is it’s
very similar to the way people have two ears, and they’re very good at picking out
speech out of noise. We’ve used machine learning and run this on the server to get a very
robust, noise robust far field experience. What that means is
the client side has minimal processing power, so we can
really keep the clients low cost. Back to you, Chris.
>>CHRIS RAMSDALE: Cool. So building on what Glen was
talking about, let’s go deeper into what the SDK actually
provides. At the highest level, there's a cloud-based API that's built on the gRPC protocol. It uses HTTP/2 to go back and
forth to give you streaming support, which is important with
audio because you want it to be fast and low latency. The API
is available from just about any platform. Just like the LG TV example that
I gave, they have Web OS and a
number of partners are actually coming with their own platforms
already out in the market, and they want to know how to bring
those platforms to the assistant. We can create a very
thin client that communicates with the cloud API. And out of the box we provide gRPC bindings; those are the thin clients that are built for Python, Node.js, or any of the other languages you actually use. That API is really good for push-to-talk support. So when Glen was with the box and he pushed the button, that's invoking a thin client that's talking to our API. If you want to have an experience more like
the Nest Cam where it’s hands-off, because the cameras
are typically mounted above you like in your ceilings and whatnot, we call them
farfield or hands-free experiences and they use technology like wake word or hot
word, the whole okay Google built. We have client libraries
built for Linux, specifically Linux 3.18
and above. And because it’s a SDK,
obviously there’s going to be docs and samples and tools that will allow you to
embed the assistant, debug the assistant, and test it as well.
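For the cloud API path, a push-to-talk exchange against the gRPC service described above looks roughly like the following. This is a trimmed sketch: credentials handling, audio capture, and playback are omitted, and the device IDs are placeholders.

```python
import google.auth.transport.grpc
import google.auth.transport.requests
from google.assistant.embedded.v1alpha2 import (
    embedded_assistant_pb2,
    embedded_assistant_pb2_grpc,
)

ASSISTANT_API = 'embeddedassistant.googleapis.com'

def assist(credentials, audio_chunks):
    """Stream one spoken query (16 kHz LINEAR16 chunks) and yield reply audio."""
    channel = google.auth.transport.grpc.secure_authorized_channel(
        credentials, google.auth.transport.requests.Request(), ASSISTANT_API)
    stub = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(channel)

    def requests():
        # The first message carries the config; later ones carry raw audio.
        yield embedded_assistant_pb2.AssistRequest(
            config=embedded_assistant_pb2.AssistConfig(
                audio_in_config=embedded_assistant_pb2.AudioInConfig(
                    encoding='LINEAR16', sample_rate_hertz=16000),
                audio_out_config=embedded_assistant_pb2.AudioOutConfig(
                    encoding='LINEAR16', sample_rate_hertz=16000,
                    volume_percentage=100),
                dialog_state_in=embedded_assistant_pb2.DialogStateIn(
                    language_code='en-US'),
                device_config=embedded_assistant_pb2.DeviceConfig(
                    device_id='my-device-id',              # placeholder
                    device_model_id='my-device-model')))   # placeholder
        for chunk in audio_chunks:
            yield embedded_assistant_pb2.AssistRequest(audio_in=chunk)

    for response in stub.Assist(requests()):
        if response.audio_out.audio_data:
            yield response.audio_out.audio_data
```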
We have hardware kits. Glen mentioned the AIY kit over here; AIY is a twist on DIY, so
artificial intelligence done by you. Is that the IMX?
>>GLEN SHIRES: IMX7 kit. >>CHRIS RAMSDALE: So kits
that allow you to start building, and we will expand
over time to bring more and more software on chips to the market
for developers to get up and running with the assistant.
So with all of that, framing this is our goal is to bring the
ubiquitous experience to everybody. We’re not going to
build all the hardware out there, nor are we going to build
all the experiences. We have done a fairly good job with
speakers and whatnot. There are appliances, auto, things in your bathroom, and we are trying, and this is why Glen and I get up in the morning, to come in and figure out what are the user experiences we want to bring to market with partners and what technology do we actually have to build to make
that happen? When we think about it, one way
to categorize it is to think about your day. And this is a little bit
trivial, but it gives you the idea that we’re trying to give you the holistic
experience from streamlining your morning when you want to have your coffee made or
stream NPR news to know what’s going on — or maybe you don’t
want to stream the news. And when you have moved from your
house to on the go, I forgot to set the security camera or I
left the garage open. Or you’re coming home from the grocery store, and you want to preheat
the oven on the way home. To helping you relax. To hey, I have kids, so no more
screen time for kids so turn off the Wi-Fi in the kids’ rooms or
in the house, or dim the lights because we want to watch a movie
or TV. Trying to figure out what the user experiences are
that add value to you and extend figuring out the technology
behind it. When it comes to integration
paths for doing this, if you’re building hardware, there’s two
paths that we have coming into the assistant. And it’s a
little bit of marketing speak, we call works with assistant and
assistant built in. Works with assistant is if
anyone has a Phillips Hue light, they can
be controlled by any other device that has assistant
embedded inside of it. If you want more information on that, tomorrow at 11:30 on stage five,
they will be talking a bit more about how you can integrate with Works With.
Glen and I are talking about built in. So it’s kind of a controller
versus controlee. You’re building a controller, a device
that can control other devices as well and interact with the
assistant service for knowledge queries and things like that.
So let’s talk a little about developer benefits of the
assistant and assistant SDK. First of all, minimum hardware
requirements. If you’re doing push to talk scenarios and you
want to integrate with our cloud API, there’s very little needed
on the client side. It’s all up to you. Whatever you’re running
on your client, you can keep running it. It’s the effect of making a
simple REST call. Beyond that, if you want to integrate and
have hot word detection so you can have a hands-free
experience, we have minimum hardware requirements. Two mics and you’re good to go,
and we can use neural beamforming to
figure out what you’re trying to say. From a RAM perspective, it’s
only 256 MB of RAM to get up and running. Over time, we'll start looking at microcontrollers as we move into the appliance space. We have hot word support.
You’re off to the races. You’ve got okay Google, and everything
will pick up. We’ll take care of the rest, the library will
take care of bringing in the audio, transmitting it to us
in realtime and streaming back down the response.
Google is a global company, and we know that we need to
continue to flesh out our languages and locales story. We've done a great job since last year moving into 14 languages and locales. But we want, over time, to expand this map to get
into other countries. We know that people who are building,
whether you’re a prototype or maker or commercial OEM, you need to meet
your customers where they actually are, where your end
users actually are. So we will continue the momentum
behind this. In terms of actually, when
you're a commercial OEM, as we've learned how to go from prototype
to commercialization, if you’re in that space, I wanted to give you
insight into how that’s working right now.
We’re still early stages working with a few commercial OEMs, our
goal is to be more immersive and go deep with
them so we can build the foundation on
which we can start building more voice technology on top of. You start prototyping using
assistant SDK to build an idea, build a concept, and you submit
that to us for a review, and we’ll iterate on the device
itself, how it fits into the larger
eco-system, what are you trying to bring to market? We will assign an account
manager to you, and beyond that, you go into certification both
in terms of is the voice recognition actually working on
the device? Does the marketing — does it meet our marketing
guidelines? Is the branding correct? And step five, launch,
have a party, and be good to go. That’s the path that we’re
taking right now . We will work on really
scaling that up in 2019 .
Since last year, we have been hard at work and we’ve
added a couple of things. We added visualization support to
the SDK, so you now can enable your device and say it’s capable of — it’s
a display enabled device. You can get things back like
knowledge queries, sports and scores, weather, personal photos .
>>GLEN SHIRES: Rather than using an embedded device, you
can use your lab top with Chrome to test out the SDK and get your
application running and test out different parameters. For
example, we support, as Chris was mentioning, several
different languages, and you can set the parameters and just test
it out. I can say, what's the weather in San
Francisco. >>Today it will be cloudy with
a forecasted high of 62 and low of
53. >>GLEN SHIRES: This is input
and output. I could say those if I wanted to. Or if I want to
click on it, I could find out the weather for this weekend.
>>In San Francisco, Friday, it will be mostly sunny with a
high of 70 and a low of 59 degrees Fahrenheit. Saturday
and Sunday, it will be cloudy with lows in the mid 50s. Highs
will be in the low 70s Saturday and mid 60s on Sunday.
>>GLEN SHIRES: There we go. Of course, this can do things
Google home can do, such as search. I can say, “who is Larry Page?” >>According to Wikipedia, he is
an American computer scientist and
internet entrepreneur who co-founded
Google. >>GLEN SHIRES: And I can also
look at the requests I made this is a
JSON request. And I can see the responses that I got back from
the server. You can see the transcript as it
was forming, as I was speaking, it’s showing the transcript. Later on it’s showing the HTML
coming out, and the audio is being streamed back as well. Beyond search results, I can
also do personal results. For example, show me my photos.
>>This is what I found in your Google photos.
>>GLEN SHIRES: Sorry, I wasn’t on the right screen when I did
that. >>Here’s what I found in your
Google photos. >>GLEN SHIRES: So there we go.
And I can scroll through and see the different photos of a white
water rafting trip we did recently. And zoom in full screen. So
that shows what we can do with this developer tool as well as the
visual output. Let me show you how that works. What we’ve done here, let me put
up the slide to show you how this is working. What this is
doing is using the service API. We can run on any code, any
platform. So this is running in JavaScript, and it’s using the
Chrome browser as a client. The service is generating the
audio response in addition to HTML5, and of course the browser is displaying
the HTML5. >>CHRIS RAMSDALE: In terms of
new features, one of the things that’s very important for us,
especially when thinking about third parties building hardware
devices is to get parity so that they can build devices that
actually work just as well if not better than some of the
devices that Google is building. So one of the things we lacked
for a long time in the SDK was the ability to do notifications,
so to really have the assistant service push out updates to
devices. So in this case, I have a trivial example of hey
Google, ring the dinner bell, which will ring all of your
devices. This helps us with things like
OTA updates, so over-the-air updates
when we want to update a language
package on a device. Now we’re happy to add this to
the SDK. We’re also making an endeavor
into music. So we’re starting with news and podcast support,
so now you can actually access those news feeds, so NPR news, for example, or your favorite
podcast. I happen to be a “This American
Life” fan. Glen will show that to us.
>>GLEN SHIRES: Let me demo that for you. I will be using the AIY
cardboard box. I will simply say, “Play the
news.” >>Here’s the latest news.
>>GLEN SHIRES: And it brings down the file, and — >>Live from NPR news in
Washington, President Trump’s –>>Stop news.
>>GLEN SHIRES: And the other thing we wanted to show is
notifications. One thing that you can do with
Google home and other embedded devices is have one device talk
to other devices. So for example, you could broadcast things or you can say something
like, “Ring the dinner bell.” >>Okay. Broadcasting now.
>>GLEN SHIRES: So it’s broadcasting from one device to
the other device. [ Dinner bell ringing ].
>>It’s dinner time. >>GLEN SHIRES: If you want to
call your kids to dinner, all the devices in your house can say it’s time to
come down. >>CHRIS RAMSDALE: And it can be
any device. >>GLEN SHIRES: Exactly. Let’s
show you how that works. That notifications doesn’t
necessarily have to be between two cardboard boxes or embedded
devices. It can be between two Google assistants logged into
the same account. I can actually use my phone to ring
the dinner bell on an embedded device.
And so, that’s how notifications work and how this works. >>CHRIS RAMSDALE: All right.
To round out things here, one of the features I have been excited
about is what we’re calling device actions. When we initially launched the
Google assistant SDK, the feedback was
this is great. I can build a Google home clone
now. How do I make it do custom things? That part of the
community got it and understood where we were going and where we
should take it. This was one of our answers to that question,
which is like okay, cool. Let’s let you embed the assistant and
let you extend it to control that device. And so, this
breaks down into two ways of possibly doing this. We call
them built-in device actions and custom device actions. Bear
with me for a second and I’ll go a little deeper into these.
So they are built on top of grammars and things you can say to a
device, where Google curates those. So a lot of the home
automation, if you have a Nest device or
Phillips Hue, turn on, turn off, make it hotter. These are
things you can say to a device that we curate. They’re not static, they are
dynamic. We can change them over time and internationalize
them on your behalf. You can leverage our built-in
actions and know that the grammars that we have there will
continue to grow. An anecdote that I like to tell
folks is we’re going good with home automation and we rolled
out to the UK. We didn’t see nearly the traction that we saw
in the U.S. when it comes to lighting, and we didn’t know
why. A lot of people would pop on and pop off the lights. Not
everybody. But there’s a segment of the population that
would pop on and pop off was not something that we had known
about. So we were able to do our due diligence, research, and
then change it. We changed it on the back end and none of our
lighting partners had to do anything. It just started
working for the UK customers. Should we have figured that out
first-hand? Maybe. That’s debatable. That’s some of the
benefits of going with the built-in route. All the devices
and grammars and traits will evolve over time. We’re not
going to build every device and not going to understand
everything you want to do on that device. For that we offer
custom actions where you as the developer or device manufacturer
can provide the grammars and the command mappings to us. These are the things that can be said and this is what should be done: do a dance, do the Macarena, or whatever your device does. Keeping with our theme, I will
kick it to Glen. >>GLEN SHIRES: I would love to
give a demo of a toy robot
specifically built for prototyping, and it operates
By Bluetooth. Let me turn on this robot . I will ask my favorite AIY
device, connect to robot. >>Sorry, I can’t help with that
yet. But I’m always learning. >>GLEN SHIRES: Let me try this.
Connect to robot. >>Sorry.
>>GLEN SHIRES: Okay. Let me try this one. Sorry. [ Multiple assistant voices
overlapping… ] >>GLEN SHIRES: Let me show you
how this works, and then I’ll give that a shot in just a
second. What this is doing is we can
switch to the slide, thank you. This is
using the Google service, but we are implementing a custom device
action, so I can say things like connect to robot, and what happens is the Google
service will understand my speech and send back a command
to the AIY box, and the box will, at that point, send a
Bluetooth command. It receives JSON that it can
parse, and if it wants to connect to
robot, it sends a Bluetooth command to connect to the robot. We’re using the same library
that we’ve used in the past. We added a little bit of code to implement the custom actions.
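A minimal sketch of what that extra handling code can look like with the Python Assistant Library: the service parses the speech against your action package and hands the device a structured command to execute locally. The command names and the robot calls below are placeholders for whatever your own action package and hardware define.

```python
from google.assistant.library.event import EventType

def handle_event(event, robot):
    """Dispatch custom device actions sent back by the assistant service."""
    if event.type != EventType.ON_DEVICE_ACTION:
        return
    # Each event carries one or more (command, params) pairs parsed from the
    # JSON the service returns to the device.
    for command, params in event.actions:
        if command == 'com.example.commands.ConnectToRobot':   # placeholder name
            robot.connect()                                     # e.g. open the Bluetooth link
        elif command == 'com.example.commands.SetColor':
            robot.set_led(params.get('color', 'red'))
```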
Connect to robot. >>Sorry. I’m not sure how to help with
that. >>GLEN SHIRES: Geez.
>>CHRIS RAMSDALE: I can just keep talking.
>>GLEN SHIRES: I’ll try it one more time, okay? Connect to
robot. >>Connecting robot.
>>GLEN SHIRES: There we go. >>CHRIS RAMSDALE: Nice. [ Applause ]
it gets better. Trust me. >>GLEN SHIRES: The light did
turn green. Set color red. >>Setting robot LED to red.
>>GLEN SHIRES: So it sent a Bluetooth command to do that.
Robot, get up. >>Robot getting up.
>>GLEN SHIRES: So it’s a self-balancing robot . Go forward.
>>Robot moving forward. >>GLEN SHIRES: Turn left.
>>Robot turning left. >>CHRIS RAMSDALE: Don’t fall
off the table. >>GLEN SHIRES: Turn right.
>>Robot turning right. >>GLEN SHIRES: And there you
go. We can control a robot. The point of custom actions is
you can build an appliance or anything where you can actually
set your own grammar and then parse the commands and have them
do whatever you would like them to do.
>>CHRIS RAMSDALE: Awesome. Good job. [ Applause ] >>GLEN SHIRES: You can
define these using a lot of the tools that you may have used for
regular assistant actions such as the actions on Google tools
and dialogue flow. What those will generate are, in
this case, a JSON file that you can install into your device, and when your device is talking
to the assistant, it’s not like you have to say open my robot app and tell it to
turn right. You can simply say turn right. The first thing to define is the
grammar. When I said set color red, here
is the intent that would allow me to say set color red, to red, robot set
color red. There’s a variety of ways to say things. On the next
slide you will see what the response is, the fulfillment.
First of all, I can define the text-to-speech. In this case, it’s setting robot
LED to red. And then the execution where I can parse
these parameters and I can see that I'm setting the color to red. >>CHRIS RAMSDALE: Our goal,
glen and I and the team that is helping us out is to provide the software
development kit, the tools and technologies to help you build and embed the assistant
into hardware devices that you’re
building. If this works, you might have one more demo trick
up your sleeve? >>GLEN SHIRES: Yes, we do. Do
a dance. >>Robot is getting down on the
dance floor .
>>GLEN SHIRES: Very good. Thank you very much.
>>CHRIS RAMSDALE: Thanks. >>GLEN SHIRES: And we will
be in the assistant sand box if you have questions.
>>CHRIS RAMSDALE: Awesome. Thanks again. Enjoy! Realtime captioning will appear on this screen. [ Video ]
>>I’ll tell people I work at Google, and they’re like what do
you work on? I design search. And they pause and they are
like, what is there to design? [ Dial-up modem sounds ]. [ Upbeat music ] . >>Oh I know. It’s cat. [ Music continues ]. >>Hi. How can I help? P [ Applause ] >>BRENDA FOGG: Hello, everyone. I work in Google with other
teams across the company on product or
technology experiments and sometimes finding new ways to
talk about some of the innovative work that’s going on
inside of Google. And at some point over the years, I’ve worked with each of our
panelists on some of those projects that you probably saw
in the video just now. So we’re going to talk a little bit about some things that you may
have seen, a little bit about design at Google and what that
means as a fundamental framework and connective tissue between
the things that Google makes and making them as useful and
accessible to as many people as possible.
So let’s start with some introductions. We have Doug Eck right here to
my left, who leads a project called Magenta, which is all
about exploring machine learning and creativity. We have Isabelle who is responsible for
the design of Google Home and wearables. Over there is Ryan Germick, who
is also known as the doodle guy. He is generally in the business
of delighting users everywhere. Let’s start by letting everybody
talk a little about what you do at Google. We’ll start with
Doug, because you’re sitting next to me. You head up a
project called Magenta. For anybody who doesn’t know exactly
what that is, maybe you can talk about that. And maybe touch on what inspired
the group and the focus of the team.
>>DOUGLAS ECK: Okay. Sure. I lead a project called Magenta. The name stands for music and art generation. We
started by trying to understand the capacity to use AI,
specifically machine learning to generate music and art. And it
took us about a month to realize that that’s asking the wrong question, because if all you’re
doing is trying to generate music and art and you keep
pushing a button and the machine keeps making music and art for you, it
gets boring. So we pivoted to have machine
learning to enable artists and musicians to make something new
and different. It’s sitting on the same idea of technology and art interacting
with one another. So, yeah. I’m all about AI and art and
music. >>BRENDA FOGG: So you’re not
necessarily trying to replace or duplicate creativity, but more
providing the tools to enable people to do
that? >>DOUGLAS ECK: I think it’s not
just because that’s what we choose to focus on, but creativity is
fundamentally human. If we take communication out. A computer
generating new things, but what makes creativity work is how we
respond to it and how we feed back into that process. So I think it’s very much a
societal communicative act. >>BRENDA FOGG: And that idea
of creating new things, maybe some things that were not
necessarily possible before, that were not humanly possible
to create. There's an example, and I don't know if you want to talk about this, but it was shown in the lead-up into the keynote yesterday: the Nsynth project, which is one of those things that sort of augments what human creativity can do. Do you want to touch on that?
>>DOUGLAS ECK: So, Nsynth was played on stage before the keynote and discussed there. The main idea
there is can we use machine learning to allow us to generate
new sounds and sounds that are musically meaningful to us. And
one thing to point out is we already have ways to do that.
There’s a bunch of great software. I have a piano at my
house, and there are lots of ways to make sounds. What we
hope we can get is some kind of expressive edge with AI,
something we can do with these models, an intuitiveness, a new mobility by
having a new tool. I don’t want to take up too much time,
because there are a lot of other great people on stage, but I
like to think about the film camera. The film camera was not treated
as an artistic device, but as something to capture reality, and it was
transformed into an artistic medium by artists.
>>BRENDA FOGG: So, turning technology into something creative. Isabelle, you're trying to
create something that appeals to everyone through its design, but
everybody’s different, right? So there’s — these are physical
products that share physical space with people that use them,
and sometimes you have to cohabitate. So talk a little
about how you approach that problem.
>>ISABELLE OLSSON: I have the utmost respect for people’s
homes, and they’re all different. I think next to your
body, your home is your most intimate space and
the place you share with your loved ones and your family. So
to enter that space with our products, we have to be super
thoughtful about what we do there . For us, the most important
thing is to be inspired by the context in
which our products live in. When we were designing Google
home mini, the goal was to design an
assistant for every room. That means your bedside table. And
that’s where you put devices that help you see better or a
book that helps you dream. So that space is just so
special. We wanted to create something that was beautiful, that fit into the
home, and didn’t take up too much attention and kind of faded
a little bit into the background.
>>BRENDA FOGG: And you’re also responsible for CMF at
Google, which is Color, Material, Finish,
right? >>ISABELLE OLSSON: Right.
>>BRENDA FOGG: I have heard the story about testing 150
different versions, color palettes for the Mini. Is that
right? >>ISABELLE OLSSON: Yeah. I
think for us, developing the color palette for the Google
family of products and individual products
is a combination of art and science, I would say. And we start, usually two to
three years before the products come out, so we have to do a lot of
anticipation of where societies and trends are going and take all of those inputs into
account to make sure that when we release a product, it makes
sense to people. In addition to that, of course when you design
for the home, you have to think about the fact that, you know,
there’s going to be light hitting the product. How does it stand the test of
time? We want to make sure the products are beautiful for a
long time, so we have to go through a lot of iteration to
get it right. And then also, especially as
we’re developing fabrics, for example,
depending on where you put it, it takes — it looks different
in different lighting conditions. So, when we — when we designed
mini, we went through, I think 150 iterations of just the grey color. So, it was a lot of fun. And it
was about finding that right balance with what is too light?
What’s too dark? And the other day I got this lovely e-mail by
someone on the team who had picked out his couch to match
Google Home Max. So I took that as a giant compliment, because we were
trying to do it the other way around.
>>BRENDA FOGG: What is the intersection of intuition you
use as a designer as you approach these kinds of problems
with the iterative testing and the scientific materials
examination? >>ISABELLE OLSSON: It’s a
hodgepodge, and the process is not linear. It’s pretty messy,
usually. But we have fun with it. I think the key is gather as
much input as possible, and then digest it and then come up with
prototypes and ways of relating to how this
will fit into people’s homes. Right next to my desk, I have a
big bookshelf that we place random objects from all over the
world for inspiration, to put our stuff there quickly to see
how does it feel? And how does it feel over time? It’s not only
about creating something that you are first attracted to, but
it has to be things that you can live with for a long time.
>>BRENDA FOGG: So Ryan, you lead the Google doodles team,
and this team is unique in a lot of ways, namely that you regularly and willfully break the brand rules, which is unusual, because it's the core of the brand. And that's something that seems to keep working and
working and working over the years. So talk about why that’s — why
you think it’s important to have the ability to just mess with it.
>>RYAN GERMICK: Google’s mission is to organize the
world’s information and make it universally accessible and
useful. I believe in that mission and I think it’s a
powerful and good thing to do for the world. We hone in on
the idea of making things accessible by creating an
emotional connection with users. And if sometimes mucking up the standards is collateral damage for people getting a positive charge or learning something new, then we think it's
worthwhile. And yeah, on a human level, there’s things that are more important
than consistency. For us, it's more about using our creativity and craft to make
people feel welcome in the space of technology.
>>BRENDA FOGG: So making people feel welcome in the space of
technology, you lead the personality team for the Google Assistant. How do you create a personality? There are the transactional things that have to happen when a user is interacting with a digital assistant, so that they will be delivered the information that
they asked for. And you felt like it needed to go farther
than that transactional relationship. But people have,
you know, a little bit the way we were talking with
Isabelle, and everyone has different things that they like
to interact with. And some people like small talk and some
people don’t. And some people think things are funny that
others think are totally not funny at all. How did you — will you talk
about that? >>RYAN GERMICK: Technology that comes in a smart speaker or
smart display or on your phone is
really personal. We have a different set of design challenges than if it’s
objective. When you invite this technology into your life, we’re
using this conversational interface as a metaphor. You
can talk to it, and it can respond to you. And as soon as
you hear the human voice, it not only opens up an
opportunity to have a character, but it’s an obligation for
designing a character. If you don’t design for it, people
assume that you don’t have much of a character, but there’s
implicit character. So we took the learnings we had
from doodles and it being an implicit character for Google. We get creative and nerdy and
excited, and we try to transfer that over to the Google
assistant where it’s a character that you want to spend time
with, because it has things that you get excited about or it
really wants to help you. Not just be something that you want
to use, but something that you want
to spend time with. A surprising number of
principles for doodles were applicable for the Google
assistant. There’s a lot of pieces of the
puzzle. >>BRENDA FOGG: So each of you
talked a little about how technology
interacts with humans and vice versa and
how those two things have to co-exist. So good design and thoughtful
design is a means to make technology, in this case, more
approachable and useful and usable and friendly and to make
people comfortable with that. You all approach your work and
problem solving in this way from a very human perspective, right? A very — you inject empathy.
We're going to get real and talk humanity and empathy, right? Injecting empathy
into your process. Doug, can the work you do with
machine learning allow a machine to express art in a human way? Let’s start
there. >>DOUGLAS ECK: Yes. With some constraints on how
this all works, I think what we realized
early was that we need at least two players in this game, so to speak . Part of the work is building
new technology. We’re taking on a role that one
might take on by building a guitar or
a tech might take on in building a new electronic instrument. I
think there’s a thought process that goes with building
something like that that is very creative, but I think you’re
also in some very real way constrained by the act of
building the thing to understand it in a certain way. Like it’s
your baby. You built it, so you wrote the operating manual, so
you know what this thing is supposed to do. And in most cases, what we see
is that for something to become a truly expressive artistic device, it
has to in some very real way be broken by someone else. And I
think it’s almost impossible for us as the builders of that
device to also be the ones that break it. And so our dream at magenta is
to connect with artists and musicians and people! People
that don’t know how to code. People that don’t necessarily
care much about computation and draw them into this
conversation. And so, what we found is that, you know, we started by releasing
machine learning models in open source on GitHub as part of TensorFlow with instructions
like please run this 15-line long
Python command, and you will get 100 MIDI files in a temp directory on your machine, right? And that's not how people
make music. So part of our work, even on the technologist side, even as
guitar makers, luthiers, part of our job is good design, to build interfaces that people can use. Getting there requires moving
parts, a large component of which is good design.
>>BRENDA FOGG: I like the notion of breaking things. You talked about the electric guitar; is it similar to the dissonance that people create with electric guitars? >>DOUGLAS ECK: That's right. I tell the same stories like a grandpa. The electric guitar was
designed to be a loud acoustic guitar and
they tried so hard to keep the
distortion at bay. Imagine a world where electric
guitars have no distortion. >>BRENDA FOGG: One of the
things that I find so interesting about your work is
that it’s just as important or maybe more important how people
feel about these things rather than
just what their utility is. How do you — what kind of
considerations do you make for — we’re starting to sound like
hippies now, but people’s feelings and empathies
and the way they co-exist in the space with these things?
>>ISABELLE OLSSON: I think a good tool that I use a lot is I
put stuff in front of people and ask them what they think it
looks like. It’s a fun game. You don’t always get back what
you want to hear, but it’s a really good way of testing what
the object you have created, does it have positive
connotations or negative connotations? The first time I showed a
prototype of Mini to a French person, and they said it is like a macaroon! And I love macaroons. Something connoting something sweet and delicious that we surround ourselves with, food, and I knew we were on to something there. And food is
generally universally appealing. So that’s one exercise out of
many. I think the key is just to
really make the thing real really quickly,
to translate the big idea into something tangible and then ourselves
living with it for a while, too. And then also think about, not only the food analogies, but that the objects we design are understandable. We understand what it is. With Mini, we want
it to look a little like a speaker, a little like a microphone, but not too much
like either, but be honest with that function, and then connote that it goes
into the home. >>BRENDA FOGG: And it has to
have the human touch to it. >>ISABELLE OLSSON: Yeah.
And the beauty of it is when you find these solutions, a lot of
the times they enhance the function or help with the
function. Fabric is this excellent material that is, you know, most of the time audio transparent. You can have lights shine through it. You can create this calmness in the object itself. And I'm
really passionate about trying to design pieces of technology
which hopefully people think about as their stuff, and not as technology, but that can live out in the open. There are just
way too many pieces of furniture that are purely designed to hide
technology. So my goal in life is for us to get rid of those
things. >>BRENDA FOGG: And Ryan, the —
that sort of human touch is pretty
evident in most everything that you do. So we can talk about the Google Assistant. It is a fundamentally human interface. And through the course of the work on creating a personality, talk about how you found and steered through the land mines of what kinds of — aside
from the transactional things, what kinds of things are people
going to want to talk about with their assistant?
>>RYAN GERMICK: Yeah, I think this may be a bit cliche, but it’s
still early days. For us, a guiding principle is
is it a feeling thing? Does it feel like a character that you
want to spend time with, like I mentioned earlier? As far as like, um, finding
things that we wanted to steer clear of, it
was really interesting to look at the different queries that
people ask Google Search versus what they ask the Google Assistant. At
Google, there’s a lot of people with a background in information retrieval and data ranking and
things — search ranking and things like that. And it turns
things on their head when people are asking questions
like, “What’s your favorite flavor of
ice cream?" And "Did you fart?" Those are more common than you
think. A sizable number of the queries we get are first date queries like do
you have any brothers or sisters? It’s really sweet.
>>DOUGLAS ECK: What is her favorite flavor of ice cream? I’m sure that everybody wants to
know. >>RYAN GERMICK: That’s an
illuminating question, Doug. Thank you for asking. We
basically set up principles. We have one principle where we want
to talk like a human and take advantage of the human voice,
but we don’t want to pretend to be one. If you would ask what’s your
favorite flavor of ice cream, we would do what we call an artful
dodge. We don't want to deny the user with, "I do not eat ice cream. I do not have a mouth." If you're
exploring new technology, that’s a shut down to the new
technology. We don’t want to lie and be like salted caramel,
obviously, which is a position that is disingenuous, because it
does not eat ice cream. So we would say something like, you can't go wrong with Neapolitan.
There’s something in it for everyone. We would take that
question and understand the subtext, which is, I'm trying to get to know what you are and what your capabilities are. And we would yes-and, and
continue to play the game and use it as an opportunity to make
a value statement that we’re inclusive, and we
want to reflect that an ice cream that has something for everyone is good. >>BRENDA FOGG: How much
dialogue goes on when you’re — within
your team when you’re trying to, you know, when you’re talking
about okay, what if someone asks the Google
Assistant, do you fart?
>>RYAN GERMICK: I knew that we had already won when we found out
that that question was going to be answered. There was a school
of thought that you would say I don’t fart. I don’t have a
body, end of story. And that seems like true, but
not in line with, you know, keeping the game going. So we would have a lot of back
and forth. And we would then, like, take
that answer and say at least you could say I don’t have a butt.
At least you would be a little more specific.
>>BRENDA FOGG: Start there. A butt.
>>RYAN GERMICK: But we ended up with something more playful
and addressing the subtext, which is of the school of whoever smelt it dealt it, which
is you can blame me if you want. I don’t mind. If the user is
asking about that let’s take it a step further and put them on
the spot. >>BRENDA FOGG: Are you all
going to ask your Google assistant now?
>>RYAN GERMICK: I think there are 25 different answers.
>>BRENDA FOGG: Keep asking. Keep asking. Let’s talk a little about how
this humanity plays out in the context of a brand like Google. So Isabelle, the Home Mini needed to be a speaker and a microphone as well as an assistant, and to behave like an assistant. How do you get from there to the
idea of personality and a brand. In Ryan’s work, his work talks,
and the personality comes through that way. In your work,
it comes through the materials and the things. How do you
consider the personality of the Google brand in the work that
you do? >>ISABELLE OLSSON: It’s a huge responsibility and we’re only a
few years into making hardware that people actually put down money for . The brand is really
incredible. We’re trying to figure out what’s core to
Google, and how do we translate that into physical form? And sometimes it’s not about a
direct translation, because most people don’t want to pay more fun something
quirky, maybe, so taking that principle and that idea and
thinking about what it means for hardware. So in this case, for example , it’s optimism and an
optimistic outlook on the future. So if I can do things
that remind people of that or that makes people smile, I think that naturally
feels like a Google product . One simple example is if you
turn Mini upside down, there’s a pop
of color on it. It has Google on the inside.
>>BRENDA FOGG: Let’s go back to Ryan, then. Over the years,
over the seven or eight years or however many
years you’ve been — >>RYAN GERMICK: 12.
>>BRENDA FOGG: 12 years! You’ve had a lot of
opportunities to craft the moments of delight and user
experiences that are like turning over the mini and finding a little
surprise. You’re responsible for the peg man, which is the
character you drop into Google maps when you go into street
view. And we talked about the
personality of the Google Assistant a little bit. And the
doodles taking over the home page. Over the 12 years that
you have been working in that territory, as the Google arena grows and evolves, how has
the growth of the brand impacted the work that you do?
>>RYAN GERMICK: I think the core of what I try to do, I almost
discovered it by accident. Like the street view peg man is maybe
a story for another day. I was just glad that I worked in
a place that had free strawberries when I got here.
That was very exciting to me. And the fact that they paid me
to draw and be creative was beyond my
wildest dreams. I was so happy to be here and
still am. I was like how can I use my position of privilege to
bring other people up and give them a sense of
belonging. And that’s stayed consistent.
Whether it’s trying to make sure we have inclusive doodles, or
creating an opportunity for a mannequin that can be dressed up
for holidays for street view. It’s — you know, there’s been a
through line. Maybe in the beginning Google was more of an
underdog; now it is a very important part of people's lives. I don't think you can say it's a
small organization. >>BRENDA FOGG: I want to make sure we leave time for
questions. Now let’s talk about the future. If we’re sitting here a year
from now or a few years from now, Doug, what do you expect —
what do you expect that machine learning might do for art in the
future, whether it’s, you know, your aspirations for the next 12
or 18 months or maybe five years from now?
>>DOUGLAS ECK: So I think the really interesting way to
think about this is consider generative models as a family of
machine learning models, models that generate new instances of the
data upon which they’re trained. Where I see us going is actually
very heavily integrating the design process of products with generative
models, so that we’re seeing machine learning generating part
of what we’re trying to do. I think that will touch our lives
in the arts and music and communication in a number of
ways. To those of you in the room who
are developers, which is an easy guess. We're at a developer conference.
We’ll have an opportunity as machine learning experts to
understand a little about design, and I think we’re
going to see much more of a need for
end-to-end integration. For me, the future started happening
already in a sense. I have teenaged kids and I watched how they use Snapchat to
communicate and they built their own grammar around it, and it’s
a very simple product. Now imagine ten years of advances in
assistive writing. So you're using Google Docs and you're
writing and you have some machine learning algorithm
helping you communicate, right? We’re going to get very good at
this very fast. When my kids were younger, the teachers were all worried that they used Wikipedia to write their papers. And
now it will be how much did you write and how much did your
assistant write? There are very difficult issues here, but it’s
also wonderful, I think. As long as we use this to
communicate more effectively, it’s very
exciting to think about how machine learning could become
really more deeply integrated in the process of communicating. And again, that’s what I see the
arts as being about and music being about. It’s about
communicating, sharing our thoughts, feelings, and beliefs
with each other. I’m seeing in my career that happening with
machine learning as well. That’s my future vision.
>>BRENDA FOGG: I love your vision. What do you want to see
in hardware in the next year or two?
>>ISABELLE OLSSON: Number one, I hope people find out
about it. We just did a small exhibition in Milan a couple
weeks ago, and part of the exhibition was the portfolio
that we launched last year. And a lot of people come up to me
and say these concepts are great. And I’m like they’re not
concepts. They’re actual products.
There’s a little bit of that. And I hope we just continue to
design for everyone in everyday life.
>>BRENDA FOGG: And Ryan, what would you say? What would
you like people to take away today?
>>RYAN GERMICK: I think remember that technology is for
people, first and foremost. Always keep that question in the
back of your mind how is what I’m doing helping people?
>>BRENDA FOGG: Okay. Do we have questions? Here we go.
>>I have a question about Google Home. Sometimes I insult
Google Home, and I was wondering how do you deal with it? Is it common for people to ask
Google to shut up or call it an idiot?
>>RYAN GERMICK: I saw a good tweet today actually that said in
“Star Wars” you could tell who the villains are, because
they’re mean to the Droids. I’m not making a judgment on your
character, but… the empire is.
No. But, you know, trolls are a part
of — and discovering boundaries is like a natural part of
interfacing with a new technology . Our general policy is to not
feed the trolls and to play dumb and not
engage and leave bread crumbs of reward
for bad behavior. There’s more important and urgent work to
help people than that particular boundary area. Thanks for the question.
>>I have another question for Ryan. How do you prioritize what
questions have an artful dodge.
>>RYAN GERMICK: There’s a relationship between farts and
ice cream depending on your lactose tolerance. There’s one
through line. It’s a good question. We look at frequency and query
logs. There’s boring analysis of
things people ask. And we have an awesome team that comes from
backgrounds and people who did film and animation and all kind of
great improve comedies, and we try to define the character and
little moments that maybe matter and create a constellation that
you can’t fill in every detail. And we’re just getting started.
So some things had to be a bit of a wild guess. Thanks for the
question. >>Number three for Ryan, but
interested in everyone’s opinions. Very cool yesterday
to see AI schedule a haircut or a dinner. You’re really good at the artful
dodge because you just did it to the last two questions. But I’m
curious, we know when we’re talking to our Google Home or
Assistant, but that person on the other end of the phone
doesn’t necessarily know. Do you think there is the need
for AI to identify itself as such?
>>RYAN GERMICK: The demo is an impressive research demo.
But I take to heart that it’s just getting started and we’re
trying to figure out how to get it right. These are hot off the presses
kind of possibilities, and I think there’s going to be a lot of ethical
complications .
>>So I have a question about emotional intelligence and
emotional understanding. Clearly you have to project
emotion like a calming, personable feel.
What do you need to project from an audio input side?
>>BRENDA FOGG: We’re all looking at Doug.
>>DOUGLAS ECK: There's been some work done in this area. We won't get better at this until we do; it's like a chicken-or-egg problem. We start launching products, so we get connections with users. I do think the way to train models is to understand how the work that we're doing is affecting our users and actually respond positively. There's a hot area of research called affective computing where we're trying to understand
how to connect in the right way with and for our users in terms
of lowering stress and issues like that. What I can say is that it’s also
in its infancy, and there are no easy answers.
>>RYAN GERMICK: One thing that’s exciting about being part
of the Google world, and all of you in the eco-system as well,
if there is an interesting question, you can be sure that
someone is pursuing it somewhere at
Google. There are definitely interesting things happening to answer questions like: I have a wearable. Can it detect my pulse rate and
does that affect how it interacts with me? And when I’m speaking loudly or
softly. There’s work to be done. But it’s an interesting
problem. >>BRENDA FOGG: I think we’re
out of time. I'm sorry we didn't get to
everybody. But thanks for coming.
>>Thank you for joining this session. Brand ambassadors will
assist with directing you through the designated exits.
We’ll be making room for those who registered for the next
session. If you’ve registered for the next session in this
room, we ask that you please clear the room and return via
the registration line outside. Thank you. Live captioning will appear on
these screens… [ Applause ] >>MONICA DINCULESCU: Hello
hello hello. How is everyone doing? Hi, everyone, I’m Monica. I
work on the Polymer team. If you've ever heard about Polymer or the Polymer team, we're always excited about the web, excited about DOM APIs.
I don’t know if you have heard this, but the web is pretty
great. It’s portable, it’s fast, it’s in everybody’s pockets all the
time. You get to write apps in HTML,
CSS and JavaScript, and they are called
PWAs. They feel like native apps, but they work on the web. This year is a really exciting year for PWAs, because they're on six out of six and a half browsers. They work on Chrome and Safari and Firefox and Edge and Opera, and Samsung Internet, and they sort of work on IE11, but not everything does, so we won't hold that
against them. They also work in app stores. Microsoft recently
announced that along with native windows apps, you can find PWAs
in the store, and you can install them. They have a native-y feel to
them. They are on desktops. Chrome announced that you can
install a PWA on your native desktop
machine, and you can have a shortcut and
double click the shortcut, but it’s
been a web app all along. Progress web apps are
everywhere. If you’re going to build something soon, you might
think I should build a PWA that works everywhere. But often
when you start building something, especially a new
project on the web, it often feels like
this. That poor soul in the middle is
you, and all the Tupperware is falling
on you. It has to work everywhere and it has to be
shiny and oh my God, there are so many things. And it feels
awful and not great. If you think about this graph of effort
versus results, you have to do a lot of effort to get to like a
point where you can even call it an
app. It’s soul-crushing. It sucks to work hard and not
accomplish a lot. And where we want you to be in
this graph isn’t here, it’s all the way at the beginning. We
want you to do very little effort and get awesome results
off the bat. The hard stuff you’re working on
is not your stuff. It's boring boilerplate that you have to implement just to call it an app. You should reuse somebody else's components and elements and kits. Everybody on my team has a
saying. Steve says do less, be lazy.
Kevin says hard things are hard. And I started saying always be component-izing. I mean this in
the real world. When you buy an IKEA table, you have legs, a
slab of wood and screws. You don’t go to the forest and
cut down a tree and mine ore to
make screws. You get components and put them together and that’s
how you get a table together in an hour, versus a
year later. In an app, you do the exact same thing. You pick a router. You pick local storage. You
don’t write the libraries every time you need them in your
application. That’s banana pants. You use web components. Web Components are encapsulated
and they have styles inside and you get to use them everywhere and it’s
super easy. Web Components, if you saw the polymer talk earlier, they’re
first-class citizens on the web now. That’s really awesome and
powerful. These are the components that you could use.
You could use somebody’s bar chart. You can stop reinventing your
date picker in every application. I’m happy to tell
you, we have this kit. PW starter kit is a thing that the
polymer team has been working on, and it’s basically a
template that lets you build PWAs. It has components
prechosen for you and you can mix and match them. We’re
announcing it today. We shipped version 0.2 this morning. We want you to demo it and play with it before we go 1.0, when we can't take back any of the features. In order to not bore you with
words the entire time, I built a game. Super PWA adventure, the awesome one-player game to build PWAs. For any lawyers in the audience,
I have been playing hundreds and hundreds of hours of Breath of the Wild. Every good game has a map. We are that little lady
with a wrench, because I built the game, I have the wrench. We
will build a PWA that lives in the castle. The princess does
not live in the castle, your users do. That’s where your
users belong. Every good game has levels. Our first level is
the tutorial. It's where you figure out how to get by in the game, what baddies
we’re going to get. And then you get building blocks, like
easy puzzles just to get introduced to the game. And
then you figure out slightly harder patterns in the game. In Zelda, you need to chop all
the grass and stuff like that. And eventually you realize that
not all peasants are the same, so you have to figure out combo moves to get
through the super hard levels. And finally the boss level is
where we have to ship and deploy the app. When we kill the final boss we
get fame and go on a vacation to the Bahamas, and that’s where
you want to be. So let’s play PWA adventures.
Every good game starts with your basic set of armor and your shield and weapons. Our shield and weapon in this game is the PWA starter kit. We start with it and then we figure out how to use it. So, are you ready for the first
level? That’s right. DOOT DOOT DOOT DOOT DOOT DOOT pew pew pew.
We have to get started in seconds. Nobody likes tutorials
that last forever and that you have to go through
them. So the goal of PWA starter kit is to let you get
started in seconds without having to mess around for too
long. If you go to the Wiki, the first thing you see is the
things that PWA gives you and how to get set up. Once you do
that, you can go into the advanced levels. The way you
get set up is you clone the repo, run npm install, and run npm start to see it go. If that worked well, you can run npm test to see how we do testing and see that it is passing its tests, and
then you can run build and serve so you can minify your build and
build it for production and test the production build. So when
you actually start your application, this is what it
looks like. It’s a small application that has three
pages. One is a static template, and the others are
there so you can get inspired. One is a full-blown shopping
cart where you can buy cheeses from
San Francisco. And of course it has responsive layouts, because
all apps need to work on mobile and desktop, and a template that
doesn’t have that wouldn’t make sense. Cool.
DOOT DOOT DOOT boop boop boop. That was the tutorial.
It went super well. Now we’re confident that this
starter kit will be a good weapon in our game. What’s
next? Next are the building blocks. DOOT DOOT DOOT pew pew
pew. Fun fact about noises. Jake tried to use noises from a
video game once, and you’re not allowed to. So he had to make Street Fighter
noises by mouth. It gets you everywhere, mouth sounds.
The thing that we’re testing is not the PWA that we want. Not everyone is going to have
the lorem ipsum app. We want to add our views and
elements, not the ones that the template comes with. This is
our first real level. If you think about a view, it looks
like this. It has components and elements and all of this
stuff in it. And all of these components and elements are
separate. We’re using components that we
believe are important for our apps. They have to be
lightweight and fast and expressive. If one element is
super heavy, the whole page is heavy. All of your bandwidth will then
be going to that element. It has to be fast. Nobody is going to want to hang
out on your page. And they have to be expressive. It has to let
you build any view in any element that you want, because
otherwise it’s not going to be really fun to use. So in order
to build these elements, we picked a new base class
called LitElement. This lets you do basically everything, and
it’s super small and fast, and it takes advantage of all of the
new features that have arrived in browsers recently. You can npm install it, and it's built on top of lit-html, a very small library, like 2 or 3K in total. It lets you write your templates as JavaScript template literals. So you can do all of the same things to express any dynamic content
your element might want. And it does so really, really fast,
because it knows exactly what you need to render, and it only
re-updates it when you need to. So it does it this way. First, every element reacts to
changes. Properties are things that the element cares to
render. A value, a click, something like that. And
whenever any of these properties changes, the render function is
called. In this render function is the template that your
element looks like. So two things might happen when you
look at this code. I have seen that in React.
You’re familiar with this, so you don’t have a weird jank when you
do this. It's a simple way to think about what your state is and what your element looks like. You can write declaratively. You don't
have to write listeners on buttons imperatively. You can
do everything declaratively here. What kind of things can
we do in this render function? You can obviously have static
content, but you can also have dynamic
content. But all of this is just a
JavaScript template literal. That means I can write advanced
expressions in here so I can conditionally render different
things. I can have buttons that have
in-line event listeners, and all of this just works.
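For reference, here is a minimal sketch of such an element, roughly combining the render function described here with the class definition described a little further on (my own example, not the slide from the talk, and assuming the published lit-element package; the exact import path and method names shifted across early releases):

    import { LitElement, html } from 'lit-element';

    class HelloMessage extends LitElement {
      // properties the element cares to render; changing one schedules a re-render
      static get properties() {
        return { message: { type: String } };
      }

      constructor() {
        super();
        this.message = 'Hello, I/O!';
      }

      render() {
        // static content, dynamic content, and an inline event listener in one template literal
        return html`
          <h1>Super PWA Adventure</h1>
          <div>${this.message}</div>
          <button @click="${() => this.message = 'pew pew pew'}">Update</button>
        `;
      }
    }
    customElements.define('hello-message', HelloMessage);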
The important thing here is all of these things only update
when there is an update. When my message updates here,
only that div re-renders. It’s
magical. When you make it into an
element, it looks like this. You import it from a module.
They’re not new. We have a class. We register it at the
bottom, define it with a tag, and we do our properties and
render it with everything else there. This is the whole base
of LIT element. And it works nicely. You can of course have
encapsulated styles in there. One of the awesome properties of web components is that you can encapsulate CSS inside of them, and we do this by letting you export a style from a module, import it, and use the squiggles to include it inside of your element. This is native shadow DOM. This is not CSS-in-JS or weird things that need transformations. This is what shadow DOM looks like, natively and awesomely.
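A rough sketch of that pattern (the file and element names are invented for illustration; the real starter kit organizes its shared styles its own way):

    // shared-styles.js -- hypothetical module exporting a <style> block as a lit-html template
    import { html } from 'lit-element';
    export const SharedStyles = html`
      <style>
        h1 { color: hotpink; }  /* scoped to the element's shadow root */
      </style>
    `;

    // my-view.js
    import { LitElement, html } from 'lit-element';
    import { SharedStyles } from './shared-styles.js';

    class MyView extends LitElement {
      render() {
        // the squiggles drop the shared <style> into this element's shadow root
        return html`
          ${SharedStyles}
          <h1>Cheeses of San Francisco</h1>
        `;
      }
    }
    customElements.define('my-view', MyView);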
We have that view now, but this is not how you build an app. You need to glue the views together and
put the logic that responds to route changes and stuff like
that. When my team was building apps, because that’s what my
team does, we build apps. We were writing the same
boilerplate code over and over and over again to glue the views
together. Any time three people have to write the same thing and
they’re like oh my God, I’m doing this again, that’s not a
good place to be in. We created a helper for that
called pwa-helpers, and it's a bunch of tiny snippets and helpers that you can add to your app. You can add them anywhere. They're like croutons, delicious things to sprinkle in your soup or salad to make it better. They live on GitHub and also npm. You install it and it gives you
a call back. That’s all the things you have to do. There is
a network listener that basically gives you a call back
when you go from online to offline so you can conditionally
render your offline UI only when you need it. There’s a bunch of
one-liners like this that you only install with one line.
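As an illustration, a sketch of that offline one-liner (assuming the network helper that ships in the pwa-helpers package; the my-app element name is made up):

    import { installOfflineWatcher } from 'pwa-helpers/network.js';

    // gives you a callback whenever the browser goes online or offline
    installOfflineWatcher((offline) => {
      // flip a property on the app shell so render() shows the offline UI only when needed
      document.querySelector('my-app').offline = offline;
    });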
There’s something that lets you observe media queries, updates
your Twitter cards, lets you connect
to Redux. They are tiny helpers there to
make your apps better. DOOT DOOT DOOT DOOT DOOT pew pew
pew! We’ve assembled our apps. We have these views and elements
and they're glued together with LitElement and pwa-helpers, so now we're off
to make our own views. The next level is figuring out patterns.
DOOT DOOT DOOT pew pew pew! And this is the time where
things get a little hard. We have to level up. In every game
there’s a point where you have to learn a specific pattern. In Mario you figure out that you
have to take the shells and throw them at everybody and
that’s how you win. And in Zelda, you figure out any
time you see a crate, you have to bop it because there’s
definitely something in there. And also a wall that has cracks. In super PW adventures, we learn
about patterns that we can use. Patterns save you time. Games
would suck if every single level was completely different than
any other level and you had to re-learn the entire world every
episode. That’s not how we play games. Clear patterns help team members
figure out an application quickly because they have seen
the codes and patterns before. The base view class that we
have, LIT element is present in every single element, which
means that you can look at every one and be like I know this.
I’ve seen it before. It looks like the other ones. And using
a pattern is scalable. The goal of patterns is to make sure your code isn't a mess and your structure is there. There are patterns that ensure your applications are super performant, even though you have 530 pages and everybody is adding new views because they can.
Redux has an awesome developer community, which means
that every time you have a problem, there’s probably a
solution for it there, and we don’t have to find it. And the
whole point of a kit is that it’s flexible and lets you,
like, plug in all of your solutions. So if you don’t like
Redux, you can take it out and plug in your own state management solution. But just in case you want ours and you don't
know what Redux is, Redux is a small state container. It basically is a glorified key
value store, and that's how it stores
your data. It’s 2K, so it doesn’t add anything to the size
of your application. In particular, Redux is view
agnostic, which means it works the same in any application you
use it. You can use it in a React app or
Vue app. Redux works the same way behind the scenes. This makes it really widely used, because it's
reassuring that all of us building applications in the
frameworks out there have the same problems all the time, so
the solutions are really clear and well documented. And in particular, Redux lets
you time travel. And if you don’t want to take whatever tool lets you time travel on
principle, then I don’t think we can ever be friends. This
debugging technique where you have your application and you
install your developer tool, and this is the one we use and it’s
documented on the wiki. As you’re navigating, it records
all of the interactions and states,
which means you can also rewind time. If you find a problem in
your app, it can rewind time to right before it happened and
check what the state was and what happened to it. This is
awesome, because when you have bugs, you don’t have to restart
the dev server, do all the clicks — oh no. Start over.
And so on. Time travel is amazing for debugging. The
premise of Redux is very simple. You have your UI, and any data
that needs to flow through that UI lives in a store, and that
store is the guardian of that state and no one else can touch
it. If the UI needs bits of that
state, it can ask for them, but it gets an immutable object.
It’s not the data that actually lives in the store, just the immutable
version of it. But if you just had this, you
would have a lot of code in the store.
What do I say? ABC, always be componentizing. So Redux has
the idea of reducers, which are smaller functions that update
the data for you and put it in the store. And these functions
look like this. They deal with a small chunk of
the data, take the previous value, do something to it, and
return the next value. This is a counter that updates a value, whether it gets an increment or a decrement action. That's all. You can have reducers across your application depending on your data.
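A minimal sketch of such a reducer (plain Redux, nothing starter-kit-specific; the action names are just examples):

    // counter.js -- a reducer that owns one small chunk of state
    const counter = (state = 0, action) => {
      switch (action.type) {
        case 'INCREMENT':
          return state + 1;  // take the previous value, return the next value
        case 'DECREMENT':
          return state - 1;
        default:
          return state;      // anything it doesn't understand passes through unchanged
      }
    };
    export default counter;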
And then the UI needs to talk to the reducer, so it dispatches actions. This is what I pretend it looks
like, not Redux. If you think about your UI and
your reducers, the actions that you take on the view layer are
not necessarily the same actions that happen at the store. If
you think about logging out of an application, you will press a
button that says log out. The UI only knows one action, which
is log-out. What happens behind the scenes
is complicated. So the dispatch function is
there to convert your UI actions into what I call data actions.
I don't know, that's just what I call them. The top part of this is classic Redux. There is nothing that we have invented here.
This is Redux straight out of the box. You can go to the
Redux site and read all about it. The bottom part is what is
interesting to us. How you connect to your web components
is kind of interesting and not a lot of people have been doing
it. Let’s talk about that one. You have your element that we’ve
had before. We’ve seen this. This is a lit element, and we
connect it with a mix in, another one of
the little helpers. And it basically, when the element
attaches, it subscribes to the store and when it detaches, it
unsubscribes from the store. And then you get this call back
called state change. You could write all of this by
hand, but why write it yourself when you don’t have to? You can
just use a helper. What happens is you update your properties, and this is a
pattern that we have seen before. Our element has a
property. We take that value and we put it in our property. And
of course, every time we update a property, rendering gets
called. So we will re-render the value of this property.
Pretty cool. Works pretty well. But of course I need to update
the value in the store somehow, because that’s the whole point
of something that I clicked. So we do that with actions. Again, when you click a button, it dispatches an action — this is where lit-html and LitElement are powerful. We're not worried that if we get too many updates to the store, or we update the value too often, it will be too
slow, because everything else does not need to be affected by
the re-rendering. That’s Redux, and it scales up really well,
and a lot of people are using it in their applications.
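Putting the whole loop together, a hedged sketch (assuming the connect mixin from pwa-helpers and its stateChanged callback, which is what the current helper uses; the store shape and element name here are made up for illustration):

    import { LitElement, html } from 'lit-element';
    import { connect } from 'pwa-helpers/connect-mixin.js';
    import { store } from './store.js';  // your Redux store, wherever you create it

    class CounterView extends connect(store)(LitElement) {
      static get properties() {
        return { count: { type: Number } };
      }

      // subscribed on attach, unsubscribed on detach; the mixin handles that
      stateChanged(state) {
        this.count = state.counter;  // copy the bit of state this view cares about into a property
      }

      render() {
        // clicking dispatches an action; the store updates, stateChanged fires, only this view re-renders
        return html`
          <p>Count: ${this.count}</p>
          <button @click="${() => store.dispatch({ type: 'INCREMENT' })}">+1</button>
        `;
      }
    }
    customElements.define('counter-view', CounterView);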
So the next one is performance. For performance, we have a
pattern called PRPL. It stands for pushing your critical resources for the initial route, rendering the route, pre-caching the remaining routes, and lazy loading and creating them if needed.
Purple. The happy thing is you, your browser, and you’re
requesting a view from the server, and the server looks
like a stack of pancakes because if anybody has a better icon, I
will take it. Pancakes are delicious, but I don’t think
that’s what a server looks like. So it will give you view one
back. But it's also going to figure out that you need other resources. (I didn't update the slide in time.)
So it’s going to push to you other resources that it knows it
needs. It does this on the push manifest. It will look at me
and be like I need this view. And you will ask for the other
two JavaScript files, and it will give them to you. So you
don’t have to do a round trip to the pancake stack when you need
them. The browser is happy and it will render that view and
install a service worker. The service worker is going to cache
all of the views that you might need later just in case you do.
So that when you do actually request one of these views,
they’re already locally in the service worker. You don’t have
to go back to the stack of pancakes, which is always slow because you need to get the
Nutella and maple syrup. So then in your code you get to
lazy load that code. Don’t do things unless your user asks for
them. Always lazy load everything, always lazy load it
from a cache is even better. In your code, that looks like a
dynamic import. That’s how we dynamically import
these modules. So this is the action creator for a view, and a view is what the
router calls back and it will be like oh, you need this new piece
of code? I’m going to dynamically import it. Because the modules are loaded
magically, it will get it from your local module cache. That’s
awesome. That view might have reducers in it, because it might
touch a piece of data that you haven’t seen before. We’ve
added a way to load the reducers lazily. This is not standard Redux; it comes from one of the helpers. You can add magic to your Redux store so you can lazily add reducers, and thus the PRPL pattern is complete.
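A rough sketch of those two pieces together (assuming the lazyReducerEnhancer helper from pwa-helpers; the file, reducer, and action names are invented for the example):

    // store.js -- created with the lazy-reducer enhancer so reducers can be added later
    import { createStore, compose, combineReducers } from 'redux';
    import { lazyReducerEnhancer } from 'pwa-helpers/lazy-reducer-enhancer.js';

    export const store = createStore(
      (state) => state,                               // start with no reducers at all
      compose(lazyReducerEnhancer(combineReducers))
    );

    // somewhere in your routing code: lazy-load a view and its reducer on demand
    export const navigateToShop = async () => {
      const module = await import('./my-shop-view.js');  // dynamic import; cached after the first load
      store.addReducers({ shop: module.shopReducer });    // lazily register the reducer that view needs
      store.dispatch({ type: 'UPDATE_PAGE', page: 'shop' });
    };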
That's how we deal with application performance at scale. If they work for two views, they
work for all 530 views because you
reapply the same pattern over and over and over again. All of your views are lazy
loaded, so it doesn’t matter. These are only two that we have
in PW starter kit. There’s more of them. And you can add more
patterns and plug and play them as your application needs them
and scales up. DOOT DOOT DOOT DOOT DOOT. We
save time. I don’t know what my winning sound is, and I was
thinking about it, and I don’t think I’m allowed to play one.
Every time we have a problem, we have a solution for most of
them, so that’s awesome. We’re so close to getting to the
castle. DOOT DOOT DOOT DOOT DOOT DOOT pew pew pew. The next part is learning the
combo moves. Most games you have many bosses. You have
puzzles that are just a little bit harder. In Zelda you had the water
temple, which was all of the things that you had seen before, but harder and with new things. And the same thing happens when you
build an app. The things that you know from PW starter kit are
that we have a base class, LitElement. We have a responsive UI, because it comes with app elements that build a responsive UI for you. We have pwa-helpers that glue things together and make our
application easier to use. We have Redux and PRPL, but
we’re going to figure out that not all apps are the same. At
some point you’re going to have a problem that the PW starter
kit template is not going to solve. I promise you this one.
So it’s important for a kit to give you the flexibility to
deviate from these decisions. Even though we’re comfortable
with our decisions, you may not necessarily agree with them. So
we have other templates that you can start with. Templates like
the Redux one is one you have seen, but there’s a template
that looks identical and has the same amount of classes, but
doesn’t use Redux. It just uses unidirectional data flow. Maybe
that’s what you have learned and you’re comfortable with. We
also have a template that looks slightly different: rather than converting the links from a line into a responsive UI, it has a persistent drawer that
stays there when your screen is wide. We have a template that
has no fancy UI whatsoever, because you may already have an idea of what your UI
looks like and it’s probably not pink underlines. So you can
start from that one. These are all documented in the wiki, and
they're in great detail and you can pick and choose from them. There's an advanced recipes page. We have code in the wiki that you can pick and choose from. Things like what do I do about SEO and how do I theme my
application? But that’s still not enough. So, our team has been building applications with PWA starter
kit, and we built three new ones. The reason we built these
is we could have added all of these features
to PWA starter kit, but it would have
been horrifyingly full of features. I recently started playing Skyrim. You can't hoard all of the kettles. Shop is an application you may have seen before. It's your standard e-commerce site where you can buy things, add
things to a cart, I will do some crypto
mining and it will be very cool. We have these apps that we have
been building. We have an app that lets you read Hacker News. We let you search the Google Books APIs, and we have one that will let you teach
yourself any language that you want. We think of these apps as
shells of armor that you can pick and
choose features from. If you played Zelda, you have
layers of armor. You don’t use the armors for
swimming when you’re running around. You pick the one
feature that you want and you add it to your application. The flash cards app teaches you
about theming. It looks completely different. It has speech synthesis, and it
uses local storage to save your local state. And that’s because
it’s a game. You have to start from where you left off every
time you refresh it. The books app teaches you about
authentication using the Google sign-in API. It uses speech
recognition, because you can read the words out loud
into the books app. It has fancy loading UI. These are the cards where we
pretend the content is there so it gives you an indicator of what the layout
might look like. You can get a giant piece of
JSON. It uses favorites and IndexedDB
to save them. Any of the features — if these
are any of the features that you need, you can look at these apps, you can
look at PWA starter kit, and you will feel right at home. They have the
same structure and naming scheme and patterns across all of the
apps. And they’re all documented here, and there’s links to all of the
repos and demos. They’re all open-sourced on the
wiki, and you can use them whenever
you need to. DOOT DOOT DOOT DOOT DOOT pew pew pew. We got a
power up. The only thing that’s left is the final level. DOOT DOOT DOOT boo boo boo.
We have to actually give our app to people. It’s great that
you built an awesome app that runs locally. We
started with a template. That was our first level. We
customize it. We understood how it worked. We used things like Redux and
PRPL and lazy loading, but we have got to ship it. That is our final level. We have got to defeat the final boss. That is
deploying. In order for you to do this, you need confidence,
and you need confidence the first time you deploy it and
every time you redeploy it over and over again. If my app works
today and I do an update and it doesn't work anymore, I promise you your users will be
pretty horrified. And testing is the only thing that will give
you the confidence that every one of your redeploys are going
to be successful. Testing is like the shields to protect you
from the monsters in the game, and the monsters are the rage of your
users when you break their calendar. We tell you where to put the
tests. We have unit tests and integration tests, and they both
serve very different purposes. Unit tests make sure that
functional elements in your application stay correct between
commits. A button is clickable and brings up this dialogue all
the time on all of the browsers. Integration tests, on the other
hand, makes sure that from the beginning to the end, your
application looks the same as it used to be. You don’t
necessarily click all of the buttons. It’s a double sanity check that
everything looks okay. For unit testing, we use Mocha and web-component-tester, and you can run it locally and
on Travis, but in particular, we’ve added accessibility
testing to unit testing. You do this via AXE-CORE, an awesome library
written by the Deque team. The element you test can be a button or an entire view. In this case, I have a button that doesn't have a title and I get all of its violations. Accessibility is a feature that you don't really want to break.
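A hedged sketch of what such a unit test might look like (assuming a Mocha test running in the browser with chai's expect assertions and axe-core available as the global axe; the element name is made up):

    // runs in the browser, e.g. under web-component-tester
    it('login button has no accessibility violations', async () => {
      const el = document.createElement('my-login-button');  // hypothetical element under test
      document.body.appendChild(el);

      const results = await axe.run(el);             // axe-core audits just this element and its subtree
      expect(results.violations).to.deep.equal([]);  // a missing accessible name would show up here

      document.body.removeChild(el);
    });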
For integration testing, we use Puppeteer. It lets you spin up a headless version of Chrome, and
we use it for screen shot testing. We look at what all of
the pages look like and we compare it to what we think the
pages should look like correctly, and if they do, you’re A-okay.
And of course they run on Travis. You can use circle CI
or whatever you want, but you can add your tests
to continuous integration, so you’re always confident that
none of your commits broke your application.
Now that this part was easy and we’re confident and fired up to
fight the boss, we have to actually do the fighting. You
have to actually deploy it. PW starter kit took care of
this, too. We have npm run build which builds and minifies, and then you have npm run serve which lets you test
locally. And npm run build looks simple,
but it does a lot of things behind the scenes. It minifies, builds the service workers so you can pre-cache,
and then there’s two kinds of builds you can generate. You
can build static builds so you can upload them to things like
Firebase hosting, or you can run dynamic
builds with prpl-server. It’s a production optimized server, a
node server that you can install on your hosting if you have
access to it. It serves the optimal bundles to
the browsers that request them. So that means that you can have
multiple bundles. This is the bundle for IE11.
And this is the bundle for browsers that can do dynamic imports, and this is
for things like Firefox, and prpl
server will look at the browser and be like oh, I see, these are the things you
need. Here is the bundle that works for you. The reason why this is awesome
is it means you don’t have to deploy
any ES5 and serve it to Chrome. We can just give the correct
bundle to every browser as they need it. This is awesome and powerful and
improves your performance. And once you do this, once you
run npm run build, you will notice amazing lighthouse
scores, because all of the heavy lifting and performance work was already taken care of by the
starter kit patterns. There was really good armor, so
the deploying monster couldn't attack you. And you can improve from 97 to 100 by
adding more lazy loading. But if you put it into a template,
it will be too heavy for everyday use. We beat our final
boss. DOOT DOOT DOOT pew pew pew whoo
weeeee. We managed to deploy an entire
app from start to finish in 38 minutes. We have finished our
thing. We can go to the final castle where we’re showered with
candy and cake and sugar. This has been my game. I hope you
liked it. I hope you try PWA starter kit
and I hope you tell us when you build something with us. Thank
you so much for watching this. [ Applause ] Realtime captions will appear on
these screens… >>Hello, everyone, I lead
developer relations for the Angular team. I'm excited to be back here at Google I/O. I'm excited to talk about some of the new things in Angular. We're going to talk about the
momentum we have been building over the last year, what it’s like to be
an angular developer today, and we will give you a sneak peek
into some of the future things we’re working on to make angular
better. Let’s get started talking about momentum. The angular team
talks about three values in particular. We talk about apps
that users love to use. At the end of the day that’s why
we’re building software. We’re trying to build great user
experiences for every person that comes to one of our
applications trying to do something. We also care about
developers and making sure that we have apps that developers
love to build. Developer experience is so critical when
it comes to productivity and when it comes to maximizing the
outcomes that developers are able to deliver. Lastly, but
not least, we talk about a community where everyone feels
welcome. This is really critical to the Angular team,
because over the last few years, we’ve only been successful
because we have a huge community and
eco-system and it’s been building around
angular, not just the efforts that we’re creating. So let’s
reflect a little bit on the growth of that community over
the last few months. If you look back at the end of last
year, you can see some really interesting numbers. We estimate that 90% of Angular applications are built behind the firewall, so the best stand-in for usage is docs traffic. We're also very excited that, starting this year, we just recently hit 1.25 million 30-day actives on our
doc site. This is a huge amount of growth, and we’re very, very
thankful for the new developers that are picking up Angular and
getting started with it. There’s a lot of great
applications built with Angular. And I want to reflect on a few
of them. GrubHub is an awesome app about
driving great user experiences. They use Angular today, and they
are looking at server side rendering
and progressive web applications. If you look at the Allianz fund universe tool, Angular is a great choice for things like this, where having a good, rich, maintainable user experience is critical. When you visit a Forbes article page, you're actually visiting an Angular application. Not only do they do this on the client
side of the application, but they also do server side
rendering. One of the reasons we love to highlight Forbes is
they participated and collaborated with us to
build out angular universal, which is our
answer to server side rendering. We also can’t forget about
Google. There are over 600 applications
and projects within Google that use Angular, among them applications like Google Shopping Express and more. One of the reasons we
talk about Google using Angular is it’s part of our validation
strategy. While Angular is developed
completely open source, every line of code goes through GitHub
and gets merged into Angular, we also do a level of validation.
So every commit before it gets merged into master is actually
validated against all 600 plus applications within Google to
ensure that it’s going to continue working. And actually
on the Angular team, we have an interesting challenge.
Every time we want to make a change to Angular, we’re
responsible if we break any of those applications. So we’re
very cognizant of when we need to make a change to
Angular, the impact that has on other developers and the work that it is going to
cause those developers to stay up to date. We're thankful for the huge
amount of meet-ups and groups sharing passion and knowledge
about Angular. There are over 700 groups on meetup.com, and that is one of the ways to get connected. No matter where you
are in the world, you can probably take advantage
of an Angular community nearby. We also look to a lot of
different conferences that host the Angular team. One of the things that not a lot of people know is that those conferences are not put on by the team; they are put on by awesome community members who invite us to participate. One of the conferences we liked
was NG Atlanta. One of the things they focused on as part
of their speaker line-up was including underrepresented
groups and speakers. Their focus on making sure they had the top-notch talent that
reflected the communities that they serve was really amazing,
and we would love for this to be a model that other communities
and organizations adopt. ngGirls is an organization dedicated to training developers and making it easier to get an entrance into this amazing technology industry using technologies like Angular. They are having a meet-up this Friday. Unfortunately it's full. If you're interested in hosting or co-organizing one of these events, or in getting involved, go to ng-girls.org. This is a great cause
and we love to support it. We also talk about how the
Angular team has deep partnerships with
a lot of communities across the eco-system. On one end of the
world, we’ve got things like webpack. We worked
to make sure we were getting the capabilities we need and making
sure that Angular was able to take
advantage of the latest webpack. We have dependencies on projects
like RxJS. We really want that to stay as
an independent project and support
it and collaborate with it to make sure we’re helping to push
the web forward. Has anyone seen StackBlitz? If you haven't seen it, it's a web-based IDE that handles everything from npm downloads onward. One of the greatest features is that you can point it at an existing GitHub repository that has Angular CLI dependencies.
We’re working closely with teams like NativeScript, building
applications using JavaScript and TypeScript, and rendering
out to native widgets on iOS and Android. We’re working closely with the
NativeScript team. We see a future where the native script
tooling can be embedded directly into our CLI to make a single
experience that allows developers to take
their applications further. So we talked about the
momentum of angular, and now I want to talk about some of the
reasons why developers really like the things that we’re
doing. And one of the things that we hear most is that they
really like our opinionated integrated set of tools. I’m
going to go through a few of these tools that developers
really like using. The first one that most people are going to touch is the Angular
CLI. It's built to take care of common tasks, taking the things you want to do and allowing you to do them faster. There are commands like ng new, which scaffolds a new application including things like tests, ensuring that your application, as you continue to scale it out, is testable and maintainable. We have commands like ng
generate. It takes your application and we
can add things to it like a
component. We have ng serve where we will give you a live dev server built on
top of webpack that will allow you to see your changes as soon
as you hit file save. And we have ng build which takes our
knowledge, takes your application and builds it in a
way that is redistributable and usable. One of the things under
the hood of the CLI that we’re seeing a huge amount of adoption
of is called schematics. They are what we use under the hood
to make the changes when you use
something like ng new or ng generate. It's really powerful because it can basically run any code that you want. When I run ng generate component my-component, we're using schematics under the hood to do that. We're also updating some of the files in your application. So when I generate a new component, I'm going to get all the files — the CSS, the HTML, the TypeScript for that component — but I will also get my module updated so there are correct references in the application. We don't do this with regular expressions, but with AST parsing: we look at the syntax tree of your application and make safe changes to it.
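As a rough illustration of the kind of file ng generate component produces — a minimal sketch, where the names and selector are hypothetical and the real output also includes a template, styles, and a spec file:

```ts
// my-component.component.ts — sketch of a freshly generated component
import { Component } from '@angular/core';

@Component({
  selector: 'app-my-component',
  templateUrl: './my-component.component.html',
  styleUrls: ['./my-component.component.css'],
})
export class MyComponentComponent {}
```

The schematic also edits the nearest NgModule so the new component is declared there, which is the module update described above.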
We do the same thing with ng new. And there are other examples of schematics, too. Using a schematic, you can create your own ng generate command, create a local API and incorporate a service. We think schematics is an open platform that anyone can use, and we will take it much further in v6. Angular helps developers with a
lot of the common tasks that they have. So we have a router,
so solving the common problem of a state that
exists in a URL bar of your application, understanding what
route the user wants, what the intent is, and mapping that into
the local application state, showing the right components and
views. We have an HTTP client that is able to give you back JSON — typed JSON
if that’s what you’re asking for. It’s easy to mock out,
easy to include in this integrated environment.
Forms are really important part of building applications.
Because every time we build an application, it’s a conversation
with a user. We want to be collecting information from the
user, but at the same time, we want to give feedback about the
validity state of an application, and
Angular Forms helps with that. Animations are key parts of
building great applications. A lot of people think about the
style side and brand side of things when they think about
animations. When I think about animations, I
really think about building better user experiences by
giving the user subtle hints. If you animate the rough
transitions as the user is navigating across
a hierarchy, you’re going to give them a more intuitive
understanding of what’s going on in your application and how the
application is reacting to every change that they make. This is
helpful and drives better user experiences.
We also care about internationalization. A lot of applications are
building for a global world now, and so we
have i18n support that you can add, giving you industry-standard tooling: you can translate the extracted messages yourself or pass them to a translation house, and then build them back into your Angular application at build time. We also
want to help you write better code. For a lot of developers,
this means writing good tests. We focused a lot on tools like
protractor and karma making sure that it’s easy to do end-to-end
testing across a number of different browsers, and being able to do unit
testing .
Another piece of helping developers write code is our
language service. It’s the thing that helps the IDE
actually understand what’s going on in your application, and the
most noticeable part of this is in your templates. We can
actually pull in the variables and the properties from your
component so if you make a typo or a
mistake, we can give you those red squiggly
lines that tell you to go fix that mistake. One of the additional tools we
have is Angular universal. Server
side rendering is really important if you’re delivering
some of your content to machines. That could be a web crawler for
search engines, or that could be a social crawler for something
like a social share where I want to have a share button on my
application. A lot of those machines are not capable of running JavaScript, so server-side rendering gives you not only machine readability,
but benefits in terms of the perceived load of your
application where it looks and feels more interactive faster
than it is when the application is boot strapping in the
background. This can really help things like conversions.
The last tool I want to talk about is Angular Material. You have heard a lot at Google I/O this year about Material Design, and Angular Material is designed to take the Material Design aesthetic and manifest it as a set of Angular components that are easy to use. While we were building out Angular Material, we started
seeing the same sorts of patterns and problems happen
over and over where you had to solve
things like bidirectional input or
accessibility or creating overlays. As we talked to other component library authors, we baked all of those capabilities into our Component Dev Kit so you can apply them one by one. If anyone is building a component library out there — which basically every company does — we recommend building it on the CDK. If you look across this huge set
of tools, you will see that we’re definitely opinionated.
We want to be a really good default. But at the same time,
we’re not taking away your freedom. For any of these
tools, if you want to use another third-party library, you
absolutely can do that. So, in the last week, we
actually launched 6.0.0 of Angular. We’re trying to make
updates really easy. Some day we want this version to
disappear so that we just talk about new features and everyone
automatically gets them. One of the ways that we see that vision
coming true is by making the update process easier. As part
of this latest version, we have focused less on the
framework side of things and more on the tooling side of
things, focusing on the end-to-end user experience. So
we want to give developers a balance between stability where
you can continue writing angular applications day after day, but
we want to bring to you all of the innovation that comes from the vibrant and exciting JavaScript ecosystem. The new ng update command will also keep your project
up-to-date. We’re already using this in
RxJS, Angular Material, and more. If I run a command like ng update @angular/core, we're going to do a few things. If you look in the package.json, you will see an Angular packageGroup. When you update one of
the packages within Angular, you’re going to get all of the
associated packages that should be locked in place. They’re all
going to be updated together. So this is something that we
would love the package managers to do and we use the package
managers under the hood to manage these things. But this
is not where we want it to be yet, so we have taken care of
this problem for developers. We’re going to give you all of
the latest framework packages as well as all of the dependencies
that are required by that. When you install the update using
this command, you're going to get the latest version of zone.js and the correct versions of TypeScript as well as RxJS. The other thing
that will happen is for each of the packages, we will look in the packages and see if there
are schematics that we can run to update and migrate to the
latest version. We saw this with RxJS: we wouldn't just update RxJS in your package.json, we would also install rxjs-compat to ensure it was a smooth process and give developers more time to update
their applications and use some of the automated tooling that
we're working on. Another CLI command is ng add. We call ng add with a package name; it will download the package from npm and then look for and run that package's ng-add schematic. This will work in a brand-new scaffolded application that I create with ng new, but also within existing applications. It will also set up an
application manifest for you. Another thing that we’ve been
asked a lot about recently, one of our
Angular Labs projects is Angular Elements. We actually landed the first version of Angular Elements. It allows you to take Angular components and ship them and bootstrap them using the custom elements support in the browser. If you look on angular.io, we are using Angular Elements today. Historically, if you wanted to embed rich
functionality in some of your content, you would have to boot
strap that functionality into your app. With Angular elements now, we
can ship code example as a custom element
in our application so that instead of waiting for Angular
or the developer to boot strap that component, we’re actually
relying on the browser. So the moment that we load this content into the DOM, we’re
going see this rich experience for code examples where we get
code coloring and the ability to copy and paste that component. It’s really, really easy to do
as a developer. If I take a component that I built in
angular, pass it to the injector, that is a custom
element that is ready to be defined by the browser’s custom
elements. These are easy to get started with. They’re available
as part of 6. 0 within Angular applications
where you have an injector. We’re looking at a world where
it’s easier to distribute these things. You can distribute it to other developers who maybe don’t want
to apply Angular to their project.
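A minimal sketch of what that looks like with the @angular/elements API — the component and tag names here are hypothetical:

```ts
import { Injector, NgModule } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { CodeExampleComponent } from './code-example.component';

@NgModule({
  declarations: [CodeExampleComponent],
  // In Angular 6, dynamically created components need to be entry components.
  entryComponents: [CodeExampleComponent],
})
export class ElementsModule {
  constructor(injector: Injector) {
    // Turn the Angular component into a custom element class and register it
    // with the browser's custom element registry.
    const element = createCustomElement(CodeExampleComponent, { injector });
    customElements.define('code-example', element);
  }
}
```

Once defined, the tag can be dropped into any HTML, and the browser — rather than Angular — bootstraps the component when it hits the DOM.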
We've also included updates to RxJS. I talked about how we're automatically applying a compatibility layer. RxJS has been updated so that it's faster and more tree-shakable, and we have updated webpack so it's more tree-shakable as well. In the CLI, you will see the configuration format change a little bit: now we have workspaces, projects, targets, and configs. We're able to represent more, but by default it will all look the same. This work is built on top of the work in ng-packagr. So, we talked a little about the momentum of Angular and the state of Angular today and a lot of the reasons why people are using it. I want to invite to the stage
Kara, one of the engineers leading the future of Angular. [ Applause ]
>>KARA ERICKSON: Hi. I’m one of the engineers working on
this new project. I’m here to talk about the
future of Angular, so this project is
still in its early stages of development. But we were really
excited about everything that we’re seeing, so we wanted to
give you a sneak preview. So what is Ivy? We’re rewriting the code
that translates your Angular templates into whatever you see
rendered in the browser. So why are we taking the time to do
this? So this was a project that was conceived specifically to
tackle some of the problems that we know that
Angular developers face. We have been hearing the same
things over and over from our developer community, and we’ve been
hearing that developers want smaller bundle sizes so they’re only paying for the
Angular code that they’re actually using. They want great
start-up performance so their apps will load quickly,
even on slow mobile connections. And they want apps that are
fundamentally simple to understand and debug, even as
their applications grow larger over time. That’s what project
Ivy is all about: we want to make your apps smaller, faster, and simpler, all while requiring no
upgrade effort from you. It was important to us to make this
change without requiring changes from existing applications,
because we want everyone to be able to leverage the benefits
without having to do a bunch of things.
This might sound ambitious, and it is. But we have actually
done this before. In Angular 4, we completely rewrote the rendering code under the hood and did so with zero breaking changes. The way we were able to achieve this is through an extensive vetting
process. We will use the same vetting process when we upgrade
everyone to ivy. So we have over 600 projects
inside Google that are already using Angular. So we will first turn on the Ivy flag for all of these 600-plus projects to ensure that we don't see any breaking changes. These are real-world applications like Firebase and Google Analytics — apps that cannot break and that have a lot of real-world use cases. Once we are satisfied, we can make it the default renderer for everyone. So, as I have hammered in by now, all of these changes are to Angular internals. As an Angular developer, you
actually don’t need to know how everything works under the hood,
but I will take the time to explain the new design,
because it’s really cool. And also because I really want you
to understand why you’re going to
get better results with ivy than you would with angular today.
So when we were redesigning the rendering pipeline, we knew
there were a few characteristics that we wanted to have. We
wanted to design it to be tree shakeable, so you’re only paying
for the Angular code that you’re actually using. And we wanted
it to be local in its effects so that as a developer as you are
developing your apps, you only have to recompile the components
you’re actually changing. Let’s start with tree shaking. So what do I mean when I say
designed with tree shaking in mind? It’s essentially a build
optimization step that ensures that code that
you’re not using doesn’t end up in the final bundle that ships
to the browser. There are a lot of tools for this. There's Rollup, which ensures that code you're not using is never added to your bundle. There are other tools like Uglify that try to intelligently delete dead code from it. Their efficacy depends on how
you’re writing your code. This is because tree shaking tools
typically use static analysis of references to figure out what
code you need and you don’t need in your
bundles. And static analysis by definition is something that
tries to figure out what’s going on in a piece of code without
actually running the code. So there’s some limitations to
that. And sometimes it will have to kind of assume the worst
case in your code to ensure the resulting program is
correct. So what does it mean to write
code that’s friendly to tree-shakers? I’ll give you a few examples.
Let’s say you have this code in your application and you’re
importing a few functions from a third party
library. You’re not calling an unused
function anywhere, so tree shaking tools would do a pretty
good job of analyzing this. It would see that some function
is being referenced in main and would stay in the bundle. And
unused function isn’t referenced anywhere, so the tooling would
know it could safely remove it from the bundle.
But let’s take a slightly more complicated case. Let’s
say you wanted unused function to be called, but only if some
arbitrary conditional check passed. Tree shaking tools
would have a little bit more of a problem with this set-up. And
that’s because, remember, they’re relying on static
analysis of references. Here they would see that unused
function is being referenced in main, but they don’t necessarily
know whether that code path is going to be
used at run time. To be conservative, it is going to stick around in your bundle. These are the types of patterns that we want to avoid: code paths sitting behind conditionals ending up in your bundle even when you're not using them. You might wonder how you could ever avoid conditionals — they're part of programming — and you would be right. But we can restructure our code to make sure they're not quite as necessary.
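A minimal sketch of the two patterns just described — 'third-party-library' and the function names echo the hypothetical someFunction/unusedFunction example from the slide:

```ts
import { someFunction, unusedFunction } from 'third-party-library';

export function main(debug: boolean): void {
  // someFunction is referenced here, so it stays in the bundle. If
  // unusedFunction were never referenced anywhere, tree shakers could
  // drop it from the bundle entirely.
  someFunction();

  // Once a call sits behind a conditional, the reference alone keeps
  // unusedFunction in the bundle — even if `debug` is always false at
  // run time — because static analysis has to assume the worst case.
  if (debug) {
    unusedFunction();
  }
}
```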
I'm going to show you how our rendering pipeline works today so you have context for the change that we've made. Say you've written the standard hello world inside of a div. The Angular compiler would parse that and generate JavaScript that represents the structure of your template. You can see an example of the generated code here: we have an element def, and a text def that creates another data structure based on your text node. This is a parsed version of your template.
At run time, this data structure is then passed into the Angular
interpreter. And the Angular interpreter
tries to figure out which operations it needs to run to generate the correct output, and hopefully it renders the correct thing. But let's take a step back and
look at this angular interpreter step. You might have noticed
that this pattern looks slightly familiar. You have all of the
conditional checks. And the problem here is that all angular
templates are going through the same shared code path. So the
Angular compiler doesn’t know ahead of time what kinds of
templates it’s going to see. So it has to check, you know,
based on the data structure which operations it needs to
run. So some of these conditional checks will be false
at run time, but it doesn’t matter. Because tree shaking
tools will see that they are referenced in this function, and
all of the symbols will stay in your bundle. This was the problem we were trying to solve. Instead of parsing the template, creating the data structure and passing that structure to an interpreter that needed to know how to do everything, why don't we just skip a step and generate, directly, the instructions that come naturally from a given template? That way we don't need an interpreter at all, with all the checks in it. We still pass through the Angular compiler, but instead of generating a set of data structures, we generate instructions: an element instruction that creates the div, and a text instruction that generates the text node. If you look at the implementation of elementStart, it's doing the work of creating the DOM — it's creating a div. We don't have those conditionals anymore.
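As a rough sketch of the idea — instruction names are simplified here and the real generated output has different signatures and extra bookkeeping:

```ts
// Stubs standing in for Ivy's rendering instructions (simplified signatures).
declare function elementStart(index: number, tag: string): void;
declare function elementEnd(): void;
declare function text(index: number, value: string): void;

// For the template `<div>Hello World</div>`, Ivy emits instructions directly.
// Only the instructions this template actually needs are referenced, so
// everything else can be tree-shaken away.
function helloWorldTemplate(): void {
  elementStart(0, 'div'); // creates the <div> DOM node
  text(1, 'Hello World'); // creates the text node inside it
  elementEnd();           // closes the element
}
```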
Code for features you don't use won't be generated from the template that you've given the compiler, so there won't be references to it, and if there are no references, then tree-shaking tools can effectively remove that code from your bundle.
If you’re not using queries or life cycle hooks or whatever, you
don’t have to pay for that code in your bundle. This also has
implications for code splitting. So if you haven’t heard of code
splitting, it’s a process by which you split your application
code up into smaller chunks that you can lazy-load on demand, typically by route. Like tree-shaking tools, code
splitting tools figure out which code ends up in which chunk. So
it will have the same set of problems. So with Angular
today, you would end up with most of the Angular
features inside every route. With this restructuring, it’s
much easier to split the angular code apart, so each route will only be
loading the code that you’re using in that route. So a good
way to think about it might be that tree shaking tools will
remove code that you’re not using anywhere in your application and code
splitting tools will remove code that you’re not using right now in this route.
Obviously this is great for large applications. You’re
paying for fewer features per chunk. We also wanted to make Angular
developers more efficient. Our strategy was by adhering to a principle that we like to call
locality . So if you know that you can
generate the correct template instructions for each component using only its
own information as an input, then you know you can compile
each component completely independently of other
components. This has great implications for the speed of
your build process. If you take an example one more time. Let’s
say you have an app and you have a header and a footer. So you run that through the
angular compiler, and it would generate some code for each of your
components, right? So later, if you made a change
in the header, since you know that each component is compiled only using
the information for that component, you know the app
cannot be affected by any change that you made in the header, so you can
confidently regenerate the header by itself. It has the potential to increase
the incrementality of your builds. Your build time should
be proportional to the size of the change that you’re making,
not to the size of your whole application. If you have 1,000
TypeScript files, you should only be regenerating the file
that you changed, not your entire project. Structuring it
this way leads to faster rebuilds.
So this is not actually how Angular is configured today. We
have a different strategy. So once again, I’ll explain how Angular works today for context
for how we’re making the change. So again we’ll go back to this
example where we have an app with a header and a footer. And
this is the template, once again. Header and footer. This is the
code that we might generate, a simplified version.
Our strategy today is to try to do as much processing at compile
time as possible so we can avoid doing
the same processing at run time. One of the strategies we used to
speed up the processing of the component is to in line some
information about that component’s dependencies in its
generated code. Here you can see for the app component, we
have some information about the header. First of all, we have
the information that it’s a directive at all, because we’ve
already done the directive matching at compile time. We also have information about the header specifically. This essentially represents a bunch of different flags about the directive and the node that it's on. Because there is a lifecycle hook, a bit is flipped and we get this number. The point I want you to remember is that these are implementation details.
It doesn’t have to look into dependencies at run time and try
to figure out all of its characteristics. But it’s also leaking
implementation details of a component into its parent. What
that means functionally is once again if you make a change to
the header, then we not only have to
recompile the header but also the app component because it has
its own copy of the same information. So with Ivy, this is slightly
different. So as you can see on the right, with the same
template, we just have an element start for the header and
an element start for the footer. We don’t have information about
either of these components. All of their details are completely
encapsulated inside their own generated code. So what this
means is we have the situation that we want: you only need to recompile the header. There are other nice side effects that come out of this. One is that if you can compile all of your components independently, you can ship third-party code that is already precompiled. Once again, third-party
libraries can ship generated code, so they
don’t need to ship any other information to
help us. We're doing a little more at run time, so we could possibly do things like creating directives on the fly, which is pretty cool. So we've talked a lot about the design. You might be wondering: did it
work? So I have some early results and
remember, it’s just early stages. So the first goal is
that we wanted to make your bundle sizes smaller. So our
benchmark was a hello world application. With Angular today, that would
be 36KB compressed. With Ivy, we’re able to get this
down to 2.7Kb. That’s a huge jump.
[ Applause ] We also wanted to make your apps faster, so we ran our hello world through this. And that was 1.5 seconds; current Angular was 4 seconds, and Ivy was just 2.2 seconds. So we're making huge strides — we're already around 45% there — and with your builds being more incremental, you save time in the development cycle as well. Our last goal was to make
angular simpler. If we go back to this slide from before, you
can see there’s a whole lot less noise. You can see the header
and footer. There isn’t, you know, numbers and nodes and
stuff. It's a lot easier to read. This really helps when you're trying to debug apps. I had a quick demo prepared, but we do not have time to show it; we'll be in the web sandbox after this for anyone who wants to see it. We're still working through the roadmap and the compiler. We will give you time to try it out and give us feedback, and once the verification process is done, we can make it the default renderer. We're so happy with the growth
and adoption. Thank you for being a part of our community. V6 is exciting. It just came
out. Hopefully you can give it a try. And we’re so excited
about Ivy. So hopefully you are too. We
will be in the web sandbox, so please come talk to us. Thank
you very much. [ Applause ] .
>>ERIC BIDELMAN: Are you ready to talk about headless Chrome? You're tired, I'm tired. Hopefully by the end of the presentation, you'll be inspired to use headless Chrome, and we'll talk about Puppeteer. I am a web developer, but basically I'm an engineer that works on the Chrome team and on developer relations. I think it's a really exciting space — the fact that we have headless Chrome now, and Puppeteer. This talk is not about
testing. You should test your apps. You
can use headless Chrome to do smoke tests, UI tests, whatever.
I want to stick to the automation side of things. So this is something that I
realized a couple of weeks ago. Headless Chrome can be a front
end for your web app frontend. Once I
started to bake headless Chrome into my work flow, it makes my
life easier. I can automate things and put headless Chrome on a server. Some cool
and powerful things you can do with headless Chrome. We’re
going to talk about what headless Chrome is, get that
nomenclature out of the way. We’ll introduce puppeteer, and
along the way we’ll see ten
interesting use cases that I want to share with
you and we’ll talk about puppeteer’s use case.
This is something I’m going to refer to a couple of times
throughout today’s presentation. This is the pyramid of
puppeteer. At the bottom is headless Chrome, just the
browser. Normally when you click on Chrome, there’s a
window that launches. The user can input a url, you can
click around, open dev tools, tweak
styles and modify the page as you go. The dev tools has many,
many more features. With headless Chrome there’s none of
that. Chrome is running. You can see it in the task bar there,
but there’s literally no UI. Headless Chrome is Chrome
without Chrome, so to speak. It’s Chrome without UI. So if
there’s nothing to interact with, how is this thing useful
to us? We'll talk about that. If you want to launch Chrome in headless mode, it's a one-line command, but by itself it is not too useful. We need to combine it with something else, which is the remote debugging port flag. Once you combine these, Chrome will open a remote debugging port, and then we can tap into the DevTools using it. That's where things get awesome. What does headless Chrome unlock for us? One of the most
exciting things is the ability to test platform
features like ES6 modules, service workers, and streams. We can finally write apps and test those apps because we have an up-to-date rendering engine, plus things like network throttling, code coverage and more. This article is about a year
old at this point, but it is still
relevant. You can do interesting things with headless
Chrome without ever having to write code. You can print to a PDF and do
some other interesting things. Do check that out if you want to
know more about headless Chrome. Maybe I’ve sold you. It’s a
thing. Headless browsers are things.
What can you actually do with this stuff? Let’s go back to the
pyramid of puppeteer. We’ve got the browser at the lowest level. On top of that is the
Chrome dev tools protocol. There’s a huge layer here that
the Chrome dev tools itself uses to communicate with your page
and change the page, a whole API surface
that we can tap into. These are the yin and yang . Headless Chrome and dev tools
are an awesome duo. The dev tools is
straightforward. There's a lot you can do with it. It's a JSON-based WebSocket API: I open a socket to localhost:9222, which is the remote debugging port, and then you can just do message passing. In this example here, I'm getting the page's title by evaluating the document.title expression, using the DevTools evaluate method. And you can see this traffic.
So DevTools itself uses this protocol. If you open the protocol monitor panel in DevTools, you can see these requests and
responses fly by. So like any time you tweak a style or do
something in the dev tools you’ll see the traffic for it. So you can learn the API as you
see this stuff happen. So back to the pyramid of
puppeteer, all this cool stuff we’re going to tap into, and on top is where
puppeteer comes in. Puppeteer is a library that we launched last year right around
the time that headless Chrome came out. You can get it off of
npm. There were not a lot of good options for working with headless Chrome at that point in time, and we wanted to highlight the DevTools protocol, make sure people know how to use it, and provide a high-level API for some of the powerful things you can do. So we used a lot of modern JavaScript features, because the asynchronous nature of Node talking to Chrome lends itself very nicely to promises. But you can use Node 6. You
don’t have to transpile or anything like that. We wanted to create a zero
configuration set-up for you. When you pull down puppeteer, we
download chromium with puppeteer. It’s hard to find Chrome,
install it and launch it. So we wanted to
make it easy. Just bundle a version of Chrome that’s
guaranteed to work with the version of puppeteer that you
guys install. High level of APIs. We’ll see a bunch of examples of
that. And so that’s why we create puppeteer. So let’s look
at a little bit of code, a little bit of puppeteer code.
One of the most common things people do is take a screen shot
of a web page. In order to do that, we call puppeteer.launch, which returns a promise that resolves to a browser instance we can use to interact with headless Chrome. Then we open a new tab in Chrome — it opens about:blank — and once that promise resolves, we can navigate to the URL that we want to take a screenshot of. That's going to wait for the page's load event to fire, and then we can use Puppeteer's API to take a screenshot. This has a bunch of options: a full-page screenshot, a portion of the page, or a DOM element. It's easy and very high level. You don't need to deal with buffers or responses; you just pass the file name you want to create and you get a PNG file. And lastly, you close the
browser out. It's like four or five lines of code: open a new page, navigate to a URL, wait for its load event, take a screenshot, close the browser. This is what I mean by a high-level API.
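Roughly, that script looks like this — a sketch, where example.com and the output file name are placeholders:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();      // headless by default
  const page = await browser.newPage();          // opens a tab at about:blank
  await page.goto('https://example.com');        // waits for the load event
  await page.screenshot({ path: 'screenshot.png', fullPage: true });
  await browser.close();
})();
```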
There's headless Chrome, and that's what we use by default with puppeteer.launch — it's just normal Chrome. If you include the headless: false flag, this will actually launch Chrome, you will see it, and this is handy if you're debugging scripts and have no idea what's going on. Throw this on and you can see Puppeteer click around and navigate pages; it's cool to see this stuff in realtime. We've got Chrome DevTools
protocol, puppeteer, and of course at the
top is where our automation scripts come in. So we’re
already standing on the shoulders of giants. All the
stuff below us has been iterated on for many years. Let’s see
what you can do with headless Chrome. Ten awesome things you
can do with headless Chrome. The first is kind of neat. You can pre-render JavaScript
applications. If you don’t have a good server side render story,
you can use headless Chrome to do this for you. So I wanted to
actually put my words where my mouth is and build an app and
see if this is a viable solution. So I built this dev web firehose app. It's a client-side app on Firestore and Firebase, with a JSON API so you can query the data. The backend is written in Node; it runs Puppeteer and headless Chrome to get a good first meaningful paint. This is the app. You can dive into the index page, the
main page of the app, and it's a basic client-side app. It renders the posts into a container: it creates a template string and literally sets it as the container's HTML content. So that's basically what this app does. The goal is to take what the page renders when it loads and turn it into a static version. That's essentially what prerendering a JavaScript
app does. That's the goal. We can do this with Puppeteer. The method will take a URL, launch headless Chrome, and navigate to that URL. Then page.content basically gets the new snapshot of the page: it returns the markup in the DOM, and that's the thing that we return from this method — the updated version of the page. We can put this on a web server and use it to render the client-side app. So when somebody visits our app, what headless Chrome is going to do is fetch the same page; it goes off and runs the page through Chrome in the cloud, sees the page as written, renders all of that JavaScript, and the thing we actually return is the final result. In my web server I'm going to reuse that server-side method: any time somebody comes to visit my home page, I just call the server-side render method, load up index.html — the client-side app — let it go through headless Chrome, get run by the browser, and send the final HTML response to the user. That's server-side rendering using headless Chrome.
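A minimal sketch of that flow — the port, paths, and helper name here are assumptions for illustration, not the exact code from the talk:

```ts
import express from 'express';
import puppeteer from 'puppeteer';

async function ssr(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' }); // let the client-side JS run
  const html = await page.content();                   // serialized DOM after rendering
  await browser.close();
  return html;
}

const app = express();
app.get('/', async (req, res) => {
  // Run the client-side index page through headless Chrome
  // and return the final markup to the user.
  const html = await ssr('http://localhost:8080/index.html');
  res.send(html);
});
app.listen(8081);
```

Every route that needs prerendering can call the same ssr() helper.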
You're probably wondering: is this fast, or even a viable solution? I did a little bit of measuring, because I was curious. If you slow your
connection down to be like a mobile device, slow the CPU down
and network, you can see the comparison between the server
side rendered version on the right and client side on the
left. So server-side-rendered version,
immediately you see the page, the mark-up is in the page.
It’s right there. The client side version takes
longer. It’s got to churn through the JavaScript, render
the post, and things are slower. So you see a performance win on
slow mobile devices. And the numbers actually speak for
themselves. You go from 11 seconds down to a 2.3-second first contentful paint. And that's how fast those posts
actually render in the DOM. Not only do we make our app
faster by adding on headless Chrome, but we also make it
available to search crawlers and engines that don’t understand
JavaScript. So we have two benefits.
Now in order to get to those numbers, I did make a few more
optimizations that I want to go through because I think they
highlight puppeteer’s really awesome APIs. You don’t have to
wait for the entire page to load. The only thing we care about is
this list of posts, right? We only care about that mark-up as
headless Chrome renders it. If we go back to the server-side
method, we're waiting for all network requests to be done. But we don't really care about our analytics library loading, and we don't care about images loading or other wasteful things; we care about when that markup is available. So instead, this version waits for that element to be in the DOM and visible.
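Something along these lines, where '#posts' stands in for whatever selector the app renders its content into (a hypothetical name):

```ts
import puppeteer from 'puppeteer';

// Sketch: don't wait for every network request — wait only for the markup we need.
async function ssrWhenReady(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  await page.waitForSelector('#posts', { visible: true }); // the container the app renders into
  const html = await page.content();
  await browser.close();
  return html;
}
```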
We're now catering this server-side rendering method to the app, rather than waiting for the entire page to load. The next optimization is kind of an obvious one, and it speeds things up quite a bit: caching. It's the same method as before, but
we will wrap it in cache. Any time somebody comes in in
for the first time, we will do the pre-rendering and store
results in the cache and subsequent requests gets served
from that cache. It’s in memory. This just goes to show
you that it’s very easy. You only pay the penalty once for
the pre-render. Number three is to prevent rehydration. What the heck is rehydration? If you go back to our main page, you have the container that gets populated by the JSON posts. And if you think about what's happening here, the user is going to visit this page in the browser, Chrome is going to render this on the client, but headless Chrome has also already done it on the server. It's wasteful to do that work twice. So in the client code I check whether that post container has already been added to the DOM, and if it's there, I skip the re-render. So that's another optimization
you can do. If you think about it, we only care about this markup, and certain resources don't build the page: JavaScript can build a page with DOM APIs, but image tags and CSS don't actually construct markup. What this next optimization does is
give us the ability to intercept all network requests before Chrome makes them. If a request is for a document, script, or fetch — something that can generate markup — we let it go through and continue the request. If it isn't, say it's a style sheet or an image, we just abort the request. This is another cool way to speed up the prerendering process.
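A sketch of that interception, assuming we only let through resource types that can produce markup:

```ts
import puppeteer from 'puppeteer';

// Sketch: only let through requests that can contribute to the markup.
async function ssrLean(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on('request', request => {
    const allowed = ['document', 'script', 'xhr', 'fetch'];
    if (allowed.includes(request.resourceType())) {
      request.continue();   // can produce markup — keep it
    } else {
      request.abort();      // stylesheets, images, fonts, etc. — skip for prerendering
    }
  });
  await page.goto(url, { waitUntil: 'networkidle0' });
  const html = await page.content();
  await browser.close();
  return html;
}
```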
There is an article that I wrote a couple of weeks back with more optimizations. Please give me your feedback — I didn't have to change any code in the app, and I'm curious to hear your thoughts. The second awesome thing you can
do with headless Chrome is verify
that lazy loading is paying off. Sometimes I’ll put a bunch of
effort into my apps and I wonder if all of this work is paying
off. So you can verify this now using Puppeteer: there's a code coverage API that gives you the breakdown of the CSS and JavaScript that your app actually uses. You start code coverage, do a bunch of stuff on the page, navigate around, and then stop code coverage. DevTools has a panel for this that you can check out.
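A sketch of that coverage API — the URL here is a placeholder:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Start collecting JS and CSS coverage before navigating.
  await Promise.all([
    page.coverage.startJSCoverage(),
    page.coverage.startCSSCoverage(),
  ]);

  await page.goto('https://example.com'); // do a bunch of stuff on the page here

  const [jsCoverage, cssCoverage] = await Promise.all([
    page.coverage.stopJSCoverage(),
    page.coverage.stopCSSCoverage(),
  ]);

  // Each entry lists the resource URL, its text, and the byte ranges actually used.
  for (const entry of [...jsCoverage, ...cssCoverage]) {
    const usedBytes = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
    const percent = ((usedBytes / entry.text.length) * 100).toFixed(1);
    console.log(`${entry.url}: ${percent}% used`);
  }

  await browser.close();
})();
```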
But I wanted to go one step further and analyze the page over the course of page load; eventually, when the entire page is loaded, it prints out everything that's going on. So you can see, for the URL itself,
I’m using about 37% of my JavaScript
and CSS resources at that point in time. As the page
progresses, I’m using more and more of the files as you would
probably expect. This highlights that I’m lazy loading
things. The second resource — the second set of bars — you can see isn't utilized at all. If you're familiar with Istanbul, you can take Puppeteer's coverage output, run it through Istanbul, and get the exact Istanbul HTML report, which is nice. So check that out. Number three is
A/B testing — there's that word testing again. So I want to measure whether it's faster to inline styles versus having my style sheets be linked. Normally you would ship two versions and measure that, but with Puppeteer you don't have to do it that way. For any style sheet response I get, when I check the resource type, I stash the CSS text — the content of the files — inside of a map. Then I navigate to the URL, and we use a method called $$eval (the double dollar sign). My callback gets injected into the page, and in there you can run anything the browser supports. It replaces all link tags
with a style tag. So I’m replacing the style sheets with
the equivalent style tag on the fly. And that’s actually what
gets served up. You can run this on a server, do a script to do a side-by-side
comparison. And we have not changed the page, we just use
puppeteer to live modify the request that I made. That’s
doing A/B testing. Number four is to catch
potential issues with the Google crawler. A couple weeks back, when I built this firehose app, I realized after I pushed it to production and hit the Fetch as Google button that my app doesn't render correctly for Googlebot, because it runs a super old version of Chrome. I was kind of hosed. What do I do? I said, hey, could we use Puppeteer to catch this, or have an early warning signal before shipping your app? The answer is yes. So I can start a trace and
stop a trace when ever I want. And basically you get this huge
JSON file that you can take action
on. You can pull out all of the features used by a page . Then you can correlate that
with can I use data. So that’s what this script does. It will
tell you the features that you’re using that are not
available in Chrome 41. None of that stuff is available to the Google search crawler, so this is a cool early warning signal that your app might not render correctly in Google search. Number five: create custom PDFs. A lot of people like to create
PDFs of their web pages. I don’t understand it, but a lot
of people do, and we have an API for it. In this demo, you input a URL and Puppeteer spawns up three different tools: it runs WebPageTest, Lighthouse, and PageSpeed Insights all at once. Eventually you get an overall report containing the output of each one of the tools as a PDF. We create a new page, building an HTML page by
giving it a string. We’ll set a view port because we
want the page to be big. We’ll use the view port and
emulation APIs to create a big page. And last but not least, similar to screenshots, we create a PDF of the page we're visiting. You can give it a header and footer and stylize the page, all in Puppeteer. You don't need a JavaScript library to create PDFs anymore — just use the tool on your system, which is the browser.
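A sketch of that PDF step — the page content, viewport size, and file name are placeholders:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 1696 });          // make the page big
  await page.setContent('<h1>Report</h1><p>Generated HTML goes here.</p>');
  await page.pdf({ path: 'report.pdf', printBackground: true });  // print the page to PDF
  await browser.close();
})();
```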
Number six is making your browser talk. This script reads a text file in Node and opens a page that uses the Web Speech synthesis API.
>>Hi there, my name is puppeteer. I clicked the speak
button on this page to start talking to you. I’m able to speak using the —
TLDR, the rise of the machines has begun. Bye for now.
>>We're not quite there, but it's kind of a cool example. I'm also using executablePath: I'm opening Chrome Canary, not the bundled version that gets shipped with Puppeteer, because the web speech synthesis API in the open-source version of Chrome doesn't have that cool British accent. We'll use an API called evaluateOnNewDocument, which runs before any of the page's other JavaScript. This gets injected into the page; I'm being silly and creating a global variable in the page called textToSpeech and setting it to the content of the file we read. That's how the message gets into
the page. What I do is I read the page, just the HTML file, and instead of
starting a web server, I navigate to the data URL version
of it. I’m kind of on the fly opening the page. And then the
last thing that I do is click that speak button using page.$, and that's what kicks off the reading of the text. Number seven, an awesome thing you
can do is test Chrome extensions. I don’t know how
people tested or test their Chrome extensions. But you
could certainly test your Chrome extensions using puppeteer. I’m
going to show you guys a real example. I’m going to run the
lighthouse Chrome extension’s real unit test. They decided to
use puppeteer because they would ship code and their extension
would break. We wanted to fix that so we wrote a test. This will use puppeteer to
launch a tab. You can see it in a
corner. It’s actually been started and inside of the bars at the top, it is
being debugged and Chrome is automated by puppeteer.
Lighthouse is just running. It reloads the page and gives you a
report, and eventually all the tests pass, which is really
cool. I know that was a lot and very fast. How did we do that?
How are they testing their extension? The important bit is we have to
use headful Chrome because headless does not support full
extensions. And we can use these arguments
to pass in our extension directory to load Chrome with. The Lighthouse team then grabs the background page and runs one of the background page's JavaScript methods. That's actually what kicks off running Lighthouse inside the Chrome extension. So that's how they're able to test their Chrome extension.
The number eight awesome thing you can do is crawl a single-page application. Maybe you want to visualize your app. Maybe you don't know all of your URLs. Maybe you want a site visualization. To do this, you can discover all the pages on your site: grab all the anchors on the page. This gets run inside the page, and we look for all the anchors. Are they the same origin as the page? Are they part of our app? And are they not the page we're currently viewing? We don't want to render ourselves. Then we return the unique set, run this recursively, and that's the way I created the D3 visualization. And you can do more than just list the links: you can take a screenshot or generate a PDF or what have you.
Number nine is one of my favorites. Verify service
workers are actually caching all of your app. Every time I use
service worker, I always leave something out. I always forget
to cache something, which means somebody comes back to my page, my app doesn't work offline, and they get a 404 or a broken image or something. We can verify that this is not the case using some of Puppeteer's APIs. The next thing we do is look at
all network requests. Any request that happens on the
page, we’ll use a network interception
and take a list of URLs that the
network gets. After that, we reload the page. We want to
know what’s coming from the network and what gets served
from the service worker cache. At this point, the service worker has been installed and has cached its resources, and we can check and see where things come from. That's where the last part comes in: it loops through the requests that get made on the page, checks the responses, and determines whether they come from the service worker or from the network.
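A sketch of that check — it assumes the page has been visited once already so the service worker has had a chance to install:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // First visit lets the service worker install and populate its caches.
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // On reload, record every response and ask where it came from.
  const responses: any[] = [];
  page.on('response', response => responses.push(response));
  await page.reload({ waitUntil: 'networkidle0' });

  for (const response of responses) {
    const source = response.fromServiceWorker() ? 'service worker' : 'network';
    console.log(`${source}: ${response.url()}`);
  }

  await browser.close();
})();
```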
Here's an example of the script run against Chrome Status: everything is cached, which is great, so you get a green check. It was a deliberate choice not to cache the analytics requests, but everything else gets cached for offline. So the last cool thing — and
there are many more things you can do, but the last thing I have time for is to procrastinate. I didn't have a demo of the keyboard API and touch emulation, so what this one does is basically open the Google Pac-Man doodle from a couple of years ago and play it from Node. So I'm going to capture keyboard
events and eventually the game will fire up, I’m forwarding the
key presses to the page, and the page has JavaScript that knows
how to handle key presses so I’m able to play pac-man in node.
Somebody likes it. [ Applause ] Before we wrap up, there is a
number of useful things that I want to draw your attention to.
Sites and tools. The first one is Puppeteer as a service — the notion that you can run headless Chrome, the browser, as a web service. We actually put up handlers in the cloud. In this first one, you pass it a URL and it takes a screenshot and does all of that stuff in the background. So you can think about baking headless Chrome into your web service. We also have an awesome GitHub repository with a lot of useful stuff in it, and if you want to see anything else implemented, there's a playground where you can play with the code, run demos, and see the results without having to install anything to work with Puppeteer. So that was a lot of stuff. We covered a lot of things
headless Chrome can do. Things like server-side
rendering, pre-rendering your apps, A/B
testing, making the Google search BOT
happy, creating PDFs. Hopefully you realize that automation is a
thing. It’s not just about testing but
about making your app more productive and yourself as a
developer more productive. So headless Chrome is a front end
for your front end. You can find me online if you want to converse with me after the show. I will leave this slide up here, which has a great list of things that you can take a look at. Thank you for sticking around. I really appreciate you coming. [ Applause ]
>>CLARA BAYARRI: Hello. >>FLORINA MUNTENESCU:
Hello. >>SIYAMED SINIR: And hi.
>>CLARA BAYARRI: Text is a major part of your app. It's part of who you are and the main thing that your users will
be consuming. It is the first thing that your users will read
and the thing that will mostly affect usability, accessibility,
all of that. On top of that, the Android text stack is responsible for emoji, which we all love. All of this obviously comes at a
cost. There’s a bunch of performance implications related
to text that we want you to be aware of. Today we’re going to go through
best practices and things you should know. Let’s talk about
the architecture. We want to explain how the text stack fits within Android so you can better understand
what we're going to talk about later. It is split into two parts: Java code and native C++ code. TextView and EditText provide text functionality out of the box and do everything for you. If you have custom views or don't want to use our widgets, then you are probably using the second layer. Once you get into the native side, the very first layer is what we call Minikin; it is our first point of contact with C++. Below that, there's a bunch of libraries we use to help us: ICU, HarfBuzz, FreeType, and finally Skia. Today we will focus on the top three.
>>SIYAMED SINIR: Minikin's main responsibilities are text layout, measurement and line breaking. It is important to understand that if you provide a string like this to Minikin, it has to identify the glyphs — you can think of a glyph as an image to be drawn. Glyphs can be found in different fonts. And positioning is not always just putting the glyphs side by side, which is the case with the letter e here. If you provide a longer string,
Minikin can divide it into words. The result of this
measurement is put into a cache so that if the system comes across the same word again, it can reuse the computed value. This cache has a fixed size of 5,000 entries. When a string is wider than the available width, there has to be a line break. In the simplest case this means putting the boxes side by side until the boundary is reached and then moving to the next line. Minikin will distribute the words to have better text alignment. The default is high quality, which is similar to balanced, except for a few subtle differences, one of them being hyphenation. Hyphenation improves the alignment of the string and the use of white space. It comes with a
cost. The cost arises because now the system has to measure
more words, and it has to compare more configurations for
an optimum solution. As of Android P, enabling hyphenation is two and a half times more expensive than the disabled case. Another thing that affects hyphenation is the locale. Here we have a device set to English and two example strings: one of them is not hyphenated correctly, and the other did not choose the right character for the Japanese text. This happens because the system is not aware of the languages of those strings. To fix it, we have to tell the system about the language using the setTextLocale function. You can use it to mark different parts of the string. However, keep in mind that the number of locale spans affects the performance of the text layout.
>>FLORINA MUNTENESCU: We
usually start working with the text attributes and set the text size, color,
and font. We call this a single style because all of these
attributes affect the entire text from the text view. But for designers, this is not
enough. What they want is to apply multiple styles to the same text block. In order to achieve this in Android, we work with spans. They can be character or paragraph spans, depending on whether they apply to a few characters or to entire paragraphs. Character spans can then be split into appearance-affecting and metric-affecting ones, and the difference between them lies in the method that needs to be called on the text view in order to render the change. Appearance-affecting spans
require a redraw to be called. Metric affecting spans require
both a remeasure and a redraw. Let’s see an example. So let’s say we need to style a
code block. First we would change the background color.
For this we would use a background color span. This
doesn't change the metrics of the text. So this means this is an
appearance affecting span. But we also
needed to change the font. So for this, we would use a
typeface span, which does change the metrics of the text, so it is a metric affecting span.
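As an illustration of composing these two framework spans, here is a minimal sketch; the sample string, indexes and colors are ours, not the talk's:

    // Hedged sketch: style a "code block" with two framework spans.
    val text = SpannableString("Use the getSpans() method")
    // Appearance affecting: only needs a redraw.
    text.setSpan(BackgroundColorSpan(Color.LTGRAY), 8, 18,
            Spanned.SPAN_INCLUSIVE_EXCLUSIVE)
    // Metric affecting: needs a remeasure and a redraw.
    text.setSpan(TypefaceSpan("monospace"), 8, 18,
            Spanned.SPAN_INCLUSIVE_EXCLUSIVE)
    textView.text = text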
Paragraph spans like the bullet span have to be attached from the beginning of the paragraph
and affect the entire text of the paragraph. The thing is
that if you don’t attach it from the beginning of the paragraph,
chances are you’ll lose the styling completely. So, in the framework, we define
a series of interfaces and abstract classes. And under the
hood, the framework checks for instances of these classes and
triggers different actions depending on them. But what makes spans
so powerful is the fact that they give you access to things like TextPaint and
Canvas. You can pretty much handle everything you need in
terms of text. So first of all, whenever you
need a span, check the ones that are available in the framework,
because they cover most of the common use cases.
Okay. But if we go back to our example, we see the typeface span only accepts
a Typeface object starting with
Android P. To create a custom span, we
would extend the metric affecting span. We override update measure state,
which is called when the text view's on measure is called, and update
draw state, which is called when the text view's on draw method is
called. And both of these methods give
us access to the TextPaint. Here all we need to do is change
the typeface. But since we have the TextPaint, it
means that we can also set the background color. It means that we can create one
code block span to style our code block.
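A minimal sketch of such a span; the CodeBlockSpan name and fields are illustrative, and the talk's actual sample may differ:

    // Hedged sketch: one custom span that changes the typeface (metric
    // affecting) and the background color (appearance affecting).
    class CodeBlockSpan(private val font: Typeface,
                        private val backgroundColor: Int) : MetricAffectingSpan() {

        // Called when the text view is measured.
        override fun updateMeasureState(paint: TextPaint) = applyStyle(paint)

        // Called when the text view is drawn.
        override fun updateDrawState(paint: TextPaint) = applyStyle(paint)

        private fun applyStyle(paint: TextPaint) {
            paint.typeface = font
            paint.bgColor = backgroundColor
        }
    }

Attaching it is the same setSpan call as before, just with the single CodeBlockSpan instead of the two framework spans.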
So now we have two different ways of styling our code block: one composing multiple
spans from the framework, and another one where we're
using a custom span. So which one should you use? To help you decide, remember
only framework spans can be
parcelled. Text gets parcelled, first, when we're passing text
via intents or when we're copying text. Whether we're copying
from a text view to an edit text in the same activity or whether
we're copying from one application to another, the text
gets parcelled and unparcelled via the
clipboard service. If we're passing text with one framework span
and one custom span, only our framework span is kept. But
if we’re passing a text with one custom span, it means that no
styling will be kept. So in general, when you’re
implementing spans and the text gets parcelled, consider the
behavior you want. Should the style be partially kept or not?
>>CLARA BAYARRI: So how do we use spans in text? The first interface is Spanned. It lets you query
spans but not modify them. If you want to modify them, you use
Spannable. Then we have three concrete
implementations of these interfaces. The first is
SpannedString, which has immutable text and immutable
markup. SpannableString has immutable
text but mutable markup, and finally we
have SpannableStringBuilder, which has mutable text and mutable markup.
SpannableString holds an array of spans, whereas SpannableStringBuilder
holds a tree of spans. So as a team, we were wondering,
well, is the tree more efficient? Maybe there’s a case
where we want to recommend people to only use SpannableStringBuilder
all the time because it's more efficient, so
we ran tests. Up to 250 spans, they're the same. They
basically perform equally, so we recommend you do what makes the
most sense, which is use SpannableString for immutable text and SpannableStringBuilder
for mutable text. But after 250
spans, we recommend you use SpannableStringBuilder for everything. We ran the tests to the limits.
When you have thousands of spans, they really diverge, but
we really hope this is not a common use case for anyone. Another common thing we see with
spans is people trying to check if one span is present within a
Spannable. And the solution we have seen
online uses Spanned.getSpans. You query, get an array back and check if the array is
empty. This is inefficient. You're going through the entire
text, collecting all the spans into an array, and checking if that
array is empty. There's a better way to do this,
which is nextSpanTransition. When you ask for the first
boundary of a certain span type, it will just have to do that
work, and then it can stop and it won't collect spans into an
array.
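A minimal sketch of that check, assuming we are looking for a BulletSpan; starting from -1 is our own trick to also catch a span that begins at index 0:

    // Hedged sketch: presence check without collecting spans into an array.
    fun Spanned.hasSpanOfType(type: Class<*>): Boolean =
            nextSpanTransition(-1, length, type) < length

    val hasBullet = text.hasSpanOfType(BulletSpan::class.java)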
>>FLORINA MUNTENSESCU: A frequent task is styling international
text. Usually in your resources, you will define the
same string in multiple languages. But then you need to
highlight one specific word. Here we want the word “text” to
be bold. The word and where it appears is different in every
language. One easy solution would be to
just use HTML tags. But the problem with this is they have
limited functionality. If, for example, we want to change the
font, well, this is not something that HTML can provide.
The solution for this is to use annotation tags. So, the annotation tag allows us
to set key and value pairs and define whatever we need. So
here, for example, I’m defining the key font and the value to be
the font name. And this is how we would use it in code. So we would get the text from
the resources as a SpannedString. We would get the annotations,
get the key we’re interested in, and then the value, and then set
the span based on the indexes.
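A minimal sketch of that flow; the resource name, the font key and the monospace stand-in are illustrative, not the talk's sample:

    // Given a string resource such as:
    //   <string name="title">Best practices for
    //     <annotation font="title_emphasis">text</annotation> on Android</string>
    val titleText = getText(R.string.title) as SpannedString
    val spannable = SpannableString(titleText)
    val annotations =
            titleText.getSpans(0, titleText.length, android.text.Annotation::class.java)
    for (annotation in annotations) {
        if (annotation.key == "font" && annotation.value == "title_emphasis") {
            spannable.setSpan(
                    TypefaceSpan("monospace"),   // stand-in for a custom font
                    titleText.getSpanStart(annotation),
                    titleText.getSpanEnd(annotation),
                    Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
        }
    }
    textView.text = spannable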
>>SIYAMED SINIR: We will now look at how text is laid out in
the view. Let’s assume you have a text
view with some paddings added. When you set the drawable top
attribute, it will position the drawable at the top of the view.
The area where the text will be drawn shifts from the top of the view to below the image. This will
add extra spacing between the image and the area that the text
will be drawn in, and the text area will now point to the new location.
Text view will create a layout. It is responsible for rendering the
text. Here the layout's boundary is highlighted in orange. The
layout object provides the various text metrics such as the
baseline, line top or line bottom. BoringLayout
is for single style and simple text which does not
contain new line, tab, or right-to-left (Bidi)
characters. DynamicLayout is for
editable and selectable text. And for other cases, text view
creates a StaticLayout. Include font padding will add
extra spacing to the top and bottom of the layout, and the
height of this spacing is calculated using the values in
the font. The effect is more visible when
you use a tall script. When it’s set to false, the text
is clipped. Since it only applies spacing at the top and
bottom of the text, on older versions of Android we would see that lines would
overlap. On Android P, we fixed this
issue. If the system detects that the
lines will overlap, it will put extra spacing between the lines
and apply the attribute to the whole paragraph. We added an attribute named
fallbackLineSpacing which is, by default, turned on. A similar attribute is the
elegant text height attribute. However, even though it changes
the height, unlike the other attributes
it will not change the
spacing. It will make the system choose the elegant version of the same
script if that font exists. Speaking of line height, it
is an important attribute for
readability of the text, and
designers will mostly provide the line height as it is. On Android P, we added the line
height attribute. Two other attributes to bridge
the gap between design and
application are — the first lets you control the distance
from the top of the view to the baseline of the first line. And the second
one, similarly, lets you control the distance between the
bottom of the view and the baseline of the last line.
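A minimal sketch using the compat setters for these attributes (lineHeight, firstBaselineToTopHeight and lastBaselineToBottomHeight); the pixel values are placeholders that would normally come from the design spec:

    // Hedged sketch: spacing attributes through TextViewCompat, which
    // backports the Android P attributes to older versions.
    TextViewCompat.setLineHeight(textView, 56 /* px */)
    TextViewCompat.setFirstBaselineToTopHeight(textView, 64 /* px */)
    TextViewCompat.setLastBaselineToBottomHeight(textView, 32 /* px */)
    // Or in XML with AppCompat: app:lineHeight,
    // app:firstBaselineToTopHeight and app:lastBaselineToBottomHeight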
Finally, when you set text
alignment, the text will be positioned in the layout. The
getLineLeft and getLineRight functions of the layout will give you that
information. Now that you know about layout and
some important functions, we can answer a frequently asked
question on the internet, which is how do you
implement a rounded background span? We can draw it ourselves
using the layout functions. To do that, first we need to mark
our words that we need to draw the
background for. Annotation is a good option for such a use case. Then we will define our
drawables. One for the words that start and end on a single
line and another for different lines. We will look at our
annotations to find what we need. And for each annotation, we want
to learn the line number; the get line for offset function will convert the
character index into a line number. For the words that start and end on the same line,
we also want to learn about the vertical coordinates of the
line. We can use get line top and get
line bottom functions for this. And finally we want to learn
about the horizontal coordinates of the words. And get primary horizontal
function will get the horizontal coordinate relative to the
layout's bounds. Now that we know the rectangle that we want
to draw in, we can draw our drawable. The other case where
we would start and end on a different line would have almost
exactly the same code, except now it has to identify more
rectangles for the drawing.
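A minimal sketch of the single-line case, assuming left-to-right text and a rounded rectangle drawable; the function name is ours:

    // Hedged sketch: draw a rounded background behind the characters between
    // startOffset and endOffset, which are assumed to sit on one line.
    fun drawSingleLineBackground(canvas: Canvas, layout: Layout,
                                 startOffset: Int, endOffset: Int,
                                 drawable: Drawable) {
        val line = layout.getLineForOffset(startOffset)        // line number
        val top = layout.getLineTop(line)                       // vertical bounds
        val bottom = layout.getLineBottom(line)
        val left = layout.getPrimaryHorizontal(startOffset).toInt()   // horizontal bounds
        val right = layout.getPrimaryHorizontal(endOffset).toInt()
        drawable.setBounds(left, top, right, bottom)
        drawable.draw(canvas)
    }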
>>CLARA BAYARRI: Text view goes through measure, layout,
and draw. Measure is the most expensive. Here is where we create the
layouts or recreate them and where we decide on the width and height of the view. This is really expensive work. And finally in on_draw we issue
the drawing commands. Measure is very expensive. It’s
important to distinguish what causes an on_measure versus an
on_draw. Anything that causes the text to change in size or
how it needs to be laid out, like the letter spacing, text size, that will trigger an
on_measure. If you change something that just changes the
appearance but not how we place the text on the screen
that only needs to trigger an on_draw,
which is much cheaper than doing an
on_measure as well. Let’s look at these separately. Say I use spans to style some of
the words in it. I have changed the text size and colors. The
first option to measure your text is a method in paint
called measure text. A dead giveaway is that it takes a string,
not a spanable. So what this will do, paint just doesn’t
understand spans, so it will take the text you’ve given it.
It doesn’t understand line breaking either. It will take the text as one
long line, ignore the span and give you one long line of text.
The next is get text bounds. It takes a string, so it will do
exactly the same. No line breaking, no spans. It will
place it all together and give you the bounding box of all of
your glyphs, which is a width and a height. These return
slightly different values: TextPaint.measureText will give you the
advance of the text, which is where you would put the next glyph if you were to place
another one. These come from different values of the font, so they may
be very different. If you want to improve on this,
we use layouts. This takes the text you give it
— taking into account all of the styling, it measures each of
the lines and then returns the width of the longest line that
it has. This is useful to know how wide your text wants to be before you
restrict it. If you know the width of your view, well then
you actually create a layout object. In this case we’ve
created a static layout. By having a constraint, layouts
can calculate how to reflow the text and how to make it fit and
give you a height in return. By giving it a width we can
actually calculate the height. On top of that, you will get a
lot more information out of your layout object. It will give you
things like the baseline, line boundaries and everything. Measure text is the cheapest;
measure text and get text bounds are similar and cheap to run. But once you get into creating a
static layout, that gets more and more expensive. Now that
you know what each of these methods does, make sure you use the
one that makes sense for your use case.
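A rough sketch of the three options side by side; styledText, paint (a TextPaint) and availableWidthPx are assumptions standing in for your own text, paint and width constraint:

    // 1. Paint.measureText: no spans, no line breaking, returns the advance.
    val advance = paint.measureText(styledText.toString())

    // 2. Paint.getTextBounds: no spans, no line breaking, returns a bounding box.
    val bounds = Rect()
    paint.getTextBounds(styledText.toString(), 0, styledText.length, bounds)

    // 3. StaticLayout: understands spans and line breaking, and once you give
    //    it a width constraint it can tell you the height (builder is API 23+).
    val layout = StaticLayout.Builder
            .obtain(styledText, 0, styledText.length, paint, availableWidthPx)
            .build()
    val height = layout.height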
We've talked before about the word measure cache: we
measure words once and place them in a cache.
Once we have a hit in the cache, it's only 3% of the work. If we
measure this in text view, on_measure takes only about
11 to 16% of the work that it did. So using that cache is
really important for us. In P
we have come up with a new feature called pre-measured
text. We know that in on_measure, we
need to take the glyphs, find them in
the font and place them next to each other and measure them in
the cache. Pre-computed text takes care of this work for you, and you can
run it on any thread so that once you have that
pre-calculation, setting it on the text view only takes about 10% of the work you had to
do initially. The way this looks in code is we need a bunch
of attributes to know how to measure the text. Things like
the font you're going to use, the text size. We have a convenience method on
text view called get text metrics params. On a background thread, you call
PrecomputedText.create. It measures each of the words,
places them in the cache, and it stores all of those values.
This is also useful if you're going to measure more than 5,000
words, as it actually stores all of the
values itself. Then you can go back to the UI
thread and set it on your text view.
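A minimal sketch of that flow with the AndroidX compat classes, assuming a textView, a longText string and some background executor:

    // Hedged sketch: measure the text off the UI thread with PrecomputedTextCompat.
    val params = TextViewCompat.getTextMetricsParams(textView)

    executor.execute {
        // Background thread: measures each word and caches the results.
        val precomputed = PrecomputedTextCompat.create(longText, params)
        textView.post {
            // UI thread: setting the pre-computed text is now cheap.
            TextViewCompat.setPrecomputedText(textView, precomputed)
        }
    }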
All of this only makes sense if you know your text beforehand. If you just know
your text only as you're about to render it, do what you've always
done. But if you do know it beforehand, say you're loading it
from the internet or you have an infinite scrolling list, you
can preload before it needs to be shown on screen, and 90% of
the work will be done on the background thread, which
is amazing. In the support library, we do have a
solution. Between L and P, we can't do exactly the
computations that we've described, but we can warm up that cache
for you, so that's what we do. And before L, we can't do
anything more. But we do have a solution in the support
libraries. And another common thing we see
related to all of this is very long text. People tend to say
I’ve got this text. I’ll just set it all on the text view. When you set text on text view
it measures all of the words and lays out the entire text you’ve given it,
even if you have given an entire book. This can be a huge
performance hit if you're sending long text that maybe
you're not even showing on screen. A solution is to split your text
into separate pieces, like paragraphs,
and put them into a RecyclerView. As the user scrolls, we will
start loading the next bits. You can link this to
pre-measured text, which I just talked about, do
all of the measurement in the background, and then you will
have all of that information really efficiently.
So we've talked about measure. What about draw? It is much
cheaper. You have several options if you're going to draw
text yourself. An easy one people ask about is why don't you use canvas.
drawText. Canvas does not understand spans or line
breaking or any of the things we have been talking about. So
similar to the things we have been talking about before, it
will draw the text you give it with no styling as one run. If
you have styling or if you want to use line breaking, use the
layout classes. You can see the recurring theme. They all have
a draw method that will draw on the canvas for you.
>>FLORINA MUNTENSESCU: I think it’s time to set the text. So set text is the most commonly
used method and great for views that don’t change. So when both
the text and the mark-up attached to it are immutable.
So under the hood, there is a SpannedString. So if you are changing the
original reference, the textview doesn't
update. What if we want to mutate the spans later? We would set the text using
buffer type spannable. While the text
is immutable, the spans are mutable. So then what we can do here is
we can get the text as a spannable, and then we can add or remove spans on it.
You will be tempted to call set text again, but you don't
need to do this anymore, because text view
has a span change listener, and it automatically knows when
something was added or removed and the text will just be
displayed. What if you want to change an
internal property of the span? In our case we want to change
the typeface in our custom typeface span. In this case, the textview
doesn’t know what changed and it doesn’t know that it needs to do
something, so we need to tell it. We would do this with
either request layout or invalidate. If you changed a metric
affecting attribute and it needs to
re-measure and redraw, you would call request
layout. If you just changed the appearance,
you would call invalidate to redraw. If we look at the code, we
see that under the hood, there is a
spannable factory. TextView has a default spannable
factory implementation. But we can implement our own and then,
instead of creating a copy of the text, we will just return the same
reference to the object in case the object is already a spannable.
This is specifically important if you’re using styled text inside
a recycler view. Like this, you avoid creating
copies inside the recycler view, saving
CPU time and memory allocations. This is how you would use it:
in the view holder, you would set the spannable factory you just
created, and in on bind view holder, make
sure you're setting the buffer type as
spannable.
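A minimal sketch of that setup; the titleView and styledText names are illustrative:

    // Hedged sketch: return the same Spannable instead of letting TextView copy it.
    val spannableFactory = object : Spannable.Factory() {
        override fun newSpannable(source: CharSequence): Spannable =
                source as Spannable   // no copy, just reuse the instance
    }

    // In the ViewHolder, once:
    titleView.setSpannableFactory(spannableFactory)

    // In onBindViewHolder, on every bind:
    titleView.setText(styledText, TextView.BufferType.SPANNABLE)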
>>CLARA BAYARRI: You might be familiar with taking a text view and setting auto link to, say, web
to be able to detect URLs within your text
and have them automatically linked. What happens under the
hood? When you set a text on that text view, we will create a
copy and then we run a regular expression on your text, and for each match where
we find a URL, we will create a URL span
and add that to your text. If you do this inside on bind
view holder, you are running that every time you show text; you're going to be
recalculating that for each and every item. Don't use auto link
in your XML, because that triggers the process. Instead,
first of all, create a spannable copy of your string and run Linkify on it ahead of time so you precalculate all
of it. And in on bind view holder, you
simply set the text.
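A minimal sketch of that precomputation, assuming the item text is available when the list data is loaded; names are illustrative:

    // Hedged sketch: linkify once, off the bind path.
    val spannable = SpannableString(itemText)
    LinkifyCompat.addLinks(spannable, Linkify.WEB_URLS)   // adds URLSpans for matches

    // Later, in onBindViewHolder, just set the already-linkified text:
    holder.textView.setText(spannable, TextView.BufferType.SPANNABLE)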
And then you can use Florina's trick to avoid the extra computation. We want to
discourage everyone from using auto link map. All of the other
options use regular expressions and are easy to run. Map actually spins up an
instance of a web view to find addresses, which is a huge
performance hit for your app, so we want to discourage everyone
from using map. And you may say I need this and
you’re taking it away? What do I do? Coming to the rescue is smart
linkify. This year we've taken the
same machine learning models that detect entities and applied
them to Linkify. We can do a better job at detecting all of
the entities and we can detect new types. On top of phone numbers and URLs,
we can do more sophisticated things like flight
codes. It's very important that you do this on a background thread, as this
is loading a machine model from disk. Don’t do it on the UI
thread. Then you can apply the links to the text. There's a change between the old
Linkify and this. It used to generate a span that,
when you clicked it, would open the link. The URL spans that are added now pop
up a floating tool bar with smart actions that you can take. For an address, we might suggest
apps. Finally when you have all of
your text ready, you can go back to the UI thread and set that
text on the view. Notice the big difference: the old one was synchronous and this is
asynchronous. We understand this is a huge difference, but
it is a way to use machine learning to enhance how we
detect entities, and it is really much better.
And since I’m talking about new features, let me introduce
magnifier. A lot of people tell us that selecting text is hard
and placing the cursor where you want is a really hard task. So magnifier helps you place the
cursor in the correct position. We have implemented this by
default, so you don’t have to do any work if you’re using the
default widgets. If you’re doing your own custom widgets
and you want to implement this yourself, there’s an easy to use
API there. In your on touch event, when
the finger goes down, show the magnifier, and when it goes up, dismiss the
magnifier. This has the magnifier follow
your finger around the screen. We will publish guidelines on
the final UX we come up with, but it is very easy to use.
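A minimal sketch for a custom view, assuming API 28 and an illustrative class name; the default widgets already do this for you:

    // Hedged sketch: drive the platform Magnifier from touch events (API 28+).
    class MagnifyingView(context: Context) : View(context) {
        private val magnifier = Magnifier(this)

        override fun onTouchEvent(event: MotionEvent): Boolean {
            when (event.actionMasked) {
                MotionEvent.ACTION_DOWN,
                MotionEvent.ACTION_MOVE -> magnifier.show(event.x, event.y)
                MotionEvent.ACTION_UP,
                MotionEvent.ACTION_CANCEL -> magnifier.dismiss()
            }
            return true
        }
    }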
So I hope today we’ve presented a bunch of tips and
tricks. We've shown you what text is
under the hood in Android so you
can take this back to your apps and build beautiful, more
performant apps. Please file feedback and thank you very
much. You can find us — [ Applause ] I know we’re late into I/O, but
you can find us at sandbox C. Please find us. It’s behind the
food area. We will be there. So please come ask questions.
>>FLORINA MUNTENSESCU: Thank you.
[ Applause ] >>Thank you for joining this
session. Brand ambassadors will assist with directing you
through the designated exits.
