Android Dev Summit 2018 Livestream | Day 1, Theater 2

August 10, 2019

Coming up next: Getting the Most Out of Android Lint by Matthew
Gharrity SPEAKER: Howdy, welcome to another Android
developer summit! So how many people went to the last one?
How many people enjoyed the last one? We liked it here, which is
why we did it immediately again three years later. We were just
building suspense to make it extra special. Hopefully we can
do it again. A lot of this content, all of the — most of
the content is live streamed, all of it is being recorded. If
you prefer consuming this through a live stream, I can
pause occasionally and buffer and maybe that will make it more
of a live stream situation for you. I’m Chet, I lead the toolkit
team, I’m emceeing today. And I am making it a nice enough
event, but short enough so you are not tired of the emcee. I
will tell you how this thing is going to work today.
So Dan went over some of it, and I assume you paid as much
attention to the keynote as I did. The most important to me is
office hours, there are two reasons why we like to have
these events in Mountain View, it is such an interesting city
with so much to do, we live here, and every day we are here.
So obviously, Mountain View, for the culture.
And the second reason is Google is just up the street, which
means we can drag in a lot of engineers from the teams that
built all of the bugs you are now dealing with in the product,
and we like to bring them into the conference to help
answer all of your questions. The key thing for you to know,
though, is all of those people aren’t here all day.
We have broken up the event into four half-day sessions, and
there are different people in different teams in different
technologies represented at different times.
And so the schedule is posted, there are screens around
the conference. This is one of the screens; if you
can’t read this, maybe you can read the screens outside the
room instead. There’s a rotating schedule that will tell you
when people from each team are where, so if you have a question for a particular
team, make sure you are there at the right time. You can skip
sessions if you want, they are recorded, and there are people
in all of those sessions. So ask the right questions and find
the right people while they are here. There are App Review clinics, there’s the hashtag on
Twitter, #AndroidDevSummit, there’s the application. And
if you don’t have it, it is an instant app. Do your favorite
search in the search engine, click on the link for the
instant app, and it downloads instantly. There’s a party
tonight after the sessions; after 6:00 it will kick in, a party
with a bunch of engineers, where we all have the excellent
experience of talking to each other while the DJ is playing
loud music, having conversations that go along the
lines of, “What?!” “WHAT?!” Enjoy the party, enjoy those
deep conversations with the new friends, that’s about it. And
one more thing, there are three different session lengths.
There are the long sessions, 40 minutes. There are shorter
sessions of 20 minutes. The intention is to link these,
so you will stay for both of them in one slot. It is awkward
to get up in the middle of a crowd, and we are about to see
two linked together here. Please stick around for both of them,
if you need to get to one on the other side, try to do so quickly
and quietly, we are going to roll from one into the other.
And then there are lightning talks that start at 12:05 and run
for an hour; those are going to go back to back with each other.
So stick around for those.
And that is about it. I’m standing in the way between you
and technical content. I would like to bring up Matthew
Gharrity to talk about Lint. Thanks. [ Applause ] MATTHEW GHARRITY:
All right, let’s get started. I’m Matthew, on the Android
Studio team, I work on project stability, Kotlin integration, and
we will go to the next slide. And one of the most exciting
things I get to work on besides project stability is Android
lint. It is a very powerful tool, you know a lot about it,
but you might not know the cool things it can do.
So despite the name, it is not just a linting tool, it is a
full-fledged static analysis framework. It is powerful, you
can write your own checks, and I will talk about that today.
No worries.
This talk is for anyone interested in programming
languages, compilers, and static analysis, and for anyone who
has to deal with bugs in their project. So that is most of you. Yeah, one last thing, I only
have 20 minutes, there’s a lot of material, I will go fast
through the slides. Try to keep up. At the end of the day, you
can watch it at half speed on YouTube and get a sense of it then. Perfect.
All right, let’s get started. So what is Android lint? A lot
of you probably already know, if you don’t know the name, you
know what Android lint is. It shows up in the IDE saying that
you did something wrong. In some cases, it is like telling
you that Gradle is out of date, do you want to update it, or
maybe you are not calling a super method in a method that
expects you to do so. In Android Studio, in the IDE, it
will say that there is suspicious code, take a look.
Sometimes there are warnings, sometimes there are errors that
will fail the build. So some of you might not know
it, but there are a lot of ways to use Android Lint. The first is
on-the-fly, which I just mentioned: in the IDE, as you are
typing, it shows the error messages right away. There is also
Inspect Code, an action in Android Studio where you can
run a batch analysis of the project, a good way to audit
your project for common errors. And there is an action called Run
Inspection by Name. If you have a particular bug in mind, say
you want to find all of your thread annotation violations, you can
run that inspection by name over the entire project in one batch
analysis and look through the results one by one. You can also run
Lint on the command line with Gradle, `gradlew lintDebug`, and it
will do a batch analysis of the project. And the great thing
about that is it will make an HTML and XML report. You can
parse the XML report, or you can browse the HTML report and look through the
warnings. Now that you know what Lint is, I want
to dive into the advanced use cases, because it is a powerful
tool. I will talk about configuration, and it is
actually really important. For big projects, you want to make
sure that Android Lint is working for your projects and
needs. I will talk about annotations, it is a powerful
way to mark up your code to help Lint help you more. Lint does
as much as it can, but sometimes you need to give it hints to
help you more. We will talk about Lint internals, I’m
excited about this, I want to give you a good mental model for
how it works under the hood. This will get you set up for
kind of realizing how powerful a tool it is and writing your
custom Lint checks. First, we will talk about
configuration. The easiest way to see what Lint is capable of
is the Inspections window in Android Studio’s preferences.
You will see the heading, Lint, and all of the inspections that
lint has. There are hundreds, most are enabled by
default. There are some that are not, and take a look at
them, some of them may apply to your project, even though we
don’t enable them by default. One example is Kotlin
interoperability: this is a set of checks we created so Java
code is easy to call from Kotlin code and vice versa. If you do something that
makes interop difficult, Lint will tell you to rename the function
so it is easy to call from the other language. There is
another way to configure Lint, with Gradle. If you go into the
Build.gradle file, you have this Android block, as you know. You
can write a Lint options block, and there are powerful options
here. So this first option, I will go
through a few examples. This first option is a powerful tool,
if you have a big project, you want to pay attention to this.
The Lint baseline, when you add this to your options, Lint will
look at all of the warnings and errors you currently have in the
project. And this will give you a clean slate, so you can say,
hey, I want to take a baseline of this point of the project,
forget about all of the thousands of warnings that I
already have. And I only want to look at new warnings and
errors in new code that I check in. And this will help you get
a handle on the massive amount of code that you have so you can
have clean code going forward. When you have more free
time, you can go back and look at the baseline issues that
Lint stashed away. Once you have a clean slate, through a
baseline, or in a new project, and you want to say, I
want to have clean code forever, starting now, you can turn on
warnings as errors. I encourage you to try it out. And if you
want clean code, try to get this to work for you. And then, like
in the last slide, you can enable specific checks that are
not on by default, like interoperability. Some of you may
have this Lint options block set up. And some more advanced
things I want to talk about is performance issues.
So one of the reasons we have this lintOptions block is that
a lot of people run Lint on a continuous integration server;
they are checking the application for warnings and
errors, and they block submission if Lint fails. And
some people are concerned with performance.
So just a few tips here, real quick.
Try to avoid the check all warnings option, I know it is
tempting. I know I told you to enable more checks, but some of
the checks are off by default for performance reasons.
If you check all warnings, you may find the time to check
the project just doubled. Be more selective in the checks
that you turn on.
Also avoid the Gradle lint task; it is a parent task of
lintDebug and lintRelease, and it will run Lint multiple times on the
project, once for each variant. That’s a simple gotcha to avoid.
Use the ignoreTestSources option. By default, we don’t
report warnings or errors in your test sources, the theory being
that you don’t care about how clean your tests are. But this
option makes it so Lint doesn’t parse your test sources at all.
So presumably, all of you have well-tested code. So this
option will help with that, to make things go faster.
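Putting those configuration and performance tips together, a lintOptions block might look something like this (a sketch; the baseline file name and check IDs are examples, not from the talk):

```groovy
android {
    lintOptions {
        // Take a snapshot of existing warnings so only new code is flagged.
        baseline file("lint-baseline.xml")
        // Once the slate is clean, keep it clean.
        warningsAsErrors true
        // Turn on selected checks that are off by default, rather than
        // checkAllWarnings, which can noticeably increase analysis time.
        enable "Interoperability"
        // Skip parsing test sources entirely for faster runs.
        ignoreTestSources true
    }
}
```

On CI, prefer running a single variant, e.g. `./gradlew lintDebug`, rather than the parent `lint` task, which analyzes every variant.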
Let’s switch gears a bit, I want to talk about annotations.
They are a really powerful way to give Lint hints about specific
issues that you want to look for.
And so, I’m just going to go through a few examples. You know about
the nullability annotations; you use them a lot in Java. In Kotlin you don’t
need them, nullability is built into the language.
But we also have really Android-specific annotations
that you should check out. There’s @RequiresPermission;
this is an interesting example. We annotate certain
Android APIs with requires-permission, like the set
wallpaper permission, and Lint does analysis on the code that
calls into the API: is the program checking that it has
this permission? If not, we can warn, hey, it looks like
you have not checked that you have this permission, and this may crash
at runtime. And that impacts the users quite a bit. There
are also some of my favorite annotations, the @UiThread
and @WorkerThread thread annotations. And these are really important,
actually. So a big problem
with application development is that a lot of UI frameworks require
that you only update the UI on a single thread, and everything
else, like network requests, runs on background threads. If slow work runs
on the UI thread, it might block; you will not see a crash
report, the user is just frustrated, and these
issues are hard to track down. We have these thread annotations where we say this
method should only be called on the UI thread, or the worker
thread. And we have some analyses on top. So I will actually do
a demo of these analyses. Hopefully that is set up.
So let’s switch over.
Awesome. So I will go through an example.
This is just Kotlin, let’s say that we have a function, oh, the screen
is black. You’re right.
It was there for a second. Awesome. Let’s get started. We
have a function, updateUi, which is run on the UI thread. This
is updating the user interface, whatever you want, and a
networkRequest function. This is a contrived example.
And this does some stuff. All right, and unfortunately,
someone decided to call the network request from the update
UI method. So this is a problem but, as you can see, there
are no errors or warnings, and something like this is hard to track
down in general. If you know about the thread annotations,
you can add @UiThread here, and you can add the @WorkerThread
annotation here. And instantly, Lint says you are
making a worker-thread call on the UI thread, which freezes the UI,
and you don’t want that. So I encourage you to use these. And
you might think, this is a trite example. This is too simple.
In a real application, we have the UI method that calls a
function, foo, and that function calls something called bar, and
then that bar function calls the network request.
There are multiple layers of indirection here. And you might
think that Lint cannot help us here. It can’t by default;
for performance reasons, we don’t run the advanced analyses. But we
do have an analysis for this. I will show you an example of
that. You can do Run Inspection by Name. The name of the
inspection is called Wrong Thread, and then
Interprocedural. You select that, select the scope, I will
say module scope. And bam, it will find the path from foo to
bar to the network request. You cannot see the error message
right here, so I will copy it out for you. If I remove the
class names, you can see the error message says, hey, you are
starting on the UI thread and going through foo and bar to the network request, which is
annotated with @WorkerThread, and that is an issue. Let’s go back
to the slides. So please note that analysis is
off by default because it does a whole-
program analysis, which is pretty expensive. Use it when you can,
and try to see if you can find any bugs in your application.
All right.
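The thread-annotation demo above boils down to something like this (a sketch; the function names follow the talk, and the bodies are placeholders):

```kotlin
import androidx.annotation.UiThread
import androidx.annotation.WorkerThread

@UiThread
fun updateUi() {
    // Lint flags this immediately: a @WorkerThread method called
    // from a @UiThread method could freeze the UI.
    networkRequest()
}

@WorkerThread
fun networkRequest() {
    // ...slow network work...
}

// With indirection (updateUi -> foo -> bar -> networkRequest), the
// default analysis stays quiet; the "Wrong Thread (Interprocedural)"
// inspection, run by name, finds the full path.
```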
Next I will talk about Lint internals. This part is
really cool. The way Lint works, it starts
out by parsing a source file, so you have some Java or Kotlin
lying around, and it builds an abstract syntax tree. Then
we wrap it in UAST, the Universal Abstract Syntax
Tree, and this may sound pretty familiar. This step is
really important because it means that when you write a
check, you only have to write it once, and it works in both Java and
Kotlin. That is really convenient. When you write a
Lint analysis, you are working directly on the UAST:
you are scanning the tree for the
declarations, calls, and expressions you are interested
in, and that is how it works. And Lint actually works on all
types of files, so you can look at Gradle files, you can look at
XML files and resources. This is powerful, it gives a
holistic view of your application and can provide a
lot more helpful messages with that.
A couple more points here, so type information is available.
It is not just — you are not just looking at text, you have
the whole AST, you can do method resolution, examine the class
hierarchy of the class you are looking at, that is important
for preventing false positives and having powerful Lint checks.
That is what makes Lint useful. We have tight integration with
the IDE. And Lint does show you the warning messages right in
the IDE. That is critical, if you don’t have that, a lot of
your messages are lost in some build log somewhere and it is not
useful to most people. And so, with that, we are going
to move into writing custom Lint checks. This means that, you
know, Android Lint is a framework. You can write your
own Lint checks. If you have something that annoys you, a
common bug that you run into on your team, or something that you
want your colleagues to watch out for, you can write your own
Lint check, upload it to the code base, and all of your
colleagues will see it highlighted in the IDE.
And Lint has an unstable API, we don’t promise that we keep it
the same from release to release. The good news is that
it is not changing too often, but no promises.
The other good news is that unstable APIs are really fun to
play with. All right. And with that, we
are going to move into the demo. So let’s switch over, awesome.
All right. So don’t bother reading the code too
much. The example I want to set up
here is that you see that your colleagues are calling
Thread.yield(). If you look at the documentation, this is
not a useful method. If you are writing concurrent data
structures, it can be useful. But otherwise, you want to avoid
it. It doesn’t do what you think it does. So let’s say we
will write our own Lint check that flags calls to Thread.yield(). So
right now, it is not highlighted, there are no Lint
checks yet. I want to show you how I have this set up. You
have an Android application module here, it is called
app. The way you will write your own Lint checks, you will
have a Java module, called checks, for example. And you are going to
add a dependency from your Android application to the
checks module. And we have this nice lintChecks
block that makes it easy to do that. Once you have
that set up, and I’m skipping over some of the details here,
but I will have some links at the end where you can look at
sample projects. Once that is set up, I can go to the Lint
check itself. You have a class, YieldDetector. This is a
detector for calls to Thread.yield(); it extends some
classes in the Lint framework, and I have typed up some of the
metadata for this check that we want to have. There’s a name, a
brief description, an explanation for how to fix the
issue, the severity, whether it breaks the build,
etc. And once I have the metadata, I’m going to start
typing some code here. So one method to be aware of is
called getApplicableMethodNames.
So the way Lint works is, you know, there are a couple of options.
You can make your own AST scanner, which will give you
total control: you can scan the source file, looking for
whatever you want. But for performance reasons, we have
these hooks where you can register with Lint, hey, I’m
only interested in method calls with this particular name. And
so here, we are only interested in method calls with the
name yield. And if I have any typos, it is
not going to work. So let me know.
Haha. Once you have the hook in place,
we have to write the handler itself. So here, we can say
visitMethodCall.
And Lint will make sure that this method is
only called when we come across a yield function call. We will
say, okay, let’s assert that the method name we are looking at is
actually yield. So this should always be the case.
And then, because we have type information, we can do due
diligence in making sure we don’t have a false positive
here. Maybe there’s another class with a function called
yield, and we want to make sure this is java.lang.Thread.yield.
So we will check: look at the
evaluator, ask whether the method is in the class, passing the
resolved method node and the class name. This means that if the method
being called is part of that exact class, we can report the
error. And, I will give
it the issue that holds the metadata we have above. I will
give it the node, I will give it the locations. We are telling
Lint, this is where the error is, what you should highlight.
And I think I have to give it a message.
So let’s say, please don’t use this.
And also, you should use a more helpful error message.
All right. That’s all it takes.
If I have no typos, I’m going to make the project.
It looks like it is done. If I go back to our application, bam,
Lint scanned the source code and picked up the check right in the
IDE. And this is powerful. You can make it pop up for your
colleagues and everything. Awesome, we will go back to the
slides. All right.
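The detector from the demo looks roughly like this (a sketch against the Lint API; the metadata values are examples, and exact signatures vary between Lint versions):

```kotlin
import com.android.tools.lint.detector.api.*
import com.intellij.psi.PsiMethod
import org.jetbrains.uast.UCallExpression

class YieldDetector : Detector(), SourceCodeScanner {
    companion object {
        val ISSUE = Issue.create(
            id = "NoThreadYield",
            briefDescription = "Avoid Thread.yield()",
            explanation = "Thread.yield() rarely does what you expect; remove it.",
            category = Category.CORRECTNESS,
            priority = 5,
            severity = Severity.WARNING,
            implementation = Implementation(
                YieldDetector::class.java, Scope.JAVA_FILE_SCOPE)
        )
    }

    // Performance hook: only get called back for calls named "yield".
    override fun getApplicableMethodNames() = listOf("yield")

    override fun visitMethodCall(
        context: JavaContext, node: UCallExpression, method: PsiMethod
    ) {
        // Use type information to avoid false positives: make sure the
        // resolved method really belongs to java.lang.Thread.
        if (context.evaluator.isMemberInClass(method, "java.lang.Thread")) {
            context.report(ISSUE, node, context.getLocation(node),
                "Please don't use Thread.yield()")
        }
    }
}
```

The checks module is wired into the app with the `lintChecks` dependency block mentioned in the demo, e.g. `lintChecks project(':checks')`.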
So 50 seconds left, happy with the timing here.
Here are some links, I want you to check them out. If you never
read the overview on the development page, go there,
there’s a lot of information from this talk and a little bit
more. Please check out the
annotations page, there are so many that can prevent a lot of
bugs ahead of time. If you are really excited about
this demo I just gave, check out the sample project on GitHub,
where you can run custom Lint checks. If you have an issue,
we have a Lint mailing list. Many of you know Tor, he is
active on the mailing list and I am, too. We answer all of the
questions that are asked there. And with that, thank you so
much. If you have any questions,
don’t ask them now; I will be in the sandbox
afterward. Please check out custom Lint checks. Thanks. [ Applause ]
Coming up next: Testing Rebooted (with AndroidX Test) by Yuki
Hamada, John Gerrish SPEAKER: Good morning, everyone. Great
to see so many people here interested in testing.
My name is John Gerrish, with my colleague Yuki Hamada, we’re
presenting a session on testing APIs today.
To get started, hands up, anyone who has written unit tests. It
is not a trick question, good. And what about integration
tests? You can be honest. Awesome.
Okay. So let’s get started. Click. SPEAKER: I have a question while
you are setting up. Is this related to Project
Nitrogen? SPEAKER: There’s a Project
Nitrogen talk tomorrow, near the end of the day. I
strongly encourage checking it out. The topic we
are introducing today is part of the grander vision, along with
Project Nitrogen, and they work in conjunction.
Any more questions? We can do this in reverse.
[ Laughter ]. SPEAKER: I will ad-lib this.
On Android, there are two kinds of tests you might be familiar
with. There’s local unit tests, and then there’s instrumentation
tests. So let’s start by looking at local unit tests.
So these are tests that are executed on
your local developer workstation, on the local JVM
running there. And, because you don’t need to
run the entire Android build chain, avoiding the dexing,
packaging, and installing on a device, these tests are actually
really, really fast. In these kinds of tests, you can
use a tool like Robolectric, which comes with
its own set of testing APIs to set up the state of your
Android environment, or use a tool like Mockito to
stub out the interactions with the Android framework. Either
way, they allow you to write tests, setting up the
state of the environment to satisfy the preconditions that
you might want in your test case.
The second kind of tests are instrumentation tests. And
these are the tests that will run on a virtual, or a real,
device. A real device could just be a phone connected to
your workstation, or it could be a farm of devices somewhere in
the cloud. These kinds of tests run slower
because you have to execute the whole build chain and install an
application onto the device. But they have the advantage of
being a lot more accurate, because a real or even a
virtual device is very similar, or in some cases identical, to
the devices your users are using out in the field. And this
brings you the confidence your app will behave as expected.
One criticism we heard of these tests is that there’s a lack of
testing APIs available. And that makes it difficult for you
to set up the state of your environment in a way that
satisfies certain preconditions, or edge cases, that you might
want to be testing. We heard you loud and clear, this is
something that we are actively working on.
So a bit of a history lesson: at Google I/O 2017, we laid out
the testing story, the software testing pyramid. In this
model, we encouraged you to write lots and lots of fast,
scalable unit tests that tested all of your exhaustive
conditions. We encouraged you to write a smaller number of
instrumentation tests that will actually prove that all of these
units assemble together, behave as you would expect, on a real
device. And, in some ways, this was kind
of a compromise. It was a compromise between the
advantages of one kind of tests and the tradeoffs
of another. It brings you a holistic way of testing your
app. And we showed how this kind of
approach can lead to test-driven development on
Android. First of all, you would start
with a failing UI test. This would be an instrumentation
test, probably written with Espresso, and it would test the
UI of your component, or your feature. And then you would
satisfy that feature by a series of units, classes coming
together and their interactions. And you could test-drive these
as well, using a tool like Robolectric, or Mockito, as
unit tests. And this gives you fast development cycles. When
you bring them all together, you can run the slower-running, but
faithful, instrumentation test and hopefully it goes green and
you’re done. Well, we enter a refactoring
cycle because maybe your code leaves a little to be desired
and you want to do some clean-up. You can spend some
refactoring cycles there before coming around to the beginning
of the cycle where if you have any more work on the feature,
you might add another test, testing another aspect of that
feature. And you will keep iterating until you are
complete, at which time you are good to submit your code and
move on to the next task. So at Google I/O this year, we
realized there was somewhat of a test writing crisis.
And because there’s so many tools available, it is not
always clear which one to use. And each of these tools all have
their own different styles and APIs and paradigms for the same
concepts that exist on Android. And the problem with this is
that tests written in different levels are not
portable across levels, it is stuck in the environment that
you have written on. So this year, in Google I/O, we
announced AndroidX Test, which brings testing in as part of
the tool chain, as part of Jetpack. It includes some of
the existing libraries you’ve used before, some new APIs, and full
Kotlin support, which allows you to write beautiful and concise
tests. It is available on and off device.
Well, last week, AndroidX Test moved out of beta into its 1.0
release. And it is also, as of last week, fully open-sourced. So we
look forward to welcoming your contributions.
All of the documentation is being revamped
to show you the new styles of APIs. So please go and check
that out. Let’s take a look inside.
The first module that we pulled across was the existing JUnit4
support, the runner and the rules that you may have used before.
We’ve already added a new module, which we call core.
This includes some new APIs: ApplicationProvider, as its name
suggests, is a quick and easy way to get hold of
the application context. ActivityScenario is a
brand-new API that provides coarse- and fine-grained control over
activities, and FragmentScenario provides a set of testing features for
fragments. We also brought Espresso into the AndroidX
family. It is a set of
view-matching APIs and actions, allowing you to match
and interact with UI elements.
It also includes some other things like the ability to
capture intents sent to the system. And finally, we’ve also released
some truth Android extensions. And truth is Google’s open
source, fluent testing assertions library.
And we brought a bunch of subjects for Android
components, which allow you to test your Android objects beautifully and
concisely. And those of you who have been
using Robolectric will know that we had a version, 4.0, in
beta for a while. And, as of last week, we did a
simultaneous release that has now gone final.
Robolectric 4 fully supports all of the unified APIs that are
in AndroidX test, as well as a number of its own new features and improvements.
Okay. So I would like to welcome Yuki on stage, who will
give a deeper dive into some of the APIs available. [ Applause ]
SPEAKER: Thanks. Hi, everyone. So let me introduce our new API.
This starts with ApplicationProvider, a new way
of accessing context from your tests. When you work
on Android testing, you have to handle two different context
objects. The first one comes from the application under
test, and the second one comes from the instrumentation APK,
where your test code lives.
So with today’s library, we have two different methods to access
these context objects, and this makes your test code harder to
understand: getTargetContext means the context from your
application under test, while getContext means the context from your
instrumentation APK. It is not obvious which one to
use for your tests. So, in our new API, we hide the
instrumentation context from the public API, and
ApplicationProvider provides the application context in the
type of your application class. And let’s take a look at the
example. So here, let’s say we have a
test for the LocationTrackerActivity. And in the setup
method, we get the target context in the old-fashioned
way, and we type cast it to LocationTrackerApplication, so
that we can introduce a mock object for testing, and on the
second line, we set a mock object on it. This code is simple,
and it actually works. But you could use the wrong context,
face a runtime error, and end up wasting your time on debugging.
Now, with the new way, ApplicationProvider
provides your context already typed as your application class.
So you can do exactly the same thing, but there is less chance
for confusion. Okay.
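In code, the before and after look roughly like this (a sketch; the application class follows the talk’s example, and the mock setter and fake are hypothetical names):

```kotlin
import androidx.test.core.app.ApplicationProvider

// Old way: fetch the target context, then cast it yourself.
// val app = InstrumentationRegistry.getTargetContext()
//     .applicationContext as LocationTrackerApplication

// New way: one unambiguous call, already typed to your application class.
val app = ApplicationProvider
    .getApplicationContext<LocationTrackerApplication>()
app.locationProvider = FakeLocationProvider()  // hypothetical test double
```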
And let me move on to the more complicated stuff, activity
scenario. Actually, before we dive
into it, I have a few questions for you. How many of you have
written your own activity, and handled the lifecycle
transitions by yourself? Raise your hands. And how
many of you have shipped your activity with a bug related to
lifecycle? Many of you. And who didn’t write
the tests for that?
Cool. Well, yeah, I see some hands still up.
And to be honest, I do, too. And I agree.
Writing tests for activity lifecycle transitions is pretty
hard, and there was no good API in the testing libraries until now.
So that’s why our team sought a solution, and we
developed ActivityScenario, which you can use for driving
your activity to an arbitrary lifecycle state for testing.
So let’s visit the activity lifecycle states first.
The CREATED state is where the activity is created and
instantiated, but it is not visible to users yet.
An activity can also be in the CREATED state while it is
running in the background. The STARTED state is where
the activity is created and started; it is partially
visible to the users, but it is not the foreground activity.
Activities running in picture-in-picture mode are also in this
state. And the RESUMED state is
where the activity is visible to users and running in
the foreground. And under the framework, you can
change the lifecycle state at any time in response to user interactions,
so the activity has to handle those transitions properly for a
good user experience, otherwise you see some bugs. And
ActivityScenario provides a method, moveToState, with which you can drive
those transitions in testing. Let’s look at the testing code. We have a
test, LocationTrackerActivity, and here we want to verify that
the location listener is properly unregistered from the system when
the activity moves to the CREATED state.
First, we have to start the activity. The launch method
takes your activity class, starts the activity, and waits
for it until it reaches the RESUMED state.
And moveToState initiates the transition, and moves the life
cycle state to the CREATED state. The
ActivityScenario methods work as blocking calls. So
after this method call, it is guaranteed that the activity’s life
cycle state is CREATED. And then you can inspect your
activity’s internal state by calling the onActivity method. Yes,
that is easy, and now we have our API.
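That test, sketched in code (the activity and the state-inspection property are hypothetical names following the talk’s example):

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.test.core.app.ActivityScenario
import org.junit.Test

class LocationTrackerActivityTest {
    @Test
    fun listenerIsUnregisteredWhenBackgrounded() {
        // launch() starts the activity and blocks until it is RESUMED.
        val scenario = ActivityScenario.launch(LocationTrackerActivity::class.java)
        // moveToState() blocks until the transition has completed.
        scenario.moveToState(Lifecycle.State.CREATED)
        // onActivity() runs the lambda on the main thread with the activity.
        scenario.onActivity { activity ->
            assertThat(activity.isListeningToLocation).isFalse()
        }
    }
}
```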
And you can also use ActivityScenario for testing the
re-creation of your activity. Activity
re-creation happens when your activity has been running in the
background for a long time and you come back to it
later. Your activity has to save its internal state to a saved
instance state bundle before it is destroyed, otherwise you will
lose that state. We have a recreate method you can use in the
test scenario. This is the example code.
So here, in this test, we want to make sure that the input
text is restored properly after the activity is re-created after
destruction. So first, we have some test
data, like “test user”, as our input. And again, we launch
the activity. And then we fill in the text box by
using the Espresso library. ActivityScenario works
nicely with Espresso. And we call recreate, which destroys
the activity, re-creates the instance, and waits for the
activity to be in the resumed state. And using Espresso
again, we can make sure that the text is there as we expected.
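As a sketch, that re-creation test might look like this; the activity class and view ID are made-up names, and it needs an Android test environment to run:

```kotlin
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import org.junit.Test

class RecreationTest {
    @Test
    fun inputTextIsRestoredAfterRecreation() {
        val input = "test user"
        // ActivityScenario implements Closeable, so use { } cleans up.
        ActivityScenario.launch(LoginActivity::class.java).use { scenario ->
            // Fill the text box with Espresso.
            onView(withId(R.id.user_name)).perform(typeText(input))
            // recreate() destroys and re-creates the activity, then
            // waits until it is resumed again.
            scenario.recreate()
            // The text should have been restored from the saved state.
            onView(withId(R.id.user_name)).check(matches(withText(input)))
        }
    }
}
```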
Yes, it is also simple. Okay. And finally, I would like
to show you one more example from the Truth extension.
IntentSubject. By using IntentSubject, you can verify your Intent values, and it produces
really good, readable error messages if a failure
happens. In this testing example,
this time, we want to make
sure that the data intent has an expected contact name in its extras.
So first, we create the data intent with a bundle. And then we
assert with three lines: one is checking that the intent has the
expected action, the second is the type, and the third is the extra
bundle. If a value does not meet the expectation,
you see this error. And you can immediately know that, in this
example, the intent action is not what you expected.
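In code, the three assertions described might look like this. This is a sketch using the androidx.test Truth extensions; the action, type, and extra key are invented for illustration:

```kotlin
import android.content.Intent
import androidx.test.ext.truth.content.IntentSubject.assertThat

val intent = Intent("com.example.action.PICK_CONTACT").apply {
    type = "text/plain"
    putExtra("contact_name", "Ada")
}

// One assertion per line: action, type, then the extras bundle.
assertThat(intent).hasAction("com.example.action.PICK_CONTACT")
assertThat(intent).hasType("text/plain")
assertThat(intent).extras().containsKey("contact_name")
```

If an assertion fails, the message names the expected and actual value, which is the readable error output mentioned above.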
And there are a lot more components that come with the 1.0 release;
I cannot show you everything today. For example, we have
more assertions, and we also have builders where you
can create your test data easily, and also scenarios for activities and fragments. You can
look at the documentation to see more. I hope you try it out after the talk.
Okay, so this is our solution for the test writing crisis.
So with the unified API, you no longer have to consider whether you have
to write an instrumentation test, because you can now just write
an Android test, and that test runs in both runtime
environments nicely. With the unified API, you can
focus on what to test, and you can forget about where and how.
And to ensure the consistency of the behavior of our API, we have
verification tests: we run the same tests locally with Robolectric and on a bunch
of devices, from API 15 to the latest version.
And let’s go back to the workflow that we showed you
earlier in this talk. So we can execute test-driven
development much more effectively using device-agnostic
tests written in the unified API.
As a recommendation, we recommend that you run your
tests with Robolectric until the code is
ready to submit, and then run the same tests on a battery of
devices before you submit, to maximize your
confidence. And also, you can run the same
tests as continuous integration tests against your binaries.
With the upcoming Nitrogen toolchain, you can set up such a
configuration easily. If you want to know more about
Project Nitrogen, we have a session tomorrow, and we highly
recommend that you attend it. Thank you very much for
listening, and happy testing.
[ Applause ]
Coming up next: Optimizing User Flows through Login by Sean
McQuillan, Jeremy Orlow. SPEAKER: Hey, everybody. With
that interstitial music, the point is to make you really
happy for when the next presentation starts. We will do
lightning talks, which we have not done before; we will see how
well it works. The way it is supposed to work, we are going
to roll from one into the other. Each speaker has 5 minutes; when
time is up, I will call it. In the back, they are going to
press a magic button that makes this sound.
So that’s how we know it is time for the next talk. And
otherwise, I have no purpose being here. I will talk about
the next speakers, optimizing user flows through login, and I
will start my timer. Thunderous applause for the
lightning talk, please! [ Applause ].
SPEAKER: Welcome to lightning talks. I’m excited to talk to
you today about how to welcome your users back to your Android app
when they get a new phone. Why is this important?
On Android, we get a new phone every 2 or 3 years and
we reinstall the apps that we had installed, and all of your
users do this as well. If your users buy a new phone every two years,
half of them are going to get a new phone this year. And
if you don’t do anything at all in the application, you will
give them a cold welcome when they come back. You will show
them an email and a password form as if they
have never used the app before, and you will disrupt the flow of the
app. You want to welcome them back to the application and
create a continuous user experience that jumps from their
old device to the new device, so you don’t have friction or
retention problems for these users that are switching devices
and using your application. There are a couple of products
from Google that help you do this. So one of them is
Google Sign-In: it is a button you press, and you log in. It is amazing
for the flow because it makes it simple to do the log-in. It is
good for activation because it makes it easier to log into
the app, and it is good for reactivating users
because they only have to click a button. On Android, you can
even configure it so they don’t have to click the button on the
new device. Google sign-in works on iOS,
Android, as well as the web. So you can use it everywhere, and
it is a great way to add federated log-in to your
application. And another feature on Android
is Smart Lock for Passwords. It is a password manager that stores
user names, passwords, and federated credentials in the
cloud, once the user confirms they want to save the password
after they log in. So on the old phone, they save the
password; on the new device, in the application, you can give
them a warm welcome and say, I know who you are, do you want to
log in with this account? And it creates a seamless
experience and logs the user into the application. One of our
partners, Netflix, who is here today, used this, and they
found that they had 20 percent fewer support requests. If you
think about this, every time a user contacts you to get a user
name and password fixed, there are five that didn’t, and they
will just become a retention problem and may never come back.
This is a great way to improve your business metrics.
And Autofill is another feature, launched in Android
O, that works like Smart Lock. It is an Autofill system: it autofills
TextViews the way you would think, and one of the most
common uses is user names and passwords. After the user logs in on
the old device, they can choose to save the credentials to the Autofill
service, which goes to the same data store as Smart Lock. On the
new device, it autofills. There are a few ways to get the
application ready for Autofill. One is to set up a digital asset
link that links the web and the Android app. This is great: the
password manager is pluggable, and all of them support this, so
then the user can log in on the web. If they are using the same
password manager on the Android device, they are going to
get the credentials transferred. And then the password is saved
and you get the seamless experience in the Android
application. And another thing you should do for Autofill is
set up autofill hints. Tell the service which field is the user
name and which is the password; that way the service does
not have to guess and maybe get it wrong, which is probably the worst
experience you can give your users. And the last thing we
are going to talk about is auto back up, I will hand it over to
Jeremy. SPEAKER: It is a great way to
also provide continuity to your users.
And so even if your app syncs to the cloud, there are still
device specific settings you can back up, and no user who has
been using your app for a year wants to see a tutorial on how
to launch an app on the new phone. So with Auto Backup, when
the phone is idle, we back up the data to the cloud; the data is stored
securely, and you can have up to 25MB. If you go over that, the
backups fail. On the new phone, when the user restores
the app, we restore the data before the app runs for the
first time, and the user picks up where they left off without
any friction or churn. And you can also include and
exclude specific files, so you can exclude things like OAuth tokens, and
keep users engaged when they switch phones. We will be back
in the back. [Gong ringing].
SPEAKER: Be around if you have any questions.
Coming up next: Kotlin + Watch Faces by Jeremy Walker.
SPEAKER: Hi, I’m Jeremy Walker, a developer platform engineer
here at Google. I wanted to talk to you about how I use
Kotlin to make watch face creation much easier on Wear OS.
So watch faces are kind of hard right now: you have to write
about 600-plus lines of code. It is not like a nice app you do on
Android or iOS, where you declare the UI in an XML
format; you have to manually paint everything, and then you
have to include a bunch of code so that when you are in ambient mode,
you are not hurting the battery life. I was trying to think of
how to improve this. Last year, we launched support for
Kotlin. So I thought, how can I use Kotlin to make this easier? I
converted the whole thing to Kotlin, which reduced the lines of
code a lot. But I found something else in Kotlin that
made it even cooler, and that was something called a DSL.
So what is a Kotlin DSL? Well, the best way to understand
what a domain-specific language is, is to compare it to a regular
programming language. Because, again, “domain-specific
language” did not really help me understand what it meant. If you
compare it to a general programming language, Kotlin or Java, you
have a bunch of keywords and classes, and you can
make a big app. On the other side is a DSL that focuses on one
task or domain. It forgoes a lot of functionality and allows
you to do that specific task. You have probably used external DSLs
and not realized it. For example, SQL, for manipulating
databases: that is a DSL. Regular expressions, for manipulating
strings, are another. They have an independent syntax and reduced
functionality; you don’t have methods or classes to make them
work. You are not going to write a full-on application in
them, either, at least I hope not. And for SQL, the first key
word indicates, as a verb, what you are going to do: you are
going to select, or update. The disadvantage is that you have to put
it in a string literal. You have to pray and hope that you
spelled everything right, because at run time it fails and you have
to see what is going on. And a Kotlin DSL extracts that out of
the string and puts it in the code, so you can have type
checking, and you can have code hints and all of that good stuff that
comes with your IDE. So now that you have an idea of
what it is, I would like to show you two structures really quickly.
One is called chained method calls, and the other is nesting of
lambdas; I used the latter. So
recognizing a DSL is really subjective: you know it when
you see it. So let’s see some. So this is a Kotlin DSL for SQL.
You can probably look at it, and if you know SQL, you can
understand it right away. And it is all words, like slice,
then select all, group by, order by, and limit. This is
all type-checked, and you get code hinting so you don’t spell
anything wrong. This is great and understandable; this is a DSL.
The DSL for the watch face that I liked is the nested lambda
structure. So this is the code to create a watch face; you can see it
right there, create a watch face is a verb. If you look at the
structure, you may not know anything about making a watch
face, but you can probably understand what is going on
right away. Analog watch face: it is not digital, it has hands.
Colors; the dimensions for the hands, the hour, the minute, the
second; I understand that. And a watch face background image.
It is very declarative; you can understand
it right away. This is type-checked, and I get code hints
and all of that good stuff. In the end, I get this nice little
watch face with almost no work. And the 600 lines? They did not go away.
I put them in a helper class and combined them with the more
important class that interprets the DSL. But if you are making a
watch face, you only need to know about the DSL.
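The nested-lambda structure is easy to reproduce in plain Kotlin. The speaker's actual watch face DSL lives in the codelab; the following is just a minimal, self-contained sketch of the technique, with invented names:

```kotlin
// A tiny nested-lambda DSL in the spirit of the watch face example.
// All names here are invented for illustration.
class HandDimensions {
    var hour = 0
    var minute = 0
    var second = 0
}

class AnalogWatchFace {
    var backgroundColor: String = "black"
    val hands = HandDimensions()

    // Nesting: the lambda's receiver is the inner builder, so the
    // caller can set its properties directly inside the braces.
    fun handDimensions(block: HandDimensions.() -> Unit) = hands.block()
}

// The top-level "verb" of the DSL.
fun analogWatchFace(block: AnalogWatchFace.() -> Unit): AnalogWatchFace =
    AnalogWatchFace().apply(block)

val face = analogWatchFace {
    backgroundColor = "navy"
    handDimensions {
        hour = 10
        minute = 15
        second = 20
    }
}
```

Because the lambdas have receivers, everything inside the braces is type-checked and the IDE can offer completion, which is exactly the advantage over a stringly-typed format.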
So what is next? This is kind of an
experiment that I did as a codelab; check it out, it will
take you 15 minutes to make a new watch face. Search for
Google codelabs; it is under Wear OS, and search under Kotlin.
You can see how to make a watch face, and you can see the source
code for how I made the transition between the DSL and
interpreting it into a nice little watch face. And more
importantly, hopefully now you are a little bit interested in DSLs:
you can use one in a project to make it better, or you can go
and use something like the DSL for SQL, for testing, or for HTML
in Kotlin, all type-checked. So thank you
for letting me talk, and hopefully I have gotten you a
little bit interested in DSLs. Thank you. [ Applause ]
Coming up next: Push Notifications: An India
Perspective by Amrit Sanjeev. SPEAKER: Well done, no gong.
Next up, push notifications. SPEAKER: I’m Amrit, a developer
advocate at Google. I want to talk about push notifications
and how app developers in the region are getting delivery rates
higher. I want to start with a stat: in India, we have
314 million-plus smartphone users, growing at a 16 percent
year-on-year growth rate. Your app has a chance to find users and
great business in the region, but it comes with a lot of
competition. There’s a need for better engagement as well
as delighting your users. And push notification is one of the
mechanisms a lot of developers use to engage users. And
when it comes to push delivery rates, some things in India affect the
delivery. Until recently, data plans were expensive, and people
have the habit of turning off mobile data when they are not
using the internet. So when they go out of a WiFi zone, they
will turn the mobile data off, and that changes the behavior of
the time-to-live parameter for the push notification: if you have a
short one and they turn the data off during that period, the
notification is never delivered. Recently, new carriers have
come in and reduced the data costs drastically, and now those
behaviors are changing, but not enough for you to ignore that
change. Secondly, there are a lot of devices with
custom behaviors, or custom Android builds, where,
because of battery optimization, they kill off the
notification services. They sometimes force close the apps,
which prevents the app from ever getting the notification
delivered to it. That is another problem.
And then for your time-to-live parameter, if you
actually increase it, say from four to eight hours, you see a
drastic improvement, or a large improvement, in notification
delivery, because you might catch the user back on WiFi. So that
makes it interesting. And as in any other region, notification
frequency also matters: people get irritated and turn off
notifications if you send too many. So collapse keys
and things like that are very important here.
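With the FCM HTTP v1 API, both knobs live in the per-message android block; an eight-hour time-to-live with a collapse key might look like this (the token and key values are placeholders):

```json
{
  "message": {
    "token": "DEVICE_REGISTRATION_TOKEN",
    "android": {
      "ttl": "28800s",
      "collapse_key": "daily_promo"
    },
    "notification": {
      "title": "Sale starts soon",
      "body": "Open the app for details"
    }
  }
}
```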
And I want to talk about three design patterns that developers
are actually trying out for common use cases to improve
the notification delivery. One is bucketing the notifications,
depending on your understanding of your use case and which
buckets work better for you. The mechanism developed here:
there’s a periodic job that runs, refreshes the FCM token, and sends
it to the server; when the server gets the data, it maps
the FCM token to the update time. When you send out a
campaign, you split it based on recency. You put
in the first batch the tokens updated in the last 7 days, the next
batch is the tokens updated from 7 to 31 days, and the third is 31 days
and above. And what this allows you to do is tune the parameters for each
of these buckets to improve notification delivery; you have a
better understanding of which users get the notification and
might react to your notification better. And because you have a
lot of tokens which are not updated frequently, or users who
never finish that update cycle, you don’t end up counting
them in your notification delivery and miscalculating
the effectiveness of push notifications.
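Server-side, the bucketing just described is a simple grouping step. Here is a minimal sketch: the 7-day and 31-day thresholds follow the talk, while the type and function names are invented:

```kotlin
// Group FCM registration tokens by how recently they were refreshed:
// under 7 days, 7 to 31 days, and over 31 days, as described above.
data class TokenRecord(val token: String, val daysSinceRefresh: Int)

fun bucketByRecency(records: List<TokenRecord>): Map<String, List<String>> =
    records.groupBy(
        keySelector = { record ->
            when {
                record.daysSinceRefresh < 7 -> "fresh"    // first batch
                record.daysSinceRefresh <= 31 -> "recent" // second batch
                else -> "stale"                           // third batch
            }
        },
        valueTransform = { it.token }
    )

val buckets = bucketByRecency(
    listOf(
        TokenRecord("token-a", 2),
        TokenRecord("token-b", 10),
        TokenRecord("token-c", 90)
    )
)
```

Each bucket can then get its own TTL and retry parameters, and the stale bucket can be excluded when calculating delivery effectiveness.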
The second one is basically where you have, again, a job
that runs. It checks whether a notification was
delivered to the device since the last run or not. If not, it
will go to a server and fetch the notification to show to
the user. And you have to ensure that this
does not wake up the device. This is not the most efficient
way of doing things, but it still works. And the last one I
will call out is data messages for scheduling scenarios. In the case
where you want to send a message, say there’s a sale that is going to
start at 12:00 and you want to show the message at 12:00, when your
users are 20 percent more active on the service. Instead of
sending at 12:00 and hoping that the users
receive it, developers send it two days before as a data
message. So the app will receive it, and then, based on
the schedule that is within the data message, it will schedule
a job to come up at exactly the right time to display the
message to the user. SPEAKER: That is time’s up.
SPEAKER: [Gong]. SPEAKER: Coming up next:
Understand the impact of Generic System Images (GSI) by Hung-ying
Tyan. SPEAKER: Hello, I’m Hung-ying,
I’m talking about GSI, the Generic System Image.
It is the purest form of the Android framework that we can build from
AOSP. By purest, I mean it does not have any device maker or carrier
customization. You may want to ask: AOSP has been
there for 10 years; why does GSI become a thing right now?
It is because GSI is the central piece in Treble compliance. So
remember Project Treble: we tackled the Android updatability problem in Treble. We architected a boundary between the
framework and the hardware-specific vendor implementation.
With this, we no longer need to update the vendor implementation along with the Android
framework. This reduces the time for a
device maker to update their Android framework.
Now, to make sure that the boundary is not altered by
device makers at will, we require every Treble-certified
device to pass a series of CTS and VTS tests with the Android
framework replaced by GSI. So with all of these powerful tools
in place, starting about a year ago, we started to see the momentum of
faster Android adoption this year.
So let’s take a look at the Android beta program.
At Google I/O last year, only Nexus and Pixel phones were
available for you to try out Android Oreo. And fast forward
one year later, at Google I/O this year, it was the first time
we had non-Google phones join the beta lineup. Not
one, not two: seven device makers joined us.
So the progress was phenomenal. We know we are on the right
track, and we will continue to push for it.
Now, despite the fact that GSI is the central piece in Treble
compliance, we feel it has a lot more potential than that.
So we set up a goal to make GSI more accessible and useful,
not just for device makers, but also for the general public,
including app developers like you, and even consumers.
An important first step toward that goal is to make GSI
available in AOSP. So, for this, we have published the Pie GSI in AOSP, and
you can download and build the Pie GSI today. We are exploring
ways to make future GSIs available earlier than the
release of the next Android version.
So you will be able to try out the next Android version earlier
through GSI. And, at the same time, we can
get early feedback from you. So the benefit is mutual.
So please stay tuned for our further announcements on this.
Finally, we understand that trying out GSI can be cumbersome
sometimes. We are looking into different
ways to improve the process. Right now, we have an idea for
trying out GSI without actually flashing GSI onto the device.
But I’m not going into detail here. We do have a working
early prototype at our demo booth.
In addition to that, we have also prepared a set of devices, all
running the same Pie GSI. So please do check out
our demos at the Android platform sandbox.
SPEAKER: Thank you. SPEAKER: Coming up next: Building Reliable Apps When Connectivity
Is Unreliable by Naheed Vora, Ryan Hamilton.
SPEAKER: Hi, I’m Naheed, a product manager on Android connectivity.
SPEAKER: I’m Ryan Hamilton, a manager on the Chrome networking
team. SPEAKER: So we will start with this: how many of
you have noticed, in the last few years, the signal is
flaky and you turn off the WiFi? Connectivity keeps changing
underneath you, and for the application, trying to
understand what is going on underneath is really
complicated. And, to create a good
experience, we need to hide this complexity from the user. We
don’t want users to go and turn off their WiFi in order to use
the app. It could be any app that you can think of that the
user is using at a given point in time.
So who should care? If you are building a
user-facing application; maybe your app depends on mobility,
maybe it is Google Maps, or something like that; or you are
building a gaming application or a communication app. Anytime you
know you need to react really, really fast, irrespective of
what is happening in connectivity, this is
critical. It is important that you address this point in your
application. And one thing I would add here
is that the user dropoff rate, whether the delays are
1-3 seconds or 10 seconds, may vary by market. If you are
in an emerging market like India, users may be resilient
to delays. If you are in the United States, if you go beyond
three seconds, you will lose the user.
So what I have here is a demo. And it is a Google assistant
demo, what we are doing is we are using the technology and we
will talk about that a little bit later. On the left-hand
side, there is no optimization on the phone. On the right hand
side, there’s an optimization, and we are doing a Google
assistant query and the user is walking out of the building.
The phone is connected to WiFi, as the user walks out, the user
loses connectivity. Let’s see what happens.
Trying to click. This is interesting.
Okay, in the interest of time, this is what happens.
The one on the left-hand side, it takes... is it possible?
Awesome. Okay. So we will make a query,
and what it is doing is asking for directions to Oxford Circus in
London. And you can see on the right, the response came back
already. The left one is still waiting and waiting. The
right one is like, eh, I don’t have anything to do, I’m out.
The left one is like, ah, finally. This is a lot from a
reaction time perspective. You want to have this resilience in
your app, so it can reconnect fast irrespective of what is
happening in the connectivity stack.
So let’s get out of here; that is a separate problem.
Okay, so what it is using, what you need, is a protocol
that can handle this mobility, where the device is
moving between networks. And that’s where Cronet
and QUIC come into play: the application layer and the
protocols that you are using can handle this mobility. So I will
hand it over to Ryan to talk about Cronet and QUIC.
SPEAKER: I will do this quickly. We have taken the network stack
that is inside of the Chrome web browser and made it available as
a library for apps to use. We call it Cronet: Chrome network,
Cronet. Since it comes from Chrome, it has the reliability
and interoperability that you would expect. It is used by a
variety of Google apps, search, YouTube, maps, and it is available on
iOS as well. Coming soon, there is an option to pull it from the Play
Store, so you will not need to bundle it in the app. You can
get automatic updates and security and performance
improvements. Part of the coolness of Cronet
is that it comes with support for QUIC, a new transport we
have implemented in user space on top of UDP. And it does
some things that TCP can’t. It is a modern transport that provides
new features and better performance. And with QUIC
connection migration, you can switch back and forth between
WiFi and cellular. It has a bunch of awesome performance
highlights, we release every six weeks, and it comes with state-of-the-art
security. SPEAKER: And the best part is
that the platform optimizations are built in; you do not have to
do anything. Every time there is an Android release, it will take
advantage of the latest and greatest that Android has to
offer. For any more questions, we have
a booth outside. You can talk to us, we are happy to help. Thank you.
SPEAKER: Thanks. Coming up next: Quick Ways to Ensure App Compatibility with
Android Autofill by Felipe Leme. SPEAKER: Next talk, as a bonus,
this speaker gets 10 minutes on Autofill.
SPEAKER: Hello, everybody. So my name is Felipe Leme, and I’m talking
about what Android Autofill is and why it is important for you
to optimize your apps for it. It is a new feature that we
introduced last year in the Android Oreo release, and the
goal is to provide a safe and fast way for password managers
to do their job. To use Autofill, you need to select an Autofill service, which could be
provided by Google, or by third-party apps, like
1Password and many others. Last time I checked, there were about
30 different apps on the Play Store that provide an Autofill
service. And one key decision we made
when we designed the API is that it should work out of the box
with existing apps. In other words, as an app developer, you
do not have to do anything to support it; if you don’t do anything, the
password manager will figure out what to do for you.
But just because you don’t need to change anything, it doesn’t mean
you shouldn’t. After all, it is your app that is on the line,
and you don’t want to depend on a third-party app you have no
control over. And the tip I want to give you is: you should not
rely on the Autofill service’s heuristics. If you want to make
sure that password managers do their job properly, you can
do it quickly and easily by annotating your XML views.
The first thing you should do is make sure that you annotate
what should be autofilled, and you do that using the
android:autofillHints attribute. So
let’s say you have a user name and password login screen. If
you don’t annotate it for Autofill, and the layout XML is
using “username” and “password” resource IDs, it is going to
work fine. When the password manager inspects the view structure, it will say,
there’s a “username” string here and a “password” there, so that must be
the user name and the password. But say your app is in Portuguese.
I grew up in Brazil, and this is common:
my fields are “usuario” and “senha”.
So the password manager gets this view structure, and it has no idea
what the fields are. So it cannot provide the Autofill data, and the user
has to manually type in the user name and password, which is exactly the
problem we want to avoid with password managers. So you annotate
the fields: for the user name you use the username hint, and for the password you use password.
And in Java, we provide constants for the
user name and password hints, and others, like telephone number, etc.
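In layout XML, this is one attribute per field; the view IDs here are placeholders:

```xml
<!-- Login fields annotated so any password manager can fill them. -->
<EditText
    android:id="@+id/username_field"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:autofillHints="username" />

<EditText
    android:id="@+id/password_field"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="textPassword"
    android:autofillHints="password" />
```

From Java or Kotlin, the same hints are available as constants such as View.AUTOFILL_HINT_USERNAME and View.AUTOFILL_HINT_PASSWORD.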
And there is another side to this: you should annotate what should
not be autofilled, using the android:importantForAutofill attribute.
To give another example: while developing the API last year, I
wanted to send an SMS to my friend, and then I got a pop-up
with my own telephone number as a recipient of the SMS.
When you are composing an SMS, or filling a spreadsheet, you want to
type something dynamically; you do not want to use a predefined
value. This was annoying for me, and I knew the API;
it is confusing for a user that doesn’t. So you can
disable Autofill for the field, or you can do it on the
whole activity layout by annotating the root view with
importantForAutofill set to noExcludeDescendants.
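As a sketch, for an SMS-compose screen, that annotation on the root view might look like this (the layout and IDs are made up):

```xml
<!-- Nothing on this screen should be autofilled, including children. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:importantForAutofill="noExcludeDescendants">

    <EditText
        android:id="@+id/recipient"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>
```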
It will not trigger the password manager when it doesn’t make sense, and
it will improve the experience. And say you are using these tags,
and you are so excited about the change that you skip the rest of the
presentation, and the summit, and go back to your laptop to change
your apps. Don’t do that. If you are going to make changes, how are
you going to test them? You need to select a service to
see what data is provided by the service, and you might say,
I will choose whatever comes with my device, Google Autofill,
or install a third-party password manager. You can do
that, but then you go back to that problem of relying on the
password manager’s heuristics: it can provide data without your
changes, and you cannot see the effect of making
these changes. Instead, you can use our standard Autofill services;
we provide some samples on GitHub. We provide a couple of
Autofill sample implementations in the project. One is the
BasicService, which is a service that only understands autofill
hints. If you are using autofill hints in your app, you will
see something like this pop up, because for this
field, the user name, you will see the options. And on
the other screens, the BasicService will not show anything.
And it is a pretty simple service, less than 200 lines of
code; if you are interested in seeing how it works, you can do
that as well. And the DebugService will try to fill anything,
based on your resource IDs. This is useful to test what would
happen if you are not tagging your views.
So back to my SMS app: you would see something like this, and you
click on the “to” field. And I’m just making sure that... okay.
So we will make sure that I’m seeing, I’m looking at the
screen. So when you click on the “to” field, you
are going to see this pop-up with some user data, which is
“to”, which is just the name of the resource ID. You can see
how confusing it is for the user if they see something like that
in the app. And then, the final tip that I
would like to give is: you should also make sure that the app
works when the Autofill service requires authentication. So
what does that mean? Most password managers don’t
return the data right away; it is encrypted somehow. So they
will ask the user to authenticate, with a master password
or a fingerprint, to unlock the data. The way the framework
works, we launch an activity that belongs to the service on top of
the app’s stack. The user does the authentication, and when it
is finished, your activity resumes.
So for most apps, it is not a problem. But if your app is doing
something when it is resumed, like revalidating or
rebuilding the view hierarchy, you might break
the Autofill flow. So to test this, to make sure
that the app works in this scenario, you need an Autofill
service that requires a permpermutation, the
debugservice. And when you launch the
debugservice, you need to launch the Autofill and click and
select the debugservice, you will see the debugservice is one
of the options. We provided that example. And then you can
select this authenticate response option on the top.
When you go back to the app, when you trigger out of view,
instead of getting the data, and unlock it, you will have this
response toa thent caught. So icate. So when the user
attempts to — it will ask for yes or no. So when you click on
yes — in this screenshot, the sample app has re-created a
new view hierarchy. When you click yes, you will return to your
activity, and now, from the framework's point of view, all of
the views are different. So it doesn't know that it has the
unlocked data for your activity; your username field has
a different ID. It is going to show the response again.
When there's a click there, it will launch the authentication,
and then it goes back and asks again. So the user is now in
this Groundhog Day loop where it never goes forward, which is
frustrating. And the solution for this case is to make sure
that you don't re-create the view hierarchy. So the user
attempts to authenticate, we launch the authentication
activity, and now when we go back, we will see the unlocked
data; the user selects the data, we Autofill,
and everybody is going to be happy. These are the main tips
I would like to give from this session. Make sure that you
annotate your views, test with a standard service,
and test the workflow with user authentication. I have links
to other talks about the Autofill service as well that
explain all of the apps that I'm using. That's it. Thank you for coming, and I'm supposed
to say that the Q&A will be outside. I have 25 seconds before I'm
going to be gonged out. SPEAKER: Do you want to
say anything else, or are you done?
SPEAKER: We are done, I guess. SPEAKER: Thank you.
SPEAKER: Thank you to everybody that played the game today.
And we will assume that the users are able to touch the
device, or that they can see what is on the screen. That turns out
not to be true. So these assumptions force people with
disabilities to overcome obstacles that have nothing to do with
what they are interested in or what they are actually able
to do. And so the way Android does
accessibility is it depends on the entire ecosystem. We
work directly on Android, and we build a lot of the core of
it. But we really depend on the entire ecosystem to make this
work. So the accessibility framework is able to handle a
lot of low-level features, like magnification; you are not able
to build that into every app, and if you did, it would be a nightmare
experience. So that is handled down close to the graphics
level, when you want to magnify things. Accessibility
developers who work on services, TalkBack, select to speak, they
build these plugin services and the idea is that they generally
have a particular type of user in mind, they will go through
and really understand what those users need, what their
challenges are, and try to build something that
works for them. But they cannot do everything on
their own; they need to get information from apps. For somebody who
is blind, you need to be able to speak what is on the screen. We
need to find out from the apps what is going on in the UI,
and that is where we really need the help of the ecosystem and all developers.
So as a general model, we have these plugin services, like
switch access, voice access, and Braille, they are able to query
an API for the accessibility service that tells them what windows
are on the screen and the views inside the windows, what text is
where, what actions are available on each view.
And then they can present this to the user in whatever way they
need to. Maybe the user does not touch the screen at all; they say
whether to perform this control, that control, that gesture. And
we need the apps to provide the right semantics to do that.
I want to try to make this as simple as possible. So,
a few things to keep in mind. One is just to make sure that
the information that you are presenting to all your users is
as visible as possible. Luminance contrast is
the single most important thing you can do in that regard:
make sure you are not using gray on slightly lighter gray to
convey information. That can look cool, but it is difficult
for people to use. And the second is to prefer controls to
be big and simple.
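To make "big" concrete: Android specifies touch-target sizes in density-independent pixels (dp), and a dp guideline translates to physical pixels by multiplying by the screen's density scale factor. Here is a minimal sketch of that arithmetic; the class and method names are illustrative, not a framework API (on a device, the scale factor comes from the display metrics).

```java
// Sketch: converting a dp touch-target guideline to physical pixels
// for a given screen density. Density 1.0 = mdpi, 2.0 = xhdpi, 3.0 = xxhdpi.
class TouchTarget {
    static final int MIN_TARGET_DP = 48; // the uniform guideline from the talk

    // px = dp * density scale factor
    static int dpToPx(int dp, float density) {
        return Math.round(dp * density);
    }

    // Does a control's size in pixels meet the minimum guideline?
    static boolean meetsGuideline(int widthPx, int heightPx, float density) {
        int min = dpToPx(MIN_TARGET_DP, density);
        return widthPx >= min && heightPx >= min;
    }
}
```

So on an xhdpi screen (density 2.0), 48dp means a control at least 96 pixels on a side.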
Sometimes you want to cram in a bunch of stuff to give as
many options as possible to users, but figuring
out what is important for your users can simplify things for
everybody, including somebody who does not have perfect dexterity.
48 by 48 density-independent pixels is
a guideline we can throw out there, to have a uniform
standard throughout the ecosystem. You can set that as a minimum
height and width. Next is to label your stuff.
If you are conveying information visually, if you have a button,
just have a label and the labels should be precise and concise.
So the users who cannot see the screen can find out what they
do. And so people ask when in the
project should I consider accessibility? Early and often
is the short answer. The longer answer is, in each of these
different phases of your project, there is something that
you can do. So in design, generally the
things that I mentioned, keeping the contrast, control size,
labels, that will get you a long way. If you are using standard
UI patterns. But the more innovation you are
doing on your user interface, the further away you get from
the stuff we built into the Android framework. At that
point, it is really helpful if you can think broadly about the full range of
users who use your project. So the more gesture type things
that you are doing, you want to make sure that you are taking
into account how people are going to interact with that that
cannot perhaps perform that gesture.
During development, we have a series of APIs, Qasid is going
to show those, and we have testing tools that show, as much
as possible, we can automate the processes to make sure you are
handling these things correctly. So, having explained a bunch of
stuff about the complexities of users and things you can do, you
can forgive me for saying "making accessibility easy" — I said
making, not that we have already made it easy. In general, we want this
to be easy. We want to make sure that we are doing that; if
we are not, please let us know. If you are using standard components in standard ways, it
should not be complicated. You don't need to think about how a particular user is going
to use your product; it is something like, what does this
control do? And as your UI evolves, it gets less
easy. But we want to make sure that the incremental work that
you do for accessibility grows more slowly than the work you
need to do to serve whatever users you have in mind to begin
with. And now, Qasid is going to talk
about a way to think about the development process for folks
with disabilities. SPEAKER: Hey, my name is Qasid,
I work for Phil on the Android accessibility team.
Let’s get into it. If you really think about it, there are
two ways that your user can interact with your application.
Right? The first is consuming
information. This can be content that the user wants, or
indications how to use the UI. And the way we use information
varies from user to user. I look at it on a screen and I
process it that way. A TalkBack user will hear a description of
what is on the screen from the device's speech synthesizer. Once
a user has consumed and understood that information and combined it
with the real world, they can act on your application to make it
do things. And just like consuming information, this
varies dramatically from user to user and from circumstance to
circumstance. And now these actions can be
something like a tap, a swipe, a scroll.
It can be speaking into a device, some users drive the
whole device with a single button.
And now once you combine these two modes of interaction, we see
this cycle of action and information, right?
And we can see that if any part of this is subtly broken for a
user, the application is useless. How do we make it
complete for every single user? That seems like a daunting task;
like we mentioned before, the way the users interact with our
devices varies a lot and trying to understand those users is a
complicated task. But if you don't try to do anything too
non-standard, you trust us, and you use our APIs, it is a pretty
easy task. I will show you some trivial things that you can do,
that are fundamental, and push your accessibility to be much
better than it otherwise would have been.
Let’s start with the way that we consume information. And I
started this search application — a novel idea. And the way that
I'm indicating this is a search UI is by that magnifying
glass on the little line. You will notice something: that's a
very light shade of gray on a slightly lighter shade of gray.
That is going to be difficult for people who — for a lot of
people to see, because that is a very low contrast ratio. It is
frustrating and downright impossible for others. So we
should darken it up. And just like that, many other
people are able to use our application. The goal is to
make sure that all of the information is visible in terms
of size and contrast. If you want concrete guidelines on
this, you can go to the Material website, which gives you
contrast ratios for different circumstances and the hard
numbers in terms of size, like the 48 by 48.
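Those contrast numbers come from the WCAG definition of contrast ratio, which guidance like Material's 4.5:1 minimum for body text is based on. Here is a small, self-contained sketch of that formula; colors are plain 0xRRGGBB ints, and the class is illustrative rather than any Android API.

```java
// Sketch of the WCAG 2.0 relative-luminance and contrast-ratio formulas.
class Contrast {
    // Linearize one sRGB channel (0-255) per the WCAG definition.
    private static double channel(int c8) {
        double c = c8 / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // WCAG relative luminance of an 0xRRGGBB color.
    static double luminance(int rgb) {
        return 0.2126 * channel((rgb >> 16) & 0xFF)
             + 0.7152 * channel((rgb >> 8) & 0xFF)
             + 0.0722 * channel(rgb & 0xFF);
    }

    // Contrast ratio between two colors; ranges from 1:1 up to 21:1.
    static double ratio(int a, int b) {
        double la = luminance(a), lb = luminance(b);
        double lighter = Math.max(la, lb), darker = Math.min(la, lb);
        return (lighter + 0.05) / (darker + 0.05);
    }
}
```

Black on white scores the maximum 21:1, while the gray-on-slightly-lighter-gray example from earlier lands far below the 4.5:1 threshold.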
That was straightforward, but there are dramatic variations in
how we consume information. For those situations, what we want
you to do is to fill in the blanks for our frameworks. What
we mean is that our frameworks can infer a lot from the view
hierarchy and from the code we have written, but there are some
situations where we need your help. I will show you. Let's
assume that I'm a TalkBack user and I'm putting a finger on the
screen to hear the description of what is under my finger. I
put my finger on the magnifying icon, but there is no way for
our frameworks to figure out what that means, because there
is no text associated with it. It is essentially an arrangement
of pixels on a screen. You have to fill in the blanks as app developers, and you do
it by adding a label: "search". Make your labels concise and to the
point. If it indicates an action, make it a simple action
word. And this is easy: in this situation, set the content
description, and make sure it is localized, because accessibility
users do not only exist in America. And now that the user
can understand what is generally happening on screen, let’s make
sure we allow them to act on our device or application
appropriately. So I decided to add a new
feature, a clear-text button: instead of backspacing,
you can reset and start typing again. That's a tiny
button; for users with a fine motor disability,
it is going to be impossible to tap. We will make
it bigger. This works for many more users. So make sure that
your controls are simple and large.
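As a sketch of those two fixes in layout XML — the 48dp minimum touch target plus a concise, localized label for the clear button. The drawable and string resource names here are placeholders from this example app, not anything the talk prescribes:

```xml
<!-- Hypothetical clear-text button: minWidth/minHeight enforce the
     48dp touch-target guideline even if the icon itself is smaller,
     and contentDescription gives TalkBack a concise, localized label. -->
<ImageButton
    android:id="@+id/clear_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:minWidth="48dp"
    android:minHeight="48dp"
    android:src="@drawable/ic_clear"
    android:contentDescription="@string/clear_search_text" />
```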
I added another feature, a history feature. You can type
in the queries and see the results of the query again.
And now if you end up swiping on that, any of these items, you
will see a trash icon indicating this can be removed from
history. If you continue swiping, that item will be
removed from history. This is great and all, but it is a
pretty custom-built gesture overloaded on to an item. It is
going to be hard for our frameworks to detect. So you
guys are going to have to fill in the blanks for our frameworks
and for accessibility users. You can do that by adding an
action, or an accessibility action. And all you have to do
here is specify a user-facing label or description of that
action, a simple verb, and the code that should be executed
when the action is performed. You can do that with the
accessibility action API, but we are adding something to the
AndroidX library which allows you to do it in a single-line
call, with a lambda and a string passed in.
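That one-line call looks roughly like the following sketch, assuming the ViewCompat entry point in androidx.core; historyItemView, the string resource, and removeFromHistory() are hypothetical names from this example app:

```java
// Expose the custom swipe-to-delete gesture as an accessibility action,
// so TalkBack and switch access can offer "Remove from history"
// without the user having to perform the swipe itself.
ViewCompat.addAccessibilityAction(
        historyItemView,
        getString(R.string.remove_from_history),  // user-facing label, localized
        (view, arguments) -> {
            removeFromHistory(view);              // same code path the swipe runs
            return true;                          // report the action as handled
        });
```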
So I have shown you a way of thinking about accessibility
issues and how to address them, but you still need to know how
to find these issues and how to verify that you have actually
fixed them. This is where Casey comes in. Casey?
CASEY BURKHARDT: Thanks, Qasid. I’m Casey Burkhardt, I’m a
software engineer on Google’s accessibility engineering team,
and I lead the development on accessibility testing tools for
Android. So as Qasid pointed out, there are many
accessibility issues whose fixes are fairly
straightforward from the development side and can improve
the accessibility of your app. The big question, though, is how
do we go about finding those issues? There are three types of
accessibility testing today I would like to cover. Automated
testing, use of automated tools, manual testing, where you use
Android’s accessibility services itself to understand the user’s
experience, and user testing, where you bring in
users with disabilities and get their feedback using your
application. My section of the talk focuses
on automation because, right now, we see that as one big area
where we can make vast improvements across the
ecosystem if we have some developers’ cooperation using
these tools. So automated accessibility testing, until
several years ago, on Android didn’t really exist. Around
2015, we launched a project known as Android’s accessibility
test framework, which is an Android and Java library that
houses a lot of detection logic for the more common
accessibility issues that we can identify within an app's UI in a
rule-based fashion. This is an open-source project, so you can
find it on GitHub. And essentially what it is aiming to
do is to look at the app’s UI at runtime and find the
automatable, mechanical aspects of accessibility that we see
day-to-day that affect users the most. We can find the common
issues that Qasid discussed: we can find a UI element that is missing a
label for a screen reader, we can identify low-contrast text
and images, we can tell you whether clickable
items within a UI meet the minimums for touch target
guidelines, and we identify a number of various other
implementation-specific issues, and the core testing library ATF
is growing constantly to find new and more interesting issues.
We have taken this library, integrated it with common
developer end points, tools that you will use commonly throughout
the development life cycle and you can leverage throughout your
project. I want to talk through some of
those integrations today and how you can use them and get started
with them quickly. So first and foremost, we have integrations
with two test frameworks that are commonly used throughout
unit and UI tests on Android apps. The first is Espresso,
and Robolectric. The idea with these integrations of ATF is
they will piggyback on top of your existing tests. You have a
test that loads UI, runs it with the application, and asserts on
state. During the run of your tests, if you use Espresso or
Robolectric and we identify an accessibility issue that we
believe will affect a user with a disability's ability to
interact with your application, we will fail the existing
test. If you have strong testing coverage with Robolectric or Espresso,
enabling this will allow you to get a decent amount of coverage
in accessibility testing for your app.
And each framework offers what is known as an accessibility
validator. And this is an object that you can use to
essentially configure ATF’s behavior inside of your tests.
So you can configure ATF, for example, to not fail your test
and instead log that we've run into an issue.
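In an Espresso test, that configuration might look like the following sketch, assuming the AccessibilityChecks entry point from the espresso-accessibility artifact (the exact validator methods available may vary by version):

```java
// Enable ATF checks once for the whole test class; the returned
// AccessibilityValidator is the configuration handle described here.
@BeforeClass
public static void enableAccessibilityChecks() {
    AccessibilityChecks.enable()
            // Log issues instead of failing the test, e.g. while
            // burning down a backlog of known problems.
            .setThrowExceptionForErrors(false);
}
```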
And you can set up ATF to crawl from the root of your view
hierarchy when you perform a view action, rather than
validating the item that was interacted with for additional
coverage. You can use accessibility validator to set a
white list. If you want to turn on accessibility tests within
your Robolectric or Espresso tests, you can do so. You can
maintain a green presubmit and burn through issues you know
about by creating and registering these white lists
for issues. As for how you leverage these in
Espresso and Robolectric: you call
AccessibilityChecks.enable within your test set-up. This is going to
trigger our global assertion to run whenever you use a view
action within an Espresso test. So in this case, the view action
is run, and we perform an evaluation on the view interacted
with and its subtree. An AccessibilityValidator
is returned by enable, so if you need to
customize the checks, you can do so through the object returned by
that call. Within Robolectric, it is
slightly different. Instead of calling
AccessibilityChecks.enable, there is an annotation; you annotate the test
method or class that you would like to enable accessibility
testing within. It requires you to use ShadowView.clickOn
to interact with your elements. Avoid the temptation
to use View.performClick — do it correctly.
The configuration is available in a different class that mirrors the same API,
AccessibilityUtil, and you can make the same calls there
to configure the behavior. So, in addition to integrations
with these automated test frameworks, we built a separate
stand-alone tool, known as accessibility scanner.
And this is a direct integration of ATF; accessibility
scanner acts as a front-end for the library. It
will evaluate the foreground UI on the device.
You install accessibility scanner and turn it on, and it will
add a floating button to the device's screen. You open
up the application, navigate to the UI you would like to
evaluate, and just tap the button. What you see is
essentially a report that describes the ways in which you
can improve that UI for accessibility.
And again, these will mirror the same types of issues that ATF
and Espresso and Robolectric can identify as well. It is easy to
take these reports from accessibility scanner and share
them with your team, you can export to email or drive. It
does not require technical skills; you don't need a debug
version of your application or a userdebug device — you don't
need to do anything special to set it up.
It works on any app or any Android device running
Marshmallow or later. And you don’t have to have a lot of
experience related to accessibility. Each issue that
accessibility scanner can identify comes with extensive documentation that gives you
background, and how to think about the issue during the
design, development, and test phases of your project.
And so please do download accessibility scanner; it is
something that I highly recommend you use when you build
UI. The link will take you to the Play Store
page. And one last integration I would
like to cover today, this is a new one. We launched it a few
months ago. An integration of the accessibility testing
framework and the Play Store developer console’s pre-launch
report. For those of you who have not used pre-launch report
yet, it is a great tool to get a sanity check during the launch
or release process for an APK to the Play Store on an open or
closed channel. The way it works: you upload an APK,
pre-launch report takes that APK, instruments it, and pushes it
to a number of different physical devices in a lab.
And it will crawl your app, essentially on these different
devices and generate reports that include findings about
performance, security, and now accessibility as well.
And so ATF is running alongside pre-launch report as it is
crawling your application and generating reports at each stage
in the crawl, and it is taking all of the results and
deduplicating them. So if you would like to check this out, it
is available now in Play Store developer console under release
management, you should be able to see accessibility results for
any APK that you have uploaded to the store since fairly early
in July. So please do check that out. Here is what it looks
like, to give you an idea. The main entry point for an APK
will show you the categories of issues we have identified. It
will show you the clusters of issues; you can click on any one
of those and it will show you details about the problem.
So in this case, we are pointing out an issue related to touch
target size. In the column on the left, you see many examples
of the same de-duplicated issue across different crawls of the
application that prelaunch report has performed. You have
access to the same additional documentation here as well, too.
So if you are not familiar with a particular issue, the learn
more link will give you the details you need to resolve it,
regardless of the stage your project is currently in.
I want to wrap up by talking about an accessibility testing
strategy. We talked about automation, but we did not go
into manual testing and user testing. And these are equally
important. Automation is great because it
helps you find issues very quickly, early in the
development cycle, especially if you have good test coverage with
our automated tools. So think about automation as a way to
catch very common issues quickly, but not as a way to
guarantee that your app or your UI is fully accessible.
To really understand your user's experience, to get that
awareness of how your UI is performing within an
accessibility service, we really highly recommend you go and
actually turn on TalkBack, turn on switch Access, try them out,
learn how to use them, and gain a true understanding of your
user’s experience. The way I like to describe it is
automation is capable of finding a missing label for a screen
reader. But we can’t really tell you if your labels make
sense. So really only by understanding
and putting yourself in the user’s shoes and understanding
their experience, we are asking users directly about their
experience. Only then can you truly understand how accessible
your UI is for users with various different disabilities.
And we have found at Google, both looking at our first party
and third-party applications, the most successful, the most
highly accessible apps we see day-to-day are the apps that
combine multiple accessibility testing strategies: automation in
presubmit and continuous integration, a process for
manual accessibility testing, and bringing users in and
understanding their perspective on the application.
These are things to consider. With that, I will hand it back
to Qasid, who will talk about the newer APIs for expressing app semantics.
SPEAKER: I'm back. Say you have adopted the APIs that I talked about
earlier and the testing that Casey suggested. And you have a
pretty good foundation of accessibility in your
application, and the more you are testing, the more you
realize that there are holes in your experience, that is
breaking the cycle of interaction that I mentioned
earlier. We are adding API and new things to make it so those
holes get plugged. So the first thing is clickable
spans, the clickable bits of text. Before API 26, non-URL
spans were fundamentally inaccessible, and developers
had to write a bunch of hacky workarounds to make these
things work. In the latest alpha of the AndroidX library,
we made them work all the way back to API 19. You can look out for
that, too. And there are UIs that make
apps behave like windows — they have their own life cycle and so
forth. These are accessibility panes. Our
frameworks need to present these differently to the user, and the
way you enable that is by passing a string to set the
accessibility pane title on the view. Make sure it is concise
and localized. This is available on API 28, and we
added this API in the latest AndroidX alpha that will work
all the way back to API 19. And finally, there are headings.
These are used by TalkBack users to navigate through sections
quickly and easily. And the way you specify that is
exactly what you probably expect, pass in a boolean to set
an accessibility heading on a view. This is also available in
28, and we added this to the latest alpha of the AndroidX
library that will work all the way back to API 19.
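Both calls are exposed through androidx.core's ViewCompat, so one code path can cover API 28 and the backport; the view and string resource names below are placeholders from a hypothetical app:

```java
// Mark a full-screen fragment's root as an accessibility pane so that
// TalkBack announces it the way it would a window transition.
ViewCompat.setAccessibilityPaneTitle(
        fragmentRoot, getString(R.string.settings_pane_title));

// Mark a section header so TalkBack users can jump between sections.
ViewCompat.setAccessibilityHeading(sectionHeader, true);
```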
Now it is back to Phil.
SPEAKER: Circling back to the title of this, making
accessibility easy. We really want this to be easy. It is the only
way that users with disabilities are
going to be able to access the ecosystem.
Sometimes you see somebody that wants to go the extra mile and
dig into the app and all the issues around accessibility,
that is great but not every developer can do that. We need
to make it as easy as possible so everyone can build it into
their workflow in a natural way. So certainly, to get the basics,
we want it to be straightforward.
So if you have reasonable contrast, to make the
information visible to everyone, you have simple controls, you
are using labels. We want that to kind of get you almost all
the way, and some of the other APIs, like Qasid was describing,
should be able to handle the specialized situations. If you
have a full screen fragment transition, you can use the pane
title to make sure that that gets handled the same way a
window transition would. So we want it to be easy. That
means, if it is hard, we have messed up and we should fix
that. And we found some developers
that really, well, the framework seems to be coming up short, I
will engineer a workaround to get around the problems with the
framework. Honestly, please don’t do that.
Let us fix it, because we can fix it at scale, and it needs to
be inside Android. If you find it in AndroidX and you
want to fix it yourself, by all means, upload the fix to AndroidX and we
would be happy to accept it. We want it fixed centrally so we
can get a consistent experience throughout the whole ecosystem.
If you are engineering a custom solution, ask yourself whether an engineer at a
different company is going to do this work too — and if the answer is no,
then there is probably something wrong.
So please reach out if we are messing this up. You can file
bugs on AOSP, you can ask questions on Stack Overflow.
But we would much prefer to get the
feedback that something is difficult so we can build an
elegant solution that everyone can use. For some of the things
that Qasid just presented, before he did that work, the effort required
to do them really meant, like, learning a
new API surface. We wanted to condense everything we could to
one line. So we are trying to present solutions that really
are — if you have this thing, here is one line of code, all
you need. If you get to something that seems like it
should be one line of code and it is not, let us know.
And another place you can go for other resources:
there's a link for accessibility testing, how to get to
accessibility scanner, and the test framework project that
Casey described. That is available as open source on
GitHub if you are interested in that.
So, I really appreciate your time, I would be very happy —
if you have feedback for us, and you think things should be
easier than they are, we will be having office hours for a while
this afternoon. We would love to talk to you. Thanks a lot
for coming and for your efforts and hopefully making a
consistent ecosystem of accessible apps. Thanks.
[ Applause ]. SPEAKER: Everyone, the next
session will begin in 10 minutes.
Coming up next: Bundling an App in an Instant by Wojtek
Kalicinski, Ben Weiss. SPEAKER: Hello, everyone.
So today at the keynote, you heard that we are unifying the
developer experience for instant apps and Android app
bundles. My colleague, Ben, and I are
going to talk about how that works for the developer
perspective. So instant apps, it is a
technology that we introduced a little over two years ago at
Google I/O. It lets you give your users a native Android app
experience without them having to explicitly go and install the
app. And now how that works is you
split up your app, into smaller pieces, and the user can reach
that via a link. And now to QUET your apps small enough and
to launch it instantly, instant apps for the first time used a
technology in Lollipop, called split APKs. And they are able
to use the smallest and most optimized version of the app to
the user’s device. In order to do that, the build system, the
Gradle Android build system that you have that you build your
apps with, built all of those split APKs locally, bundled them
in a zip file, and you upload that to the Play Store they can
choose from the set of APKs and deliver them to the devices.
To do that, the developer had to do some significant
refactoring in your African apps. The project structure of
the instant app, as it looked about one year ago, probably looked
something like this. First of all, you have to take all of
your base application code and move it from the application
module into a base feature. And then you had to split your
app into feature modules containing
your activities. And then you had two almost-dummy
modules, application and instant app, which built the APK and the
instant app zip bundle that you uploaded to a separate track in
the Play Store. This was not ideal. As I said, it required
significant work. Your project no longer looked simple.
And at the same time this year, we introduced a new publishing format called the
Android App Bundle. This contains metadata about the
targeting of the resources, native libraries, and
so on, and it contains the module information for
modules called dynamic features. I call it a publishing
format because you only use it to upload a bundle, telling the Play Store
everything about your app. And together with Google
Play dynamic delivery, we are able to serve an optimized set
of APKs to user devices. In order to do that, there's a big
difference. Because it happens on the server side on the Play
Store, we need to be able to sign the app. If you use an
Android App Bundle, you need to enable app signing by Google
Play. If you allow us to store the key
for you, it is more secure. There is another benefit:
because we can transform the APKs and make them more
optimized, as we develop more optimizations on the Play Store
side, what we bring to the users gets more optimized, for
example in how native libraries are compressed. It makes sense to move instant apps to
to optimize them further on the Play Store? What if we could use
the new App Bundle format to deliver the instant app
experience to our users? So let me tell you how to go
back to that simpler, better project structure for your instant apps. We go
back to the simple project you used to build your APK.
So we no longer need the instant app plugin, everything we need
is baked into the application plugin that can now build
bundles. We don’t need the base feature anymore.
Instead, we can move our code back to the application, where
we will be using that to build our unified bundle artifact.
And so, again, with feature modules, we replace them with
dynamic features that work in the App Bundle world.
So ultimately, we want something simple. You can have your
application module with any additional library modules that
you need. Optionally, we have dynamic features that can be
downloaded on demand, adding functionality at run time. We
have a single piece of metadata that tells the Play Store that
this bundle is instant-enabled. It would be great if
you could upload it to the Play Store and have it delivered
on both the install and instant tracks, which is what we are
aiming for. We are testing it, but you are
not able to upload it just yet. If you want to try it right now,
create two project variants; it is still the simple project.
In one of the variants, enable the metadata that instant-enables your bundle, and use the
other variant to build it without that manifest
entry. It is still one app, one codebase, and for now you have to do
that in order to upload to the Play Store.
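The metadata in question is a single element in the base module's manifest, using the App Bundle distribution namespace (sketch; the package name is a placeholder):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:dist="http://schemas.android.com/apk/distribution"
    package="com.example.myapp">

    <!-- This single line marks the bundle as instant-enabled. -->
    <dist:module dist:instant="true" />

    <!-- ... the rest of the manifest is unchanged ... -->
</manifest>
```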
There is another whitelist: if you want to use dynamic
features in order to let your users download them on demand as
they run the app, this is behind a whitelist for developers who
want to test that. You can apply online; however, you are
currently not able to publish your app to the production track
if you have more than one module.
Okay, so how to try it yourself right now. First thing you need
to do is use the new Android Studio 3.3 that is currently in
preview. If you are creating a project from scratch,
you can select this check box. We will create the modules with
the necessary metadata in there already. That's what it looks
like: in the base manifest, you enable it, and you are
good to go. If your app is compliant with the other instant
app restrictions, such as file size, you will be able to publish an instantly enabled
bundle. If you have an existing project that already uses App
Bundles, you can use the wizard as well. It will
instant-enable your base module if you haven't done so.
And next, we will build an App Bundle through Gradle, through
the bundleRelease task instead of assembleRelease, or
through the wizard in the UI. By the way, if you are using our
Android dev summit app at this conference, it has been built
exactly as an instant-enabled App Bundle. We published that to our
channels on the Play Store, so what you are using right now is
exactly what I'm talking about. And it has been great; it
has greatly simplified what we need to do in order to have that
working. Let me invite on stage Ben, who
will tell you about best practices for modeler
modularizing this copy of your app. Round of applause, please.
SPEAKER: Thank you very much. I will talk a little bit about
discoverability of your instant apps. One of them you saw, the
try now functionality. Try now is basically a second button on the Play Store listing that allows users to download and run the app without having to permanently install it on the device. It is a great experience, and you can easily see how it works: go to the Play Store now and check it out through the Android dev summit app, and you can get to it directly from the dev summit landing page. If you don't have a URL associated with your app, that works as well; you no longer have the restriction that you have to map to your URLs. How do you do that? That is
basically it. I think that everybody has this somewhere in
their application, the main launcher. You don’t have to do
anything else other than add the instant metadata that you were
shown earlier, there is nothing else you have to do in order to
be eligible for the try now experience. What if you want to have your app available with a URL mapping? That allows you to access your app through any URL that you have associated with it. The first thing you have to do is verify that you own the domain; in order to do that, you upload an assetlinks.json file. There are resources where you can verify that what you uploaded is correct and that the mapping you have is correct as well. Then you can share the link with anyone and they can open the app straight away. I will show you how that works, starting where we started, with the main launcher. You have to set a second filter
main launcher. You have to set a second filter
with it set to true. That tells the Play Store it should check
for the asset links JSON file on the url provided below. Then
you add the action view, the browseable default, this is what
I want to use as the default to view this url. You have the scenes for https, or http, and
multiple path prefixes for multiple activities. If you
want to use a default activity, whether it come from try now or
the home screen, is the metadata here, the tag for the filter.
This is the default url, this is where I want users to come in
initially. Like I said earlier, if you don’t have an url in the
first place, you don’t have to do this. If you have to have an
url mapping, then this is the way to go.
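Put together, the filter being described might look like this sketch; the host, path, and activity name are hypothetical.

```xml
<!-- Sketch of a URL-mapped activity; domain and paths are illustrative. -->
<activity android:name=".MainActivity">
    <!-- autoVerify tells the system to check assetlinks.json on the host below. -->
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.BROWSABLE" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:scheme="https" />
        <data android:scheme="http" />
        <data android:host="example.com" />
        <data android:pathPrefix="/sessions" />
    </intent-filter>
    <!-- Default URL used when the user arrives without a URL (e.g. Try Now). -->
    <meta-data
        android:name="default-url"
        android:value="https://example.com/sessions" />
</activity>
```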
And it is not a lot of work to do. It gives you the experience that you can share any link that leads to your app, and any user can open the app straight away without having to install it permanently. And I think for a conference, it is a good use case to have an app that you use once without having to fully install it. For a couple of features, like notifications, you do have to install the app, but it is fair enough to offer an experience you can have straight away without the first steps that would otherwise be necessary. And also, there is another thing
that we use with app bundles in order to download code dynamically. Going back to the previous section: previously, you could do that with feature splits in instant apps. Going forward, you ship an App Bundle, and you use dynamic feature modules together with the Play Core library. This allows you to download features dynamically on demand, not during the installation of the app. And it does all the heavy lifting: if you tell the library to download this, it connects to the Play Store, downloads it, and puts it in the right place. For instant apps, it puts it into the shared cache there; for installed apps, it installs it on the device permanently. So
how does it work? So you add it as a dependency, and then you can use it in your project. You create a SplitInstallManager through the SplitInstallManagerFactory, then create a request where you can install one or multiple modules at the same time. Those module names have to match the module name that you set for the dynamic feature module in its manifest. You build the request, and you tell the manager to start the installation. And that's all the code you need to get started with it. There are a couple of things around it that I will go into in a second.
If you don't want to do the installation straight away, you can do a deferred installation. Say the user is in a flow where they are buying something: they have logged in and started the purchase process, and you want to download a payment module, for example. You can do that deferred, or you can do the installation during the flow the user is in. A deferred installation is not done straight away; the system performs it later, for example when it does cleanup, and there is also a deferred uninstall, which is how a module is removed. And you can also cancel installation requests: if, for any reason, you want to cancel, there's an option to do it.
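Pieced together, the flow just described might look like the following sketch. This is a non-runnable illustration that assumes the Play Core library dependency; the module name "payments" is hypothetical.

```java
// Sketch of on-demand installs with the Play Core split install API.
SplitInstallManager manager = SplitInstallManagerFactory.create(context);

// Request one or more modules; names must match the dynamic feature
// module names declared in their manifests.
SplitInstallRequest request = SplitInstallRequest.newBuilder()
        .addModule("payments")
        .build();
manager.startInstall(request);

// Alternatively, defer the install to a time the system picks:
manager.deferredInstall(Collections.singletonList("payments"));

// Listen for state updates (register in onResume, unregister in onPause):
SplitInstallStateUpdatedListener listener = state -> {
    switch (state.status()) {
        case SplitInstallSessionStatus.PENDING:
        case SplitInstallSessionStatus.DOWNLOADING:
        case SplitInstallSessionStatus.INSTALLING:
            break; // the happy path, in progress
        case SplitInstallSessionStatus.REQUIRES_USER_CONFIRMATION:
            // Large downloads need the user to confirm via the returned intent.
            break;
        case SplitInstallSessionStatus.INSTALLED:
            // The module's code and resources are now safe to access.
            break;
        case SplitInstallSessionStatus.FAILED:
            // Inspect state.errorCode() and handle the failure.
            break;
    }
};
manager.registerListener(listener);
```

In the confirmation case, the library provides a way to launch the confirmation dialog from the state object; if the user confirms, the flow continues down the happy path described next.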
And there's a listener that you can set on the manager, which is my preferred way of listening for updates: the SplitInstallStateUpdatedListener. That's quite a word. And usually what happens, the happy path, is you trigger an install and the request starts with PENDING, then goes to DOWNLOADING, DOWNLOADED, INSTALLING, INSTALLED. That's the happy path, for a module that is small enough not to require user confirmation. If it does require confirmation, you will get some information that you can start an intent with, to show a dialogue where the user can confirm or deny installing the module at this point. If they confirm they want to install, you continue down the happy path. The installation can be cancelled, through the cancelling state, and, for a couple of reasons, it can fail. So that's the path where you have to handle all those states. We do have a sample available for that; I will share the URLs later in the session so you can see how it works. I talked about file size.
The limit that triggers user confirmation ties in with a couple of size caps for instant apps in general and for dynamic delivery. One of the things we do is we don't take the installed size as the main factor anymore; we take the download size. So if your app's size on disk is larger than the limit, that can be all right: we take the download size into account, and you get compression over the wire. We show this in the Play Console as well; you will see the download size is this, and whether you are above or below the threshold. Dynamic feature modules do not count toward the initial download size either. What counts is basically your base module and your instant entry module; if you have more than one, which you can, it is the base module plus the largest instant entry module. And those, under a white list, have to be less than 10MB. Dynamic feature modules that you download later on, or other instant entry modules, do not count toward the 10MB in the first place. If your app is larger than 10MB, you do not get the instant app benefits: you are not discoverable to users as an instant app. If it is less than 10MB but more than 4MB, your app can be seen and used as an instant app: you can access it through try now on the Play Store, and you can also show it in web banners and share it via URLs. If your base module plus your instant entry module is less than 4MB, your app can be discovered from anywhere, like ads and search results. If you search for the Android dev summit app, you will see that and go directly to the instant app, because it is under the 4MB threshold. While
you are at it, you can continue to modularize. It is a tricky topic, because it entails some work, and some people are not 100 percent certain how best to do it. We recently went through the whole process with a project called Plaid; it is on GitHub, and I will talk about how we managed to move from a monolithic app to an app that uses bundles. It uses the same technology underneath for modularizing an instant app as well. So what we did, and most apps will do this, is we created
a base module which hosts the shared domain code and data: the shared preferences, login, for example, some of the repositories, some persistence and API calls, and things like that. On top of that are different feature modules. Those have their own domain code, logic, and the UI which is only displayed in that module. And that setup can be used for many apps: a base module that shares information, and the different feature modules. I will share a little bit more in
depth what we did. So we initially had an app module, a monolith, which is what most people have. If we ship a single APK, that is fine, but there is no point going through the whole modularization effort if, in the end, we still ship one single monolithic APK to our users. And since we were considering moving toward dynamic features and app bundles, we moved everything into a base module with the shared dependencies: everything that we include from external repositories, as well as a couple of local third-party libraries that we have forked and worked with.
After we tested this, we worked on features that we extracted from the app itself. The first thing we started with was an about screen. An about screen is fairly untangled from most of the stuff within the app, so it was a good starting point for getting our feet wet: what does it actually entail to make a dynamic feature module and wire everything up? Then we did a search screen, and then we had two news sources that we extracted into dynamic feature modules as well. They all depend on the base module, and everything shares information through it. You can read up on this in depth in a blog post on the Android Developers Medium publication, and we also have information available on instant apps in general, on App Bundles and dynamic features, and on instant apps without URLs. The instant apps team is outside during office hours to share information and knowledge; if you have questions, please come by. And with that, thank you very much. [ Applause ]
Coming up next: Android Slices Best Practices by Arun
Venkatesan, Artur Tsurkan. SPEAKER: Hi, everyone. My name is Artur, I’m a product manager
on Android. SPEAKER: My name is Arun.
SPEAKER: We’re going to tell you about Android Slices, part of
Android Pie. So we're going to start with an introduction to Android Slices, then user experience best practices for constructing your Slices, more details on search indexing best practices, and finally, from the early access program we have been running over the course of the summer, the important developer gotchas for when you start building Slices and when they become available. We will start by reintroducing Slices. Before we get into it, I wanted to remind you of some resources. We introduced Slices at Google I/O; you can find the documentation online, and you can look at our I/O session video, Android Slices: building interactive results for Google search. That will give you information on building Slices, building a custom template, and a UI for your Slice. Slices are a new way
for you to present remote app content in Android.
Think of Slices as embeddable app snippets that can be inserted inside another app or part of the system. Slices are meant to work across multiple possible surfaces; the whole idea of a Slice is a constrained format that still gives you a lot of power in how you express your app. Slices contain a variety of components, including text and images, and they are powerful: they are not just static pieces of content, they can house realtime data and interactive controls, and link back to your app on interaction. So all of the examples you see here are examples of Slices that you can build. I mention that Slices can be
embedded across a number of contexts, and we are excited to
be bringing them to search as the first surface to insert
slices. So as users are typing a search on Android, you can enhance the predictions offered by Google with live, personal content from your apps, constructed through the Slices you create. There are two types of Slices you can provide in search predictions. The first are for app name queries: if somebody is searching for your app by name, you can direct them to a canonical part of your app. So maybe in YouTube, you want to get visitors back to the video they were watching. Or maybe you want to go beyond the name, the way the Settings app gives them toggles for data usage on a more general query. How do we build these slices?
We will start with default, or app name, slices. It is easy to build an app name slice; construct it as you would any slice, and Android Studio's recent updates include tooling for this. When you create a slice, you will create a definition in the manifest, like this one, pointing to your SliceProvider class. That tells the system how it can find your slice and what templates it offers when it does. To make this slice map to your activity, or to make it map to the app name, you add the slice content URI to that activity, specified in a metadata field. So this tells the system: this is the slice that I want to point to the main activity, and this is the content URI from which you can reference that slice. The implementation details for general search terms are a little bit different.
So first, you actually don't provide the same linking in an activity as you would for an app name slice. You are going to use updates to the Firebase App Indexing APIs to expose the content URI for a specific set of keywords. Using the Firebase APIs, you construct an Indexable. That will have a public URL; it will also have a name for that indexable content, some very targeted keywords for users to find that content, a description, and then finally, with the update to the Firebase App Indexing APIs, the content URI, or slice URI, for this indexable. When the user is searching for the keywords you specified, search on Android will replace the prediction with the slice referenced by the content URI in this indexable. Finally, in order for indexables to be translated from the public URLs to the content URIs, you need to override the URI-mapping method in the SliceProvider. This allows you to translate between the two formats so that the URIs can be surfaced as a search prediction. Overriding this method gives you, as a developer, flexibility over which schemas are important to you, without attaching them one to one.
Now we know how to build a slice and also how to expose it
to search on Android. Let’s go into a little bit more detail on
what makes a good slice, and the user experience best practices
to keep in mind. So the first is: try to focus your slice on one task or a set of tightly related tasks. Avoid making your slice do too many things, and make sure that the content in the slice is coherent. Though slices are embeddable as a search prediction for now, we would like to think that the slices you build should be reusable in many other parts of the operating system, and you should consider those possibilities when building your slice. So the slice on the left focuses on consuming and previewing one video; the slice on the right goes into other functionality, like channel selection, and stretches the definition and use of a slice, which we discourage.
Try to keep interaction in the slice lightweight, highlight the
most important features, excuse me, and make sure that the
actions you are presenting to the user are using clear and
familiar iconography so they make sure that they understand
what they represent. Avoid making it do too many things.
So the slice on the left focuses on users listening to the
content and play list, and offers them the ability to like
certain content. The slice on the right leans into things you
can do with the play list items, functionality that is probably
best within your app. When sharing your slices to
different platforms, make sure you are surfacing recognizable
features and content so that users know that they can query
for them. When they query for them and see it as a prediction,
they can understand how it might have appeared for that
particular query. And Arun will go into those
principles, but try to keep that the case to avoid promotional or
advertiing content in the slice. And finally, have the slice
accelerate critical user journeys, and make sure that the slice accomplishes as much of the user journey inline as possible, without going into the app, by providing the information and actions the user needs to accomplish that task. So on the left, the user has everything they need: they understand what the slice is for, what kind of information it is representing, and what actions they can take with that information, such as changing the song or adjusting brightness, and they don't need to click into the app to do that. The slices on the right are missing some critical piece of information or a critical component of that user journey, which adds an additional layer of friction for users in completing the task they want to complete. There is more that Arun will go
into. SPEAKER: Thanks. So here are the main buckets to think about for slice use cases. They should be familiar, such as requesting a ride in a ridesharing app. They should be targeted. And finally, they should help users recall and reengage with content they have seen, rather than serve discovery use cases. Next, we will see guidelines
around the content you can surface in slices and the data
that needs to be indexed.
Slices should be targeted and personalized. Take the video streaming app that shows a slice when a movie the user is searching for is available in the app: the app can index movies the user liked previously, or a curated set based on their in-app behavior. Do not create one slice for all the users in the database. Slices should give users useful information. A ridesharing app gives users the ability to search for rides; create indexables based on past history, or popular places near the user, such as concert venues. Slices should give users timely information. Let's say a user is controlling smart lights: they are likely in their home. It is not recommended to create indexed slices for controlling lights at any time or in any place. Here are some more examples of
the type of app content to index and display in the slice.
A food ordering app can serve relevant realtime data, such as food prices, when the user fetches the slice. Similarly, a news app slice can index and display information to its readers. Travel and local apps can allow users
to look at their content, such as bookings and upcoming flight
reservations. Let’s now see how to keep your
slices fresh. When a slice is indexed, the indexable object includes the metadata and the cached slice. In order to make sure it is accurate and not stale, you should set a time to live. This is used by the Google search app to make sure a stale cached copy is not displayed. In this example here, the time to live is set to a one-hour expiration. And if the slice has content that is not time sensitive, set the time to live to infinity. To present fresh content to the user, reindex the slices in the background when content changes, and only when content changes.
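As a rough illustration of the time-to-live setting, a provider using the androidx.slice builders might look like this sketch; the row content is omitted, and the one-hour value mirrors the example above.

```java
// Sketch: a SliceProvider returning a slice with a one-hour time to live.
// Assumes the androidx.slice builders library; not runnable standalone.
@Override
public Slice onBindSlice(Uri sliceUri) {
    ListBuilder builder = new ListBuilder(getContext(), sliceUri,
            TimeUnit.HOURS.toMillis(1)); // cached copy expires after one hour
    // ... add rows describing the content here ...
    return builder.build();
}
// For content that never goes stale, pass ListBuilder.INFINITY as the TTL.
```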
Earlier, Artur showed us how to index. What are the best practices for keywords? Limit keywords to 10 per slice, and account for likely permutations. This is probably the most important: avoid keyword stuffing, and use the minimum number needed per slice. Your slice will fall in the search rankings if you use too many keywords that users are not tapping on.
How do we determine ranking? The most important signal is the number of times that the user uses the app on the device. It is also important whether the user has viewed the content previously and interacted with it. And finally, in order for us to know that the user interacted with the content, log it to the indexing API. We conclude the session today by
sharing the lessons learned from the early access program.
Content URIs can be confusing. Content URIs are associated with a slice and start with content://, while deep link URIs start with http or https. Content URIs use the package name, or the reverse of the package name shown in the example. When you see a content URI, think of a slice. Keywords must be included in the slice title or subtitle.
This is done to ensure that the relevant content is displayed to
the user when they search for something.
How do you handle location? Slices do not provide location to apps, and therefore use the last cached location when displaying the slice. Reindex the keywords based on the last location, and note that indexables are not instantly updated, which can lead to stale results. So keep the handling you have in place for location changes in the app, and apply it to slices as well.
It is recommended to keep the slice initialization short. The onCreateSliceProvider method is called for all registered slice providers on the application's main thread at launch time, so do not perform lengthy operations there, or the application start-up will be delayed.
And if you have multiple indexables, in the case of apps with multiple slices, pass in all the indexables together, as we showed in the example, and make sure to test each indexable: if one is invalid, the whole update call fails. So here is a small code snippet that shows how to pass multiple indexables to the Firebase app indexing API.
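A sketch of that call might look like the following; it assumes the Firebase App Indexing dependency, and the indexable variables are hypothetical.

```java
// Sketch: updating all indexables in a single call. update() accepts
// varargs and fails as a whole if any one indexable is invalid,
// so validate each one first.
FirebaseAppIndex.getInstance()
        .update(movieIndexable, showIndexable, channelIndexable)
        .addOnSuccessListener(aVoid -> Log.d("AppIndexing", "Indexables updated"))
        .addOnFailureListener(e -> Log.e("AppIndexing", "Update failed", e));
```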
So now that you have built your slice, you are ready to use the API. That's all we have for the talk today. We want to give a shout-out to the developers in the early access program. Thank you, and have a fun rest of the session. SPEAKER: We are now on break,
this is the worst FAB I have ever seen; I recommend people put it on the lower right and not have it be that color, there is something disturbing about that one. It is snack time, it says on the agenda. There is coffee as well as tea, and there is also coffee, there is snacks, and there is coffee, and I have it on good authority that there is also coffee downstairs. There is coffee, excellent question. Thank you for asking. Let's see, we are back here at 4:00: Building a Great TV App. There's also a talk on Gradle best practices. See you back here at 4:00, thanks.
SPEAKER: Hey, everyone. Today, we’re going to talk about
building a great Android TV app. So before we dig into the
details, we will talk about the ecosystem. We continue to see two times year-over-year growth. We are investing more in smart TVs and set-top boxes, and the ecosystem has been pretty strong. But you are all developers, so let's get into how
you can make a good TV app. So before we talk about what the
innards of the app are, we will talk about what the TV is and
why it is so important. We have living rooms and other rooms
dedicated and focused around a TV. It is a pretty key point
for users, it is the focus of an entire room. Just think about
that for a second. So your app matters. The content matters.
So if we try to think about what is the foundation for an app,
the biggest piece is your content.
That is your value prop to your users. If you add on to that, discovery: you have great content, so how do you make it easier for users to discover more of your content inside your app? And if you want the cherry on top, think about the experience: how can you layer in all of these extra things to build a sweet experience for users, no matter where they are in your app? We will dive into each of these concepts, and I will call them out later on. But the key takeaway is: they come for your content, but they stay for your app.
So what I would like to talk about is how to build a great TV app. There are three things to think about: the player, discovering content, so users can see other content in your app easily, and distribution, because making your content easy to find should be as easy as making your app easy to find.
Let’s talk about the player. Player can make or break an app.
We will talk about this review; feel free to read it. The key takeaway for me is that it constantly stalls, and this app is completely frustrating. They ended up with just a one-star review, just because of the player. And reviews matter; there are whole other talks about Google Play and how to improve your reviews. But the player was the key point here; it is why their app was not as good as it could be. So it is very clear that playback is important to users. They don't want stutters and stalls; they want to be able to watch the content. And even in that review, they talked about commercials and ads. They are okay with them; they just don't like that they stalled. So showing things that may be annoying, such as commercials, is fine with users as long as they play well. So for the player, we have many options to
build a good player. MediaPlayer is a good tool: it comes out of the box in the framework, you give it a data source, and it chugs along and plays; you can build a great experience with MediaPlayer. If you have more advanced things you want to do, ExoPlayer is a great tool. We work hard to make it customizable, and there are a lot of extensions. If you are using Leanback, there's an extension that hooks into the Leanback controller. If you are doing ads, there's a bunch of ad stitching support. Let's talk about ads; ads are important. You are going to make money from showing ads, and ads are just as important as the content shown and displayed to the user. Focus on ads and make sure that ad stitching works, whether you do it server-side or client-side; these are considerations you should make for your app. So there are many options for
players: MediaPlayer, ExoPlayer, a custom player. And having a player is a good start. But there are things that you can layer in, that top part of the pyramid, the experience; there are things that you can do to build an experience around the player to make it even better. So we talked about this at I/O: playback controls. Everyone's phone should be ready, here we go. Okay, Google, skip 5 minutes. Okay, Google, pause. Okay, Google, play. These types of transport controls can be commands through the assistant. Adding this extra little feature, this nice little nugget of delight, helps build that experience for your app. This works with MediaSession: if you have a MediaSession callback, you get all of these features for free. Since I talked about this at I/O and there are other talks, I will jam through this fast. Pay attention, here we go.
Boom, beautiful. Six wonderful methods: pause, play, stop, seek, next, and previous. But in reality, that's a lot. That's a lot to think about, all of these different cases. If you use ExoPlayer, it can be done for you. There's an extension: all you do is connect the player to the media session and it works out of the box. Making a media session is pretty simple; there is documentation and there are talks about MediaSession, so I will not go into it. Set it to be active, set the controller, set anything else you need to set. Set the current state: are you currently playing, what position are you in? Set up the media session to be what you need it to be. And once you have a media session and you have an ExoPlayer instance, connect them. There's the extension library for ExoPlayer: add the media session as the parameter, and set the ExoPlayer instance. The media session connector helps handle how to set up the callback, and the edge cases around playing and seeking: you don't want to seek past the end of the video, or rewind before the video starts. It handles the edge cases for you. In this sample, we are saying setPlayer, then the player, and then null. And you can set a custom PlaybackPreparer. There are other customizations you can do as well: if you are a music app and you have a custom playlist, and you want a different order for how the songs go through the queue, you can set up a custom queuing mechanism on the extension. That's it. Three wonderful lines of code, and the instance is taken care of for you. All the default behavior you would
expect, done. So having a great player is great, and that is one example of how to layer in experience to make the player even better. We're going to skip ahead to discovering content. The whole point of discovery is that you want users to stay in your app, and you want them to discover and watch content faster. So let's look at this review. I love the first sentence: they love love love, so many loves in this one. So the key takeaway here is that
it was a five-star review, they loved that all of the content
was there. It is easy to find, they can do whatever they need
to do inside of that app and watch what they want to watch.
Funny story: it is the same app that got the one-star review. So even though they had a bad player, they worked on discoverability, and they were able to get a good review in the Play Store. So how can we make content discoverable? Everything happens in threes; that's a rule of comedy, and a really good rule in life. And discoverability also happens in threes: we can work on in-app browsing, search with the assistant, and the home screen.
We can start with in-app browsing. There's a beautiful library, Leanback; if you have done TV development, you are very familiar with it. It is a templated UI system where you can plug in data and it works on building the UI for you, so you don't have to worry about focus handling and user input. You just say, this is the content we have, and it will show it for you. It is not just for browsing content, though; Leanback also works for showing details. There's a bunch of information about content: duration, content rating, the Rotten Tomatoes score, the album, the artist. I could keep going on and on for the rest of the 30 minutes of this talk, but I think you get the point: there's tons of information. You can show it using Leanback, and in multiple places on the home screen and in search. And by showing this information sooner, you let users make these microdecisions faster, so they don't have to go in and out, in and out, to figure out what they want to watch. Make your users' lives easier by showing this sooner. Let's look at another example: search. We
talked about this at I/O, there’s tons of documentation on
this. I want to breeze through some of these things quickly.
Search is supplied through a content provider. Content providers are simple: they return a cursor, and you can do whatever you want in the background of the content provider. Maybe this does a network call and you have a bunch of POJOs; maybe it is a database call and you have a cursor. That's fine; the trick for the search provider is that it needs to return results that match the search manager's criteria. The search manager is a way of saying: this cursor has a bunch of columns with these names, and then the assistant is able to pull from that cursor and say, here's the title, here's the duration, and is able to figure out what content is where.
Super simple to do with a MatrixCursor; we will dive into it closer. You need to take each of the results, add them as rows to a matrix cursor, and then return the matrix cursor. The matrix cursor is really just like a mock cursor; it is essentially a 2D array under the covers. So you don't have to worry about how to store all of these in a database; you can mock it out at the very end of your search. So mapping, this is where the
hard work happens. You have a matrix cursor, it
takes in a query projection. This query projection is going
to have all of the columns defined that match the search
manager. So here we have suggest column,
text one, is the title of the content.
An action, the data ID, the ID is what is unique to your
content inside the app.
And when you add the content into this row, you supply an array, and
it corresponds to the order in which the query projection was defined.
So the ID, the title, the action, etc. All of the fields
you have, you can return it back.
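As a sketch, the projection and mapping described above might look like this in Kotlin. The `Video` class and the choice of columns are illustrative, but the `SearchManager` suggestion-column constants are the real framework ones.

```kotlin
import android.app.SearchManager
import android.database.MatrixCursor
import android.provider.BaseColumns

// Hypothetical app model for illustration.
data class Video(val id: Long, val title: String, val description: String)

// The column names must match what the search manager expects.
private val queryProjection = arrayOf(
    BaseColumns._ID,
    SearchManager.SUGGEST_COLUMN_TEXT_1,         // title
    SearchManager.SUGGEST_COLUMN_TEXT_2,         // description
    SearchManager.SUGGEST_COLUMN_INTENT_DATA_ID  // your app's unique content ID
)

fun buildSuggestionsCursor(results: List<Video>): MatrixCursor {
    // A MatrixCursor is essentially a mock cursor backed by a 2D array,
    // so there is no need to stage the results in a database first.
    val cursor = MatrixCursor(queryProjection)
    for (video in results) {
        // Row values must follow the order of queryProjection.
        cursor.addRow(arrayOf(video.id, video.title, video.description, video.id))
    }
    return cursor
}
```

Your provider's `query` method would return this cursor so the assistant can pull the title, duration, and the rest straight out of it.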
So with the search manager, search, and the assistant, you can
make the returned results much faster.
And so we will cover the new stuff that is happening.
The app will have the channel, the play next row, and for the
video apps, you will have previews. We have seen up to 2X
increase in engagement. You can see a trailer for a
movie, or a recap, but previews take a little bit more work because
they require your content team to make that content for you. We
will not talk about Play Next or video previews here.
So let’s insert a channel into the content provider. You set the
type to preview, the app link (where the user lands when they
open the channel), and the internal provider ID, which is what the
app keeps track of and knows about. And then you just get
the content resolver, you call insert, give it the content
values, and you are good to go.
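A hedged sketch of that raw insert, assuming the androidx.tvprovider support library; the channel name, deep link, and logo are placeholders:

```kotlin
import android.content.ContentUris
import android.content.Context
import android.graphics.Bitmap
import android.net.Uri
import androidx.tvprovider.media.tv.Channel
import androidx.tvprovider.media.tv.ChannelLogoUtils
import androidx.tvprovider.media.tv.TvContractCompat

fun publishChannelManually(context: Context, logo: Bitmap): Long {
    val channel = Channel.Builder()
        .setType(TvContractCompat.Channels.TYPE_PREVIEW)
        .setDisplayName("Trending Videos")                               // placeholder
        .setAppLinkIntentUri(Uri.parse("https://example.com/trending"))  // deep link
        .setInternalProviderId("trending")  // the ID your app keeps track of
        .build()

    // Step one: insert the channel; keep the returned ID for synchronization.
    val channelUri = context.contentResolver.insert(
        TvContractCompat.Channels.CONTENT_URI, channel.toContentValues())
    val channelId = ContentUris.parseId(channelUri!!)

    // Step two: store the logo against the new channel.
    ChannelLogoUtils.storeChannelLogo(context, channelId, logo)
    return channelId
}
```

Note the two separate steps, insert the channel and store the logo, which is exactly what the helper class described next wraps up for you.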
You get back the channel ID, and you keep track of it for
synchronization. So the channel has the deep link, the
internal provider ID, and the logo, those are the key pieces
of the channel. So what just happened? We
created a channel, we inserted it, and then we stored the logo.
So we did two things with the home screen, insert the channel,
store the logo. So, as of AndroidX 1.0.0, we have
a new API. This API looks very similar, small differences. We
have a preview channel helper class. It takes in a context,
and then it does a bunch of look-ups to get the content
resolver for you, so you don’t have to do context, content
resolver.insert, it does the work for you. It makes a
channel, you have the builder, you set the name, the
description, the app link, the internal provider ID. You might think
you should set the type, but this class knows it is a preview
channel, so you don’t have to set the
type. Instead, you can set the logo.
Now, all of this is contained in one unit: you can call
helper.publishChannel, and it does all of the work for
you. And you get the channel ID
back. So what it does under the covers, it inserts the channel
into the provider, and it goes to add a logo.
And if the channel isn’t able to be inserted, maybe you have bad
data and you are hitting an error or something, it will
return an error back to you. If it is able to insert the
channel, it tries to store the logo on that channel. If the
logo cannot be persisted, it wraps it all up and unwinds the
channel, so you don’t have half a channel on the home screen. It treats
everything as an atomic unit. Pretty convenient. It does
everything CRUD does. So we talked about publishing the
channel; you can read all of the channels and get an individual
channel, you can update a channel, and you can delete them.
And all of this also happens for preview programs, and there is
support for Play Next programs in this class.
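As a sketch, the AndroidX flow just described might look like this; the channel name and description are made up, and the atomic insert-plus-logo behavior is the point:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.net.Uri
import androidx.tvprovider.media.tv.PreviewChannel
import androidx.tvprovider.media.tv.PreviewChannelHelper
import java.io.IOException

fun publishWithHelper(context: Context, logo: Bitmap): Long? {
    // The helper looks up the content resolver for you.
    val helper = PreviewChannelHelper(context)
    val channel = PreviewChannel.Builder()
        .setDisplayName("Trending Videos")            // placeholder
        .setDescription("What everyone is watching")  // placeholder
        .setAppLinkIntentUri(Uri.parse("https://example.com/trending"))
        .setInternalProviderId("trending")
        .setLogo(logo)  // no type needed: it knows it is a preview channel
        .build()
    return try {
        // Inserts the channel and stores the logo as one atomic unit; if the
        // logo cannot be persisted, the channel insert is unwound.
        helper.publishChannel(channel)
    } catch (e: IOException) {
        null  // bad data, or the provider rejected the insert
    }
}
```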
There are two options to do it, which one is better? You could
say I want to use content providers, I want to fine tune
the performance, I can do batch inserts and bulk operations, I
get lower-level control. I don’t need an entire program with all
of that metadata; maybe I want just the title and the duration, and
now I can narrow down that query projection and have faster
results. And it all comes from the
framework, you don’t have to do all of this extra work to access
it. You get it out of the box from the framework. If you want
to use Android X, you get more convenience, you don’t have to
worry about all of the nuances of a content provider, it is
a one-liner for all intents and purposes, and you get the
benefits of having the AndroidX in your app.
Discovering content is great, three ways to go about it. In
the app, searching with the assistant, and on the home
screen with channels. How do you make your app
discoverable? That is the third thing. The app store on TV is a bit different; it makes sure
that only apps designed for TV are shown. When the user opens
up the app store, they are looking at apps that can be
played on or installed on TV. Trying to make your app stand out
can be hard, but there are simple things that you can do to
have your app appear on the Play Store. The first is to declare
features, and there’s a giant asterisk: don’t declare features you
don’t use. Don’t start declaring Bluetooth or location just for
fun. There are two features that really matter.
The first is touch screen, you want to declare it as false. It
is not a touch screen, this is not a phone, this isn’t a TV
from way back in the day, these are smart TVs.
You don’t need touch screen support. The second thing is to
declare leanback as true. This tells the Play Store the app is ready to
be deployed on a TV.
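A manifest sketch of those two declarations; the feature names here are the real ones from the TV documentation:

```xml
<!-- In AndroidManifest.xml: the device is not a touch screen, and the
     app is ready for the leanback (TV) experience. -->
<uses-feature
    android:name="android.hardware.touchscreen"
    android:required="false" />
<uses-feature
    android:name="android.software.leanback"
    android:required="true" />
```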
If you have all of the code in a single code base and are making a
single APK that deploys on mobile and TV, set leanback’s required
flag to false. This tells the Play Store that the APK is compatible
on mobile and TV. The second thing you should do is try to be
visible in the launcher. If you are a headless app, like a screen
saver or a keyboard, you can go away for two minutes, and I will see you then.
So you need to supply a banner in the application or
activity tag. The launcher will go through the manifest, find
the resource, this is what it uses to show the icon on the
launcher. And once the user selects the
icon, it needs to launch something. So the launcher
fires an intent, and you need to have an activity that accepts this intent.
It is called the lean back launcher intent, cleverly named.
And from that it will trigger the lean back experience. There
are three things you need to have. One, declare the two
features so the app is found on the Play Store. Two,
have the banner. And three, have the leanback intent so the app
launches when the user wants to enter your app.
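Those last two pieces, the banner and the leanback launcher intent, live in the manifest; the banner drawable and activity name below are placeholders:

```xml
<!-- In AndroidManifest.xml: the launcher reads the banner from here
     and fires the LEANBACK_LAUNCHER intent when the user selects it. -->
<application android:banner="@drawable/tv_banner">
    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LEANBACK_LAUNCHER" />
        </intent-filter>
    </activity>
</application>
```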
And that’s it. You are ready to go in the Play Store. This talk
is done. All right.
But, in a sense, that is kind of the minimum viable product. You
are able to have a strong player, you are able to have
easy-to-find content, and you are able to distribute on the
Play Store. That is just a good app.
How do you make it great? To start with this, you should
look at your users.
Imagine the spectrum, they start from one side of the spectrum.
I bought a TV, I want to be cool, everybody is doing it, it
sits in my closet, but I have one. The next part of the spectrum, I
have one, I watch a show every week. You go further down, I
love How to Get Away with Murder, that other show
is awesome, I should watch Suicide Squad, this actor is in
that. Or for sports, here is my fantasy team, the jersey and the
player I like and I keep going into it. That side, the left,
is called the lean back user. They are sitting back, watching
TV. That’s all they want to do.
This is the lean-in user, they are sitting on the edge of the
seat, this is awesome, who are the people in the show and going
deeper into the content. Everything I talked about until
now, having a good player, making the app usable, this is
for the lean-back user. If you think about it, how can you tap
into that lean-in user? Here you have a beautiful living
room, a very beautiful living room. I wish it was mine, but
it is not. If you look closer, you see a camera, a microphone,
a tablet, a phone, and then, as you start to think about it, the TV
is the center piece of the living room. There are so many devices around. You don’t have
to just do stuff on the TV, you can tap into everything around the
living room. I love Android TV, it is the focal point of
everything that is happening. Again, we do it in threes,
threes are great. In concept, if you want to tap into the
other surfaces, what should we do? The first is controlling
media; the playback controls we talked about earlier are a great
first step. Going a little bit farther, more advanced, you
are building an experience around your app. And another
option is to have notifications: the big
game is going to start, do you want to watch it on your TV? And
the next is going deeper into the content, the cast and crew,
behind the scenes of this production, are there extra
sponsored content I want to know more about. And the third
pillar is about reducing friction: how do I install your
app, and once it is installed, how do I sign into your app?
I want to make a payment, how do I authenticate it in a secure
way inside the TV. Everybody who
has an Android TV has seen the third step, the frictionless
interaction. The Android TV set up does it for you, during the
flow, it says, do you want to set up on the phone? They give
the UX indicator, you get the notification, you say, this is
me, this is my account, and the TV takes over from that
information and it was really frictionless.
And how do they do that? It is something that you can do today,
it is Nearby. Let’s walk through how you can use that on
TV and what you can do.
You do the work on the phone, and not on the TV. We will set
up a peer-to-peer wireless connection that is encrypted,
you don’t have to worry about a lot of things, you have an
intimate connection between the phone and the TV. We will dive
in a little bit deeper, let’s get started with this, we will
start on the TV side. So I’m on the Android TV team, so I’m biased.
So TV will start advertising, you set up the
Nearby.getConnectionsClient. That is a helper class from the
nearby API that has all of these things to get you started. You
call the start advertising, you give it a name, a service ID, a
package name is perfectly fine. You are going to give it a
connection lifecycle callback, and set a strategy. A
cluster is a really good strategy. If you notice,
there’s a P to P, point to point, strategy. And it might
be one TV, one point, point to point, that is great. If you
try to do multi-device set-up, I have a TV in my living room, in
my bedroom, bathroom, all of a sudden that point to point
breaks down. So to make it a more robust app, think about
using cluster. You also set a success and failure listeners.
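The TV-side call just described might be sketched like this, assuming the play-services-nearby library; the endpoint name is a placeholder:

```kotlin
import android.content.Context
import com.google.android.gms.nearby.Nearby
import com.google.android.gms.nearby.connection.AdvertisingOptions
import com.google.android.gms.nearby.connection.ConnectionLifecycleCallback
import com.google.android.gms.nearby.connection.Strategy

fun startTvAdvertising(context: Context, callback: ConnectionLifecycleCallback) {
    val options = AdvertisingOptions.Builder()
        .setStrategy(Strategy.P2P_CLUSTER)  // cluster copes with multi-device setups
        .build()
    Nearby.getConnectionsClient(context)
        .startAdvertising(
            "Living Room TV",     // human-readable endpoint name (placeholder)
            context.packageName,  // service ID; must match what the phone discovers
            callback,             // how the two devices negotiate the connection
            options)
        .addOnSuccessListener { /* advertising started; handy for debugging */ }
        .addOnFailureListener { /* could not start advertising */ }
}
```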
These listeners are not saying, oh, I have been found; they
tell you whether you were able to start
advertising, which is great for debugging and adding extra
information inside the app. The big elephant in the room is the
connection Lifecycle Callback, this talks about how the devices
talk to each other. What is going to be said comes later; how
they say it is handled in the connection life
cycle callback. It has three simple methods:
onConnectionInitiated, onConnectionResult, and onDisconnected. And they are
pretty straightforward, but let’s dive in a little bit more.
So when the connection is initialized, that means the
phone requested a connection, you are going to prompt for
security and do a couple things. Eventually, you will call Nearby.getConnectionsClient,
and you will accept the connection. Based on that, there’s a result: was
it okay, continue on, was it rejected, maybe it will ask for
another retry. And based on that result, you should handle it
appropriately. And the last one, onDisconnected,
is pretty simple. So clean up any metadata that
you may have started collecting. The big line here is the
connections client, accept connections.
And here you pass in a payload callback. This payload
callback is how the devices communicate. So you have a
contract on your phone and on your TV for what they are going
to say to each other. Hey, phone, we want to do this. Hey,
TV, we’re going to do this. And this is all handled inside the
payload callback. So here are a couple tips.
What you are going to communicate is very specific to
your app, but here are some tips. The payload received and
payload transfer update are the only two methods you get.
They are pretty succinct. Payload receive, if you want to
send an acknowledgement back, hey, thanks for telling us this,
phone. We will send back an
acknowledgement, so you know the message has been
received. You call sendPayload. You give it the end
point ID and some body, in this case, it says ack, or
acknowledge. And if you want to disconnect,
hey, I received this payload, I want to disconnect the connection and
close the session, you should do it in the transfer update. And
in the transfer update, you should see if it is in progress
or not. If you are sending messages like ack and send,
those are fast. If you are sending something big
like a file, that can take a while. You want to make sure
that all of the bytes have been sent. Once all of the bytes
have been sent, you can call disconnect from end point.
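The ack-and-disconnect pattern described above can be sketched like this, assuming play-services-nearby; the "ack" string is just an example contract between your two apps:

```kotlin
import com.google.android.gms.nearby.connection.ConnectionsClient
import com.google.android.gms.nearby.connection.Payload
import com.google.android.gms.nearby.connection.PayloadCallback
import com.google.android.gms.nearby.connection.PayloadTransferUpdate

class AckPayloadCallback(private val client: ConnectionsClient) : PayloadCallback() {

    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        // Acknowledge the message so the other side knows it arrived.
        client.sendPayload(endpointId, Payload.fromBytes("ack".toByteArray()))
    }

    override fun onPayloadTransferUpdate(endpointId: String, update: PayloadTransferUpdate) {
        // Small byte payloads are fast, but files can take a while; only tear
        // the connection down once every byte has actually been sent.
        if (update.status == PayloadTransferUpdate.Status.SUCCESS) {
            client.disconnectFromEndpoint(endpointId)
        }
    }
}
```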
So now you are going to say, hey, I’m a TV, accept the
connection and you are going to communicate. On the phone, what
happens on the phone side?
You are going to discover the TV this time,
accept the connection, and everything else looks like the
slides I showed you. To discover the TV, this time
you call start discovery. Mind blown. You give it a service
ID. This time, I’m using a constant, and there’s a reason
for the service ID, depending on your app, it should be the
package name or the constant. If you have a package name that
is the same for both your TV app and your mobile app, it is going
to work great. If you have something like com.mycompany.Android, or
Android TV as your two package names, imagine they are on their
own channels. So the nearby connections library will not be
able to find the phone and the TV. Having a service ID used
by both sides is a good practice. You will give it a
mobile end point discovery call back, I love really big words.
And you will have a strategy, and I encourage you to use
cluster for this use case. You get
listeners, they are pretty important, and they tell you whether you were able to
start discovery. This does require location permission, so
you may get a failure, like the location permission
hasn’t been granted, so they are great for debugging and trying
to urge the user down the correct path.
So the next part, accept the connection. Really simple, you
have the mobile end point discovery call back. It has two
methods, you found the end point or lost the end point. Pretty
simple. If you find the end point, go ahead and request a connection.
This triggers the connection-initiated callback on the TV that you
saw earlier. If you lost the end point, maybe the user is no
longer nearby, or they gave up and closed the app and said
forget it. Hopefully it is the first for you, not the second.
You should clean up whatever metadata you collected already.
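The phone-side flow just described, sketched with play-services-nearby; the service ID constant and endpoint name are placeholders:

```kotlin
import android.content.Context
import com.google.android.gms.nearby.Nearby
import com.google.android.gms.nearby.connection.ConnectionLifecycleCallback
import com.google.android.gms.nearby.connection.DiscoveredEndpointInfo
import com.google.android.gms.nearby.connection.DiscoveryOptions
import com.google.android.gms.nearby.connection.EndpointDiscoveryCallback
import com.google.android.gms.nearby.connection.Strategy

// One constant shared by the TV and mobile apps, since their package
// names may differ (placeholder value).
const val SERVICE_ID = "com.example.myapp.nearby"

fun startPhoneDiscovery(context: Context, lifecycle: ConnectionLifecycleCallback) {
    val discoveryCallback = object : EndpointDiscoveryCallback() {
        override fun onEndpointFound(endpointId: String, info: DiscoveredEndpointInfo) {
            // Found the TV: request a connection, which triggers the
            // connection-initiated callback on the TV side.
            Nearby.getConnectionsClient(context)
                .requestConnection("My Phone", endpointId, lifecycle)
        }
        override fun onEndpointLost(endpointId: String) {
            // The TV went away, or the user gave up; clean up any metadata.
        }
    }
    Nearby.getConnectionsClient(context)
        .startDiscovery(
            SERVICE_ID,
            discoveryCallback,
            DiscoveryOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build())
        .addOnFailureListener { /* e.g. location permission missing */ }
}
```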
And anything after this is identical, you will have the
connection lifecycle callback that shows how they communicate,
and the payload call back that shows what they are going to
communicate. And that’s it, that’s nearby in a nutshell.
And Nearby is a cool tool, a nice item in your toolbox to build
more experiences into a TV app. Let’s go on to payments, and we will
look at one more example. Payments is cool, it adds a
family-friendly idea. Imagine you are at work, your
kid is at home, and they want to buy the next season of a TV
show. You get a push notification, are you sure you
want to purchase this, you say, man, my kid is at home buying
stuff, no, or yeah, they are bored, I will use my finger
print or anything from the phone to authenticate yourself. And
you enabled a purchase at home from your office.
And I have not seen a lot of this on Android TV. It is not that it is a
bad fit; it is a good tool in the toolbox, but it is not
necessarily the best fit everywhere. So let’s talk about a good fit: you
get a push notification, and it says watch on TV, or watch here.
This a big game, you want to watch it, watching on a small
phone. Eh. Watching on a big TV? Awesome.
So you can use nearby to figure out proximity. They are close
to the TV, so we should show the watch-on-TV button. That’s a great use
for Nearby. Receiving the notification is an example of
content immersion. When you said watch on TV, the background
lit up, the schedule for the game, the highlights, the score.
And then whenever the user wants, they can put the phone to
the side, it is very non-intrusive, and they can
focus on their game. So in a sense, this is kind of kicked
off from a notification. And in a way, you would say that
is more of a push model. Nearby feels more like pulling, I’m
pulling a conversation between the two devices, whereas in
this case you are pushing that information to the user.
And there is a talk about Firebase Cloud Messaging after
this, so I will not step on their toes too much. We will talk
about it for fun, what is the worst that can happen? You set
up the FirebaseMessagingService, and
this is what to do. And if the action happens to be
watch this movie or game, start watching and you are good to go.
Start watching, then it should literally launch
an activity. This is the Android Dev Summit,
I will assume that everybody has launched before. In this case,
we will set up the intent, the extras, this is a video to watch and the activity.
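A sketch of that service with FCM data messages; the "action" and "videoId" keys are made-up examples of a contract with your backend, and PlaybackActivity is a hypothetical activity in your app:

```kotlin
import android.content.Intent
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

class WatchMessagingService : FirebaseMessagingService() {

    override fun onMessageReceived(message: RemoteMessage) {
        // "action" and "videoId" are example keys; use whatever contract
        // your backend actually sends in the data payload.
        if (message.data["action"] == "watch") {
            startActivity(Intent(this, PlaybackActivity::class.java).apply {
                putExtra("videoId", message.data["videoId"])
                // Required when starting an activity from a Service context.
                addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            })
        }
    }
}
```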
And next, what happens when the TV is off? I am at home, I get a
push notification, oh, man, the game is about to start. I have
to hit power on the remote, I have to tune to the channel,
man, this is first-world problems at its finest. But you can solve this. From
your activity, you can call setTurnScreenOn with
true. And this is a cool API on Activity. It was actually
introduced in O MR1, so you should do a check: if you are on API 27 or higher,
call setTurnScreenOn(true); otherwise you can add the window flag. So start with
your player, your content is king. So really focus on the
player, whether it is content or an ad, make sure the player is
solid. How do you make the app more usable and get the
lean-back experience so users can quickly find other content
to watch? The third pillar is distribution, is my app deployed
to set up on the Play Store correctly? When you have all
three of these, you have happy users, and who doesn’t
want that? If you want to take it further, use these lean-in
experiences, have payments, push notification control, add the
immersive content, the details of the game.
Thank you, and everyone go build great TV apps. If you have any
questions, you can see us at the office hours, thank you very much.
SPEAKER: The next session will begin.
Coming up next: Android Suspenders by Chris Banes, Adam
Powell. SPEAKER: Hey, good afternoon, everyone. Thanks.
SPEAKER: I’m Adam. SPEAKER: I’m Chris.
SPEAKER: This is Android suspenders. We’re going to talk
about coroutines, and to get started, many of you are
probably familiar with Android’s main thread. Like any other
toolkit, it exposes a UI or main thread for applying updates to
parts of the UI. You may be inflating views, measuring
layout operations to change the shape of your view hierarchy,
drawing, and many other things like processing input events and
so on and so forth. So today you are probably
familiar with the 16 millisecond time limit: the vast majority of
devices out there have a refresh rate of 60 hertz, which means
you have 16 milliseconds to get a frame to the display. You
have less and less time to get this work done as the hertz go
up. The things that cause you to
miss a frame and have jank in your application is app code.
If you are binding items to a recycler view, these are all
bits of work that are great if we can keep it out of the main
thread and off the critical path. How do we fix it? This
talk is not about achieving 8 millisecond refresh boundaries,
but using the resources we have available. All the phones we
have have multiple cores in them today. How do we make use of
that? Some of you probably remember this thing, it has been around for some time.
It has issues around rotation, a lot that it gets wrong. There
are executors, that developers are familiar with. It has
thread pools that are nice, but it is a raw API. You can build
things out of it, but it is not convenient to
use on its own. Loaders are a thing out there
that solved a few problems, narrowly-scoped, it is
deprecated and there’s not a whole lot there. We can use
futures; there’s ListenableFuture, which showed up in the
AndroidX APIs, but you might not want to pull in all of the
infrastructure that helps you to leverage some of the things
you can do with it. And unless you are working with min SDK 24,
you cannot use CompletableFuture either, and there’s a lot of
reasons you don’t want to use it to begin with.
So Guava, you can pull that into the app. And RxJava is
helpful as well. If you are here, you probably want to talk
about coroutines, which went stable in Kotlin 1.3 this fall.
So why should we use them? All right.
So when I think about this question, what does the typical
mobile app do? They are CRUD apps, they Create, Read, Update,
Delete data. And usually it is a source, a database, or
whatever it may be. And it has some kind of sync. They will
upload data and pull it back from some kind of web service.
And apps are pretty simple computationally, you are not
taxing the CPU very much. And I guess the logic can be tricky to
get right, but they are pretty simple. And in Android, we put
a lot of stuff in your way. We make your life harder, but your
apps are pretty simple from a logic point of view.
So why coroutines? And how do we fix that? They are great for I/O
tasks, especially a resource-constrained system,
like the phone and tablets you are using today. And you cannot
create a thread for every network request that you ever
make, because threads are expensive; they take about a
megabyte of RAM every time you create one. That is why thread
pools cache threads for you. And coroutines take in the realm
of tens of kilobytes per coroutine, and they use threads
underneath in a much more optimal way. They have an
easier development model. It is easier for developers to come
into a new codebase and read an imperative, line-by-line
coroutine than something like an RxJava chain. To understand
what is happening there, in my mind, is easier, and the same
goes for callbacks: everyone knows about callback hell, versus going
line by line and seeing what you have called. And
coroutines hopefully, anyway, fix that.
And so a lot of this talk was written with an app in mind that I
have been writing called Tivi. I originally went all in on RxJava; it
is now 50 percent coroutines and 50 percent RxJava. I use
both, they both have a place in Android development, so I say
use both. And as developers, we have to care about APK size and
method counts. I’m using these libraries: the coroutines core, and the
RX2 interop that allows you to interact with RxJava.
And if you are actually pulling down the jar files from Maven Central,
they come to 774 kilobytes. That is quite big. And once you
are actually putting that in your APK, it shrinks down. It
comes down to 500 kilobytes. And the references are quite
high, 54 percent of your 64K. And as soon as you turn on
minify (now this is tree shaken, no optimization turned
on here), here are the results:
you are looking at 113K. So a lot less, and again, the
reference count has dropped. As soon as you turn on optimization, and
you fix all the ProGuard rules, you are coming down to the magic
value, which is less than 100 kilobytes. And your reference
is now 834, less than 1 percent of your references.
And now one thing to note, when you are using R8, you need to
use this rule. It is not bundled with the library yet, hopefully it will be
added soon, but it is a simple rule to add.
SPEAKER: Okay. So hopefully you are thinking
about how to use coroutines in your app, we will talk about how
to write them. Anything you do with a call back feature, you
can do with a suspend function. Anything with coroutines is a
suspend function for creating APIs, they can suspend without
blocking and resumed later with a call back and can only be
called from another suspend function to set up the machinery
involved in that. The core thing is all of this fits in one
slide, the language works the same in thethe presence of
suspend functions and we will spend the talk talking about
that. This is the suspend from Chris’s app, a data repository
for TV shows, you call the update show function with the
ID, we get some shows, and we get a little bit more data from
a remote source, and a little bit more data from a second
source. We merge all of that together and we save that data.
So these three main tasks where we spend the bulk of the time,
these are done sequentially, but none of them have a dependency
on one another. Wouldn’t it be nice to do it concurrently? With
the async builder, we can do it. We start from the top here, we
open a Coroutine scope, and this allows us to build a parallel
composition using the async builder, it has the receiver in
the scope for the lambda block we have. We build the
async operation, the second, and the third. And we await the
result of each one in turn. So the nice thing is that we have
launched all of these things, let them run independently and
then we bring them back together again.
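A minimal, self-contained sketch of that parallel decomposition with kotlinx.coroutines; the three load functions are stand-ins for the local and remote sources in the talk:

```kotlin
import kotlinx.coroutines.*

// Stand-in data sources; each pretends to take 100 ms of I/O.
suspend fun loadLocalShow(): Int { delay(100); return 1 }
suspend fun loadRemoteShow(): Int { delay(100); return 2 }
suspend fun loadRemoteExtras(): Int { delay(100); return 3 }

suspend fun updateShow(): Int = coroutineScope {
    // Launch all three concurrently; none depends on another.
    val local = async { loadLocalShow() }
    val remote = async { loadRemoteShow() }
    val extras = async { loadRemoteExtras() }
    // Await each result in turn and merge them.
    local.await() + remote.await() + extras.await()
}
```

Run from `runBlocking` or any other scope; because the three sources run concurrently, the whole thing completes in roughly one delay instead of three.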
So since all of these things can be now done in parallel, things
should complete faster. Trying to do this with raw threads by
hand is a lot more code than you need to maintain along the way.
The async builder is when you want to run something and await
the result after giving it a chance to finish while doing
something else. It is similar to C# or promises in JavaScript.
What if you want to launch something and forget? There is
that, too, it is called launch and works the same way. In this
case, it is a lot more similar to executors and just sort of
submitting a task, submitting something to an Android
Handler when we just want to fire and forget and deal with it
later. Those are the basics of running
coroutines in isolation, how do you handle it on Android?
So you may have the components ViewModel, wouldn’t it be nice
to have a way to put all of this stuff together automatically?
You need to open a scope, where can you open those things to
begin with that you can launch into? In this case, the model
has a show repository, the data layer; we have the view state
that the activity or fragment observes. We refresh on
construction and we have the construct property.
This is coming up soon. The refresh function uses the
Coroutine, the launch builder, and updates the function on the
repository. And the Coroutine updates back on the main thread
so we can manipulate the view hierarchy, so we have a nice,
clean, sequential ordering of operations.
So those of you who want to check out this thing that is
upcoming, you can go to this link, take a look at the change
so far in advance of the actual release. It is a release of the
KTX libraries. We will demystify how this works.
Before we go deep, we will talk about the other primitives that
are under the hood here. Woah, we lost our deck.
Uh-oh. Did we switch to the wrong — Keynote crashed.
SPEAKER: We will talk about jobs, so what is a job? So when
you look at the code snippet, you are actually using the
launch method. And now when you are actually running that launch
method, it returns what we call a job. And a job allows us to
reference, keep a reference of the ongoing piece of work, it
has one method on it, it is called cancel. In this example,
we would not call cancel straight away after we launch
something, that is ridiculous. What it does allow us to do is
to handle double refreshes: if you have pull-to-refresh
in your app, you don’t want two refreshes to run at
the same time. So here, in this code snippet, you keep a
reference to the job that is currently running, and if a second
refresh comes in, you cancel the first one. And that is the kind
of how the job works, it is a very simple object. It allows
you to keep a reference of the on going piece of work. So you
may have seen that scope and wondered what it is, Adam
explained it earlier. You can have a scope, and it provides
you with all the context that you need to run a launch or an
async. So let’s look at how they work
underneath. So Coroutinescope ask an
interface that allows objects to provide a scope for coroutines,
think things like with a life cycle, like fragment,
activities, ViewModel. They can provide a life cycle full of
Coroutine itself and start and stop it as it needs. Async and
launch used to be global methods, and a recent refactor
made them instance methods on CoroutineScope. And
what it means is that, mentally, instead of just launching
something, you are launching a Coroutine on X. So I’m
launching a Coroutine on the activity. It changes it in your
head, so it is tied to the life cycle of something else.
SPEAKER: Right, if you are used to working with the life cycle
owner in arch components, the life cycle owner has a life
cycle that you can observe and attach things to. A Coroutine
scope has a context that carries everything necessary to launch
the Coroutine.
SPEAKER: You can essentially map the context.
SPEAKER: And it crashed again. SPEAKER: Well, it is a good day
for slides. SPEAKER: Got it back?
SPEAKER: Okay, cool. SPEAKER: So we will look at
another example. We are not using the ViewModel scope; we’re
going to do it ourselves. We create a job, a simple
instantiation, and then we are going to keep that — we are
going to create a coroutine scope using that job. Anything
that runs on it can be tracked back through that Job
object. We will give it a default dispatcher, more about that
it later. And anything that is launched on that scope will be
run on the main thread, so the Android main thread. So once
we’ve done that, we will have the refresh method, and we will
use our own creative scope, the same code, using a different
scope. And this time the launch is scoped to the job object
we created earlier. And when the ViewModel is torn down,
we can call the job.cancel, and any coroutines that are running
when it goes down will be canceled at the same time. It
reduces memory leaks and allows everything to tidy up. So let’s
look at how things are running now: we have launched the
Coroutine and we are going to go into the update show method.
So here we are in Coroutine, the launch, which is that blue thing
going around. And in the update show method, which is denoted
by the yellow. And this is the async builder, we have a first
Coroutine running. And so it is running nicely, doing its thing.
And the outer Coroutine goes past that, and the second async,
which is the remote. We have two coroutines running, well, we
have three. But two in the launch.
And once they are going along, we go into the first await. And
we wait for that first async, the local one, to finish itself off and
return a result, which is what the await will return. Because we are
waiting on the first async, the outer coroutine is suspended.
And during that time, the ViewModel is torn down, and we call
job.cancel. And at this point, the outer coroutine is canceled
and the inner two are canceled. Child coroutines inherit cancellation
from the parent. So if the parent has been canceled,
anything below it will also be. That is something that was added recently.
What happens if you are not using ViewModels? So as
part of the Android Architecture Components, we added listener
functionality for lifecycles. You create a life cycle
observer, in this case we are using default, you can add
create, destroy, stop, or whatever it be. And you can
create instance or observer, hopefully you have seen this API
before. And this builds a scope off a life cycle observer,
which allows us to scope coroutines to the actual life
cycle instance. The primary API is it passes the Lambda, and
that is what everything runs with when you are started, that
is kind of what you are using most of the time. Let’s see how to
implement it. The first thing we want to do is, on
start, we are running the piece of code, creating the Coroutine
scope, and running the dispatches domain. And then we
will call script.launch and call the Lambda. Pretty simple. And
finally, on stop, it seems like a good life cycle to use that
will call and will eventually cancel the job. And that means
the coroutine will be canceled. You can see that it is not actually that complex; if you look deep down into the AndroidX version, it is pretty simple. To finish it off, we have a nice builder function: you pass the lambda, and it sets up the observer for you. And what that allows us to do is this: here we have a details fragment with the lifecycle scope, and then we run something. It will automatically be started when the fragment reaches onStart, and ended when we get to onStop. All right, that brings us to cancellation.
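Before moving on, the lifecycle-scoped builder described in this section might be condensed like this. It is a hand-rolled, illustrative sketch (the class name is made up; the AndroidX lifecycle artifacts now ship a ready-made `lifecycleScope` that does this for you):

```kotlin
import androidx.lifecycle.DefaultLifecycleObserver
import androidx.lifecycle.LifecycleOwner
import kotlinx.coroutines.*

// Runs `body` in a coroutine between onStart and onStop.
class LaunchOnStartObserver(
    private val body: suspend CoroutineScope.() -> Unit
) : DefaultLifecycleObserver {
    private var job: Job? = null

    override fun onStart(owner: LifecycleOwner) {
        // Create the scope on Dispatchers.Main and launch the lambda.
        job = CoroutineScope(Dispatchers.Main).launch { body() }
    }

    override fun onStop(owner: LifecycleOwner) {
        job?.cancel()  // cancelling the job cancels the coroutine
    }
}
```

Attaching it would look like `lifecycle.addObserver(LaunchOnStartObserver { /* work */ })`.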
SPEAKER: We talked a lot about cancelling a coroutine — what actually happens when it cancels? If you have a block of code that is running, what is torn down when you need to clean up? Say a coroutine is canceled while it is suspended, waiting for something else to happen — in callback terms, the callback has not been invoked yet. It will resume from the point it was suspended by throwing a CancellationException. What does that look like? This is the example from before. What happens if we need
to clean something up if this is canceled in the middle of the updateShow function? Because cancellation is signalled with a CancellationException, this is something we already know how to do: the finally blocks run as expected, and we don't need to add concepts beyond what we already know from the rest of Kotlin. If blocking code is running, it requires cooperation in order to stop what it is doing, so we can check for that cancellation explicitly in a couple of ways.
Generally this means checking to see if the coroutine is active. One of the patterns is, if the job may have been canceled, you can call a stock suspending method such as yield to force the cancellation to be thrown: if you are canceled when you try to suspend, you resume with the CancellationException, so it throws immediately if we happen to be canceled. Or, in a tight loop, you can check the isActive flag that is available from any coroutine scope and stop what you are doing. There is no reason to involve an exception here if you are in a tight computational loop you need to break out of. And that leads
into how exceptions are handled with coroutines in general. There are a few things to point out, especially if you followed Kotlin's development leading up to the release, because there were significant changes. launch will re-throw unhandled exceptions as they happen: it fails the parent, cancelling the parent job, and the parent sees the cancellation with the original exception as the cause. Exceptions get thrown back to the exception handler at the root of the job tree, and the coroutine context gets a chance to intercept: you can attach a special element to the coroutine context itself that you can use to handle unhandled exceptions.
How does it work? Say that saveShow throws a domain-specific exception. In this case, this will be treated like an uncaught exception at runtime, just like anything else that throws on the main thread. async is different: if the
exception is thrown while something you launch with async
is running, it will hold the exception and only throw when
the caller calls await. So going back to the example from
before, we will use what we know.
So we throw our exception from one of these async jobs, and
that gets thrown from this call to await itself. So we know
exactly where we need to try and catch that exception in the
normal way and handle that error.
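A sketch of catching the failure at the await site (the loaders are placeholders; `supervisorScope` is used here to sidestep the gotcha the talk covers next, where a failed async also cancels its parent scope):

```kotlin
import kotlinx.coroutines.*
import java.io.IOException

// Placeholders, not code from the talk:
private suspend fun loadLocal(): String = "local"
private suspend fun loadRemote(): String = throw IOException("network down")

suspend fun updateShare() = supervisorScope {
    val local = async { loadLocal() }
    val remote = async { loadRemote() }
    try {
        println("${local.await()} / ${remote.await()}")  // exception re-thrown here
    } catch (e: IOException) {
        println("showing error: ${e.message}")
        // In a plain coroutineScope, the failed async would still
        // cancel this scope even though we caught the exception.
    }
}
```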
But there's a gotcha here: async works the same as launch in terms of how the nested job tree is handled. The Deferred object is another kind of Job, so on failure it will cancel the parent, just like launch does — and it will do that even if we awaited it and caught the exception. This is deliberate: if something throws an exception, it is important that your app knows about it; it should not disappear into the ether. But if we caught it, what do we do?
So instead of using coroutineScope, we can use supervisorScope. It works like coroutineScope, but it uses a supervisor job, so the scope will not be canceled when a child fails with an unhandled exception.
SPEAKER: And earlier, we mentioned that we can decide when coroutines are run and what thread they run on. On Android, we run on threads, with a thread pool underneath. And we can decide
when that is dispatched on. Now let's have a look. We are running a very simple launch: in this example, the context we are using by default is what is called Dispatchers.Default. That is what is given to you for free; it is the default for everyone.
SPEAKER: A computation thread pool.
SPEAKER: So what is a coroutine dispatcher?
It schedules a coroutine to run on something — a thread, in this case. The default one uses as many threads as you have CPUs: if your device has four CPUs, you will get a thread pool of four. So it is not great for I/O; it is more of a computational-type dispatcher. It is also an elastic thread executor, which I will talk about in a minute, but it is the default. Then there is Dispatchers.IO, which was added recently and was designed for blocking I/O tasks — the things we care about: network, image loading, reading disks, databases, and so on. It uses a parallelism of 64, which means you can have up to 64 tasks running at a time. And the really great thing about the IO dispatcher is that it shares thread pools with the default dispatcher. And the point where that is great is this.
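A sketch of the example about to be described (the load/process functions are placeholders; the point is that because Dispatchers.Default and Dispatchers.IO share threads, the `withContext` switch often costs no real thread hop):

```kotlin
import kotlinx.coroutines.*

// Placeholders, not code from the talk:
private fun loadImageFromDisk(path: String): ByteArray = ByteArray(16)
private fun process(image: ByteArray): Int = image.size

fun CoroutineScope.loadAndProcess(path: String): Deferred<Int> =
    async(Dispatchers.Default) {          // CPU-bound context
        val image = withContext(Dispatchers.IO) {
            loadImageFromDisk(path)       // blocking disk read on the IO dispatcher
        }
        process(image)                    // back on Default — often the same thread
    }
```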
We have an async that is using the default dispatcher, then we load an image on the IO dispatcher — we do the disk reading — and then we use that result and process it somehow; that is a computational task. And now what? Because this is running on the default dispatcher and the IO dispatcher shares its threads, there is often no actual thread switch in there, which makes it a whole lot quicker. And we have Dispatchers.Main, which allows running coroutines on the main thread. It uses a service loader to find the dispatcher in your code, which is tricky with things like ProGuard, so you need to be careful and add the Android dependency. So use it: launch on Dispatchers.Main. That brings us to reactivity. How many launches can be
summarized by this slide? I have been guilty of this.
And I will make the premise that most devs use RxJava because you can switch threads; that is when most RxJava is useful. I think, for 80 percent of the cases, it is a thread switcher. And that is because the APIs that we have — we spoke about them earlier — are not so great to use. Because of that, most people end up using things like Single, Maybe, and Completable. That's what they are: single, one-shot things. A Single gives you a typed value, a Maybe can be empty, and a Completable doesn't have a return type. They are all pretty similar. But, in fact, they only exist in RxJava. So they can all be replaced by suspending functions.
They do what you think. They replace call-backs, and they
replace these nicely. So as an example, we have a Retrofit interface: it has a GET and returns a Single. The way you use that in RxJava, you set the scheduler and do some calls when it is finished. The nice thing about the RxJava 2 integration library for coroutines is that you can await that exact Single — you can use it like a suspending Deferred. It is handy when you are slowly migrating towards coroutines and don't want to change everything from day one: you can keep those interfaces and just call await on them from your coroutines. It is a handy way to slowly migrate.
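A hedged sketch of that interop (assumes Retrofit's RxJava 2 adapter and the kotlinx-coroutines-rx2 artifact, which supplies the `Single<T>.await()` extension; `User` and the endpoint are placeholders):

```kotlin
import io.reactivex.Single
import kotlinx.coroutines.rx2.await
import retrofit2.http.GET

data class User(val name: String)

interface UserService {
    @GET("/user")                 // placeholder endpoint
    fun getUser(): Single<User>   // existing RxJava interface, unchanged
}

// Bridge into coroutines without touching the interface:
suspend fun fetchUser(service: UserService): User =
    service.getUser().await()     // suspends instead of subscribing
```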
And wouldn't it be great if we could make the Retrofit call a suspending function and remove the Rx from the start? Well, we can, and that is coming to Retrofit soon — Jake has a PR in review, and he tells me it is soon. If you look at the consuming code, it is a normal suspending call. And that
brings us to a final section: bringing it all together. We thought of two scenarios to show you how coroutines can make your lives easier on Android. Both of these examples are about location. The first is about
getting the last known location, which is a callback API. We will use the FusedLocationProviderClient, and it is kind of cool: it combines all the providers that we have — WiFi, GPS, mobile, and Bluetooth — into one single API. It is a nice API to use. It returns a Task, a future-type thing that the library has. So you call getLastLocation and it returns the Task; then you add a completion listener and you get the result back. So it is
completely async. So what we are doing is converting the callback API into a suspending function. The coroutines library has two builders that do what we want. The first is suspendCoroutine: you pass a lambda into it that allows you to set up the callback, so we call play services, and at that point the coroutine suspends, waiting for the result to come back so the callback can wake it up, basically. You get a continuation to resume later, and then you pass the result back, which resumes the caller.
The second is suspendCancellableCoroutine: if the coroutine is canceled, you can tell the underlying API — play services — to cancel its call. So we will build that function, getLastLocation, and it returns a Location — not a future or anything like that, just a straight Location. We are going to use our suspendCancellableCoroutine builder, and you get a continuation, the callback-type thing. Then we set it up: you call the location client — the play services API — for the last location, we get a Task back, and we add the completion listener. At that point, we are going to wake up our coroutine: that is how we pass the result back to the suspended coroutine, and it wakes up in the suspending function, basically — it
resumes. And because we are using the Task's completion listener rather than the success listener, the task itself can report an exception. It can fail for whatever reason — you don't have location permission, or whatever it may be — and it will raise an exception, so you can propagate that back up to the caller, which is done with the resumeWithException method. And finally, because we are using suspendCancellableCoroutine, we need to tell the play services API when we have been canceled so that it can cancel too. We do that with a callback, invokeOnCancellation, and once you know it has been canceled, you tell play services. Now, play services does not have a cancel method, but imagine it exists.
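A sketch of the wrapper built on stage (assuming play services' FusedLocationProviderClient and kotlinx.coroutines; as noted, the Task API has no public cancel, so the `invokeOnCancellation` body is aspirational):

```kotlin
import android.location.Location
import com.google.android.gms.location.FusedLocationProviderClient
import kotlinx.coroutines.suspendCancellableCoroutine
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

suspend fun FusedLocationProviderClient.awaitLastLocation(): Location =
    suspendCancellableCoroutine { continuation ->
        lastLocation.addOnCompleteListener { task ->
            if (task.isSuccessful) {
                continuation.resume(task.result)           // wake the coroutine up
            } else {
                continuation.resumeWithException(          // e.g. missing permission
                    task.exception ?: IllegalStateException("location failed")
                )
            }
        }
        continuation.invokeOnCancellation {
            // Tell play services to stop here — imagine a cancel() existed.
        }
    }
```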
And now Adam is going to talk about observing.
SPEAKER: Sure. So what happens when you want to observe a sequence of events over time? This is what Rx is good at — if you are using it for anything, it should be this. People compare Rx and coroutines quite a bit, so what does it look like if we emulate this using coroutines as a primitive? The API, in addition to letting you get a one-shot current location, lets you request ongoing location updates, so this is a prime candidate to be an observable, and Rx gives us composable control over shutting down the updates cleanly. It offers a lot of functionality to build things like this. So how many similar benefits can we get if we base this off of suspending functions?
So let's again start by writing a simple function — or at least it is going to start out simple. Suspending functions do not return until the work is done: there is no Disposable or Closeable returned, because the calling scope itself is cancelable, and we do not return until we are done. The observer in this case can be a lambda that accepts a LocationResult; we will call it whenever a new location is reported, without returning from observeLocation. If you take in the giant pile of code here, some of you may notice that it looks like an Observable.create. We create a done signal that we can await on later — this is like the Observable's completion. If you have a stream with a well-defined end signal, you can complete this to let it clean up and return, and we will see that in a bit. So the next piece
here: we are creating a LocationCallback, which we need in order to receive updates from the provider. We use launch, and the coroutine scope that we opened here carries the dispatcher it was called with, so we know we are going to call the observer in the same place the caller wanted the information reported. We cancel the old job, and we call the observer while holding a suspending mutex, which keeps things serialized so we don't have too many running at once, and we won't have multiple calls open at once either. This is an example of something where, if you are building one of these yourself, RxJava does a lot of these things for you; with coroutines we have the primitives to build it, but we need to do a little more by hand. And so we register the callback, and then we await on the done signal. Since we never
complete it — when is a location stream complete, anyway? — this simply waits for the calling job to be canceled. One gotcha: requestLocationUpdates takes a Looper instead of an Executor, which is unfortunate. If you could use a direct executor, it would just run the observer wherever the update happens, on the incoming binder thread; this approach shines when you can avoid the extra hops. You get the idea.
And so here is what it looks like in use: it looks a lot like a forEach call on a collection and behaves the same way. Since we used launch to call the observer, if the observer itself throws, it is a child job, so it will cancel the parent scope with the exception as well — the scope that wraps the observeLocation function body. The await will end, the callback gets unregistered in the finally block from above, and all of this composes, so you can lean on the constructs of the language you already know by adding some of these suspending primitives.
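Pulling the pieces together, the observeLocation function might be condensed like this. It is a sketch only: the mutex and conflation details from the talk are omitted, and it assumes play services plus kotlinx.coroutines.

```kotlin
import android.os.Looper
import com.google.android.gms.location.*
import kotlinx.coroutines.CompletableDeferred
import kotlinx.coroutines.coroutineScope

// Suspends indefinitely, delivering updates to `observer` until the calling
// job is cancelled, at which point the finally block unregisters us.
suspend fun FusedLocationProviderClient.observeLocation(
    request: LocationRequest,
    observer: (LocationResult) -> Unit
) = coroutineScope {
    val done = CompletableDeferred<Unit>()   // never completed: no end signal
    val callback = object : LocationCallback() {
        override fun onLocationResult(result: LocationResult) {
            observer(result)
        }
    }
    try {
        requestLocationUpdates(request, callback, Looper.getMainLooper())
        done.await()                          // parks here until cancellation
    } finally {
        removeLocationUpdates(callback)       // cleanup composes with cancel
    }
}
```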
SPEAKER: So we will wrap up a little bit. What is next? Well, the first thing is that, as you saw in the keynote earlier, we have a codelab for coroutines that was released three weeks ago — a really good introduction to coroutines and how to use them in your app. Read the manual: the docs are really, really good, and they are use-case based — "I need to do X, how do I do it?" Make sure to check them out if you are using coroutines.
That is it, thank you very much. [ Applause.]
Coming up next: Modern WebView Best Practices by Nate Fischer, Richard Coles, Toby Sargeant.
[ Applause ].
SPEAKER: Hi, everyone. My name is Nate, I’m here from the
WebView team, I would like to talk about some modern WebView
best practices or, as I like to call it, using WebView like it
is 2018. Because it is.
Before I dive into what is modern with WebView, we will talk about what is old. WebView has been around since the very beginning — added in API level 1 — and it changed in a significant way starting with Lollipop: the implementation became updateable. This was great; it meant users could benefit from security fixes and bug fixes, updating every six weeks, just like your browser. A lot has changed since then, too: we added 40 new APIs, just to make it easier for developers to work with WebView.
But what has really changed? Well, when we look at the
ecosystem, it seems like apps are kind of using WebView the
same way they have always used it. And when you look at
StackOverflow, the answers are outdated at best. They are
certainly not best practices and, often times, they are just
wrong. But some of the blame is on our shoulders, too. A lot of
the doc is still written like it is API level one and a lot has
changed since then. So we looked over the past year and we
looked out at the Android ecosystem and the devices that
are out there today. And what we found is that, although we added all of these great APIs in Nougat, a lot of apps cannot take advantage of them because they only run on Nougat and above — less than 50 percent of devices today, and it has been two years since it came out. A lot of devices are still running Lollipop, and the APIs are not exposed on those older platform levels. We thought, can we do better?
So, over the past year, we worked on our AndroidX library.
We launched a new AndroidX library, and we are pretty
excited about it. The basic idea is we will bring in all of
these brand new developer APIs, but we are going to try to give
you the device coverage you need, and we are going to
support lollipop and above. We will leverage the update cycle,
and make sure that it is usable so you can use the APIs to do
productive things. We designed them to be straightforward to
swap out from the framework’s APIs. So this all sounds fine,
but how can we use it to make your apps better?
So let’s take an example. Since the very beginning of Android,
we have given apps a lot of power to customize WebView’s
behavior. In particular, we added a callback called shouldOverrideUrlLoading, and the idea of this callback is that, for certain navigations, you can cancel them in the WebView and dispatch them to a different app instead. You can have a YouTube URL that is better suited to the YouTube app. And
this is great, a lot of apps took advantage of this. But
there was a problem with the API; we did not get it right the first time. The issue is that JavaScript can trigger navigations, and there is malicious JavaScript out in the wild that
actually tries to exploit this app behavior. And, from the user's perspective, they might be reading web content and, without any interaction on their part, it starts opening up some new Android app that they are not trying to open. And so we actually already fixed
this issue. We fixed it back in Nougat, where the idea is that
we exposed this notion of user gesture. Did the user trigger
this navigation? And it actually works really,
really well, but it only works on the Nougat and above devices,
and pre-Nougat devices are still vulnerable. We thought this was a candidate for the AndroidX library: we would bring this gesture support all the way back to Lollipop devices, so you can make it a safe experience for all of your users, and we can make it easy for apps to override it with no confusion. I think we succeeded, but let's look at the code. Before Nougat, this is what a lot of app code looked like: we are overriding shouldOverrideUrlLoading, and this is the insecure version of the API, but in the past it was the best we could do. Some
better apps out there did something like this: you override the old implementation for the pre-Nougat devices and, for the newer devices, you have this implementation that checks the user gesture — we are not launching the intent if we don't have a user gesture. This seems great, but it only runs on a fraction of devices — even today, only 50 percent. So here's how it looks with
AndroidX. And the first thing I want to point out is that almost
nothing changed on this slide. I think that is really
beautiful, it means all the code that you already wrote to
handle, you know, the old framework APIs, that code is all
the same. The only difference here is that
we are employing — we are importing our WebView client
compat class from the AndroidX library, we setting this compat
client, and the idea is that we are using the compat client, instead of invoked on Nougat and
involved, they are invoked to Lollipop so you can provide your
users a safer experience without changing a lot of code.
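A sketch of that slide (assumes the androidx.webkit artifact; `isYouTubeUrl` and `launchYouTube` are placeholder helpers, not APIs from the talk):

```kotlin
import android.net.Uri
import android.webkit.WebResourceRequest
import android.webkit.WebView
import androidx.webkit.WebViewClientCompat

// Placeholder helpers:
private fun isYouTubeUrl(uri: Uri): Boolean =
    uri.host?.endsWith("youtube.com") == true
private fun launchYouTube(uri: Uri) { /* fire an Intent to the YouTube app */ }

fun installClient(webView: WebView) {
    webView.webViewClient = object : WebViewClientCompat() {
        override fun shouldOverrideUrlLoading(
            view: WebView,
            request: WebResourceRequest
        ): Boolean {
            // With the compat client, this overload is invoked back to
            // Lollipop, not just on Nougat and above.
            if (request.hasGesture() && isYouTubeUrl(request.url)) {
                launchYouTube(request.url)
                return true   // we handled it; WebView should not navigate
            }
            return false
        }
    }
}
```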
This is one example of what we have in AndroidX; I would like you to check it out to see how it can make your app better. We are giving you the device coverage you need to use these APIs, and we will have a lot more APIs available. Some of these are small improvements on classic APIs, like we've seen, and some of them are for entirely new features, like Safe Browsing. And the point is, this is not some soon-to-be-released library: it is out there, ready to go, so you can try the 1.0 release. I would like to shift gears a
little bit. We looked at the AndroidX APIs, and these seem great: new APIs with pretty good device coverage, almost 90 percent. But what about the APIs that have been around forever? They have 100 percent device coverage, but some of them are hard to use — even I struggle to use them correctly, and I work on the team. A common use case we have
seen is loading in-app content. And the idea is that you want to
display some content in your application, but you don’t
necessarily want to fetch it over the internet, you want to
just compile it right into your app.
But you also want to continue to build this content with web
technology. You want to use html, JavaScript, CSS. And
WebView has had pretty good support for this, in fact, we
almost have too much support for this. We have so many APIs that
it is hard to figure out which one is actually the right thing
to use. And some of them have some weird
gotchas that make them kind of hard to use.
And so I thought maybe we could take a look at some of these
APIs and talk about what is so tough about them and recommend
some best practices. You don’t have to start from scratch with
a new API, but you can kind of tweak how you are using these.
So the first API we can look at is loadData, and the basic idea is that it accepts some HTML content as a string and displays it in the WebView. But one of the gotchas is that it does not really accept HTML content — it accepts encoded content. The idea is that you need to escape special characters, replacing them with a percent sign and a code following it. We call this percent-encoding, and it is the default configuration for the API. But there is actually no framework API to do the percent-encoding for you. It is kind of an oversight. The end result, from what we've seen, is that developers tend to do this percent-encoding by hand themselves. And this is
manual, it leads to bugs, and these bugs can have significant
impacts for your application. You know, one small bug might
seem okay today, but it might break in a future WebView update
if you forget to encode a particular character.
The other issue with loadData is this thing called an opaque origin. When your content is loaded with an opaque origin, it is going to fail all of the same-origin checks on the web. And these same-origin checks are critical to providing powerful web APIs securely; without them, you cannot provide great APIs like XMLHttpRequest.
So what can you do with this API? You can sidestep the encoding-related problems: this API has always accepted an alternate encoding scheme, base64. This is not that special an encoding scheme — it is just a different one, not necessarily better — but it is kind of nice because there is actually a framework API that will do the encoding for you, and it does it correctly. Base64.encodeToString will take the content and spit out the right answer. The only reason it is not mentioned in the docs is that it came out in API level 8, which today is ancient history but was still in the future at the time loadData was written.
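In code, that recommendation looks something like this (`android.util.Base64` is the framework API mentioned; the HTML string is a made-up example):

```kotlin
import android.util.Base64
import android.webkit.WebView

fun showContent(webView: WebView) {
    val html = "<html><body><p>Hello, WebView!</p></body></html>"
    // Let the framework do the encoding correctly instead of
    // percent-encoding special characters by hand:
    val encoded = Base64.encodeToString(html.toByteArray(), Base64.NO_PADDING)
    webView.loadData(encoded, "text/html", "base64")
}
```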
But we can also take a look at the same origin restrictions.
So the way we recommend getting around this is to use something called loadDataWithBaseURL. One of the nice things about this — I think of it as a feature, not a bug — is that it accepts the content as is: you can give it totally unencoded content, and you don't have to worry about the base64 stuff if you use this API. The other really nice thing is that it has this base URL. While it displays the content you pass in as a string, the base URL configures the origin it operates in, so you can control the origin you get without disabling important security settings just to make the APIs work. So how do we choose the right base URL?
So this is something that even I struggle with when I try to use
this API, I know it is the right thing, but I don’t know what the right
thing is to pass to it. So we will go through the common use cases. Something we've seen is that a lot of apps use cached content: it is downloaded over the internet, but they save it for later. When they show it, they need to show it with the right base URL — and the URL to choose is just the original URL it came from. If it worked originally, it has the same origin, and all the APIs are going to continue to work. The other use case that we have
noticed is that apps tend to ship their own content and display it this way, which is great. We recommend that you choose a real internet-style URL, and it should use your organization's real domain. The reason is so you can import other resources from your servers and use this content without worrying about same-origin checks — it will all work. And the question is, do we use https or http? Here is the rule of thumb: you want to use https as the secure protocol. If you need to load insecure resources, we recommend that you use an http base URL as opposed to disabling important security settings just to get your app working. And as a last point, I want to
urge apps to avoid custom URL schemes. This is something that has cropped up: apps make up their own scheme and use that. But the problem is that the web standards don't really expect custom URL schemes — they are very vague about how to handle them — and it turns out they wind up getting handled very inconsistently, which can lead to surprising app breakage. So if you can stick to one of the internet URL schemes, you will have a much better time.
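Putting those recommendations together, a sketch (here `example.com` stands in for your organization's real domain):

```kotlin
import android.webkit.WebView

fun showOwnContent(webView: WebView, html: String) {
    webView.loadDataWithBaseURL(
        "https://example.com/",  // https base URL on your real domain, so
                                 // same-origin checks against your servers pass
        html,                    // raw, unencoded HTML — no base64 needed here
        "text/html",
        null,                    // encoding (unused for String data)
        null                     // historyUrl
    )
}
```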
So hopefully I have expressed that we care about developers on the WebView team: we are working hard to make sure that you have powerful new APIs, and we are paying attention to the old APIs, explaining how they need to be used so they are actually usable. If you have any questions, my colleagues and I will be around for the rest of today as well as tomorrow, and we would be more than happy to talk to you about WebView usage and what you need for your application. Thank you very much. [ Applause. ]
Coming up next: Low Latency Audio – Because Your Ears Are Worth It by Don Turner.
SPEAKER: Hello, hello. My name
is Don, I work in the Android developer team, I’m here today
to talk to you about low latency audio on Android.
So why is audio latency important? Well, there are many
use cases where the audio latency is directly proportional
to the user experience. So this can be in games, where
you tap on the screen and you hear a sound, particularly in
rhythm games, like Guitar Hero-style games, you are
tapping in response to some rhythmic event and you need to
have audible feedback as quickly as possible, like the longer the
delay between tapping on the screen and hearing a sound, the
worse the user experience. And also in DJing apps: you are tapping on the screen and manipulating audio, and you expect that manipulation to happen in real time. In karaoke apps, you monitor the input — your own voice — against a backing track, and if the delay between you singing and hearing your own voice is too long, it sounds awful.
And also, in VR, we have objects that make sound in a virtual
world and, if that sound doesn’t follow you around as your head
moves in this environment, then it kind of distorts the reality.
And lastly, of course, there's a whole load of apps for live performance — synthesizers, drumming apps, anything where you press a key and make a sound — where you need low-latency audio. So, with this in mind, we built
a library to help developers build these kind of apps. It is
called Oboe, and it is available on GitHub now. We just launched
version 1, it is production-ready, to be included
in your apps today. And the way it works is, under
the hood, it uses the audio API on API 27 and above, which is
the new high performance, low latency, audio introduced in Oreo. And on all the
devices, it uses open SLES. It provides a simple, easy-to-use
API that works across the widest range of devices.
So rather than me talking about this API, I thought it
would be fun to build a musical instrument in 17 minutes and 23
seconds. So, before I start that, I’m
going to explain the architecture so it makes sense
when I’m in the code. So we have an app, and I’m going to
build the audioengine part of this app.
This audioengine is going to be responsible for creating an
audiostream that is provided by the Oboe library, we’re going to
be passing audio frames of data into this audiostream.
Ultimately, this stream is responsible for putting data out of the audio device — in this case, the 3.5 millimeter jack on this Pixel XL phone. And every time
the audio device needs more information, it is going to give
us a call-back. So we get this call-back loop of, hey, I need
more audio data, and our audioengine is going to be
passing frames of audio data into the audiostream.
For some form of control, we’re going to monitor tap events in
that screen. When you tap down, the sound will go on. When you
lift up, the sound will go off. This worked about 50 percent of the time in rehearsal, so we will see what happens.
Okay. First, I need to log in.
Okay, and can you see my screen? Fantastic.
Okay, so I’m just going to run the app. So I have a very
simple shell of an app. It doesn’t really do anything at
the moment, but it has a few little shortcuts that make it
possible for me to do this in a short amount of time.
So I would just run the app on this pixel XL, and hopefully you
will be able to see that it does nothing.
So here we go. Here is the app, when I tap on the screen,
nothing happens. No sound comes out; it is non-functional. I want you to know that there is no smoke and mirrors — it is genuinely live.
[ Laughter ]. Thank you, Glenn.
I have a few methods I will use, and I will implement them
in a second. So we will create an
audioengine, and we will start by calling start engine.
So we will just jump into our JNI code. This is in native-lib.cpp here. So I'm going to define an
audioengine up here. I will call it engine.
And then I’m going to call a method on my engine called
start. And now, I have already created
the header and implementation files, just the blank files for
this AudioEngine class. So I will go ahead now and write the
class. So, AudioEngine.
And I’m going to have one method called start.
Okay, now I can use Option-Enter to generate the definition for
this in my implementation file, and now I’m in the
start method. Before I use the Oboe library, I need to include
the Oboe header.
There we go.
And the other thing I need to do, which makes things easier for
me, is to use the Oboe namespace. This avoids me having to
prefix the objects with the word Oboe.
So, in our start method, I will create an audio stream. To do
that, we use an AudioStreamBuilder.
That builder allows us to set properties on the stream, like
the format of the stream.
And now, when I set the format, there are two choices I can
choose from: 16-bit integers or floats. I will use floats. I
can also set the number of channels: two for
stereo, or one for mono. And I can also set properties
which inform the stream of my latency requirements. The
most important one here is the performance mode.
There are a number of options, but the one I want is the low
latency one. The second thing I can do is set
the sharing mode on the stream. I will set it
to exclusive. That means that I’m
requesting the audio device give me an exclusive stream, so
that my app’s audio is not mixed with any other audio in
the system. If the device supports it, I can avoid having
my audio mixed with anything else and cut a
few milliseconds of latency from the output.
So that’s all I need to do to open the stream. I can go
ahead now and call openStream, which takes a reference to an
audio stream pointer. I can use Option-Enter to create a new
field called stream; back in the header, it is done
automatically for me. And once the stream is open, there is one
final step I need to take, which is to set the buffer size on the
stream. I can do this with setBufferSizeInFrames.
To get the smallest possible buffer size, we have to
interrogate the audio device for the minimal amount of data it
will read in one operation, a discrete chunk of audio data,
which we call a burst. So we want to obtain the burst size
from the audio device.
We will use the stream’s getFramesPerBurst. That’s the smallest
buffer size we can set our stream to have, but we do not
recommend that you use this absolute minimum. We recommend
that you use double this, because it provides good
protection against underruns, and it is a good tradeoff with latency.
That’s all I need to do to create a low latency stream. So
I can go ahead and start the stream, which will do nothing
because we have not found a way of putting data into the stream.
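Put together, the stream setup walked through so far looks roughly like this with Oboe. This is an illustrative sketch only: it builds against the Oboe library (github.com/google/oboe) inside an Android NDK project, the API shown is the one current around this 2018 talk (check the current Oboe docs before copying), and the callback wiring appears in the next steps of the demo:

```cpp
// Sketch of the AudioEngine setup described in the talk, assuming
// the Oboe library; this only compiles inside an Android NDK project.
#include <oboe/Oboe.h>

class AudioEngine : public oboe::AudioStreamCallback {
public:
    void start() {
        oboe::AudioStreamBuilder builder;
        builder.setFormat(oboe::AudioFormat::Float);
        builder.setChannelCount(2);  // stereo
        builder.setPerformanceMode(oboe::PerformanceMode::LowLatency);
        builder.setSharingMode(oboe::SharingMode::Exclusive);
        builder.setCallback(this);
        builder.openStream(&stream);

        // Double the burst size: good underrun protection vs. latency.
        stream->setBufferSizeInFrames(stream->getFramesPerBurst() * 2);
        stream->requestStart();
    }

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *oboeStream,
                                          void *audioData,
                                          int32_t numFrames) override {
        // Filled in later in the demo: render audio into audioData.
        return oboe::DataCallbackResult::Continue;
    }

private:
    oboe::AudioStream *stream = nullptr;
};
```

Note that the exclusive sharing mode and the two-burst buffer are requests, not guarantees; if the device cannot honor them, it may hand back a shared stream or a different buffer size.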
To get the data into the stream, we use a callback. So back in
the builder, I will set the callback. It takes a callback
object, and I will use my this object, which
means that my AudioEngine needs to implement this interface.
So I will do that, and I will use Control-O to show me the
methods I can override in this interface. I want onAudioReady,
the method that is called every time the audio device needs more data.
So inside here, I will look at what this method signature is.
When onAudioReady is called, it tells me the stream that wants
more data, and it gives me a container array. This
container array, which is of type void star because it can
hold either 16-bit integer or floating point samples, is
something that I’m going to write my audio data into. So I
write into that, and that is passed to the audio device. And next
is numFrames, which tells me the number of frames that need
to be populated in this container array. I need an
audio source, so I will cheat a little bit here. I created an
oscillator in advance; we will take a quick look at it. It
is going to generate a square wave, a periodic
signal varying between two points.
We will create an oscillator. This is a template object,
so I need to tell it what type.
I will include the oscillator header. And now I have the
oscillator, I will call osc.render. Android Studio is
complaining about the signature; I will build, and that will
normally sort it out.
Ignore the errors. So on my oscillator, I have a
render method to put the audio frames from the oscillator into
the audioData array. So the first thing I need to do is to
cast it to an array of floats.
So, audioData. And pass in the number of
frames. The last thing I need to do
in onAudioReady is return a result. This can be
continue, in which case the callbacks will continue, or it can be
stop, and the callbacks will stop. In my case, I’m going
to continue. And the final thing I want to do is set some
things up on my oscillator. So I’m going to set the
amplitude, which is the volume; the
frequency, which I will set to 80 hertz, a nice bass frequency;
and the sample rate, which tells the oscillator how frequently
these samples should be produced.
And I can get that from the stream with getSampleRate.
Okay, I know that you are all desperate to hear a sound.
There is one final thing I need to do here: I need to respond to
tap events. So I will just go back into the main activity and
override the onTouchEvent, so that if the event is down, I’m
going to call this native method, tap, and pass in true.
Otherwise, if the event is up, meaning I’m lifting my finger off the
screen, I will pass in false. Okay, let’s have a look
at the tap method. This needs implementing.
So I will pass in the true or false value, create the new
method, and it will just call osc.setWaveOn, passing that
value in. Now, a moment of truth.
So I’m going to run this and, in theory, you should hear a sound.
And when I tap on the screen, we should hear an 80 hertz square
wave. The pressure.
[Tone]. There we go.
[ Applause ]. [Buzzing noise].
SPEAKER: So you can see, it is the lowest possible latency
here, and it is actually — [series of beeps] — pretty
responsive. So we have a musical instrument. Admittedly,
it is not the best instrument in the world; it leaves a little
to be desired on the sound. So what I thought would be nice is
to add a little bit more control. So for the last
four minutes and 30 seconds, I will tie the motion sensor to the pitch of the oscillator. To
do this, I’m going to cheat. I will pull in some code that I
wrote earlier, and it will register a listener for the
rotation vector sensor.
So to listen to these events, I need to implement the
SensorEventListener interface and implement its methods,
onSensorChanged. So what I want to do is set the frequency based
on the event value for the X axis. And I also need to scale
this value. So I want to have it go from, let’s
say, 20 hertz, the lower limit of human hearing, and we will go up to… this will give me
up to around 100 hertz. So, yep, that looks good.
So again, I need to implement this. Frequency… so we will just
call osc.setFrequency, and there we go.
Okay, so we are good to go on that. Brief interlude. Very
brief, in fact. Has anyone heard of the Amen break?
One person. So the Amen break comes from a song in the ’60s,
“Amen, Brother”: four bars of the most incredible drum solo,
the most sampled loop in history, but nobody has heard of
it, evidently, apart from one guy. I thought it would be nice
if I could play my new musical instrument over this loop.
So here’s the loop, I need to run the app. We will give it a go.
We will make sure it is here.
So, with a bit of luck [drum sample].
[Oscillating tone].
[Drumming and oscillating tone]. SPEAKER: Right. [ Applause ].
SPEAKER: Okay, so that is about it from me. If you can go back
to the slides.
And, yes, the library is available on GitHub. There’s
documentation, codelabs, all sorts of stuff in there.
We would love for you to build awesome experiences on Android,
and I’m here for any questions. Thank you all very much.
[ Applause ]. SPEAKER: Thanks, Don. With the
end of that talk, that is the end of the day.
That means that it is time for a party. So the theory is we
will have food, drinks, and a DJ. I vote for Don to do the DJ
set. I want to hear that tune all night long, or a two-minute
loop, and that is probably about enough. So 6:20, the party starts…
