Making Android Accessibility Easy (Android Dev Summit ’18)


August 11, 2019


[MUSIC PLAYING] PHIL WEAVER: Good afternoon. Thanks for coming. My name is Phil. I’m here with Casey
and Qasid, and we want to talk to you about making
accessibility easy on Android. Thanks. So first, a bit about
what accessibility does. And ultimately, a
lot of engineering ends up being about
assumptions, right? Usually we discover we’ve made an assumption only when we realize it isn’t right. And that happens a lot
with accessibility. It’s really easy to make
assumptions about what people– particularly people
with disabilities– can and can’t do. If you meet somebody who
uses a power wheelchair, you might make an assumption
that they probably don’t play soccer. And this on the face of it might
sound like a pretty plausible assumption, but then you haven’t
seen my son’s power soccer team. There was a World Cup of
power wheelchair soccer. So whatever– like if you
meet somebody who’s blind– I’ve heard people who were, for
example, using photo carousels. And they go, well,
maybe blind people wouldn’t be interested in
keeping track of photos. Now we now have technology to
let blind people take pictures of things, and they’re
very interested, it turns out, in sharing
them with people. And then there’s
the simplest one, which is making the
assumption that somebody who’s not able to move
their mouth can’t speak. But of course, Stephen
Hawking demonstrated that that assumption is
wrong long, long ago. And thank goodness he did. So on Android, we make
a lot of assumptions. It seems like a
lot of assumptions are built into our devices. Take a touchscreen device– you might assume that all of the users are able to touch the
device, or that they can all see what’s on the screen. But of course, that
turns out not to be true. And so these assumptions, we
have to learn to question them, otherwise they become obstacles. And so a lot of what
accessibility really is is about helping people
with disabilities overcome obstacles. And we want to really– the first thing is
just to make sure that the obstacles we’re
helping them overcome are the ones inherent in
their disability, not the ones that we’ve created by making
additional assumptions about what they’re interested
in, what they’re actually able to do. So the way Android does
accessibility is really– it depends on the
entire ecosystem. So we who work
directly on Android, we build a lot of
the core of it, but we really depend
on the entire ecosystem to make this work. So the accessibility
framework is going to handle a lot of
low-level features, things like magnification. As a developer, you’re not
going to build a magnification experience into your
app in order to help, so that if we try to do
that into every single app on Android, we’re going to
end up with just a nightmare experience. And also it’s something that’s
really best handled down, close to the graphics level. We want to just magnify things. Accessibility
developers, people who work on accessibility
services like TalkBack, Switch Access, Voice
Access, Select to Speak, they build these
plug-in services. And the idea is that
they will generally have a particular
type of user in mind, and they’ll go
through and really understand what
those users need, what their challenges
are, and try to build something that works for them. But they, of course, can’t
do everything on their own. They need to get
information from– like if somebody
is blind, you need to be able to speak
what’s on the screen. We need to find that out
from the apps, what’s actually going on in the UI. And that’s where we
really need the help of the entire ecosystem
and all developers. So as a general model, we
have these plug-in services. So in this case, Switch Access,
Voice Access, and Braille. And they’re able to query an API
for the accessibility service to find out. Like tell me what windows
are on the screen. Tell me what the views are
inside these windows, what text is where, what actions
are available on each view. And then they can
present this to the user in whatever way they need to. If the user’s not able
to touch the screen, do you want to activate
this control, that control? Do you want to
perform this gesture? And do it on their behalf. But we really need
the apps to provide the right semantics to do that. I’m going to try to make
this as simple as possible. So just a few things to
kind of keep in mind. One is just to make sure
that the information that you’re presenting
to all your users is as visible as possible. So color contrast or
[INAUDIBLE] contrast ends up being the most
important single thing that you can do in
that regard, just to make sure that
you’re not using gray on slightly lighter gray
to convey information. And sometimes it can end
up looking kind of cool, but it becomes very difficult
for some people to use. A second thing is to just prefer
controls to be big and simple. It’s sometimes a temptation
to cram a whole bunch of stuff into a small area to get as many
options as possible for users. And often, just
trying to figure out what is really
important for your users can simplify things
for everybody and also make it
possible for somebody who perhaps doesn’t have
perfect dexterity to use it. One concrete guideline we throw out there, just to have a uniform standard throughout the ecosystem, is 48 by 48 density-independent pixels. And that can be set with just the minHeight and minWidth attributes.
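As a rough sketch of how that might look in code (the helper name and the hard-coded 48dp default are illustrative, not an official API; in an XML layout the equivalent is android:minWidth="48dp" and android:minHeight="48dp"):

```kotlin
import android.view.View

// Hypothetical helper: raise a view's touch target to the 48dp guideline.
// setMinimumWidth/Height only set a floor; they never shrink a larger view.
fun View.ensureMinimumTouchTarget(minDp: Int = 48) {
    val minPx = (minDp * resources.displayMetrics.density).toInt()
    minimumWidth = minPx
    minimumHeight = minPx
}
```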
The other way is to label stuff. So if you’re conveying information visually– particularly if there’s a button that just has an icon on it– provide a label. And these labels should be concise and precise so that users who are not able to see the screen can find out what they do. Sometimes, people
ask, when in my project should I consider accessibility? So the three word answer
is early and often. The slightly longer
answer is in each of these different
phases of your project, there’s something you can do. So in design, generally the
things that I just already mentioned– of keeping
contrast, control size, labels. That will get you a
long way if you’re using standard UI patterns. But the more innovation you’re
doing on your user interface, the further away
you’re going to get from the stuff we built
into the Android framework. And at that point,
it’s really helpful if you can start thinking
more broadly about how the full range of users who
are going to use your product. So the more gesture type
things that you’re doing, you want to make sure that
you’re taking into account how people are going to interact
with that if they perhaps can’t perform that gesture. During development, we
have a series of APIs. Qasid’s going to
present some of those. And for testing,
we have a number of testing tools to help make
sure that as much as possible we can automate the process
of making sure you’re handling these things correctly. So I just explained a
whole bunch of stuff about all the
complexities of users and all these different
things you can do. So you could be forgiven
for wondering why we said making accessibility easy. So I did say making, not
that we already made. But in general, what we want is
we really want this to be easy. I think we want that to be– we want to make sure
that we’re doing that. And if we’re not,
please let us know. But if you’re using standard
components and standard ways, it really should be
as simple as what I’ve already said,
that you should just be having to label things. We should– the questions that
you need to answer should not be complicated questions
about how is a particular type of user going to
use your project– something like, what
does this control do. But as you evolve in
more non-standard ways, it is going to get less easy. But what we want is to make
sure that the incremental work that you do for accessibility
grows more slowly than the work you need to do to serve
whatever users you have in mind to begin with. And now Qasid is going
to talk a bit about a way to think about the
development process for folks with disabilities. QASID SADIQ: Hey, guys. My name is Qasid. I work for Phil on the
Android Accessibility team. So let’s get right into it. Now, if you really
think about it, there are two ways that
your user can interact with your application, right? The first is
consuming information. Now, this can be content
that your user wants, or it could be indications
on how to use your UI. And the way we
consume information varies dramatically
from user to user. So for example, I
consume information by looking at what’s
on screen and consuming and processing it in that way. A TalkBack user,
on the other hand, will hear a
description of what’s on screen from the device’s
speech synthesizer, right? Now, once a user has
consumed and understood that information, and they’ve
combined it with information from the real
world, they can then act on your application
to make it do things. And just like
consuming information, this varies
dramatically from user to user and from
circumstances circumstance. Now, these actions
can be something like a tap, a swipe, a scroll. It can be speaking
into a device. Some Switch users even
drive the whole device with a single button. Now, once we combine these
two modes of interaction, we see this cycle of action
and information, right? And we can see that
if any part of this is suddenly broken for
a user, our applications become fundamentally useless. So our goal becomes,
how do we make sure that this is complete
for every single user? Now, that seems like
a daunting task. Because like we’ve
mentioned before, the ways that the users
interact with our devices actually varies quite
a lot, and trying to understand those users
is quite a complicated task. But if you don’t try to do
anything too non-standard, and you trust us,
and you use our APIs, then this is actually
a pretty easy task. So I’m going to show you
some trivial things you can do which are fundamental
and also push your accessibility to be much better than it
otherwise would have been. Let’s start in the
ways that we break consumption of information. Now, for the purpose
of this, I’ve created this nice little search
application– a novel idea. And the way that I’m indicating
that this is a search UI is by that magnifying
glass and that little line. Now, right away you’re
going to notice something. That’s a very light shade
of gray on a very slightly lighter shade of gray. And that’s going to be difficult for a lot of people to see because that’s a
very low contrast ratio. It’s going to be frustrating
and downright impossible for others. So what we should do is
we should darken it up. Just like that,
many more people are able to use our application. The goal here is make sure
all of your information is visible in terms
of size and contrast. And if you want concrete
guidelines on this, you can go to the
Material website, and we can give
you contrast ratios in different circumstances
and also hard numbers in terms of size like the 48 by 48. Now, that was kind
of straightforward, but there are clearly
some dramatic variations in how we consume information. And for those situations,
what we want you to do is we want you to fill in the
blanks for our frameworks. What I mean by that
is our frameworks can infer a lot from your
view hierarchy from the code that you’ve already written,
but there are some situations where we need your help. Let me show you. Let’s just say that
I’m a TalkBack user, and I’m placing my
finger on screen to hear a description of
whatever is under my finger. And I place my finger
on the magnify icon. Now, I want to
hear what that is. But unfortunately, there’s
no way for our frameworks to figure out what that
means because there is no text associated with that. I mean, it’s essentially
an arrangement of pixels on screen. So you guys have to fill in
the blanks as app developers, and you can do that
by adding a label. So in this case, I decided
to label it Search. Make sure you keep your labels
concise and to the point. If it’s something that
indicates an action, then make it a
simple action verb. This is actually pretty easy. In this situation,
all you would do is set the view’s content description. Make sure your string is localized because it’s a user-facing string, and accessibility users don’t only exist in America.
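Concretely, that might look like the sketch below (the view id, the string resource, and the assumption that this runs inside an Activity’s onCreate are all illustrative); in XML the equivalent is android:contentDescription="@string/search_label":

```kotlin
import android.widget.ImageView

// Label an icon-only control so screen readers can announce it.
// R.id.search_icon and R.string.search_label are assumed resources.
val searchIcon = findViewById<ImageView>(R.id.search_icon)
searchIcon.contentDescription = getString(R.string.search_label)  // e.g. "Search", localized
```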
Now that our user can generally understand what’s happening on screen, let’s make sure we allow them to act on our
application appropriately. So I decided to
add a new feature. It’s a Clear Text button,
so you can easily– instead of having to Backspace, you can
easily reset and start typing again. But that’s a really tiny button. You know, I’m going to
have trouble tapping that, and someone who has some
sort of fine motor disability is going to have a lot
more trouble tapping that. It might just be
impossible for them. Let’s make it bigger. And now, again, this
works for many more users. Like Phil mentioned
earlier, make sure your controls are simple,
and they’re large. So I decided to add
another feature. I decided to add
a History feature. Here you can tap
any of the queries and see the results
of the query again. Now, if you end up swiping
on any of these items, you’ll end up
seeing a trash icon indicating that this item
can removed from History. And if you continue
swiping, that item will be removed from History. So this is great
and all, but this is a pretty custom-built
gesture overloaded onto an item, and it’s going to be hard
for our frameworks to detect. So again, you guys
are going to have to fill in the blanks
for our frameworks and for accessibility users. Now, you can do that
by adding an action or an accessibility action. And all you have to do here
is specify a user-facing label or description of that
action– again, a simple verb– and the bit of code
that should be executed when this action is performed. Currently, you can do that with
the accessibility action API, but we are actually adding something to the AndroidX library which allows you to do this in a single-line call with a lambda and a string passed in.
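The single-line AndroidX call being described appears to be what later shipped as ViewCompat.addAccessibilityAction in androidx.core; a rough sketch, where historyRow, the string resource, and removeFromHistory() are illustrative names:

```kotlin
import androidx.core.view.ViewCompat

// Expose the swipe-to-remove gesture as an accessibility action with a spoken label.
// Services like TalkBack or Switch Access can then list and invoke it directly.
ViewCompat.addAccessibilityAction(historyRow, getString(R.string.remove_from_history)) { _, _ ->
    removeFromHistory(historyRow)
    true  // report that the action was handled
}
```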
So I’ve shown you a way of thinking about accessibility issues
and how to address them, but you still need to know
how to find these issues and how to verify that
you’ve actually fixed them. This is where Casey comes in. Casey? CASEY BURKHARDT: Thanks, Qasid. So I’m Casey Burkhardt. I’m a software engineer
on Google’s Accessibility Engineering team, and
I lead the development of accessibility testing
tools for Android. So as Qasid had
pointed out, there are many common
accessibility issues that actually– fixes are
fairly straightforward from the development side
and can really improve the accessibility of your app. A big question,
though, is how do we go about finding those issues? And there are three types
of accessibility testing today that I’d like to cover. There’s automated testing–
the use of automated tools; manual testing, where you
use Android’s accessibility services yourself to understand
the user’s experience; and finally user testing,
where you would actually bring in users with
various disabilities and truly get their feedback
and understand their experience using your application. My section of the talk
focuses mainly on automation because right now we see
that as one big area where we can make vast improvements
across the ecosystem if we have some developers’
cooperation using these tools. So automated accessibility
testing until several years ago on Android
didn’t really exist. In around 2015, we
launched a project known as Android’s Accessibility
Test Framework, which is an Android and Java library
that essentially houses a lot of detection logic for
the more common accessibility issues that we can
identify within an app’s UI in a rule-based fashion. This is an open source project,
so you can go and find it on GitHub. And essentially, what
it’s aiming to do is look at your
app’s UI at runtime and find some more of the
automatable, mechanical aspects of accessibility that
we see day to day that affect users the most. So we can find a lot
of the common issues that Qasid discussed,
so we can tell you if you have a control
within your UI that is missing a label
for a screen reader. We can identify low
contrast text and images. We can tell you if you’ve got
clickable items within a UI that fail to meet our
minimums for touch target guidelines. And we also identify a
number of various other implementation-specific issues. And the core testing
library, ATF, is growing constantly to
find more new and interesting issues. We’ve taken this library,
and we’ve integrated it with common developer
endpoints– essentially, tools that you’ll use commonly
throughout the development lifecycle and you can leverage
throughout your project. I want to talk through some
of those integrations today and how you can use them and
get started with them quickly. So first and foremost,
we’ve got integrations with two test frameworks
that are pretty commonly used throughout unit and UI
tests on Android apps: Espresso and Robolectric. So essentially the idea with
these integrations of ATF is they will piggyback on
top of your existing tests. So you have a test that
runs, loads some UI from your application,
interacts with that UI, and asserts some state. During the interaction
phase of your test, if you’ve enabled our
integrations with Espresso or Robolectric, we’ll actually
evaluate the UI at the time that it’s interacted with. So if we identify an
accessibility issue that we believe will affect
a user with a disability’s ability to interact
with your application, we will fail your existing test. So if you already have
strong test coverage with either Robolectric
or Espresso, enabling ATF’s integration
with that test framework might be a great way to get
a decent amount of coverage for automation in terms
of accessibility testing throughout your app. A few other points
to note here– each framework
offers what’s known as an AccessibilityValidator. And this is an
object that you can use to essentially configure ATF
behavior inside of your tests. So you can configure ATF, for
example, to not fail your test and just log something if
we’ve run into an issue. You can set up ATF to crawl from
the root of your view hierarchy whenever you perform an action
rather than just evaluating the item that was
interacted with. It’s a great way to get
some additional coverage. And you can even use
AccessibilityValidator to set a whitelist. So if you want to turn
on accessibility tests within your Robolectric
or Espresso tests, you can do so, maintain
a green presubmit, and burn through issues
that you know about by creating and registering one
of these whitelists for known issues. So specifically inside how you
leverage these between Espresso and Robolectric, in Espresso
you’re going to essentially call
AccessibilityChecks.enable() from within your test setup. This is going to essentially
trigger our global assertion to run whenever you use a view
action within an Espresso test. So in this case, the
view action is run. We’ll perform an evaluation on
the view that was interacted with and its subtree. AccessibilityValidator is
actually returned by the call to AccessibilityChecks.enable(). So if you need to further
customize ATF’s behavior, you can do so from the object
returned from that call. Within Robolectric,
it’s slightly different. So instead of calling
AccessibilityChecks.enable(), AccessibilityChecks
is an annotation, and you’ll annotate your test
method or class that you’d like to enable accessibility
checking within. And it relies on you using
ShadowView.clickOn to actually interact with
elements in your UI through your Robolectric. tests. So avoid the temptation to use
viewperformClick() directly. The AccessibilityValidator
functionality’s available in Robolectric
through a different class that mirrors the same API. You can just look
for AccessibilityUtil and make static calls there
to configure ATF’s behavior. So in addition to integrations
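A rough sketch of that Robolectric integration as described here (the @AccessibilityChecks annotation and ShadowView.clickOn come from the Robolectric 3.x era; later Robolectric releases have changed or dropped this mechanism, and buildHistoryItemView() is an illustrative helper):

```kotlin
import org.junit.Test
import org.junit.runner.RunWith
import org.robolectric.RobolectricTestRunner
import org.robolectric.annotation.AccessibilityChecks
import org.robolectric.shadows.ShadowView

@RunWith(RobolectricTestRunner::class)
@AccessibilityChecks  // run ATF checks on every ShadowView.clickOn in this class
class HistoryItemTest {
    @Test
    fun removeButtonPassesAccessibilityChecks() {
        val itemView = buildHistoryItemView()  // illustrative: inflate the row under test
        ShadowView.clickOn(itemView)           // evaluated by ATF before the click runs
    }
}
```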
So in addition to integrations with these automated test frameworks, we’ve built a
separate standalone tool known as Accessibility Scanner. This is a direct
integration of ATF, but Accessibility Scanner acts
as a front end for the library. It will evaluate the foreground
UI of your application at runtime on an
actual Android device. The way it works
is essentially you install Accessibility Scanner. You turn it on. It will add a floating
button globally to the device’s screen. You’ll open up your
application, navigate to the UI that you’d like to evaluate,
and just tap the button. And what you see is
essentially a report that describes the ways
in which you can improve that UI for accessibility. Again, these will mirror
the same types of issues that ATF, and Espresso,
and Robolectric can identify as well. It’s very easy to
take these reports from Accessibility Scanner
and share them with your team. You can currently export
to email or Drive. And it doesn’t really
require technical skills. You don’t have to have a debug
version of your application or a user debug device. You don’t need to do anything
special to set this up. It works on any app on
any Android device running Marshmallow or later. And really, you don’t even have
to have a lot of experience related to accessibility. Each issue that Accessibility
Scanner can identify comes with extensive
documentation that gives you background on how that
issue might affect users and how to think
about that issue during the design,
development, and test phases of your project. So please do download
Accessibility Scanner. It’s something I
highly recommend using whenever you build UI. G.co/AccessibilityScanner will
take you to the Play Store page. Finally, one last integration
I’d like to cover today, and this is a new one. We launched it a few months ago. It’s an integration of
our Accessibility Testing Framework and the Play Store
Developer Console’s Pre-Launch Report. For those of you who haven’t
used Pre-Launch Report yet, it is a great tool to get a
sanity check during the launch process, the release process
for an APK to the Play Store, either on an open
or a closed channel. Essentially, the way it
works is you upload an APK. Pre-Launch Report will take
that APK, instrument it, and push it to a number of
different physical devices in a lab. And we’ll crawl your
app, essentially, on these different
devices and generate reports that include findings
about performance, security, and now accessibility as well. So ATF is running
alongside Pre-Launch Report as it’s crawling your application,
and it’s generating reports at each stage in the crawl. And it’s taking
all of the results and de-duplicating them. So if you’d like
to check this out, it’s available now in Play
Store Developer Console under Release Management. And you should be able to
see accessibility results for any APK that you’ve
uploaded to the Store since fairly early in July. So please do check that out. And here’s what it looks like,
just to give you an idea. Here, the main entry point for an APK will show you all the different
categories of issues that we’ve identified. It will show you
clusters of issues. And you can click
on any one of those, and it will show
you more details about the specific problem. So in this case, we’re
pointing out an issue relating to touch target size. And in the column
on the left, you see many different examples of
that same de-duplicated issue across different crawls
of your application that Pre-Launch
Report has performed. You have access to the same
additional documentation here as well. So if you’re not familiar
with a particular issue, the Learn More
link will give you the details you need to resolve
it regardless of what stage your project is currently in. So I want to wrap up by talking
about an accessibility testing strategy. We talked a lot
about automation, but we didn’t go into too
much detail on the other two– manual testing and user testing. And these are equally important. Automation is great
because it helps you find issues very quickly
early in the development cycle, especially if you have good
test coverage with our automated tools. So essentially, think
about automation as a way to catch very
common issues quickly but not as a way to guarantee
that your app or your UI is fully accessible. To really understand
your user’s experience, to get that awareness of how
your UI is actually performing with an accessibility
service, we really highly recommend you go and
actually turn on TalkBack. Turn on Switch Access. Try them out. Learn how to use them and
gain a true understanding of your user’s experience. The way I like to
describe it is automation is capable of finding a missing
label for a screen reader, but we can’t really tell you
if your labels make sense. So really only by
understanding, putting yourself in the user’s shoes
and understanding their experience– or
asking users directly about their experience– only then can you truly
understand how accessible your UI is for users with
various different disabilities. And we have found
at Google, both looking at our first-party
and third-party applications, the most successful, the most
highly accessible apps that we see day to day are
the apps that combine multiple accessibility
testing strategies. So those that
leverage automation during presubmit or continuous
integration, but also have a regular process for
manual accessibility testing or bringing users in and
understanding their perspective on the accessibility
of an application. So these are all
things to consider. With that, I’ll hand
it back to Qasid, who’s going to talk to us more
about some of the newer APIs for expressing app semantics. QASID SADIQ: Hey, guys. I’m back. So let’s just say that you’ve
adopted the APIs that I’ve talked about earlier, and
you’ve adopted the testing that Casey has suggested. And at this point, you’ve
got a pretty good foundation of accessibility in
your application. But the more you’re
testing, the more you realize that there’s some
holes in your experience, and that’s breaking that
cycle of interaction that I mentioned earlier. So we’ve been adding new API
and new things to make it so these holes get plugged. So the first things
first is clickable spans, those little clickable bits of text. Before API 26, non-URL
spans were fundamentally inaccessible, and
developers had to write a bunch of hacky workarounds
to make these things work. That’s now changed. In addition to that, in the
latest alpha of the AndroidX library, we’ve added API to make this work all the way back to API 19. So you can look out for that too.
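For context, the kind of span being discussed is just an ordinary non-URL ClickableSpan; a sketch (learnMoreTextView and showLicenses() are illustrative names), which the framework on API 26+, and the AndroidX work mentioned above on earlier versions, can now expose to accessibility services without extra developer work:

```kotlin
import android.text.SpannableString
import android.text.Spanned
import android.text.method.LinkMovementMethod
import android.text.style.ClickableSpan
import android.view.View

// A plain in-text link that isn't a URL; services like TalkBack can now
// discover and activate it like any other clickable element.
val label = SpannableString("Open source licenses")
label.setSpan(object : ClickableSpan() {
    override fun onClick(widget: View) = showLicenses()
}, 0, label.length, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
learnMoreTextView.text = label
learnMoreTextView.movementMethod = LinkMovementMethod.getInstance()
```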
Secondly, there are some views that, as far as a user is concerned, act and behave like windows. They have their own special
content, their own lifecycle, so on and so forth. And we call those
accessibility panes. Now, you as the developer
need to inform our frameworks to present these a little
bit differently to the user. And the way you do that is
by passing a non-null string to setAccessibilityPaneTitle
on the view. And again, this is a
user-facing string, so make sure it’s concise, and
it’s localized. This is available in API 28, but we’ve also added this API to ViewCompat in the latest AndroidX alpha, and that’ll work all the way back to API 19.
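A minimal sketch of that call for a full-screen fragment, assuming it runs in the fragment’s onViewCreated and that R.string.search_history_pane is a localized string resource (both are illustrative assumptions):

```kotlin
import android.os.Bundle
import android.view.View
import androidx.core.view.ViewCompat
import androidx.fragment.app.Fragment

class HistoryFragment : Fragment() {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // Tell accessibility services this view behaves like a window,
        // with a title they can announce when the pane appears.
        ViewCompat.setAccessibilityPaneTitle(view, getString(R.string.search_history_pane))
    }
}
```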
And finally, there are headings. These section titles are used by TalkBack users, for example, to navigate through
sections quickly and easily. And the way you specify that
is exactly what you probably expect. Pass in a Boolean to
setAccessibilityHeading on a view. This is also available
in 28, and we added this to the latest alpha of the AndroidX library. That’ll work all the way back to 19.
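For example (historyHeader being an illustrative TextView that titles the History section):

```kotlin
import androidx.core.view.ViewCompat

// Mark the section title as a heading so screen-reader users can jump
// between sections instead of reading through every item.
ViewCompat.setAccessibilityHeading(historyHeader, true)
```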
[INAUDIBLE], thank you for your time. Now it’s back to Phil. PHIL WEAVER: All right. So circling back to
the title of this, which was Making
Accessibility Easy, we really want this to be easy. It’s the only way that
users with disabilities are really going to be able
to access the ecosystem. Because occasionally
we’ll encounter somebody who’s really willing to go the extra mile, really dig into their
app and sort out all of the issues
for accessibility, and that’s great. But not everybody, not every
developer can possibly do that. And so we need to make
this as easy as possible so that everyone can build this
into their workflow in a really natural way. And so certainly
to get the basics, we want it to be
straightforward. So if you’ve got
reasonable contrast to make the information
visible to everyone, you’ve got simple big
controls, you’re using labels, we really want that to kind
of get you almost all the way. And then some of these other
APIs like Qasid was just describing should be able
to handle some of these more specialized situations. Like if you’ve got a full
screen fragment transition, you use the pane
title to make sure that that gets handled the same
way a window transition would. So saying that we
want it to be easy– it means if it’s
hard, we’ve messed up, and we should fix that. We’ve found some developers
will really like– well, the framework seems to
be coming up short here. So I’m going to
engineer a workaround to get around the problems
with the framework. Honestly, please don’t do that. Let us fix it because
we can fix it at scale, and only we can fix it. It needs to be fixed
inside Android. If you find an issue in AndroidX and you want to fix it, by all means upload a fix to AndroidX. We’re happy to accept fixes. But we want it
fixed centrally so that we can get a
consistent experience throughout the whole ecosystem. If you are engineering
some super-custom solution, ask yourself:
is an engineer working at a different company going
to actually do this work? And if the answer
is no, then there’s probably something wrong. And so please reach out
if we’re messing this up. You can file bugs on AOSP. You can ask questions
on Stack Overflow. But we much prefer to– we really prefer to get
the feedback that something is difficult, so we can try
to find an elegant solution that everyone can use. We want the– some of things
that Qasid just presented, before he did that work,
the effort required to do some of these
things really required learning a new API surface. And we really wanted to condense
everything we could down to one line. And so we’re trying to
present solutions that really aren’t– like if you’ve got this
thing, here’s one line of code. It’s all you need. If you get to
something that seems like it should be one line
of code, and it’s not, let us know. And another place to go
just for other resources is g.co/androidaccessibility. There’s a link here to
the developer.android.com for accessibility
testing, the link to get to the
Accessibility Scanner, and also the test framework
project that Casey described. That’s available
open source on GitHub if you’re interested
to look at that. So really appreciate
your time, and we’d be very happy
especially– if you’ve got feedback for us of other
things that should be easier than they are,
we’re going to also be out at the office hours
for a while this afternoon. We’d really love to talk to you. So thanks a lot for coming
and for your efforts in helping to make a
consistent ecosystem of very accessible apps. Thanks. [APPLAUSE] [MUSIC PLAYING]

Only registered users can comment.

  1. Good talk! Some feedback since you asked: We should be able to set TouchDelegates and make the touch area of a view bigger using a single xml attribute, not all the code that we currently need (https://developer.android.com/training/gestures/viewgroup#delegate)

  2. We are eager to learn so many ideas from Google, so we would like to request that Google not forget to make its most important educational material publicly available through its media. From Nepal. Thank you!
