Demystifying Android Accessibility Development (Google I/O’19)

August 11, 2019


PHIL WEAVER: Good morning. Thank you for coming. To everyone on the Livestream,
thank you for watching. Thank you for your
interest in accessibility. My name’s Phil Weaver. I’m here with Qasid
Sadiq and Isha Bobra. And we want to talk
to you this morning about demystifying Android
accessibility development. And what we mean
by demystifying is there’s a tremendous amount of
complexity in accessibility. And as part of
Android, we’re trying– I think our job is partly to try to avoid having developers deal with that complexity. And as we looked at some of our APIs, we felt like we’ve maybe not done as good a job on that as we should have. So what we want to do is present some ideas here: an overall idea of how we think about accessibility, some new APIs we’ve built to try to simplify your experience, and some tools to help you test your products. So I’m going to
do a bit of a run through of our
high level opinion. Qasid is going to
talk a bit about APIs. Isha’s going to
talk about testing. And I’ll talk a bit
at the end, wrap up. So I’ll start with three simple
things about accessibility. I feel like, as I
think about this, I personally know
more than three things about accessibility. I’ve been working on this for close to a decade. So I maybe know
like five things. But there are a
lot of other areas I work in, like internationalization and version control, a whole range of things where I know just a few things. And that’s kind of enough to
get through my overall workflow. I feel like there’s probably
three things that everybody in the world should know about
accessibility to sort of build into their daily work. I think the first
one is very simply to make information visible. And a lot of people think
about accessibility, they start thinking about screen
readers and people who cannot see. There are a lot of
people in that category, and that’s important. There’s also a lot of people
who just have low vision or, like me, are getting
older, and it’s harder to read small fonts. So just making sure
that your information is as visible as possible
without anybody having to turn on any
tools or do anything fancy with accessibility is actually
one of the most common things we see people struggle with. There’s a lot of designers who
will kind of design something that looks really great because
it’s light gray on slightly lighter gray. And I think a lot
of us have seen that both in designs
we’re trying to implement and designs that we’re
struggling to use or fonts that are just too small
because too much information is being crammed into a page. The other thing to
think about is color. If you’re using color
to convey information, keep in mind that there’s
a lot of people out there who are colorblind,
and so they’re just missing the information
you’re trying to convey. So just in general,
if you can think about how to make
information visible, you can go a long way toward reaching more people. The second one’s the
complement of that, which is assuming
somebody can see your UI and understand what
you’re trying to convey, are they able to
actually use it? And that’s where just
simple big controls can make a big difference. I think, again, we’ve all had
the experience of somebody who got a little carried away
of just trying to add too much configurability into
too small a space, and it ends up being
kind of random which button you end up hitting. And for somebody with a
little bit less motor control, or somebody with just
really big fingers, their experience of using UIs that maybe we don’t find too tight can be the same as ours when we’re struggling with something that’s clearly way too close together. So just make sure that, in both cases, information is big and bold, and controls are big and simple to understand. And the third one
really gets into sort of a lot of what
often is thought of first for accessibility,
which is to help users who can’t see the screen. If you’re using an image
to convey information, after you’ve made it
as visible as possible, also make sure it works for
people who can’t see it at all. And the simplest way to do that
is just to label the image. And we have a very simple
API that Qasid will mention that can help you do that. When you go to label things,
you want to label it precisely, because you’re trying
to convey information, make sure that information
is conveyed in the text. But on the other
hand, if you think about the experience of
a screen reader user, there is– particularly if it’s
like a control or something they’re going to
go to all the time, they don’t want to hear a long
treatise about how exciting this particular graphic is. They really want to just
get on with their day and get on with whatever
action that thing can do. So you want to label
things concisely. These two things
are a little bit in tension with one another. So part of this is are you
trying to really just convey information, and
that’s what’s in there, make sure that’s conveyed
as precisely as possible. If you’re trying to just
explain to somebody how to use something, just
one word, which is a verb, is usually enough. So that’s three things. But you may have come here to
learn more than three things. So we want to dig a
little bit deeper and just think about how users
interact with your app. Much like the three things,
there’s essentially two things that you’re doing. You’re presenting
information to users, and you’re allowing them
to take actions on your UI. Often the first
design ends up being for people who may
not necessarily have an accessibility
need, but ideally, you’re thinking about this upfront. Then it quickly becomes
kind of overwhelming. You start thinking about,
OK, how’s somebody who can’t see going to use my app? How is somebody who’s motor
impaired going to use my app? How is somebody
who’s deaf or hard of hearing going to use my app? So now suddenly
you’ve got four types of users, which is your original
mainstream user and then these three categories. But within these
three categories, the more you stare at them,
the more complexity you see. It’s like a fractal. People who are visually
impaired, there’s all sorts of different
visual impairments. People can see different
levels of detail, different types of things. Motor impairments
come in a radically wide variety. And then also, there are
people that have combinations of these disabilities. And so what are you supposed
to do as an app developer? There’s like a billion people
in the world with a disability. Are you supposed to think
through a billion use cases? The answer is no, fortunately. And nobody is ever going to
think through all billion use cases. But with an ecosystem
like this, we do have the ability to scale
to that many use cases. And the tools we use to do that are accessibility services. These are plug-ins to the Android platform that get information about the UI and can take actions on the UI on behalf of the user. And those developers
can think about, how do I serve this
particular set of users that I’m targeting? And so they’re the ones that
are presenting the information. If somebody can’t
see the screen, they’re presenting it as audio or braille. If somebody can’t take actions
on the screen directly, they may be using
a switch to do it. And then these services
can intermediate that. The way that they work is
they get their information from the Android framework, and
they use the Android framework to take actions on the UI. And so down here at
the bottom is your app. In your app, what it needs to
do is present the information to the Android framework so that
the framework can then share it with all these
different services to support these
different users. And then it needs to
allow the framework to take actions on it so that
all these different services, all these different
users can actually get control of your app. So the way that you can do
this is really to use the APIs that Qasid is going to
talk about in a moment to make sure you’re
presenting the information and allowing users to take actions on it. And once you’ve done that, you
can use these testing tools to verify that your
app is actually working for a wide range
of different users. So now let me hand it over to
Qasid to talk about these APIs. QASID SADIQ: Hey, everyone. I’m Qasid. I’m on the Android
Accessibility team. Let’s talk about APIs. So as Phil mentioned, the way
your application communicates what’s visible on screen to
the accessibility service is the accessibility APIs. But thankfully
for you guys, most of the information that an
accessibility service needs can already be inferred
through the view hierarchy. But there are some situations
where you guys actually do have to use our APIs. Thankfully those are minimal. But let me show you what I mean. So let’s say you’ve
made this application. And let’s just say you’ve
got this More Options button. Now, our frameworks can
infer important information, like its position on screen
and that it’s clickable. That information lives in View.java. But when a TalkBack
user, a user who may not be able to see the screen
places their finger on this item to hear a description
of this item, they’re not going
to get anything. And the reason is
TalkBack really doesn’t know what to
say in this situation. There is no descriptive
text associated with it. So you as an app
developer have to step in and fill in the blanks for us. And you can do that through the content description API. All you do is pass a localized string into setContentDescription. Remember, keep the string localized, concise, and descriptive, because someone has to hear it. A user has to hear it. When Phil says label your items, he means this, and for good reason, because missing labels account for most of the accessibility issues your application is going to have. And thankfully, it’s trivially simple to fix.
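As a minimal sketch in Kotlin, assuming a hypothetical icon-only More Options button with the ID more_options and a matching string resource, the fix is one assignment:

```kotlin
import android.os.Bundle
import android.widget.ImageButton
import androidx.appcompat.app.AppCompatActivity

class MailActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_mail)
        // Give the icon-only button a label so TalkBack has something
        // to speak. R.id.more_options and R.string.more_options are
        // hypothetical names used for this sketch.
        findViewById<ImageButton>(R.id.more_options).contentDescription =
            getString(R.string.more_options)
    }
}
```

The same label can also be set statically in the layout file with the android:contentDescription attribute.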
Now our user knows that this is the More Options button, and we have a successful interaction. So let’s talk about
something different. Let’s say you’ve
got this email UI. And like most inbox UIs
with a list of emails, you can tap an email to select it. And you can swipe to reveal that an email is deletable. And if you continue swiping,
you’ll delete that email. This is great and
all, but not all users can tap and swipe on screen. TalkBack users, for
example, drive the UI through a completely
different gesture set. Switch Access users,
on the other hand, they drive the UI through a
series of single switches. So for these
particular situations, the accessibility service, whether Switch Access or TalkBack, needs to know what
actions you can perform on each item or
each view in your hierarchy. Now, as it currently stands, when a Switch Access user highlights a certain item, the user only knows that they can tap the item. And this may be
because of the way we implemented that remove
action or that delete action. So again, like
content description, we have to fill in the blanks. And you can do that through our
new accessibility actions API. All you do is call ViewCompat.addAccessibilityAction. You pass in the view, a localized string describing the action concisely to the user, and a lambda to be performed at the user’s request.
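A rough sketch of what wiring up the delete action might look like; the row view, label resource, and deleteEmail() callback are hypothetical names for this sketch, not APIs from the talk:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Called from wherever the email row is bound, e.g. a RecyclerView
// ViewHolder. emailRowView, the label, and deleteEmail are hypothetical.
fun exposeDeleteAction(emailRowView: View, label: String, deleteEmail: () -> Unit) {
    ViewCompat.addAccessibilityAction(emailRowView, label) { _, _ ->
        deleteEmail() // run the same logic the swipe gesture triggers
        true          // report that the action was handled
    }
}
```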
Now our hypothetical Switch Access user will be able to perform both the select and the delete action. And also, because this is in AndroidX, the library we use to backport a lot of our APIs, this is going to work back to API 21. OK, but let’s get into something
a little more complicated. Let’s talk about text and
links, or clickable spans. Now, before Android O, our
accessibility frameworks really couldn’t handle
non-URL spans well. And this is a problem,
as you can imagine: users like TalkBack users wouldn’t be informed that there are links on screen. Actually, they wouldn’t even be able to activate them. They would essentially
see nothing. So to solve this problem, we
added some API into AndroidX. For a text view that contains these non-URL clickable spans, all you do is call ViewCompat.enableAccessibleClickableSpanSupport and pass in the text view that contains the spans.
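Assuming a hypothetical termsTextView whose text contains non-URL ClickableSpans, the call is a one-liner:

```kotlin
import android.widget.TextView
import androidx.core.view.ViewCompat

// Make the non-URL ClickableSpans inside this TextView visible to
// accessibility services back to API 19. termsTextView is hypothetical.
fun makeSpansAccessible(termsTextView: TextView) {
    ViewCompat.enableAccessibleClickableSpanSupport(termsTextView)
}
```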
Now our users all the way back to API 19 will know that these links exist and will be able to successfully activate them. So as app developers,
a lot of you like to roll some
interesting custom UI. So this may look like an alert
dialog, but for whatever reason we decided to implement this
using a view group, a couple of text views and a button. Now, this poses a problem
for accessibility services like TalkBack, because a context change is being expressed visually on screen, but no actual window change has happened; this only behaves a bit like a window. So our accessibility user isn’t informed about it. So the way you solve this is
by treating this view group as an accessibility pane. And you can do that by calling ViewCompat.setAccessibilityPaneTitle on the view that you consider a pane, passing in a localized, concise string describing the pane to the user.
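A minimal sketch, assuming a hypothetical customAlertView that is the root ViewGroup of the custom dialog:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Treat the custom dialog's root ViewGroup as a pane so TalkBack
// reports it like a window change when it appears. customAlertView
// and the title string are hypothetical.
fun markAsPane(customAlertView: View, title: String) {
    ViewCompat.setAccessibilityPaneTitle(customAlertView, title)
}
```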
Now, when our custom alert appears, TalkBack is going to speak, "Alert." This also works all the way back to API 19. And finally, let’s say
you have a video player in your application. And it’s pretty typical
in that has a play button or some controls that
time out and disappear after a certain period of time. That’s useful because
most users just want to get to your content. They don’t want to fiddle
with your controls. But you can imagine
an accessibility user who needs to take time
interacting with your controls. By the time they’re able
to precisely interact with this play button, it disappears. Now they’ve got to figure out a way to get that play button back up on screen. They’ve got to figure out a way to interact with it before the timeout hides it again. And this is a pretty
frustrating cycle. So what we ideally
need in this situation is a way to adjust our timeout
based on our current user’s needs. And you can do that through
our new timeouts API. First you get a reference to the AccessibilityManager. Then you call getRecommendedTimeoutMillis. This returns the suggested timeout for your view, customized for your view and for your user. It does this by taking the default timeout that you had planned and adjusting it based on the type of content this view presents. You specify this in the second parameter. In this situation, this is a play button, and it’s a control, so we pass in FLAG_CONTENT_CONTROLS. You can imagine someone with a motor disability, for example, may need the timeout adjusted if it’s a control. It also presents visual information, so we pass in FLAG_CONTENT_ICONS for people who may have trouble parsing visual information. If it were text, we’d pass in FLAG_CONTENT_TEXT for people who have trouble parsing text.
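Here is one way this might be wired up, as a sketch; the default timeout constant is hypothetical, and note that getRecommendedTimeoutMillis itself is available from API 29:

```kotlin
import android.content.Context
import android.os.Build
import android.view.accessibility.AccessibilityManager

// Default hide delay for the player controls; the constant is hypothetical.
const val CONTROLS_TIMEOUT_MS = 5000

fun recommendedControlsTimeout(context: Context): Int {
    val manager =
        context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // The controls are interactive and icon-based, so pass both flags.
        manager.getRecommendedTimeoutMillis(
            CONTROLS_TIMEOUT_MS,
            AccessibilityManager.FLAG_CONTENT_CONTROLS or
                AccessibilityManager.FLAG_CONTENT_ICONS
        )
    } else {
        CONTROLS_TIMEOUT_MS // fall back to the planned default pre-Q
    }
}
```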
Now we’ve got a timeout that’s customized for our view and for the current user, and a play button that works for everybody. OK. So those are the fundamentals. And let’s just say you’ve used
those fundamentals to make your application accessible. You become a bit of an expert. But you are going to
really quickly discover there are some murky areas,
places where it’s not clear what the right thing to do is. Let’s get back to this email UI. Now, let’s just
say you’re trying to build very specifically for
the TalkBack user, the user who can’t see the screen. You try to determine what the experience is going to be when a new email appears. And you’re trying to figure out how to express this change, and you figure the best thing you can do is make an announcement. Every time a new email
appears, announce the email. Well, this is a bad idea. And if you catch yourself
using the accessibility event TYPE_ANNOUNCEMENT,
you’re probably conforming to this anti-pattern. You see, changes in the UI are
expressed very differently, depending on the accessibility
service and the user’s preferences. Services don’t need fine
tuning of accessibility UI from the application. They need a generic
representation of the UI that they themselves can
manipulate for the users that they understand so well. So what do you do
in this situation? This is what you do. That’s right. You don’t do anything. And you can do this
by using the widgets we provide to you
in our frameworks, such as AndroidX and Material. These widgets come with
accessibility built in out of the box,
which significantly reduces the amount of work you
as an app developer have to do. So you can really focus on
the very particular value that your application
provides to the world. But if you really must write your own custom widgets, if you really must do it yourself, make sure you’re communicating the exact semantics of the changes in your UI by using the correct accessibility events and populating them correctly.
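For a change like the new-email case, one framework-provided alternative to firing TYPE_ANNOUNCEMENT is to mark the view that updates as a live region and just update the UI, leaving it to the service to decide how, and whether, to convey the change. A sketch with a hypothetical view name:

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Instead of sending TYPE_ANNOUNCEMENT, mark the view as a polite live
// region; TalkBack and other services choose how to report changes to
// its content. newEmailBanner is a hypothetical view in the inbox UI.
fun markLiveRegion(newEmailBanner: View) {
    ViewCompat.setAccessibilityLiveRegion(
        newEmailBanner,
        ViewCompat.ACCESSIBILITY_LIVE_REGION_POLITE
    )
}
```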
Remember, in this situation, you’re not communicating directly with the user. You’re communicating with the accessibility service. So something similar
that people like to do is manage accessibility
focus themselves. And again, this is a bad idea. Accessibility focus
has to be determined by the accessibility service. And just like
announcements, this creates an inconsistency
in experience. And actually, that’s one
of the biggest issues that accessibility users
face, inconsistency across applications
and over time. You see, there are a
lot of applications. And if you as an app
developer decide to break with the paradigms of accessibility
interaction from the rest of the system, you’re making
your users’ lives frustrating, because now that
accessibility user, every time they open
your application, they’ve got to throw out all
of their expectations in terms of how their interaction works. And they’ve got to
relearn this whole new UI at a very fundamental level. The best thing that you can
do for your accessibility user is to maintain consistency
over time and with the system. OK, now that you know
how to fix your issues, Isha’s going to talk
about how to find them and how to make
sure you fix them. ISHA BOBRA: Thanks, Qasid. Hello, everyone. So now that we know
what we are building for and how to build it, the
next obvious question is, how do I know what
I built is correct? Things like, is my text
visible to most of the users? Or is my button large enough? Is my button even labeled? Wouldn’t it be nice if someone
could check that for us? Well, there are
several approaches in which you can
answer this question and make the testing task easy. On a high level, there
are three approaches that you can leverage
as a developer to ensure you’re creating an
accessible experience for most of your users. The first is automated tests. This technique requires
some coding changes and is very good for detecting accessibility issues in the very early development phases. You can run these tests alongside your existing UI unit or integration tests as part of a presubmit or continuous integration solution. The next tool we’re
going to look at is the accessibility
testing tools. These tools do not require
any technical knowledge and can be run by QA
teams and release managers to perform a sanity check
before your app is released out in public. And the third is manual testing, which, from experience, we have realized is one of the most effective ways to ensure you’re creating an end-to-end experience for users with disabilities in
real world scenarios. Let’s take a deep dive
into the three techniques. Let’s first talk about
integrating accessibility into your existing testing code. Most of the Android
Accessibility testing tools are backed by the Android
Accessibility Testing Framework. It is a Java library built on a rule-based system that evaluates Android UI constructs for accessibility
issues at runtime. Remember, it’s open source. So if you wish to make
contributions and add checks for accessibility, please
reach out to us on GitHub. So what does this
framework test for? It tests for missing labels,
which actually prevents users of screen readers
from understanding the content within your app. It looks for small
touch targets, which can prevent users with dexterity issues from interacting with your app. It also looks for low
contrast text and images, which impacts the
legibility of your app, and it looks for other
implementation-specific issues, which can actually prevent
your app from sending the proper semantics to
the Android Accessibility Framework. So that was about the
framework, and we understood what the framework tests for. The question is, how do
I use this framework? So we’ve made it really easy
to integrate this Accessibility Testing Framework into the
existing testing frameworks like Espresso and Robolectric. These are provided as an optional component, and you can use your existing
test code to run these checks. As you interact with
the view in your tests, these accessibility
checks run automatically before proceeding. So if you’re interacting
with a button in your test, we look for the
button and potentially the UI around the button to
look for accessibility issues. For Espresso, you can use AccessibilityChecks.enable to enable the checks. The result of AccessibilityChecks.enable is an AccessibilityValidator, which can be used to customize your tests. For example, you can use .setRunChecksFromRootView to increase the coverage of your tests by running the checks on the entire view hierarchy where a view action is performed. You can also call .setSuppressingResultMatcher to whitelist known accessibility issues so that your tests stay green while you fix them. It is required that you use a view action from the ViewActions class to trigger these checks; in this example, you see the use of the click action. Remember, if you interact with the view directly, you bypass the accessibility checks.
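A sketch of an Espresso test with the checks enabled; the test class, hosting activity, and view ID are hypothetical:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.accessibility.AccessibilityChecks
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.BeforeClass
import org.junit.Rule
import org.junit.Test

class ComposeButtonTest {
    companion object {
        @BeforeClass
        @JvmStatic
        fun enableAccessibilityChecks() {
            // Widen coverage: check the whole hierarchy, not just the
            // view the action touches.
            AccessibilityChecks.enable().setRunChecksFromRootView(true)
        }
    }

    // MailActivity is a hypothetical activity hosting the button.
    @get:Rule
    val activityRule = ActivityScenarioRule(MailActivity::class.java)

    @Test
    fun clickCompose_runsAccessibilityChecks() {
        // The click() ViewAction triggers the enabled accessibility
        // checks before it runs. R.id.compose_button is hypothetical.
        onView(withId(R.id.compose_button)).perform(click())
    }
}
```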
For Robolectric, you use the @AccessibilityChecks annotation to enable the checks. Checks will run on a view when you call ShadowView.clickOn on the view you want to test. And much like Espresso, you
can customize your tests using Robolectric’s accessibility utilities.
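And a sketch of the Robolectric flavor, matching the annotation-plus-ShadowView.clickOn flow described above (these hooks are tied to the Robolectric versions current at the time of this talk); again the activity and view ID are hypothetical:

```kotlin
import android.view.View
import org.junit.Test
import org.junit.runner.RunWith
import org.robolectric.Robolectric
import org.robolectric.RobolectricTestRunner
import org.robolectric.annotation.AccessibilityChecks
import org.robolectric.shadows.ShadowView

@RunWith(RobolectricTestRunner::class)
class ComposeButtonRobolectricTest {
    @AccessibilityChecks // enable accessibility checking for this test
    @Test
    fun clickCompose_runsAccessibilityChecks() {
        // MailActivity and R.id.compose_button are hypothetical.
        val activity = Robolectric.setupActivity(MailActivity::class.java)
        val button = activity.findViewById<View>(R.id.compose_button)
        // ShadowView.clickOn runs the enabled checks before clicking.
        ShadowView.clickOn(button)
    }
}
```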
So that was all about automated tests: making changes in your code and catching accessibility issues in the very early development phases. Next, we’re going to look at
using the accessibility testing tools. These are automated
tools and do not require any technical knowledge. The first we’re
going to talk about is the Google Play
Pre-Launch Report. It is an automated tool
that crawls your app on multiple physical devices and
looks for accessibility issues so that you can fix them
before launching your app. It looks for issues like
crashes, performance, and now even accessibility. We’ve made it really easy
to get accessibility test results by integrating
those directly into its developer console. These checks run on all APKs
released on any Play Store track. Pre-Launch Report is located
within the Google Play Console beneath Release Management. You can see here’s a list of
issues that are highlighted by the Pre-Launch Report. These issues are
clustered, characterized, and ranked by severity. Here’s a detailed view of what a report generated by the Pre-Launch Report looks like. You can see the text with
the incorrect contrast ratios highlighted
and a suggestion is provided to improve it. On the left-hand
side panel, you can see the occurrences of the
same underlying issues being highlighted. For each of the accessibility
findings identified by the report, there
is a Learn More link, which gives you a detailed
understanding of the concept and provides suggestions
to improve it. Yeah, that was about the
Google Play Pre-Launch Report. Now we’re going to move on
to the Accessibility Scanner. It’s another app
that scans your UI and looks for potential
accessibility issues, like missing labels, small
touch targets, et cetera. You do not require
any code change to use Accessibility Scanner. All you need to do is go to the
Play Store, download the app, or visit
g.co/accessibilityscanner. Launching the app will take
you through the set-up process. Here’s an app that we’ve
created to highlight some of the known
accessibility issues. You can see a blue
floating button on the UI. This is the button that
appears when you switch on Accessibility Scanner. In order to scan my app, I would
simply tap on the blue button while my app’s UI is
in the foreground. Here is an example of what a report created by Accessibility Scanner looks like. You can see the text with incorrect contrast ratios highlighted. It’s not just highlighted, but
also a suggestion to improve it is provided. You can share the
reports created by Scanner via email or Google Drive. And if you click on the Learn More link, it opens detailed documentation of the concept and the
issue you’re looking at. So that was about
the automated tools. We looked at the
automated tests, which require coding changes and are good for detecting issues while you’re developing. Next we looked at the
automated testing tools, which were the Pre-Launch Report
and the Accessibility Scanner. The third technique we’re going
to look at is manual testing. So automation is really
helpful, because it helps you detect issues
while you’re developing, but it’s not a
complete solution as it comes with certain limitations. Let’s look at them. If you’re depending on
your automated tests, and they’re based on the tests that you’ve already written for your code, their performance very much depends
on the coverage that your test code provides. Parts of your code
that are not tested can have serious
accessibility issues. Secondly, like any
other automation, there are always chances
of false positives and false negatives. We strive to reduce false
positives as much as we can, but that can result
in neglecting some of the legitimate
accessibility issues. And thirdly, there
are certain issues that require human
judgment and intervention. For example, our
tools can tell you that you’re missing a
label, but whether the label string makes sense to the user and is understandable and acceptable is something that only humans can decide. So manual testing is all
about understanding your users and understanding
how they’re going to interact with your app
using assistive technology. One way to achieve this
is working directly with these users with
accessibility needs and soliciting their feedback. It can be done
formally or informally. Another way to achieve
this is using Android’s own Accessibility Services. So testing your
app with TalkBack can actually ensure that you’re
providing the correct semantics to the Android
Accessibility Framework. And testing your app
with Switch Access can ensure that your
app is reacting well to the actions
initiated by these services. At Google, we’ve learned that
the most successful teams are those who adopt both
automated and manual testing in their development process. If you want to learn more about
using Android’s Accessibility Services, please visit
g.co/androidaccessibility. It can also take you through a step-by-step approach to testing with accessibility in mind. And with that, I
will let Phil come up and give some key takeaways. Thank you. PHIL WEAVER: Thank you, Isha. So hopefully you found
this information useful. But as you work on your app,
really what we’re asking is for you to help us
help others use your app and help other accessibility
service developers help others use your app. So we’ve presented
a few APIs here. We hope you’ll use them to
share UI with as many people as possible. If you’ve played around
with accessibility before, you’re maybe wondering like
why are we not showing you– we could have gone through the
details of AccessibilityNodeInfo and AccessibilityEvent. And our goal is
really to try to limit the number of
people who ever need to crack that API surface open. This new API for
adding actions I think is really important in that
respect, because it used to be, if you were doing
something relatively minor, overriding the touch
handler, it was like, OK, you did the small
change, and now you’ve got to crack open this
great big API surface and figure out how to modify
your AccessibilityNodeInfo. To add an accessibility action, you needed to go through and override a method in View. And a lot of people,
they kind of come up against a learning
curve, they’re like, you know, I’m kind of OK
with my touch handler just kind of being like
that, and I feel bad about not having
it be accessible, but I just don’t have the time. And so we’re hoping that these
APIs reduce that energy barrier to one line of code that’s
pretty straightforward. And so we hope that
you’ll use them. Similarly, testing
for accessibility, there’s all different
ways to do it. We all hope that
you’ll go out and find a wide range of users and
test for accessibility with all of those users. And I know some of you will. But again, we’re trying to scale
this to the entire ecosystem. And so that’s why we’ve got
these different points where, if you want just a quick scan, we’ve got Accessibility Scanner. We’ve got something–
after you’ve uploaded to the Play Store,
you can get a quick check for accessibility. But the more you do, the better
the experience will be overall. And also, as you go through
and you do this testing, and you find weird
problems, and then you start thinking, I know
Qasid said don’t do this, don’t add announcements,
don’t try to control focus, but it’s just broken. The only way to
make my app better is by just tweaking
it a little bit. You find yourself sort
of digging through maybe the TalkBack source
code, which is online, to try to figure out
what TalkBack is doing and like, OK, how
do I sort of give it the right signals that it
will do the right thing? Please realize that you
probably found a bug, and it’s probably our bug. And we’re eventually going
to fix that bug, hopefully. We’ll fix it a lot faster
if you tell us about it. So if you find yourself
doing engineering– and realize, if you’re doing
engineering for accessibility, you’re doing a lot more
than the average developer. And we need this
ecosystem to work for every developer in order
for it to work for every user. So if you find yourself
doing engineering, and you realize no one
else in the world is ever going to do this,
as Qasid said, you end up building something that’s
inconsistent with the rest of the platform,
even if it’s better. And so if you found something
like that, please tell us. We’re monitoring the
AOSP issue tracker. We monitor Stack Overflow. Reach out to us if you’re stuck
or you’re just doing something that’s way harder
than it should be, because that’s the
only way that– that’s the best way
we can get feedback, and prioritize our own
work, and really scale the– actually, you can submit AOSP– also, if you want to submit
a code patch to AOSP if you found a bug in
the framework, I’d very much appreciate that, too. But if we do the
engineering, it can scale through the entire ecosystem. And so if you’ve
got the extra cycles to do that much, to think
about it that deeply, please do think about
at the level of like, how could this thing that I’ve
figured out work for everyone? And figure out how to upstream
it and reach out to us. So I started with three things. I’ll finish with
the three things, just making information visible,
prefer simple, big controls, label images precisely
and concisely. We’re doing app reviews
and office hours. And it was striking yesterday
as people were coming out, we found that most of
what we’re saying is essentially in
these three categories, that, really, if everybody
does these three things, we’re going to be a
lot of the way there. Speaking of that, we’ve got
accessibility office hours and app reviews that are
happening over back that way. We’ve got an Accessibility
Sandbox, which is, again, just outside on the
other side here, that’s ongoing
through all of I/O. And so if you’ve got
questions to ask us, we’ll also be outside. You can come find
us after the talk. And if you want to just explore
the world of accessibility, we’ve got a Sandbox for you. So with that, thank
you so much for coming, and thank you for your
interest in accessibility.
