Sonic Boom! Audio Programming on Android and Chrome (Google I/O’19)
August 9, 2019


DON TURNER: We’re both very
excited to be here today. We’ve got a load of
cool stuff to show you. As you can probably see, we’ve
got a whole load of equipment up here. So we’ve got some live demos. We’ve got a musical performance. And we’ve got a whole
lot of live coding, so anything might happen. Hopefully you’re
going to leave here today excited about the
opportunities on Android and Chrome. But, before we start, a talk
about Android and Chrome– it’s a bit strange, right? Are we friends? HONGCHAN CHOI: Well, I thought
we were more like family, as Google. But anyhow, we have a lot more
in common than you might think. For starters, both
platforms have teams who are passionate
about empowering users to be more musically creative. And we also have a very similar
audio programming model, right? DON TURNER: Right. So both Android and Chrome
have a callback-based audio programming model. You have an audio
renderer, which is responsible for communicating
to the underlying audio hardware. And, every time it needs more
data, it gives us a callback. There’s a similar story with
round trip audio latency. And recent advances
on both platforms mean that there’s a very
similar level of latency. On Android, all Pixel devices
have around 20 milliseconds input to output latency. And we’re now seeing devices
from Samsung, Huawei, and others reach a similar
level of performance. HONGCHAN CHOI: The audio
latency of a Chrome browser largely depends on the
underlying operating system or hardware. But for example, the
device round trip latency of a Chrome Pixelbook is
around 20 milliseconds, which is good enough for most
of our real time use cases. DON TURNER: Again,
with MIDI support, very similar capabilities
on the platforms. In Android, we have
the MidiManager API introduced in Marshmallow. And in Android Q,
there’s a new API called AMidi, which is for high
performance processing of MIDI data. HONGCHAN CHOI: And Chrome has
had the Web MIDI API since 2015, and it still remains the same. DON TURNER: Of course, one
of the biggest features of both platforms is the
number of active users. Android has over
2 billion users. HONGCHAN CHOI: It’s
fantastic and Chrome also has a similar number. DON TURNER: Now, obviously,
some Android users are also Chrome users. So it’s not like two
plus two equals four, but we estimate that, by
targeting both these platforms, you can reach around half
the world’s population, which is pretty incredible
when you think about it. HONGCHAN CHOI: And
a company which has been taking advantage
of this incredible reach is BandLab. They achieved huge growth
thanks to their cross platform approach from the beginning. Here’s their story. SPEAKER 1: This is a story
of how three people connected via a free music app
collaborated to create a top 10 commercial dance track. The app is BandLab. JOHN IVERS: BandLab is
a social music platform that enables our users
to create and collaborate on music on all platforms
anywhere in the world all with unlimited cloud storage. We have a Chrome-based
digital audio workstation we call the Mix Editor. It allows our users to create
and record music live, create their own tracks using
our built in instruments, or start a song from the
thousands of free loops in our extensive library. SPEAKER 1: Here are
our three creatives. Andy is an
international producer with two decades of experience. He’s based in San Francisco,
but works around the world. Se-Hwang Kim is a pioneering
guitarist, singer songwriter, and arranger from South Korea. He works with leading
K Pop and US artists, and is on Hollywood’s
Rock Walk of Fame. Linah London is an up and
coming pop and R&B vocalist. Proudly repping East
London to the world, she’s a hopeless romantic
with a golden voice. These three wanted
to create music and needed tech that
worked for them, no matter where they were. JOHN IVERS: I love seeing
stories like these. What these guys are
doing with BandLab is exactly what we had in
mind when we were building it. Our vision is for a future
where there are no boundaries to making and sharing music. This project shows
how musicians can use BandLab to overcome
barriers like distance or incompatible technology
to connect and make incredible music. On mobile, we’ve
been working hard to create music editing and creation tools to empower everyone
with access to a phone. And as the most popular
operating system in the world, Android is a major
engine to drive that. Today, we have close to seven
million registered users and they’re saving almost 2
million new songs every month on BandLab. A huge portion of our
community is collaborating via Android and Chrome. We are here to
democratize music, and that’s now possible thanks
to developments in technology and partners like Google. DON TURNER: So BandLab has
been successful by targeting both Android and Chrome. And today, we’re
going to show you how you can do the
exact same thing. To do this, we’re going to
build a synthesizer app which runs on Android
and runs on Chrome, but shares the same code. HONGCHAN CHOI: That’s right. We’ll be using the exact
same synthesizer code written in C++.
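(For context, here is a minimal sketch of what such a shared synthesizer interface might look like. The noteOn, noteOff, and render methods and the sample-rate constructor are described in the talk; the filter-cutoff setter and the exact signatures are assumptions.)

```cpp
// synthesizer.h -- a sketch of the shared C++ interface, not the demo's actual source.
#pragma once
#include <cstdint>

class Synthesizer {
 public:
  // The sample rate is supplied at construction time because it depends on
  // the underlying audio hardware (see the Android walkthrough below).
  explicit Synthesizer(int32_t sampleRate);

  void noteOn(int32_t noteNumber);    // e.g. note 60 is middle C
  void noteOff(int32_t noteNumber);
  void setFilterCutoff(float normalizedCutoff);  // assumed control parameter

  // Renders numFrames of floating point audio into the output buffer.
  void render(float *output, int32_t numFrames);
};
```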
And Don is going to show you how to make it work on Android, and I'll do the same on Chrome. DON TURNER: After that, we'll
add MIDI support to both apps. Hongchan will do that for Chrome
and I will do it for Android. So I’m going to
start and I’m going to start by using a
library called Oboe. Now Oboe is a C++ library
for building low latency, high performance
audio apps on Android. Here’s how it fits into
an app architecture. So we use the Oboe library
to build an audio stream. That’s responsible
for communicating to the underlying
audio hardware. In this case, I’ll be
communicating with this Pixel XL's headphone output. Each time the audio
stream requires more data, I get a callback into my app. And my job is to add in a
synthesizer which will then be responsible for
rendering audio frames back into my audio stream. OK, if we could
switch to my laptop, which needs Hongchan’s password. OK, so I’m in
Android Studio here. Hopefully you can read the code. So I've already built kind
of a Hello World audio app. And it has this one
class, AudioEngine. In AudioEngine, we have a couple of methods, of which the
most important ones are start, which is called when
the application starts, and this onAudioReady
method, which is called every time my
audio stream needs more data. So that's going to be
called every few milliseconds. And I’ll cover the
rest a bit later on. So Hongchan and I are going
to be sharing source code. And here’s the source here. So my first stage is I’m going
to include the synthesizer header. I’m now going to create
a synthesizer object. And we’ll just make that
a unique pointer of type synthesizer. OK, now I’m going to go into the
implementation of this class. So in my start method, I’m
going to create an audio stream. And this class is
provided by Oboe. It allows me to build
an audio stream. So I set a few
properties on my stream. For example, low latency to
get a low latency audio stream. And I also set the format
to floating point samples because that’s the
format of my synthesizer. I then open the
stream and I can now create my synthesizer object. So I’ll just do make unique. OK, so in the constructor
for my synthesizer, I have to specify
the sample rate. And that’s the rate at which
I’m going to be generating frames of audio data. And I get that from my stream. Now, the reason I have
to do this at runtime is because the sample
rate of my audio stream will depend on the
underlying audio hardware. OK, so that’s my
synthesizer created. Here's my audio callback, the one that's being called every few milliseconds. At the moment, I'm just outputting silence. So I'm going to get rid of that and have my synthesizer render its data into the audio stream. And the way that I do that is I have to pass data into this container array here called audioData. Now, you'll notice that it's of type void*, so I need to cast it to float*. So I'll just do that now. And my synthesizer also needs to know how much data to render, which is this numFrames parameter here. Right, so almost there. My synthesizer is now able to render audio into my audio stream.
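(Putting those pieces together, a rough sketch of the Oboe setup and callback being described might look like this. The class name AudioEngine, the member names, and the exact builder calls are illustrative assumptions, not the demo's actual source.)

```cpp
// A sketch of an Oboe-based engine wrapping the shared synthesizer.
#include <oboe/Oboe.h>
#include <memory>
#include "synthesizer.h"  // the shared synthesizer sketched earlier

class AudioEngine : public oboe::AudioStreamDataCallback {
 public:
  void start() {
    oboe::AudioStreamBuilder builder;
    builder.setPerformanceMode(oboe::PerformanceMode::LowLatency)
        ->setFormat(oboe::AudioFormat::Float)  // the synth renders floats
        ->setDataCallback(this)
        ->openStream(mStream);

    // The sample rate is only known once the stream is open, because it
    // depends on the underlying audio hardware.
    mSynth = std::make_unique<Synthesizer>(mStream->getSampleRate());
    mStream->requestStart();
  }

  // Called by Oboe every time the stream needs more audio data.
  oboe::DataCallbackResult onAudioReady(oboe::AudioStream * /*stream*/,
                                        void *audioData,
                                        int32_t numFrames) override {
    // audioData arrives as void*, so cast it to float* before rendering.
    mSynth->render(static_cast<float *>(audioData), numFrames);
    return oboe::DataCallbackResult::Continue;
  }

  // Control hooks called from the UI layer; sketched a little further below.
  void onTouch(bool isDown);
  void onSensorChanged(float rotation);

 private:
  std::shared_ptr<oboe::AudioStream> mStream;
  std::unique_ptr<Synthesizer> mSynth;
};
```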
But I need to add a bit of control. So I've added a couple
of methods here. The first one is
onTouch, which is going to be called each time I push my finger down onto the screen and each time I lift it off the screen. So this parameter here, isDown, will be true if I'm touching down and false if I'm lifting off. So I'll just say, if the touch event is down, then tell my synthesizer to switch a note on. And I'll just specify kind of a random note here, note 60, which is middle C. Otherwise, if I'm lifting off, then I'm going to switch the note off. So the last thing I'm going to do is add a bit more control to my synthesizer, and I thought it would be nice if I tied the rotation of this device to the filter cutoff of my synthesizer. So I've got this method here, onSensorChanged, which is going to be called each time the value of rotation changes. This comes in as a range of values between minus 1 and 1. So I have some scaling and some mapping here to get it into the value range expected by my synth. So here we go. Ignore the red; that's just Android Studio.
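(Continuing the same sketch, the two control paths might look roughly like this; the cutoff mapping and the method names are illustrative assumptions, since the demo's actual mapping code isn't shown.)

```cpp
// Control methods on the AudioEngine sketched above.
void AudioEngine::onTouch(bool isDown) {
  constexpr int32_t kMiddleC = 60;
  if (isDown) {
    mSynth->noteOn(kMiddleC);   // finger down: start the note
  } else {
    mSynth->noteOff(kMiddleC);  // finger lifted: stop the note
  }
}

void AudioEngine::onSensorChanged(float rotation) {
  // The rotation value arrives in [-1, 1]; map it to the normalized [0, 1]
  // range expected by the (assumed) filter cutoff control on the synth.
  float normalizedCutoff = (rotation + 1.0f) * 0.5f;
  mSynth->setFilterCutoff(normalizedCutoff);
}
```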
OK, so I'm going to run this app, and what should happen is, when I tap down on the screen, we hear a sound. And when I rotate the device, that sound should change. So, moment of truth. [SYNTHESIZER NOISE] OK, first part done. [APPLAUSE] HONGCHAN CHOI: OK,
it’s my turn now. DON TURNER: Yeah. HONGCHAN CHOI: So before we
move on to the code editor, I'd like to talk a little bit
about how this whole thing works on web browsers. So let me go back to the slide. So obviously we cannot use the
same C++ source code as is, so we need to go through a
few steps to make it work. So first, we start from the
source code, synthesizer.h. This class has methods like noteOn, noteOff, and render, and it needs to be compiled with Emscripten, the LLVM-based compiler that compiles C++ source code into WebAssembly, or wasm for short. So ultimately, we will end up with a synthesizer wasm/JS file. And the JavaScript object from this wasm module will have the same methods that we exposed, which are noteOn, noteOff, and render. But this mapping doesn't happen automatically, and that's what Emscripten's binding API is for. So I'll be creating a new C++
file for this binding work. And this file will include
the original source code and will be compiled
by Emscripten. This whole process will give
us the wasm source code, but how do we make
sound from it? That’s where Audio
Worklet comes in. With Audio Worklet, you can
use Web Audio API’s rendering thread to run JavaScript
code and make sound from it. It exposes a low level audio callback, so we can hook up our synthesizer's render function in there. You can search for Audio Worklet for more information. So there are several moving parts in the Audio Worklet system, so let me illustrate how it works. First, we're going to use an AudioWorkletProcessor to wrap this wasm synth module. This is because this class has the audio callback. And this callback gets fired about every three milliseconds, depending on your sample rate. And on the processor side, the whole thing is handled by the audio thread. And the scope is isolated, so it's very good for audio processing. And all of this stuff goes into a separate file, synth-processor.js. But at this point, we
cannot control the processor because it’s in the other
thread, the audio thread. That’s why we have something
called the AudioWorkletNode. It's the main thread counterpart of the processor. But then, how do they talk to each other? For that, we have something called the MessagePort. This MessagePort allows these two objects to talk to each other asynchronously. All the main thread business, like the UI and MIDI, goes into the index.js file, and that's our main script file for our demo application. OK, I'm going to switch
back to my code editor. So let’s talk about
this file first. This file is our binding file. So, first, I need to include the synthesizer.h source file, and that's happening right here. Actually, I'm extending the original synthesizer class to create a small, thin wrapper. But why do I need this? That's because Emscripten's binding API is a little bit picky about having a raw pointer to the floating point array. So actually what I'm doing right here is manual typecasting to work around this issue. It's a little bit odd, but please bear with me. You can look up Emscripten's binding API documentation for more detail. But what I'm doing right here is, after the manual typecasting, I'm calling the original synthesizer's render function. So let's take a look at the second half, the bottom half, of the file. This is where the actual binding happens. So for the first four lines of code, basically what I'm doing is exposing the functions of the original synthesizer class. So I'm exposing the constructor, the noteOn function, and the noteOff function, right? All of those functions are already used in Don's demo. Then I'm exposing the render function from the wrapper class, which is the function with the little workaround that we wrote above. So that's our binding part.
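(For readers following along, here is a rough sketch of what such a binding file could look like using Emscripten's embind API, assuming the Synthesizer interface sketched earlier. The wrapper name, the address-passing workaround, and the exported names are illustrative, not the demo's exact source.)

```cpp
// synth-bindings.cpp -- a sketch of the embind binding described above.
#include <cstdint>
#include <emscripten/bind.h>
#include "synthesizer.h"  // the shared C++ synthesizer

using namespace emscripten;

// Thin wrapper: embind is picky about raw float* arguments, so we accept the
// address of a wasm-heap buffer as an integer and do the typecast ourselves.
class SynthesizerWrapper : public Synthesizer {
 public:
  explicit SynthesizerWrapper(int32_t sampleRate) : Synthesizer(sampleRate) {}

  void renderWrapped(uintptr_t outputPtr, int32_t numFrames) {
    // Manual typecast to work around the raw-pointer restriction, then call
    // the original synthesizer's render function.
    Synthesizer::render(reinterpret_cast<float *>(outputPtr), numFrames);
  }
};

EMSCRIPTEN_BINDINGS(SynthModule) {
  // Expose the original class: constructor, noteOn, and noteOff.
  class_<Synthesizer>("SynthesizerBase")
      .constructor<int32_t>()
      .function("noteOn", &Synthesizer::noteOn)
      .function("noteOff", &Synthesizer::noteOff);

  // Expose render through the wrapper with the little workaround above.
  class_<SynthesizerWrapper, base<Synthesizer>>("Synthesizer")
      .constructor<int32_t>()
      .function("render", &SynthesizerWrapper::renderWrapped);
}
```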
So I think we're ready for the compilation. And for the compilation, I already set up the Emscripten compiler on my machine, so we can just do the compilation. Let me check out our directory first. So I'm going to move on to our wasm file directory. For easier compilation, I already set up my makefile there, so I'll just type make here. And the compilation is completed. So, if I click this
link over here, actually we can take a look
at what's happening inside of this wasm file. In the beginning of the file, it just looks like a random JavaScript file. But if I scroll down far enough, kind of interesting things are happening over here. They just look like assembly code. So, anyway, this is our wasm synth module, so let's just move on. And then this is our synth-processor.js file. And this is where all the audio related operations are happening. So the first step I need to do is import the wasm synth module that we just compiled. So now we have the dependency set up. Then I will touch the constructor of the Audio Worklet processor. This is the class that has the audio callback. So we need to set up an instance of the wasm synth. So let me do that. Here we have two lines of code. The first line is the constructor, which is from the C++ world. And so this is literally the constructor that Don used in his demo. So I need to provide the argument for this constructor, which is sampleRate. And this sampleRate is a global property of the AudioWorkletGlobalScope. It's not just a random magic variable. And, on the next line, I'm using a little helper called WasmAudioBuffer for easier wasm memory management. I'll get to that a little later. And this function right here is the process function. This is our audio callback in the Audio Worklet system. So here I need to call my render function. This render function is also from the C++ world. So first, the argument of this function is the actual raw pointer to the floating point array. But there is no way to get that raw pointer here, so that's why I'm using my little helper for wasm memory management, because this utility function will give me the address of the wasm-allocated memory block. Then I'm doing one more step, because the memory space provided by the Web Audio API and the wasm-allocated memory are completely separate. So I need to actually copy the data rendered by the synthesizer to the Web Audio API's output buffer. So it's just one more step to clone the data. For that, I'm using the typed array's set method. And this method uses memcpy internally, so it is supposed to be super fast. So that's done. Let me write one more function, because I'd like to have similar functionality to Don's demo. Don uses an onTouch function to create the test tone, right? So here I am. I'm creating a playTestTone function. It's pretty much the same thing as what Don did in his demo. So when the button is down, basically I can trigger noteOn. And when the button is up, I can trigger noteOff. I'll be using the same note 60, middle C. So now we have this handling function set up. The very last step here is I need to set up my MessagePort. So if any message comes from the main thread, I can trigger my playTestTone function. So I think we are almost ready with synth-processor.js. This is the very last step. What I'm doing right now is registering the synth processor, the custom class definition, under the name of my synth. So this will be the keyword for my class. So let's remember that. So I'm going to move
on to my index.js. This is the main script file. So the first step, what I'm doing here is creating an AudioContext, which is the gateway object for the Web Audio API. So that's what I have. Then I'm going to set up the minimum audio graph to test this demo. So here I'm using audioWorklet.addModule. This is how to load a custom processor definition into the Web Audio API side. So I'm going to type the file name of the synth processor. And once the module loading is completed, I can create an AudioWorkletNode based on the keyword that I used for the registration. And then I'm using a small GainNode, because I found that my synthesizer was too loud for the demo, so I'm lowering my volume a little bit there. Then I'm creating an audio graph from the synthesizer to my volume controller and to the output of this laptop. That's really the minimum audio graph. And let's look at the next function right here. This is onButtonChange. And I already set up this button on the test page, this button over here. So right now, if I do anything like this, nothing happens, because onButtonChange has nothing in there. So what I want to do here is send this isDown variable to the MessagePort so it can be delivered to the Audio Worklet processor side. It's really simple. That's pretty much what I will do here. So I think we are ready for the test tone. So I'm going to refresh the browser, and we'll activate the page and the audio engine. [COMPUTER NOISE] That's our test tone. And let's move
on to the next step. So I'll switch back to the slides. So we just set up the AudioWorkletNode and the AudioWorkletProcessor and played a test tone. So the next step is Web MIDI integration. The Web MIDI API provides us with different types of objects, but what we'll be using today is actually the MIDI input. A MIDI input literally receives a message from a MIDI device, and when that happens, it will fire the onmidimessage event handler. So our job here today is basically taking any incoming messages from this onmidimessage event handler and passing them through to the Audio Worklet processor. And all of our MIDI integration happens in index.js. But we still need to go back to the synth processor just a little bit, because our synthesizer processor is still not ready for incoming MIDI messages. OK, I'm going to switch back
to my code editor again. So this is our
index.js file again. And here’s a place
for the web MIDI API. So I’m going to create
a MIDIAccess object. That's the gateway for the Web MIDI API. And once I have that object, basically what I'm doing here, in three lines of code, is iterating over all the detected MIDI inputs in the system and assigning the same event handling function to every MIDI input's onmidimessage handler. And the content of the event handling function is really simple as well. We're not doing any preprocessing here. We just take the data and send it to the Audio Worklet processor. So that's done. I think the index.js file
is pretty much done. So I’m going to go
back to synth-processor.js, because we need to fill this function in; handleMidiEvent is currently doing nothing. So what I'm going to do here is basically these few lines of code, because we are only interested in note on and note off, two types of MIDI events. When a note on MIDI event comes in, basically I'm going to trigger the noteOn function in the synthesizer, just like this. And my argument will be the note number, which is the second byte of the incoming MIDI message. And, similarly, in the note off case, I can trigger the noteOff method. All right. One last step. Currently our onmessage event handler is tied to the playTestTone function, but this is not what we want. So we're going to replace it with handleMidiEvent. So if there's any incoming MIDI message to the processor side, our handleMidiEvent function gets called. All right, that's done, I think. So I'm going to move on to my test page and activate the web page. With some luck, I should be able to play my synthesizer with the little MIDI keyboard over here. [COMPUTER NOISES] All right, that's
the end of my demo. [APPLAUSE] I’m going to switch back
to my slide, please. Let me remind you
one more time that we started from the same
source code, right? And it comes with amazing
benefits like less code to maintain and also identical
sound across two platforms and so on. You ready for the MIDI? DON TURNER: Sure. Great. OK, so Hongchan created
a synthesizer app and added MIDI support to it. I’ve already created
my synthesizer app, I just need to add MIDI support. And to do that, I’m going
to use the new Native MIDI API in Android Q. So this is a high performance API that's designed to be used inside your audio callback. But there's still a bit of setup work that you need to do in Java, so I'll just talk through that. In Java, you need to listen for MIDI device connections. So when you actually plug in your MIDI controller, you will receive a callback which allows you to open this device. You then get a Java MidiDevice object, which you pass through JNI to the native API. You convert that from a Java object to a native object. And then you open an output port on the device itself, which you can receive MIDI messages from. And then you can start receiving MIDI messages.
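(As a rough sketch, the native side of that setup might look something like this. The JNI function name and the port index are illustrative assumptions, and error handling is omitted.)

```cpp
// A sketch of converting the Java MidiDevice to a native handle and opening
// an output port to receive from. Assumes the Java layer passes the opened
// android.media.midi.MidiDevice down through JNI.
#include <amidi/AMidi.h>
#include <jni.h>

static AMidiDevice *sNativeDevice = nullptr;
static AMidiOutputPort *sOutputPort = nullptr;

extern "C" JNIEXPORT void JNICALL
Java_com_example_synth_AudioEngine_startReadingMidi(JNIEnv *env, jobject /*thiz*/,
                                                    jobject javaMidiDevice) {
  // Convert the Java MidiDevice object into a native AMidiDevice handle.
  AMidiDevice_fromJava(env, javaMidiDevice, &sNativeDevice);

  // Open an output port on the device itself; we receive MIDI from it.
  AMidiOutputPort_open(sNativeDevice, /*portNumber=*/0, &sOutputPort);
}
```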
Now, because we're short on time, I'm just going to focus on this very last part here, receiving the MIDI messages. But there's source code online which shows you how to do the rest of this. So if we could switch back to the machine, please. OK, so I'm back in
my audio engine class and here’s my audio callback. So this is the one that’s being
called every few milliseconds. And I’ve got this
processMidi method here, which is actually implemented down here. And this is going to be called... sorry, I'll just scroll back up there. This is going to be called before I render my synthesizer frames. So what I need to do is call AMidiOutputPort_receive. So this is the main method that you use for receiving MIDI data. I've already set up an output port somewhere. So everything is in red because Android Studio's indexer is currently broken, which is going to make this incredibly fun. So the second parameter to this method is an opcode, and that can be either a data type or it can be a flush message. So if the opcode is set to data, it means there's new MIDI data available. If it's flush, it means that any MIDI data that you received before, you need to get rid of it because it's now stale. I also specify a buffer to actually store the MIDI message in. I tell the method how large that buffer is so that I don't overflow it. And I also specify a message size, which tells me how big that message was. Typically, this would be three bytes for a note on or note off message. And the last parameter is a timestamp. And that allows me to reorder MIDI messages based on the order that they were actually tapped by the musician. OK, so this gives me everything I need to receive MIDI data. So I just need to check my opcode to see whether it was data. And I'll also check the message size just to check that it is greater than 0. So we now know we definitely have a MIDI message. And as Hongchan did in his app, I just need to read the first two bytes of my MIDI message to understand exactly what the message was. And I'm actually only interested in the first four bits of this MIDI message, so I just use a mask here, and we'll just have a quick look at that, just to get the MIDI status. The second thing that I'm interested in is the note, and that just comes straight from the second byte. So I can now say, if the status was a MIDI note on, then I'm going to switch my synthesizer on. And I pass in the note. Otherwise, if the status was a note off, then I'll just send in a note off with the same note.
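(Putting that together, a sketch of the per-callback MIDI read might look like this, reusing the Synthesizer interface and the output port opened in the earlier sketch. The buffer size and the status-byte constants are illustrative.)

```cpp
// Drain pending MIDI messages and forward note on/off events to the synth.
#include <amidi/AMidi.h>
#include <cstdint>
#include "synthesizer.h"

void processMidi(AMidiOutputPort *outputPort, Synthesizer &synth) {
  uint8_t buffer[32];
  int32_t opcode = 0;
  size_t numBytesReceived = 0;
  int64_t timestamp = 0;

  // Called from the audio callback, before rendering the next block.
  while (AMidiOutputPort_receive(outputPort, &opcode, buffer, sizeof(buffer),
                                 &numBytesReceived, &timestamp) > 0) {
    if (opcode == AMIDI_OPCODE_DATA && numBytesReceived > 0) {
      uint8_t status = buffer[0] & 0xF0;  // keep only the top four bits
      uint8_t note = buffer[1];           // the note number is the second byte
      if (status == 0x90) {               // MIDI note on
        synth.noteOn(note);
      } else if (status == 0x80) {        // MIDI note off
        synth.noteOff(note);
      }
    }
  }
}
```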
OK, I think that should work. So I'm going to rerun this now. So this is deploying over this USB cable here. When I plug in this MIDI controller, I won't know if anything's gone wrong, because I won't be able to receive logcat information. So what should happen is, when I plug this in and I tap on a key, we should hear a sound. [COMPUTER NOISE] And a bit of a stuck note there, so there we go. Yeah, that's MIDI working
on the Android app. [APPLAUSE] HONGCHAN CHOI: So can we
switch back to slide, please. So I think you’re
ready for the jam now. So I prepared a little
backing track for us. DON TURNER: Yeah, we’ve
got a backing track. We’re just going to
do a very short jam. See what it sounds like. Basically, because we’ve got
a big sound system so why not? HONGCHAN CHOI: Yeah, and Don
will be playing the bass line. And I’ll be playing
the lead synths with our pager activated. So let me change my
setting real quick. [COMPUTER NOISES] All right. I think you’re
ready for the jam. So you ready? DON TURNER: OK, yep. [MUSIC PLAYING] [APPLAUSE] HONGCHAN CHOI: Can we switch
back to slides, please? DON TURNER: So I’m
not sure that’s going to be a number one hit. But it was OK. It was OK. HONGCHAN CHOI: Yeah, it was OK. DON TURNER: I think we
can definitely benefit from some expert advice. HONGCHAN CHOI: Sure, I mean
we can always improve, right? So how about asking
someone who has been working on software
synthesizers for more than a decade? Let's have Magnus Berger
from Propellerhead Software on stage. [APPLAUSE] MAGNUS BERGER: Thank you. OK, so my name is Magnus Berger. I’m the CTO of Propellerhead
Software in Stockholm, Sweden. Now, we’ve been around for about
25 years or exactly 25 years making music-making
applications. Our most well-known
software is called Reason. It’s a virtual studio
for Windows and Mac OS. And you can create any kind
of music imaginable in it. So can we please
have my computer? This is what Reason looks like. It’s essentially a
virtual rack, that’s the most well-known feature,
into which you can drag and drop synthesizers and effects. And the interaction model is
actually you can push buttons. You can turn the knobs. Let’s change the
filter frequency here. You can see it change. And let’s load a preset. [COMPUTER NOISE] And that must be the
shortest Reason demo ever. So about 10 years ago,
we wanted to open up Reason for third party
developers to create plugins. We wanted to create a plugin
format that’s future proof, and we call that
Rack Extensions. We did that by
making it sandboxed. But we didn’t know exactly what
kind of hardware architecture we were going to run
on in the future. So, you see, DSP code is
usually written in C++. In our SDK, we hand out
an LLVM based tool chain. But instead of developers giving
us the final binary output, they’re handing us LLVM
intermediate representation, the LLVM bitcode. It's sort of semi-compiled, platform-agnostic code, and then they upload
that to our servers. And we run this app store model
with more than 500 plug-ins. And we do the final compilation
towards the target architecture of the customer. And then we have
a user interface. It's a declarative, retained-mode user interface, not unlike a web page. But this is written in Lua. So along comes all of
this cool web technology with Audio Worklets
and Web Audio and Web MIDI. And the technology fit there
is pretty much one to one with our philosophy
of Rack Extensions, so we just had to make
this internal tech demo. So this is not really a product. It’s an internal tech demo. We had a lot of fun creating it. So we took this Lua code, the
user interface definitions, and we made a transpiler. We transpiled that code into JavaScript and HTML. We're taking the LLVM bitcode and
building that into WebAssembly. And this is what you get. Opening up Chrome instead. So this looks sort of
familiar, doesn’t it? Vanilla Chrome, let’s
drag and drop Europa. Europa, it’s a rack extension
so we can load that here. And– [COMPUTER NOISES] Sweeping the filter frequency. Let's open a preset
that you heard before. [COMPUTER NOISES] Now, this wouldn’t be half
as cool if it wasn’t that we can take pretty much anything
from our own app store and just move it to the web
using this build tool chain. Let’s try something else here. This is Polysix from Korg from
Japan, a very nice synthesizer. And let’s stack a player that’s
a MIDI effect on top of it. It’s also a rack extension,
so it’s fairly easy to move that to the web. [ELECTRONIC MUSIC] Changing the cutoff frequency. Add the rest to it. Pretty cool stuff. [APPLAUSE] Can you go to the presentation? So what could be
better than helping people express themselves? Well, just maybe possibly
helping even more people express themselves. And being backed
by a platform that literally supports
billions of users, that’s definitely an enabler. And, by the way, this also
runs natively on phones. Back to Hongchan. [APPLAUSE] HONGCHAN CHOI: That
was really impressive. Thank you so much, Magnus. DON TURNER: Thanks, buddy. HONGCHAN CHOI: And having
partners like Propellerhead is really, really inspiring. And I hope this can be a positive signal to other audio developers as well. We have some stories
to share, right? DON TURNER: Yes,
I am lucky enough to work with the leading names
in the pro audio industry. And, over the past
12 months, we’ve had some fantastic
success stories. And I just wanted to share
very quickly two of them with you now. The first one is from
iZotope, a longtime big player in the world of audio
processing plugins. They recently launched
a product called Spire Studio, which is basically
a mobile recording studio. And it comes with
a companion app, which allows you to do
multi-track recording. When they launched on
Android, they saw a nearly 40% increase in sales. And this is absolutely
fantastic because it shows that Android users have
a demand for high end audio hardware. Another success story comes
from Music World Media. Just a few years ago, they were a tiny startup in Paris. And since then, they've
launched an incredible number of successful apps and games in
the musically creative space. And over the past
12 months, they’ve been able to acquire
an astonishing 45 million new Android users
all across the globe. And for me, this shows
the incredible reach that Android has throughout
the entire world. So some great stories there. You’ve got some stories
from the web side? HONGCHAN CHOI: Sure. So things have been pretty
great in the web world. So the web music ecosystem has gotten a lot more diverse, and these are my three favorite digital audio workstations on the web. Audiotool is a fully fledged music studio with an open-ended workflow. Its user interface is amazing, and you can collaborate in real time. And Amped Studio, which I used for my backing track right there, comes with high quality sample libraries and instruments. It was featured at the Chrome Developer Summit last year, and they have been pioneering cutting edge features with WebAssembly. So it's amazing that all of these tools are freely available today on the web. And, lastly, Ableton. With their Learning Music web app, you can instantly start learning how to create songs by going through masterfully crafted content step by step. And you can even export your work to Ableton Live from there. Ableton saw the potential of the web music platform as an educational medium, and this project right here is a visual realization of such a vision. OK, let's wrap up what
we talked about today. We showed you that
both Chrome and Android are more than capable
of doing real time audio with MIDI support. And through the
live coding demo, we showed you that you
can use the same source code on both platforms. And finally, the enormous
reach of Android and Chrome. DON TURNER: Yeah,
so I just wanted to make one final
point, which is I think it’s very important
to remember that music is a universal language. And it’s something that’s
understood by everyone, regardless of where
you are in the world, your cultural background,
or economic status. And it can be a
powerful force for good, for people to communicate. And, with your
programming skills, and Android and Chrome’s
incredible reach, then we can give
everyone the tools that they need to be musically
creative so that they can not just listen to this
universal language, but they can speak it as well. Search for Audio Worklet and
Oboe Library to get started. Thank you all very much. [GOOGLE LOGO MUSIC]
