I’m Aicha Evans,
I am from Senegal, West Africa,
and I fell in love with technology,
science and engineering
at a very young age.
Three things happened.
First, I was studying in Paris,
and starting
at seven years old,
I was flying back and forth
between Dakar, Senegal and Paris
as an unaccompanied minor.
So it wasn't just about the travel.
It was really about a portal to knowledge,
different environments
and adapting.
Second thing that happened
was every time I was at home in Senegal,
I wanted to talk to my friends in Paris.
My dad got tired
of the long-distance bills,
so he put a little lock on the phone --
the rotary phone.
I said, OK, no problem,
hacked it,
and he kept getting the bills.
Sorry again, Dad,
if you’re watching this someday.
And then, obviously,
the internet was also emerging.
So what really happened
was that I came to see technology
as something that shapes your experiences
and how you understand the world,
and I wanted to be part of it.
And for me,
the common thread is that physical
and virtual transportation --
because that’s really what
that rotary phone was for me --
are at the center
of the innovation flywheel.
Now, fast-forward.
I’m here today,
I’m part of a movement and an industry
that is working on bringing
transportation and technology together.
Huh.
It’s not just about your commutes.
It’s really about changing everything
in terms of how we move people,
goods and services, eventually.
That transformation involves robotaxis.
Driverless cars again, really?
Yeah, yeah, yeah, I’ve heard it before.
And by the way, they are always
coming next decade,
and oh, by the way,
there’s an alphabet soup
of companies working on it
and we can’t even remember
who’s who and who’s doing what.
Yeah?
Audience: Yeah.
AE: Yeah, OK, well, this is not
about personal, self-driving cars.
Sorry to disappoint you.
This is really about a few things.
First of all,
personally and individually owned cars
are a wasteful expense,
and they contribute to,
basically, a lot of pollution
and also traffic in urban areas.
Second of all, there’s this notion
of self-driving shuttles,
but frankly, they are optimized for many.
They can’t take you specifically
from point A to point B.
OK, now we have --
hm, how am I going to say this --
the so-called “personal,
self-driving” cars of today.
Well, the reality is that those cars
still require a human behind the wheel.
A safety driver.
Make no mistake about it.
I own one of those,
and when I’m in it,
I am a safety driver.
So the question now becomes:
what do we do with this?
Well, we think that robotaxis,
first of all, will take you
specifically from point A to point B.
Second of all, when you're not using them,
somebody else will be using them.
And they are being tested today.
When I say that we’re on the cusp
of finally delivering that vision,
there's actually reason to believe it.
At the core of self-driving technology
is computer vision.
Computer vision is a real-time,
digital representation of the world
and the interactions within it.
It has benefited from leaps
and bounds of advancements
thanks to compute, sensors,
machine learning and software innovation.
At the core of computer vision
are camera systems.
Cameras basically help
you see agents such as cars --
their locations and their actions --
and pedestrians --
their locations,
their actions and their gestures.
In addition, there have also been
a lot of advancements.
One example is that our vehicle
can see a pedestrian's skeletal framework,
which shows the direction of travel;
it also gives you details, like,
are you dealing with a construction worker
in a construction zone,
or are you dealing with a pedestrian
who's probably distracted
because they are looking at their phone?
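To make that concrete, here is a minimal, hypothetical sketch of what a camera-based perception output could look like; the class names, fields and the little caution rule are illustrative assumptions, not any particular company's API.

```python
from dataclasses import dataclass, field
from enum import Enum

class AgentType(Enum):
    CAR = "car"
    PEDESTRIAN = "pedestrian"
    CYCLIST = "cyclist"

@dataclass
class DetectedAgent:
    """One agent seen by the camera pipeline in a single frame (hypothetical)."""
    agent_type: AgentType
    position_m: tuple          # (x, y) in the vehicle's frame, in meters
    heading_deg: float         # direction of travel, e.g. from skeletal pose cues
    attributes: set = field(default_factory=set)  # e.g. {"construction_worker"}

def needs_extra_caution(agent: DetectedAgent) -> bool:
    """Toy rule: flag pedestrians whose cues suggest distraction or a work zone."""
    risky = {"looking_at_phone", "construction_worker"}
    return agent.agent_type is AgentType.PEDESTRIAN and bool(agent.attributes & risky)

# Example: a distracted pedestrian 12 m ahead, slightly to the left, crossing.
ped = DetectedAgent(AgentType.PEDESTRIAN, (12.0, -1.5), 90.0, {"looking_at_phone"})
assert needs_extra_caution(ped)
```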
Now the reality, though --
and this is where it gets interesting --
is that the camera and the algorithms
that help us really cannot yet match
the human brain’s ability to understand
and interpret the environment.
They just can’t.
Even though they provide
really high-resolution imaging
that gives you continuous coverage
and doesn't get fatigued, impaired
or, you know, drunk or anything like that,
at the end of the day,
there are still things that they can’t see
and they can’t measure.
So if we want autonomous-driving
robotaxis soon,
we have to supplement cameras.
Let me walk through some examples.
So radar gives you the direction of travel
and measures an agent's movement
to within centimeters per second.
Lidar gives you objects and shapes
in the real world using depth perception
as well as long-range
and the all-important night vision.
And let me tell you about this,
because this is important to me personally
and to people who look like me.
Then you also have long-wave infrared,
where you are able to see agents
that emit heat,
such as animals and humans.
And that, again,
especially at night,
is super important.
Now, every one of these sensors
is very powerful by itself,
but when you put them together,
that's when the magic happens.
As you can see with this vehicle, for example,
you have multiple sensor modalities
at all four top corners of the vehicle
that basically provide you
a 360-degree field of vision,
continuously,
in a redundant manner,
so that we don't miss anything.
And this is that same thing
with all of the different
outputs fused together.
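To give a feel for why that redundancy matters, here is a toy sketch of fusing one agent's position from several sensors with a confidence-weighted average; real stacks use far more sophisticated probabilistic fusion (Kalman filters and the like), and every name and number here is made up for illustration.

```python
# Toy sensor fusion: several modalities report the same agent's position with
# different confidences, and a weighted average combines them. Real systems
# use probabilistic filters; this only illustrates why redundancy helps.

def fuse_position(estimates):
    """estimates: list of (x_m, y_m, confidence) tuples from different sensors."""
    total = sum(conf for _, _, conf in estimates)
    if total == 0:
        raise ValueError("no confident sensor estimates")
    x = sum(px * conf for px, _, conf in estimates) / total
    y = sum(py * conf for _, py, conf in estimates) / total
    return x, y

# One pedestrian seen by three modalities; lidar is most confident at night.
readings = [
    (12.1, -1.4, 0.6),  # camera
    (12.3, -1.6, 0.8),  # lidar
    (11.9, -1.5, 0.7),  # long-wave infrared (heat signature)
]
print(fuse_position(readings))  # -> roughly (12.1, -1.5)
```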
And looking at what we see,
how we are able to process the data,
then learn,
then continue to improve our driving,
is what gives us confidence
that this is the right approach
and that this time it's actually coming.
Now, this is not, by the way,
a brand new concept, OK?
Humans have been
basically using vision systems
to assist them for a long time.
Let me back up the boat a little bit,
because I know there’s a question
that everybody’s asking,
which is, “Hey, how are you going
to deal with all the scenarios
out there on the streets today?”
Most of us are drivers,
and it’s complicated out there.
Well, the truth is that there will
always be edge scenarios
that sit at the boundary
of our real-world testing
or that are just too dangerous
to test on real streets.
That is the truth,
and it will be the truth
for a very long time.
Human beings are pretty underrated
in their abilities.
So what we do is we use simulation.
And with simulation,
we’re able to construct
millions of scenarios
in a fabricated environment
so that we can see
how our software would react.
And that’s the simulation footage.
You can see we’re building the world,
we’re putting in scenarios
and we can add things,
remove things
and see how we would react.
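As a rough illustration of that idea, here is a tiny, purely hypothetical simulation loop: build a synthetic scene, add or remove agents, and record how a stand-in planner reacts. The `plan_action` function and the scene format are invented for this sketch.

```python
# Minimal scenario-simulation sketch: construct a fabricated scene, perturb
# it by adding or removing agents, and check the driving software's reaction.
# `plan_action` is a stand-in for a real planner, invented for illustration.
import itertools

def plan_action(scene):
    """Stand-in planner: stop if any pedestrian is within 10 m of our path."""
    if any(a["type"] == "pedestrian" and a["dist_m"] < 10 for a in scene):
        return "stop"
    return "proceed"

base_scene = [{"type": "car", "dist_m": 25.0}]
variations = [
    {"type": "pedestrian", "dist_m": 8.0},   # jaywalker close by
    {"type": "pedestrian", "dist_m": 30.0},  # pedestrian far away
]

# Enumerate scenes with every subset of variations added -- like adding and
# removing things in the fabricated world -- and record each reaction.
for r in range(len(variations) + 1):
    for extra in itertools.combinations(variations, r):
        scene = base_scene + list(extra)
        print(len(scene), "agents ->", plan_action(scene))
```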
In addition, we have what's called
a human in the loop.
This is very similar
to aviation systems today.
We don’t want the vehicle to get stuck,
and there are rare times
where it’s not going to know what to do.
So we have a team
of teleguidance operators
who are sitting at a control center,
and if the vehicle knows
that it’s going to be stuck
or it doesn’t know what to do,
it asks for guidance and help
and it receives it remotely
and then it proceeds.
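A toy version of that fallback logic might look like the following; every name is invented for illustration, and real teleguidance protocols are far more involved.

```python
# Human-in-the-loop sketch: if the vehicle predicts it will be stuck or has no
# confident plan, it requests remote guidance and then proceeds. The queue
# stands in for the control center; all names here are hypothetical.
import queue

operator_responses = queue.Queue()  # remote teleguidance operators "answer" here

def request_guidance(situation):
    print("vehicle -> control center: need help with", repr(situation))
    return operator_responses.get()  # blocks until an operator responds

def drive_step(planned_action, predicted_stuck):
    if planned_action is None or predicted_stuck:
        return request_guidance("blocked lane, unclear detour")
    return planned_action

# An operator queues a suggestion; the stuck vehicle picks it up and proceeds.
operator_responses.put("edge around the double-parked truck on the left")
print(drive_step(None, predicted_stuck=True))
```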
Now, none of these really
are new concepts,
as I alluded to earlier.
Vision systems have been
assisting humans for a long time,
especially with things
that are not visible to the naked eye.
So ...
microscopes, right?
We’ve been studying microbes
and cells for a long time.
Telescopes:
we’ve been studying and detecting galaxies
millions of light-years away
for a long time.
And both of these have, for example,
helped us transform
industries like medicine,
farming,
astrophysics
and much more.
So when we talk about computer vision,
when it started,
it was really a thought experiment
to see if we could replicate
what humans see using cameras.
It has now graduated with sensors,
computers,
AI
and software innovation
to be about surpassing
what humans can see and perceive.
We’ve made a lot of progress
in this field,
but at the end of the day,
we have a lot more to do.
And with an autonomous robotaxi,
you want it to be safe
and reliable every single time,
which requires rigorous testing
and optimization.
And when that happens
and we reach that state,
we will wonder how we ever accepted
or tolerated
94 percent of crashes
being caused by human error.
So with computer vision,
we have the opportunity
to move from problem-solving
to problem-preventing.
And I truly, truly believe
that the next generation
of scientists and technologists
in, yes, Silicon Valley,
but in Paris,
in Senegal, West Africa
and all over the world,
will be exposed to computer
vision applied broadly.
And with that,
all industries will be transformed,
and we will experience the world
in a different way.
I hope you can join me
in agreeing that this is a gift
that we almost owe
the next generation,
because there are a lot of things
that computer vision will help us solve.
Thank you.
(Applause)