Transcript
I study how the brain processes
information. That is, how it takes
information in from the outside world, and
converts it into patterns of electrical activity,
and then how it uses those patterns
to allow you to do things --
to see, hear, to reach for an object.
So I'm really a basic scientist, not
a clinician, but in the last year and a half
I've started to switch over, to use what
we've been learning about these patterns
of activity to develop prosthetic devices,
and what I wanted to do today is show you
an example of this.
It's really our first foray into this.
It's the development of a prosthetic device
for treating blindness.
So let me start in on that problem.
There are 10 million people in the U.S.
and many more worldwide who are blind
or are facing blindness due to diseases
of the retina, diseases like
macular degeneration, and there's little
that can be done for them.
There are some drug treatments, but
they're only effective on a small fraction
of the population. And so, for the vast
majority of patients, their best hope for
regaining sight is through prosthetic devices.
The problem is that current prosthetics
don't work very well. They're still very
limited in the vision that they can provide.
And so, you know, for example, with these
devices, patients can see simple things
like bright lights and high contrast edges,
not very much more, so nothing close
to normal vision has been possible.
So what I'm going to tell you about today
is a device that we've been working on
that I think has the potential to make
a difference, to be much more effective,
and what I wanted to do is show you
how it works. Okay, so let me back up a
little bit and show you how a normal retina
works first so you can see the problem
that we were trying to solve.
Here you have a retina.
So you have an image, a retina, and a brain.
So when you look at something, like this image
of this baby's face, it goes into your eye
and it lands on your retina, on the front-end
cells here, the photoreceptors.
Then what happens is the retinal circuitry,
the middle part, goes to work on it,
and what it does is it performs operations
on it, it extracts information from it, and it
converts that information into a code.
And the code is in the form of these patterns
of electrical pulses that get sent
up to the brain, and so the key thing is
that the image ultimately gets converted
into a code. And when I say code,
I do literally mean code.
Like this pattern of pulses here actually means "baby's face,"
and so when the brain gets this pattern
of pulses, it knows that what was out there
was a baby's face, and if it
got a different pattern it would know
that what was out there was, say, a dog,
or another pattern would be a house.
Anyway, you get the idea.
And, of course, in real life, it's all dynamic,
meaning that it's changing all the time,
so the patterns of pulses are changing
all the time because the world you're
looking at is changing all the time too.
So, you know, it's sort of a complicated
thing. You have these patterns of pulses
coming out of your eye every millisecond
telling your brain what it is that you're seeing.
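The "pattern of pulses means baby's face" idea can be caricatured as a lookup table. This is purely illustrative (real retinal codes are high-dimensional, dynamic, and noisy, and these particular patterns are made up), but it shows the sense in which a pattern "means" something:

```python
# A caricature of the retinal code as a lookup table: each key is a
# spike pattern across a few cells (1 = pulse, 0 = silence), and the
# value is what the brain would take that pattern to mean.
# Purely illustrative -- the actual patterns here are invented.
codebook = {
    (1, 0, 1, 1, 0): "baby's face",
    (0, 1, 0, 0, 1): "dog",
    (1, 1, 0, 1, 0): "house",
}

def decode(pattern):
    """What the brain 'knows' when it receives this pattern."""
    return codebook.get(pattern, "unknown")
```

The real point of the talk is that the mapping runs the other way too: if you know the code, you can generate the right pattern for a given image.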
So what happens when a person
gets a retinal degenerative disease like
macular degeneration? What happens is
that the front-end cells die,
the photoreceptors die, and over time,
all the cells and the circuits that are
connected to them die too.
Until the only things that you have left
are these cells here, the output cells,
the ones that send the signals to the brain,
but because of all that degeneration
they aren't sending any signals anymore.
They aren't getting any input, so
the person's brain no longer gets
any visual information --
that is, he or she is blind.
So, a solution to the problem, then,
would be to build a device that could mimic
the actions of that front-end circuitry
and send signals to the retina's output cells,
so that they can go back to doing their
normal job of sending signals to the brain.
So this is what we've been working on,
and this is what our prosthetic does.
So it consists of two parts, what we call
an encoder and a transducer.
And so the encoder does just
what I was saying: it mimics the actions
of the front-end circuitry -- so it takes images
in and converts them into the retina's code.
And the transducer then makes the
output cells send the code on up
to the brain, and the result is
a retinal prosthetic that can produce
normal retinal output.
So a completely blind retina,
even one with no front-end circuitry at all,
no photoreceptors,
can now send out normal signals,
signals that the brain can understand.
So no other device has been able
to do this.
Okay, so I just want to take
a sentence or two to say something about
the encoder and what it's doing, because
it's really the key part and it's
sort of interesting and kind of cool.
I'm not sure "cool" is really the right word, but
you know what I mean.
So what it's doing is, it's replacing
the retinal circuitry, really the guts of
the retinal circuitry, with a set of equations,
a set of equations that we can implement
on a chip. So it's just math.
In other words, we're not literally replacing
the components of the retina.
It's not like we're making a little mini-device
for each of the different cell types.
We've just abstracted what the
retina's doing with a set of equations.
And so, in a way, the equations are serving
as sort of a codebook. An image comes in,
goes through the set of equations,
and out comes streams of electrical pulses,
just like a normal retina would produce.
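The talk doesn't give the equations themselves. Encoders of this general kind are often described as linear-nonlinear cascade models with Poisson spike generation, so here is a minimal sketch under that assumption; the filter values, the exponential nonlinearity, and all the numbers are hypothetical stand-ins, not the device's actual math:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, filters, dt=0.1):
    """Map an image to spike counts for a population of model cells.

    Each model cell applies a linear filter to the image, passes the
    result through a nonlinearity to get a firing rate, and emits
    spikes as a Poisson process -- one common way to write 'the
    retina as a set of equations'. Illustrative only.
    """
    drive = filters @ image.ravel()   # linear stage
    rates = np.exp(drive)             # nonlinearity -> firing rate
    return rng.poisson(rates * dt)    # stochastic spike generation

# Toy example: 4 model cells looking at a 5x5 image.
image = rng.random((5, 5))
filters = rng.normal(size=(4, 25))
spike_counts = encode(image, filters)
```

The output is one spike count per model cell per time step; running this on each frame of a movie would give the streams of pulses described above.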
Now let me put my money
where my mouth is and show you that
we can actually produce normal output,
and what the implications of this are.
Here are three sets of
firing patterns. The top one is from
a normal animal, the middle one is from
a blind animal that's been treated with
this encoder-transducer device, and the
bottom one is from a blind animal treated
with a standard prosthetic.
So the bottom one is the state-of-the-art
device that's out there right now, which is
basically made up of light detectors,
but no encoder. So what we did was we
presented movies of everyday things --
people, babies, park benches,
you know, regular things happening -- and
we recorded the responses from the retinas
of these three groups of animals.
Now just to orient you, each box is showing
the firing patterns of several cells,
and just as in the previous slides,
each row is a different cell,
and I just made the pulses a little bit smaller
and thinner so I could show you
a long stretch of data.
So as you can see, the firing patterns
from the blind animal treated with
the encoder-transducer really do very
closely match the normal firing patterns --
and it's not perfect, but it's pretty good --
and the blind animal treated with
the standard prosthetic,
the responses really don't.
And so with the standard method,
the cells do fire, they just don't fire
in the normal firing patterns because
they don't have the right code.
How important is this?
What's the potential impact
on a patient's ability to see?
So I'm just going to show you one
bottom-line experiment that answers this,
and of course I've got a lot of other data,
so if you're interested I'm happy
to show more. So the experiment
is called a reconstruction experiment.
So what we did is we took a moment
in time from these recordings and asked,
what was the retina seeing at that moment?
Can we reconstruct what the retina
was seeing from the responses
from the firing patterns?
So we did this for responses
from the standard method and from
our encoder and transducer.
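The talk doesn't specify how the reconstructions were computed. One standard baseline for this kind of experiment is optimal linear decoding: fit a linear map from firing patterns back to pixel values on training data, then apply it to held-out responses. A minimal sketch on simulated data (every number here is made up, just to show the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: responses (spike counts from n_cells) paired with
# the images (n_pixels) that evoked them, under an invented linear
# encoding plus noise.
n_trials, n_cells, n_pixels = 200, 20, 16
images = rng.random((n_trials, n_pixels))
weights_true = rng.normal(size=(n_pixels, n_cells))
responses = images @ weights_true + 0.1 * rng.normal(size=(n_trials, n_cells))

# Fit a linear decoder by least squares: pixels ~ responses.
decoder, *_ = np.linalg.lstsq(responses, images, rcond=None)

# Reconstruct what the retina was 'seeing' from a response alone.
reconstruction = responses[0] @ decoder
```

The better the firing patterns carry the code, the closer the reconstruction comes to the original image, which is exactly the comparison made between the two devices.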
So let me show you, and I'm going to
start with the standard method first.
So you can see that it's pretty limited,
and because the firing patterns aren't
in the right code, they're very limited in
what they can tell you about
what's out there. So you can see that
there's something there, but it's not so clear
what that something is, and this just sort of
circles back to what I was saying in the
beginning, that with the standard method,
patients can see high-contrast edges, they
can see light, but it doesn't easily go
further than that. So what was
the image? It was a baby's face.
So what about with our approach,
adding the code? And you can see
that it's much better. Not only can you
tell that it's a baby's face, but you can
tell that it's this baby's face, which is a
really challenging task.
So on the left is the encoder
alone, and on the right is from an actual
blind retina, so the encoder and the transducer.
But the key one really is the encoder alone,
because we can team up the encoder with
different transducers.
This is just actually the first one that we tried.
I just wanted to say something about the standard method.
When this first came out, it was just a really
exciting thing, the idea that you could
even make a blind retina respond at all.
But there was this limiting factor,
the issue of the code, and how to make
the cells respond better,
produce normal responses,
and so this was our contribution.
Now I just want to wrap up,
and as I was mentioning earlier,
of course I have a lot of other data
if you're interested, but I just wanted to give
this sort of basic idea
of being able to communicate
with the brain in its language, and
the potential power of being able to do that.
So it's different from the motor prosthetics
where you're communicating from the brain
to a device. Here we have to communicate
from the outside world
into the brain, and be understood
by the brain.
And then the last thing I wanted
to say, really, is to emphasize
that the idea generalizes.
So the same strategy that we used
to find the code for the retina we can also
use to find the code for other areas,
for example, the auditory system and
the motor system, so for treating deafness
and for motor disorders.
So just the same way that we were able to
jump over the damaged
circuitry in the retina to get to the retina's
output cells, we can jump over the
damaged circuitry in the cochlea
to get to the auditory nerve,
or jump over damaged areas in the cortex,
in the motor cortex, to bridge the gap
produced by a stroke.
I just want to end with a simple
message that understanding the code
is really, really important, and if we
can understand the code,
the language of the brain, things become
possible that didn't seem
possible before. Thank you.
(Applause)