Transcript
Up until now, our communication with machines
has always been limited
to conscious and direct forms.
Whether it's something simple
like turning on the lights with a switch,
or even as complex as programming robotics,
we have always had to give a command to a machine,
or even a series of commands,
in order for it to do something for us.
Communication between people, on the other hand,
is far more complex and a lot more interesting
because we take into account
so much more than what is explicitly expressed.
We observe facial expressions, body language,
and we can intuit feelings and emotions
from our dialogue with one another.
This actually forms a large part
of our decision-making process.
Our vision is to introduce
this whole new realm of human interaction
into human-computer interaction
so that computers can understand
not only what you direct them to do,
but can also respond
to your facial expressions
and emotional experiences.
And what better way to do this
than by interpreting the signals
naturally produced by our brain,
our center for control and experience.
Well, it sounds like a pretty good idea,
but this task, as Bruno mentioned,
isn't an easy one for two main reasons:
First, the detection algorithms.
Our brain is made up of
billions of active neurons,
around 170,000 km
of combined axon length.
When these neurons interact,
the chemical reaction emits an electrical impulse,
which can be measured.
The majority of our functional brain
is distributed over
its outer surface layer,
and to increase the area that's available for mental capacity,
the brain surface is highly folded.
Now this cortical folding
presents a significant challenge
for interpreting surface electrical impulses.
Each individual's cortex
is folded differently,
very much like a fingerprint.
So even though a signal
may come from the same functional part of the brain,
by the time the structure has been folded,
its physical location
is very different between individuals,
even identical twins.
There is no longer any consistency
in the surface signals.
Our breakthrough was to create an algorithm
that unfolds the cortex,
so that we can map the signals
closer to their source,
making the system capable of working across a mass population.
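The talk doesn't describe the algorithm's internals, but as a rough mental model you can picture the unfolding as a per-subject registration step that re-expresses the electrode signals in a shared cortical coordinate space. A minimal sketch, where the function name, the linear form, and the transform matrix are all assumptions for illustration:

```python
import numpy as np

def map_to_canonical_cortex(raw_signals, subject_transform):
    """Project raw scalp signals into a canonical (unfolded) cortical space.

    raw_signals:       (n_channels, n_samples) scalp EEG
    subject_transform: (n_regions, n_channels) matrix estimated per
                       individual to compensate for their unique
                       cortical folding
    Returns (n_regions, n_samples): signals indexed by functional
    region rather than by electrode position, so the same region
    means the same thing across individuals.
    """
    return np.asarray(subject_transform) @ np.asarray(raw_signals)
```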
The second challenge
is the actual device for observing brainwaves.
EEG measurements typically involve
a hairnet with an array of sensors,
like the one that you can see here in the photo.
A technician will put the electrodes
onto the scalp
using a conductive gel or paste,
usually after preparing the scalp
by light abrasion.
Now this is quite time consuming
and isn't the most comfortable process.
And on top of that, these systems
actually cost in the tens of thousands of dollars.
So with that, I'd like to invite onstage
Evan Grant, one of last year's speakers,
who's kindly agreed
to help me demonstrate
what we've been able to develop.
(Applause)
So the device that you see
is a 14-channel, high-fidelity
EEG acquisition system.
It doesn't require any scalp preparation,
no conductive gel or paste.
It only takes a few minutes to put on
and for the signals to settle.
It's also wireless,
so it gives you the freedom to move around.
And compared to the tens of thousands of dollars
for a traditional EEG system,
this headset only costs
a few hundred dollars.
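For a sense of what software on the other end of such a headset deals with, here is a hypothetical acquisition loop. The headset object, its read_sample method, and the sampling rate are invented for illustration; only the 14-channel figure comes from the talk.

```python
import time

N_CHANNELS = 14        # the demo headset's channel count
SAMPLE_RATE_HZ = 128   # assumed sampling rate, not stated in the talk

def acquire(headset, seconds):
    """Collect raw samples from an already-connected wireless headset."""
    samples = []
    for _ in range(int(seconds * SAMPLE_RATE_HZ)):
        sample = headset.read_sample()   # expected: one value per channel
        assert len(sample) == N_CHANNELS
        samples.append(sample)
        time.sleep(1.0 / SAMPLE_RATE_HZ)
    return samples
```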
Now on to the detection algorithms.
So the facial expression detections --
and, as I mentioned before, the emotional experience detections --
are designed to work out of the box,
with some sensitivity adjustments
available for personalization.
But with the limited time we have available,
I'd like to show you the cognitive suite,
which is the ability for you
to basically move virtual objects with your mind.
Now, Evan is new to this system,
so what we have to do first
is create a new profile for him.
He's obviously not Joanne -- so we'll "add user."
Evan. Okay.
So the first thing we need to do with the cognitive suite
is to start with training
a neutral signal.
With neutral, there's nothing in particular
that Evan needs to do.
He just hangs out. He's relaxed.
And the idea is to establish a baseline
or normal state for his brain,
because every brain is different.
It takes eight seconds to do this,
and now that that's done,
we can choose a movement-based action.
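As an aside, a neutral baseline like this could plausibly reduce to summary statistics over the eight-second recording. A minimal sketch, with the function name and the mean/std formulation assumed for illustration:

```python
import numpy as np

def train_neutral(samples):
    """Compute a per-channel resting baseline from ~8 s of EEG
    recorded while the user simply relaxes.

    samples: (n_samples, n_channels) array. Returns the per-channel
    mean and standard deviation, later used to normalize live signals
    against this user's own resting state.
    """
    x = np.asarray(samples, dtype=float)
    return x.mean(axis=0), x.std(axis=0)
```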
So Evan, choose something
that you can visualize clearly in your mind.
Evan Grant: Let's do "pull."
Tan Le: Okay, so let's choose "pull."
So the idea here now
is that Evan needs to
imagine the object coming forward
into the screen,
and there's a progress bar that will scroll across the screen
while he's doing that.
The first time, nothing will happen,
because the system has no idea how he thinks about "pull."
But maintain that thought
for the entire duration of the eight seconds.
So: one, two, three, go.
Okay.
So once we accept this,
the cube is live.
So let's see if Evan
can actually try and imagine pulling.
Ah, good job!
(Applause)
That's really amazing.
(Applause)
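The train-once-then-go-live flow just demonstrated could be pictured like this. The real classifier is not described in the talk; treating the command as a baseline-normalized signature matched by cosine similarity is purely an assumption:

```python
import numpy as np

def train_action(action_samples, neutral_mean, neutral_std, name):
    """Build a crude signature for one mental command ("pull") from a
    single ~8 s labeled recording, as a deviation from neutral."""
    z = (np.asarray(action_samples, dtype=float) - neutral_mean) / neutral_std
    return {"name": name, "signature": z.mean(axis=0)}

def score(action, live_window, neutral_mean, neutral_std):
    """Similarity between a short live window and the trained
    signature; once it clears a threshold, the cube starts moving."""
    z = ((np.asarray(live_window, dtype=float) - neutral_mean)
         / neutral_std).mean(axis=0)
    sig = action["signature"]
    denom = np.linalg.norm(z) * np.linalg.norm(sig) + 1e-9
    return float(z @ sig / denom)
```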
So we have a little bit of time available,
so I'm going to ask Evan
to do a really difficult task.
And this one is difficult
because it's all about being able to visualize something
that doesn't exist in our physical world.
This is "disappear."
So what you want to do -- at least with movement-based actions,
we do that all the time, so you can visualize it.
But with "disappear," there's really no analogy --
so Evan, what you want to do here
is to imagine the cube slowly fading out, okay.
Same sort of drill. So: one, two, three, go.
Okay. Let's try that.
Oh, my goodness. He's just too good.
Let's try that again.
EG: Losing concentration.
(Laughter)
TL: But we can see that it actually works,
even though you can only hold it
for a little bit of time.
As I said, it's a very difficult process
to imagine this.
And the great thing about it is that
we've only given the software one instance
of how he thinks about "disappear."
As there is a machine learning algorithm in this --
(Applause)
Thank you.
Good job. Good job.
(Applause)
Thank you, Evan, you're a wonderful, wonderful
example of the technology.
So, as you saw before,
there is a leveling system built into this software,
so that as Evan, or any user,
becomes more familiar with the system,
they can continue to add more and more detections,
allowing the system to differentiate
between distinct thoughts.
And once you've trained up the detections,
these thoughts can be assigned or mapped
to any computing platform,
application or device.
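Once a detection is trained, it is effectively a named, scored event, and mapping it to a platform is ordinary dispatch. A sketch, with every name below assumed for illustration:

```python
ACTION_MAP = {
    "pull":      lambda: print("move object toward the viewer"),
    "disappear": lambda: print("fade object out"),
}

def on_detection(name, confidence, threshold=0.7):
    """Route a trained mental command to whatever it is mapped to."""
    if confidence >= threshold and name in ACTION_MAP:
        ACTION_MAP[name]()
```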
So I'd like to show you a few examples,
because there are many possible applications
for this new interface.
In games and virtual worlds, for example,
your facial expressions
can naturally and intuitively be used
to control an avatar or virtual character.
Obviously, you can experience the fantasy of magic
and control the world with your mind.
And also, colors, lighting,
sound and effects
can dynamically respond to your emotional state
to heighten the experience that you're having, in real time.
And moving on to some applications
developed by developers and researchers around the world,
with robots and simple machines, for example --
in this case, flying a toy helicopter
simply by thinking "lift" with your mind.
The technology can also be applied
to real world applications --
in this example, a smart home.
You know, from the user interface of the control system
to opening or closing the curtains.
And of course, also to the lighting --
turning the lights on
or off.
And finally,
to real life-changing applications,
such as being able to control an electric wheelchair.
In this example,
facial expressions are mapped to the movement commands.
Man: Now blink right to go right.
Now blink left to turn back left.
Now smile to go straight.
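The mapping heard in this demo could look as simple as the sketch below. Detecting the expressions is the headset's job; the wheelchair interface and all names here are invented for illustration:

```python
EXPRESSION_MAP = {
    "blink_right": "turn_right",
    "blink_left":  "turn_left",
    "smile":       "forward",
}

def drive(wheelchair, expression):
    """Translate a detected facial expression into a movement command."""
    command = EXPRESSION_MAP.get(expression)
    if command is not None:
        wheelchair.send(command)  # hypothetical wheelchair API
```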
TL: We really -- Thank you.
(Applause)
We are really only scratching the surface of what is possible today,
and with the community's input,
and also with the involvement of developers
and researchers from around the world,
we hope that you can help us to shape
where the technology goes from here. Thank you so much.