It doesn't have to be
an individual choice.
A single person could let that agent
out there, and the agent could decide,
"Well, in order to execute
on that function,
I've got to copy myself everywhere."
Are there red lines that you have
clearly drawn internally,
where you know what
the danger moments are,
and that we cannot put out
something that could go beyond this?
SA: Yeah, so this is the purpose
of our preparedness framework.
And we'll update that over time.
But we've tried to outline where we think
the most important danger moments are,
what the categories are,
how we measure that,
and how we would mitigate something
before releasing it.
I can tell from the conversation
you're not a big AI fan.
CA: Actually, on the contrary,
I use it every day.
I'm awed by it.
I think this is an incredible
time to be alive.
I wouldn't want to be alive at any other time,
and I cannot wait to see where it goes.
We've been holding ...
I think it's essential to hold ...