CA: But track record
isn't the issue in a way --
SA: No, it kind of is.
CA: Because we're talking
about an exponentially growing power
where we fear that we may wake up one day
and the world is ending.
So it's really not about track record,
it's about plausibly saying
that the pieces are in place
to shut things down quickly
if we see a danger.
SA: Yeah, no, of course,
of course that's important.
You don't, like, wake up one day and say,
"Hey, we didn't have
any safety process in place.
Now we think the model is really smart.
So now we have to care about safety."
You have to care about it
all along this exponential curve.
Of course the stakes increase,
and there are big challenges.
But the way we learn
how to build safe systems
is this iterative process
of deploying them to the world,
getting feedback,
while the stakes are relatively low,
learning, like, "This is something
we have to address."
And I think as we move
into these agentic systems,