you know, I think if you could go
sit in a meeting at OpenAI
or other companies, you'd be like,
oh, these people really do
kind of care about this.
Now we did make a change recently
to how we think about one part of
what's traditionally
been understood as safety,
which is that, with our new image model,
we've given users much more freedom
around what we would traditionally
think of as speech harms.
You know, if you try
to get offended by the model,
will the model let you be offended?
And in the past, we've had much
tighter guardrails on this.
But I think part of model alignment
is following what the user
of a model wants it to do
within the very broad bounds
of what society decides.
So if you ask the model to depict a bunch
of violence or something like that
or to sort of reinforce some stereotype,
there's a question of whether
or not it should do that.
And we're taking a much more
permissive stance.
There's a place where that starts
to interact with real-world harms
that we have to figure out how
to draw the line for,
but, you know, I think there will be cases
where a company says, OK,