race or sprint or whatever,
misses the nuance that people are trying
to put out models quickly
and make great products for people.
But people feel the impact
of this so acutely that ...
you know, I think if you could go
sit in a meeting at OpenAI
or other companies, you'd be like,
oh, these people really do
care about this.
Now we did make a change recently
to how we think about one part of
what's traditionally
been understood as safety,
which is that with our new image model,
we've given users much more freedom
on what we would traditionally
think of as speech harms.
You know, if you try
to get offended by the model,
will the model let you be offended?
And in the past, we've had much
tighter guardrails on this.
But I think part of model alignment
is following what the user
of a model wants it to do
within the very broad bounds
of what society decides.
So if you ask the model to depict a bunch
of violence or something like that
or to sort of reinforce some stereotype,
there's a question of whether
or not it should do that.