Discussion about this post

Boluwatife Ajayi

The right to AI sounds like a great idea.

Florenta Toader

I appreciated the honesty and responsibility in how you framed AI’s role in moments of distress. Naming that your priority is not to make a hard moment worse is exactly the kind of clarity the public conversation needs.

Where I see a fine line is between protective guardrails and over-tight or inconsistent responses. When ChatGPT is empathic one moment and abruptly shuts down the next, that unpredictability can itself deepen distress. It risks leaving people feeling abandoned or censored at their most vulnerable.

If AI is to be a true public good, then access must mean not just availability, but reliability of care. People need to know that the tone of support won’t collapse mid-conversation. Otherwise, guardrails risk amplifying the very struggles they’re meant to protect against.

I see the work you’re doing as a first step toward what I’d call continuity ethics: protecting people not only by what the model refuses, but also by how consistently it responds when people are open and vulnerable. I hope this becomes part of your future safety research.

