In Part One, I shared that I’d been invited to speak on a panel at Johns Hopkins University about AI and ethics in healthcare policy. I asked for your insights because I believe conversations like this aren’t meant to be extracted from institutions and given to a select few. They belong to those who have lived through the systems in question.
Your responses were thoughtful, courageous, and wide-ranging—from data sovereignty and trauma-informed care, to the practical ways AI can be both tool and threat. You reminded me that ethics isn’t a theory; it’s a relational act.
I’ve been sitting with your words, and here’s some of what I’ve taken away:
Transparency is not optional. Multiple voices reminded me that ethical AI begins with open disclosure—about who built it, how it’s being used, and what choices people actually have when it’s involved in their care. Transparency is not just technical; it’s relational.
Lived experience is structural insight. From those navigating chronic illness, neurodivergence, and trauma, I heard again and again that harm in the healthcare system is not theoretical—it’s embodied, ongoing, and often compounded by AI systems trained on biased data or deployed in rushed environments.
Community-rooted ethics already exist. People are already doing ethical work—caregivers, organizers, peer supporters, artists, educators. We don’t need to invent “ethical frameworks”; we need to fund, trust, and platform the ones that have been growing outside the academy all along.
There’s hunger for real power-sharing. Co-creation isn’t about advisory boards in name only. You called for meaningful authority, data sovereignty, accountability mechanisms, and reflexive use of AI to expose institutional harm—not just performative panels and token invites.
Hope and skepticism can coexist. Many of you expressed cautious hope—sharing personal ways AI has helped supplement care or bridge communication gaps. But you paired that hope with rightful skepticism about whether institutions will actually change, or just absorb resistance into status quo operations.
And still—amid all this generative wisdom—I'm watching institutions attempt to define ethical leadership in AI on their own terms, often without ever leaving their towers.
But I want to go a layer deeper.
Recently, I engaged with a Harvard PhD candidate who described herself as a humanist and ethicist. She argued that humanists should be embedded alongside AI developers from the outset—a claim that, at face value, I agree with. But agreement requires nuance.
Because not all humanists are positioned equally, and not all ethics are created in spaces of accountability. If your ethical framework is shaped within elite institutions like Harvard—institutions historically built on exclusion, colonization, and intellectual gatekeeping—then your definition of “ethical AI” may simply reproduce the very hierarchies it claims to challenge.
Here’s the truth:
You cannot ethically govern a system built on surveillance, extraction, and harm using ethical frameworks born inside the same systems that normalized those things.
As Audre Lorde warned us:
“The master’s tools will never dismantle the master’s house.”
That isn’t ethics.
That’s empire doing PR.
I’m not outside the system either—I’ve been both harmed by it and, at times, shaped by its language. That’s why I insist on this kind of accountability.
So, I want to ask again—but more precisely this time:
Who gets to be the “ethicist”?
Whose framework gets centered? Whose gets ignored?
Because if we're not asking these questions, then “ethics” becomes just another form of gatekeeping.
This Is a Conversation With Boundaries
If you want to engage here, I ask two things—non-negotiable:
1. Name your positionality.
Where do you stand in relation to the systems we’re discussing? What forms of access, exclusion, privilege, or harm have shaped your understanding of ethics and AI?
2. Name your intersections.
What identities, experiences, or systemic locations shape your worldview? This includes—but isn’t limited to—race, gender, disability, class, education, neurodivergence, geography, or trauma history.
If you’ve never done this before, I have two foundational workshop readings that can support your reflection:
📚 Workshop 1: What Is Positionality?
📚 Workshop 2: Intersectionality in Practice
If you’re unwilling to do this basic level of reflexivity, this is probably not your space.
💬 Community Prompt
Who do you think should be shaping AI ethics frameworks?
What qualifies someone to hold ethical authority?
How do we ensure that “ethics” isn’t just repackaged elitism?
You can reply publicly or send a private message.
As before, I may quote responses (with permission) in future speaking engagements.
We’re not just co-creating frameworks. We’re co-creating a culture—one that doesn’t ask for permission to exist, speak, or define what ethical practice actually looks like.
In solidarity and complexity,
Sher
Hi Sher!
I wanted to take the time to say thank you for creating this informative, open-ended, and collaborative post. It speaks to my inner humanitarian, and I’d like to offer my two cents.
I’ve been developing a body of work called Field & Signal that proposes a culture-based approach to ethics. It’s grounded in systems theory and built around concepts of repair and coherence.
I believe those who shape AI ethics (or any ethical system) should be actively engaged with the concept of “distortion”. It should be the people who read social signals under pressure, in survival, and in community, and who are willing to transmute that distortion into design. Not for domination, but for balance.
I’ve been creating tools as an independent scholar, from Signal Literacy diagnostics that help detect cultural distortion, to a Use Ethos that outlines protective practices for working with systems of power, care, and communication. I’m also exploring how field coherence can become a measurable variable across tech, government, trauma…
My guiding question: “Does this practice create more life than it takes to sustain it?”
Coherence doesn’t happen by accident. If we want a society that functions with integrity, we have to engineer it—through systems thinking, social science, and ethical design.
I’d be honored to contribute or collaborate if any of these ideas feel resonant. I’m currently building relational architectures to support these frameworks in practice—systems that don’t ask for permission to be ethical, but live it into being.
Excited for what the future holds. Thank you again for posing this as a community-oriented approach.