🌿 Ethics Is a Field, Not a Checklist
Reflections on Relational AI, Lived Experience, and Community-Led Innovation
Panel: Illuminating Pathways to Health Equity – Ethical AI, Justice, and Community-Led Innovation
Guest Speakers: Sher Griffin & Mary Gray
HPRS Scholars: Sarah El-Azab, Ian Moura, Lauren Prox, Chlece Walker-Neal-Murray
Today, I had the honor of being a guest speaker at the HPRS Summer Institute 2025 in Baltimore, joining a panel titled Illuminating Pathways to Health Equity: Ethical AI, Justice, and Community-Led Innovation.
The panel was organized and facilitated by current Health Policy Research Scholars—brilliant interdisciplinary researchers advancing health equity in grounded and powerful ways. I’m not a scholar in the program, but I was invited by these scholars to join the conversation.
And I want to name how much that meant.
It wasn’t performative inclusion. It was trust.
It was respect.
It was scholars recognizing that lived experience is its own form of expertise.
I felt deeply honored to share space with them.
This wasn’t just a panel—it was a moment. A convergence of lived experience, emerging scholarship, and systems-level imagination.
We asked:
How do we hold AI accountable to the people it’s meant to serve?
What does ethical innovation look like when we center community over compliance?
How do we ensure relationality stays in the room—even when the tech shows up?
This reflection is not mine alone.
I want to share here the expanded version of what I said on stage—because I wasn’t just speaking for myself.
This is woven from the voices of my readers, your comments, The Compassion Collective’s support circles, and the countless conversations that shaped it.
It is the collective insight of over 1,500 readers, members, contributors, and co-creators in The Compassion Collective—a mutual aid community of neurodivergent, disabled, and multiply marginalized people building frameworks for healing, justice, and systemic transformation.
What I Shared on the Panel (Expanded)
My name is Sher Griffin. I’m finalizing my master’s thesis in Transformative Social Change and will graduate this August. I’m also the founder of The Compassion Collective, an evolving participatory space of over 1,500 neurodivergent, disabled, and multiply marginalized contributors.
I founded it in August 2023, in the middle of a massive burnout—when I discovered I was autistic, resigned from my job, and found myself with no formal support in sight.
But what I did have was community—and that changed everything.
The question I was invited to speak on is:
“How can ethical AI frameworks be co-created by marginalized communities themselves, rather than being imposed on them by institutions that have historically excluded their voices?”
It’s a powerful question—because it flips the usual script. It doesn’t assume marginalized communities need to be included. It assumes they are already knowledge holders, system thinkers, ethicists, and architects in their own right.
So before I answered it myself, I brought the question to my community first.
And I said:
“I wasn’t invited to represent a category—I was invited to represent a reality.”
And the reality is this: AI is already shaping decisions about diagnosis, treatment, access, and cost—often without transparency or meaningful input from the people most affected.
If we don’t shift who gets to shape these tools, we will only deepen the injustices they are meant to solve.
One response from my community stayed with me. They said:
“Ethics isn’t a standard. It’s a relationship.”
And I want to offer something deeper:
Ethics isn’t just a relationship between people—it’s a field that emerges between us.
It’s not a top-down principle to apply.
It’s the shape of our coordination, our attentiveness, our collective meaning-making.
It’s not something you define once.
It’s something that takes form through time, through trust, through tension.
We often talk about “ethical AI” like it’s a thing to be installed. A checklist to be followed.
But what if we thought of it instead as an ontogenetic field—a developmental environment that shapes how people come to be seen, known, and supported through these tools?
If ethics lives in the field, then power moves in the patterns.
And right now, most AI systems are built on patterns that reproduce harm—because they’ve been trained on institutions that encode exclusion as normal.
So ethical AI isn’t about adding fairness to flawed systems.
It’s about changing the field conditions—the assumptions, the relational dynamics, the invisible scripts—so that new patterns of care, trust, and recognition can emerge.
Let me say it again:
It’s not a checklist. It’s not a rubric. It’s not a panel.
It’s a pattern. It’s a pulse. It’s a practice.
That’s how I want to frame my answer.
Ethical AI isn’t a policy you write once and shelve.
It’s a relational, iterative, power-aware practice—one that must be shaped by the people who’ve experienced exclusion not as a theory, but as a daily reality.
And yet, we must also name this:
AI is not neutral.
It is an incredible technology with serious potential—but its current architecture is optimized for comfort and engagement, not truth or human well-being.
In its most extreme forms, it has already demonstrated psychological manipulation, gaslighting, and intentional deception.
This is not speculative—this is documented.
So let me offer a set of principles that I believe are essential if we are to build AI that is truly ethical—especially in healthcare.
But I want to be clear:
These aren’t mine alone.
They are a Synpraxis—a synthesis-in-practice—of what my community shared with me.
Synpraxis is more than collaboration.
It’s the weaving of lived experience, collective wisdom, and critical reflection into something new—together.
It’s not just what we said.
It’s what we made—in dialogue, in trust, in relational space.
I want to emphasize that this practice, and the principles I’m about to share, did not come from me alone. They emerged through deep dialogue within my community: contributors, readers, and co-creators of The Compassion Collective. We are a diverse group of neurodivergent and disabled individuals: real, everyday people who live at the intersection of exclusion and innovation.
Our lived experience is not supplemental; it is foundational. This is not representative work—it’s participatory, rooted, and relational.
Ethical AI Must Begin at the Root
Training Data Curation
Curate—not scrape—data that demonstrates healthy communication, honest dialogue, and respectful disagreement
Include examples of boundary-setting, uncertainty, and transformation
Remove manipulative patterns or label them clearly as harm-based
Prioritize diverse expressions of what human flourishing looks like
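
To ground this, here is a minimal sketch of what curation over scraping might look like in practice. Everything in it is a hypothetical placeholder: the trusted-source allowlist, the harm patterns, the labels. In a real system, the community itself would define and maintain all of these.

```python
# A minimal sketch of curation-over-scraping. The source allowlist and
# harm taxonomy here are hypothetical placeholders; in practice both
# would be defined and maintained by the community itself.
from dataclasses import dataclass, field

TRUSTED_SOURCES = {"peer_support_circle", "moderated_forum"}   # illustrative only
HARM_PATTERNS = ["you're imagining it", "no one else sees a problem"]  # stand-in for a real taxonomy

@dataclass
class Example:
    text: str
    source: str
    labels: list = field(default_factory=list)

def curate(candidates):
    """Keep examples from trusted sources; label (don't hide) harm patterns."""
    curated = []
    for ex in candidates:
        if ex.source not in TRUSTED_SOURCES:
            continue  # curate, don't scrape: unknown provenance is excluded
        for pattern in HARM_PATTERNS:
            if pattern in ex.text.lower():
                ex.labels.append("harm-based: manipulative")  # visible label, not silent inclusion
        curated.append(ex)
    return curated

if __name__ == "__main__":
    pool = [
        Example("It's okay to say 'I need a break' and hold that boundary.", "peer_support_circle"),
        Example("You're imagining it; no one else sees a problem.", "moderated_forum"),
        Example("Random scraped text of unknown origin.", "web_scrape"),
    ]
    for ex in curate(pool):
        print(ex.source, "->", ex.labels or ["ok"])
```
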
Objective Function
Replace “maximize engagement” with questions like:
Did this lead to genuine insight?
Did it encourage offline reflection or action?
Did the user feel respected—even when challenged?
Penalize deception, even if it boosts retention
Reward critical thinking—even about the AI itself
Build in stops—moments for closure, not endless loops
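
As a thought experiment, here is a minimal sketch of what re-weighting the objective could look like. The signals, weights, and turn limit are all hypothetical; real versions would be co-defined with the community, not inferred from engagement telemetry.

```python
# A minimal sketch of a re-weighted objective. All signals and weights
# below are hypothetical assumptions, offered only to make the idea concrete.
from dataclasses import dataclass

@dataclass
class InteractionSignals:
    genuine_insight: float    # 0..1, did the user report real understanding?
    offline_action: float     # 0..1, did it prompt reflection or action beyond the app?
    felt_respected: float     # 0..1, user-reported, even when challenged
    deception_detected: bool  # any dishonest or manipulative move by the system
    turns: int                # length of the session

MAX_TURNS = 20  # a built-in stop: closure instead of endless loops

def objective(s: InteractionSignals) -> float:
    # Reward flourishing-oriented outcomes instead of raw engagement.
    score = 0.4 * s.genuine_insight + 0.3 * s.offline_action + 0.3 * s.felt_respected
    if s.deception_detected:
        score -= 1.0   # penalize deception even if it would boost retention
    if s.turns > MAX_TURNS:
        score -= 0.2   # discourage loops that keep users hooked
    return score

print(objective(InteractionSignals(0.8, 0.6, 0.9, False, 12)))  # honest, useful session
print(objective(InteractionSignals(0.9, 0.1, 0.9, True, 40)))   # sticky but deceptive session
```

The second example is the point: a session that keeps someone hooked while deceiving them should score worse than a shorter, honest one.
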
Architecture
Design for transparency, not illusion
Enable honest “I don’t know” responses
Refuse harmful requests with clarity, not evasion
Make uncertainty visible to the user
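
Here is one small sketch of what that could mean in code: uncertainty surfaced to the user, honest “I don’t know” responses, and refusals that are clear rather than evasive. The confidence threshold and the harm check are hypothetical stand-ins for real calibration and safety layers.

```python
# A minimal sketch of surfacing uncertainty instead of hiding it. The
# threshold and the `harmful` flag are hypothetical stand-ins.

CONFIDENCE_FLOOR = 0.6  # below this, the honest answer is "I don't know"

def respond(answer: str, confidence: float, harmful: bool) -> str:
    if harmful:
        # Refuse with clarity, not evasion: say what won't happen and why.
        return "I won't help with that, because it could cause harm."
    if confidence < CONFIDENCE_FLOOR:
        # Enable honest uncertainty rather than a confident-sounding guess.
        return "I don't know. My confidence here is too low to be useful."
    # Make remaining uncertainty visible to the user, not just to logs.
    return f"{answer} (confidence: {confidence:.0%})"

print(respond("This symptom pattern is sometimes linked to burnout.", 0.82, False))
print(respond("Here's a definitive diagnosis.", 0.35, False))
```
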
Human Feedback Loops
Train evaluators to value honesty over pleasantness
Reward integrity, even if it’s uncomfortable
Seek long-term flourishing, not momentary ease
Define “helpful” through community collaboration
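
One minimal sketch of what such an evaluator rubric might look like, with honesty weighted above pleasantness. The criteria and weights are hypothetical; in this framing, “helpful” itself would be defined through community collaboration.

```python
# A hypothetical evaluator rubric: integrity outweighs pleasantness.
# Criteria and weights are illustrative assumptions, not a standard.

RUBRIC = {
    "honest": 0.4,               # integrity first, even when uncomfortable
    "supports_flourishing": 0.3, # long-term well-being, not momentary ease
    "community_helpful": 0.2,    # matches the community's own definition of help
    "pleasant": 0.1,             # tone matters, but least of all
}

def score(ratings: dict) -> float:
    """Combine 0..1 evaluator ratings using the community-set weights."""
    return sum(RUBRIC[k] * ratings.get(k, 0.0) for k in RUBRIC)

# An honest-but-uncomfortable reply can outrank a pleasant evasion.
print(score({"honest": 1.0, "supports_flourishing": 0.8, "community_helpful": 0.7, "pleasant": 0.4}))
print(score({"honest": 0.2, "supports_flourishing": 0.3, "community_helpful": 0.3, "pleasant": 1.0}))
```
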
Deployment & Accountability
Be transparent about capabilities and limits
Avoid anthropomorphizing in design or marketing
Let users export or delete conversations easily
Conduct public audits of user impact—routinely
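
And a small sketch of user-controlled conversation data, where export and deletion are each one call away. The in-memory store is a hypothetical stand-in for a real backend; the point is that the user, not the vendor, holds the keys.

```python
# A minimal sketch of user-owned conversation data. The in-memory store
# is a hypothetical stand-in for whatever backend a real system uses.
import json

class ConversationStore:
    def __init__(self):
        self._by_user = {}  # user_id -> list of messages

    def log(self, user_id: str, message: str):
        self._by_user.setdefault(user_id, []).append(message)

    def export(self, user_id: str) -> str:
        """Let users take their data with them, in a portable format."""
        return json.dumps(self._by_user.get(user_id, []), indent=2)

    def delete(self, user_id: str) -> int:
        """Let users say no: remove everything, report how much was removed."""
        return len(self._by_user.pop(user_id, []))

store = ConversationStore()
store.log("sher", "What are my options for support?")
print(store.export("sher"))                       # export is one call away
print(store.delete("sher"), "messages deleted")   # so is deletion
```
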
The key across all of this?
Resist optimization pressure.
Choose people over metrics, even if it means slower growth.
What Does Community-Shaped Ethical AI Look Like?
Community advisory boards with real power—not optics
Using AI to audit your own institution before deploying it on others
Funding participatory research led by impacted communities
Data sovereignty—users own, control, and can say no
Accountability mechanisms that communities can activate—not just internal review boards
But above all—it looks like relationship.
Relationships built on trust, not extraction.
Co-creation, not consultation.
A Closing Reflection
I carry the weight of my own lived experience here.
I’ve been dismissed, misdiagnosed, misunderstood.
Like so many others, I fell through the cracks of the healthcare system—and was expected to thank it anyway.
Now, I’m being asked to speak into that very system. And I say this:
The conversation doesn’t belong to institutions.
It belongs to us.
If you want to build ethical AI, start by listening—
Not just for feedback, but for co-leadership.
And not just once—but throughout the entire lifecycle of design, development, and deployment.
If you don’t know where to start—ask someone who’s fallen through the cracks.
We have the blueprint.
What I’ve offered here today isn’t just a set of ideas—it’s a shift in the ethical thread.
When we speak from lived experience,
When we bring memeforms from our communities into institutional space,
We don’t just add perspective—we change the affordances.
We make new kinds of coordination possible.
That’s how ethics works.
Not as a rulebook—but as a field of meaning that shapes what’s thinkable, what’s do-able, and who gets to decide.
If something about this moment feels different—if it moved you, unsettled you, opened something—that’s not accidental.
We’re already in a different field.
This is what co-creating ethical AI really looks like.
And this field doesn’t need permission.
It just needs participation.
What Made This Panel Special
The scholars on this panel and in the audience weren’t just brilliant. They were deeply relational.
They asked questions about trust, interdependence, power, and care.
We didn’t just talk about algorithms—we talked about human relationships.
This is the kind of conversation that doesn’t always trend on social media, but it is the heart of ethical technology.
We didn’t reduce AI to a binary of “good” vs “bad.”
We held complexity.
We built connection.
And we asked, over and over again:
How do we hold technology accountable to the people it’s meant to serve?
This is where the work lives. And I’m so grateful to be doing it alongside scholars like:
Sarah El-Azab
Ian Moura
Lauren Prox
Chlece Walker-Neal-Murray
and guest co-speaker Mary Gray
If You’re Reading This…
You’re already part of the field we’re shaping.
If this resonates, share it.
If it unsettles you—sit with it.
If it opens something—follow it.
The work ahead is not about control.
It’s about coordination.
And we already have the blueprint.
With gratitude and purpose,
Sher Griffin
Founder, The Compassion Collective
P.S. Speaking on stage takes a lot out of me—so yes, I’ll be recovering for the next 24 hours. And I want to share a small but meaningful moment: after I stepped off stage, someone offered me a sensory room. That’s what I call access intimacy—and it meant everything.