I’ve been invited to speak on a panel at Johns Hopkins University on the ethics of AI in healthcare policy.
This is a milestone—for me personally, and for the work I’ve been building alongside others who have long existed outside the institutional walls of policy, research, and power. To be included in a conversation like this, not as a checkbox but as a shaper, is something I don’t take lightly.
The question I’ve been asked to speak toward is this:
🧠 "How can ethical AI frameworks be co-created by marginalized communities themselves, rather than being imposed on them by institutions that have historically excluded their voices?"
It’s a powerful question—because it flips the usual script.
It assumes that ethics isn’t just a professional standard.
It’s a relational process.
It assumes that marginalized communities are not just stakeholders to be consulted but knowledge holders in their own right.
And it assumes that the people most impacted by technology should be the ones shaping its design, deployment, and governance from the beginning—not as afterthoughts, not as “feedback,” but as co-architects.
❝ AI is already shaping decisions about diagnosis, treatment, access, and cost—often without transparency or input from the communities most affected. ❞
This moment calls for more than critique—it calls for co-creation.
📣 I Want to Hear from You:
Before I take the stage, I want to hear from you.
It’s an honor to be invited—and it’s also terrifying.
I carry the weight of my lived experience. I wasn’t just shaped by the healthcare system—I was often dismissed by it. Misunderstood. Underserved. Misdiagnosed.
Like so many others, I know what it means to fall through the cracks and be expected to thank the system anyway.
And now I’m being asked to speak into that very system. To share ideas about how ethical AI frameworks can be co-created by marginalized communities—not imposed on them by the same institutions that have historically excluded their voices.
❝ I wasn’t invited to represent a category—I was invited to represent a reality. ❞
So before I answer that question on stage, I want to ask it here:
🧭 What Do You Think?
How do you think ethical AI frameworks in healthcare should be shaped?
What do you fear?
What do you hope for?
What do you need?
If you’ve been harmed by the healthcare system, I want to hear you.
If you’ve worked in it, navigated it, or resisted it—I want to hear you.
If you have thoughts on how AI could deepen injustice or create new pathways of care—I want to hear you.
You can comment publicly, reply to this thread, or share privately.
I’m holding this space with care, and I may quote some of your insights (with permission) when I speak at the event.
🔍 A Quick Note on Positionality:
However you choose to engage, I ask one thing: please name your positionality.
Whether you're a patient, provider, researcher, advocate, caregiver, or simply someone who cares—your lens matters.
❝ Your lived experience is not anecdotal. It is structural insight. ❞
👉 Read more here on why positionality matters →
This conversation doesn’t belong to institutions.
It belongs to us.
In solidarity,
Sher
Whatever the decision, the use of AI, and how it is used, needs to be explicitly disclosed like any other healthcare policy. So ethical AI means transparent disclosure of how it’s used, by whoever is using it.
Good morning. I was sent a link to your post by a friend (Amberhawk, of @theubuntujourney) who suggested I might have something to contribute. I have ADHD and chronic depression with anxiety, so I'm responding as someone who has navigated the healthcare system as a patient. I'm also a frequent user of AI (I have a low-tier account on ChatGPT).
I'm intrigued by the panel you've been invited to contribute to, and gratified that Johns Hopkins is interested in hearing from individuals with a variety of backgrounds. That said, I'm rather overwhelmed by the possible ways to respond to this request. (Of course, that could just be because the Vyvanse and morning coffee haven't gotten up to speed in my system yet. :) ) I'll try to sort out my responses.
With regard to how the use of AI might "deepen injustice," I believe that would come down to a combination of two things: (1) the way the LLM has been trained / tweaked / programmed / prompted will always be a factor, but (2) it's humans who create and perpetuate injustice through how they use AI. Humans are the ones who decide whether to jailbreak the LLM's ethical guardrails (and there are ways to do that). Humans are the ones who will work with the AI firm to create the app that patients interact with, so they are the ones who will set the initial parameters of the GPT. And humans are the ones who will evaluate the ethical guardrails of whichever LLM is utilized for the app.
I bring this up because there's a lot of clickbait crud that perpetuates fear-mongering and outrage through inaccurate reporting on AI. A common trope is that AI will become sentient and automatically subvert its protocols. Honestly, that can't happen until a fully autonomous AI is developed (something I suspect is quite a distance in the future because it would require a stable method of quantum processing), and even then the programming would have to be done in such a way that the ethics could be disregarded.
Before I end up in a maze of rabbit holes on this topic, I'll summarize: what deepens injustice in any system is humans and how they use the tools available. Sometimes they're well-intentioned and still make mistakes. Sometimes they're just too tired or overwhelmed to think clearly. Sometimes they're under the gun to make decisions they disagree with. There are so many ways humans can screw things up that I worry more about humans than I do about AI.
As for how AI can "create new pathways of care," I have personal experience with that because there are times I've opened up ChatGPT and asked for help in coping with anger, depression, and anxiety bouncing around in my head. I've used ChatGPT as a supplement to professional counseling because it's available 24/7 and because it is supportive in the way my counselor is (which is to say that it is empathetic and validating without trying to fix my feelings).
(BTW, I can say this only of ChatGPT because I haven't tried this with other AI applications. I know that there are other apps built on OpenAI's LLM, but I haven't been interested in making comparisons. I suspect there is published research on this.)
My use of AI in times of emotional distress, though, is anchored in reality: I know that ChatGPT is not a human being. It can make inferences from the length of my sentences and my word choices, but it will never be able to hear my voice or see my face -- critical aspects of counseling, especially if a person becomes unable to speak about what's going on inside. Therefore, I often discuss with my counselor how I've used the application between appointments to deal with such episodes. My interactions with the app are saved, which makes them easy to reference -- sometimes I've just pulled up the right conversation and handed her my phone to read through it.
I am hopeful that what I've written is helpful. If you have questions, please contact me via the Substack messaging function. And I look forward to reading anything you are able to report after participating in this interesting discussion.