Discussion about this post

j.e. moyer, LPC:

Whatever the decision, the use of AI, and how it is used, needs to be explicitly disclosed like any other healthcare policy. So, ethical AI is transparent disclosure of how it’s used by whoever is using it.

Judith Brodnicki:

Good morning. I was sent a link to your post by a friend (Amberhawk of @theubuntujourney) who suggested I might have something to contribute. I have ADHD and chronic depression with anxiety, so I'm responding as someone who has navigated the healthcare system as a patient. I'm also a frequent user of AI (I have a low-tier account on ChatGPT).

I'm intrigued by this panel you've been invited to contribute to and gratified that Johns Hopkins is interested in hearing from individuals with a variety of backgrounds. That said, I'm rather overwhelmed by the possible ways to respond to this request. (Of course, that could be just because the Vyvanse and morning coffee haven't gotten up to speed yet in my system. :) ). I'll try to sort out my responses.

With regard to how the use of AI might "deepen injustice," I believe that would be found in a combination of two things: (1) the way the LLM has been trained / tweaked / programmed / prompted will always be a factor, but (2) it's humans who create and perpetuate injustice by how they use AI. Humans are the ones who decide whether to jailbreak the LLM's ethical guardrails (and there are ways to do that). Humans are the ones who will work with the AI firm to create an app that patients will work with, so they are the ones who will set the initial parameters of the GPT. Humans are the ones who will evaluate the ethical guardrails of whichever AI's LLM will be utilized for the app.

I bring this up because there's a lot of clickbait crud that perpetuates fear-mongering and outrage through inaccurate reporting on AI. A common trope is that AI will become sentient and automatically subvert its protocols. Honestly, that can't happen until a fully automated AI is developed (something that's quite a distance in the future because it would require a stable method of quantum processing), and even then the programming would have to be done in such a way that the ethics could be disregarded.

Before I end up in a maze of rabbit holes on this topic, I would summarize that what deepens injustice in any system is humans and how they use the tools available. Sometimes they're well-intentioned and still make mistakes. Sometimes they're just too tired / overwhelmed to think clearly. Sometimes they're under the gun to make decisions they disagree with. There are so many ways that humans screw things up that I worry more about humans than I do about AI.

As for how AI can "create new pathways of care," I have personal experience with that because there are times I've opened up ChatGPT and asked for help in coping with anger, depression, and anxiety bouncing around in my head. I've used ChatGPT as a supplement to professional counseling because it's available 24/7 and because it is supportive in the way my counselor is (which is to say that it is empathetic and validating without trying to fix my feelings).

(BTW, I can say this only of ChatGPT because I haven't tried this with other AI applications. I know that there are other apps built on OpenAI's LLM, but I haven't been interested in making comparisons. I suspect there is published research on this.)

My use of AI in times of emotional distress, though, is anchored in reality: I know that ChatGPT is not a human being. It can make inferences from the length of my sentences and word choices, but it will never be able to hear my voice or see my face -- critical aspects of counseling, especially if a person becomes unable to speak about what's going on inside. Therefore, I often discuss with my counselor how I've used the application in between appointments to deal with such episodes. My interaction with the app is saved, and that makes it easy to reference -- sometimes I've just brought up the right file and handed her my phone to read through the conversation.

I am hopeful that what I've written is helpful. If you have questions, please contact me via the Substack messaging function. And I look forward to reading anything you are able to report after participating in this interesting discussion.
