My Cautious Hope for the New AI Health Assistants
- Amy Fisher, BA, MSW


Why the future of health AI must prioritize people, not just technology

This piece reframes the conversation around health AI to focus on the people it affects rather than the technology itself: who may be left behind, and what patients worry about. Centering compassion, equity, and connection, it argues that the true measure of health AI isn’t efficiency alone, but whether it genuinely improves care for everyone.
I spent last Tuesday morning with an elderly client, Mr. Henderson. We sat at his small kitchen table, a pile of hospital discharge papers between us. The pages were filled with medical terms, conflicting medication schedules, and follow-up appointments.
He was overwhelmed, and frankly, so was I. "Amy," he said, his voice quiet, "I just want someone to explain it to me in plain English. Am I getting better?" In that moment, the complexity of our healthcare system felt like a heavy weight. It is a system that, for all its miracles, often struggles to speak a language people can understand.
It’s with Mr. Henderson’s question in my mind that I’ve been watching the incredible speed at which new health technologies are arriving. We are hearing constantly about Amazon’s health AI, OpenAI’s work in medicine, and Google’s MedGemma. These are no longer abstract concepts; they are powerful tools designed to read scans, organize patient data, and even talk with patients about their health. A part of me, the part that sat with Mr. Henderson, feels a flicker of real hope. Could these tools be the translators we so desperately need?
I imagine a world where a patient, home from the hospital, could ask an AI assistant on their phone, "Can you explain my new heart medication to me like I'm a ten-year-old?" or "What are the three most important things I need to do this week to stay healthy?"
The potential for these tools to sort through the clutter and provide clear, accessible information is immense. For individuals managing chronic illnesses or caregivers coordinating complex schedules, this could be a lifeline, helping them organize their journey to wellness in a way that feels manageable. A recent report noted that low health literacy is a widespread issue that worsens patient outcomes and increases healthcare costs, underscoring the need for clearer communication tools.
One of my clients, a young single mother named Maria, works two jobs and is raising a child with severe asthma. She has no time to sit on hold with a nurse’s line to ask if a certain cough is serious. I can see a future in which a reliable AI tool helps her assess symptoms at 2 a.m., offering peace of mind or clear instructions to seek immediate care. This isn't about replacing doctors; it's about filling the gaps and offering support where it has been absent.
But as a social worker, my hope is balanced by a deep-seated caution. My work is built on the belief that healing happens in the context of human relationships. A machine, no matter how intelligent, cannot replicate the felt sense of being seen and heard by another person. It cannot read the subtle fear in a client’s eyes or offer a reassuring touch on the shoulder.
My concern is that in our rush to adopt these efficient new systems, we might accidentally diminish the very human connection that is so central to care.
Furthermore, I have to ask: who are these tools being built for? Will they work for people like Maria, who might use a pay-as-you-go phone with limited data? What about Mr. Henderson, who does not own a computer? The digital divide in our country is real and deep. If these advanced tools are only accessible to the wealthy and technologically savvy, they will only widen the health disparities that already cause so much harm. We also know that AI systems learn from data, and if that data is biased, the AI’s conclusions will be too.
We must be vigilant in asking how these systems are being tested and who they might leave behind.
I recently sat with a support group for people living with chronic pain. One member shared her fear that her doctor already spends most of each visit typing into a computer. "What happens when the computer starts giving all the answers?" she asked. "Will my doctor even look at me anymore?" Her question wasn't about the accuracy of the technology; it was about her fear of becoming invisible. We talked about how the group could prepare for this new reality, not by rejecting it, but by becoming stronger advocates for themselves. They practiced saying things like, "Thank you for that information. Now I’d like to talk to you about how this is making me feel," or "Can we talk about what you think is best for me, as my doctor?"
Final Thought
These new AI tools are coming, and they hold incredible promise. But they are, in the end, just that—tools. Their ultimate value will not be determined by their processing power, but by the wisdom and compassion of the people who use them. Our task, as patients, caregivers, and advocates, is to insist that these new technologies serve our humanity, not the other way around. We must continue to champion the irreplaceable importance of a listening ear, a compassionate voice, and a caring human heart at the center of all medicine.



