
Why I'm Interested in This Topic
Originally, I strongly opposed the idea of AI in medicine and had planned to argue against its use. I saw it as cold and incapable of grasping the complexities of care. However, as I gained a deeper understanding of its capabilities, my fear gave way to cautious enthusiasm for what AI could offer. My wariness stemmed from a lack of understanding--and now I hope to shed light on this topic for others, just as it was shed for me.
Nearly two years ago, my grandmother was diagnosed with late-stage cancer. I watched her slowly slip away before there was ever a chance for hope--before recovery was possible. Looking back, I can’t help but wonder if AI could have caught it sooner. Now I know: it could have.
I don’t say this to discredit the incredible doctors who diagnosed her. In my eyes, there’s no higher honor than helping others, and that’s exactly what they did. It was their duty, and they fulfilled it with care and commitment.
As life unfolds, it can pass with little change. Then, suddenly, everything happens all at once, and the moment disaster strikes can feel overwhelming and debilitating. Doctors provide answers and healing. They dedicate themselves to helping people, and that is exactly what I want to do with my future. Because I too aspire to become a doctor, I want to be clear in stating my purpose in focusing on this topic:
AI is simply a tool. Only doctors can heal.
Summary of Investigation
AI is changing medicine--making it faster, smarter, and more efficient. But it doesn’t heal. People do. Medical professionals provide the empathy and connection that machines simply can’t.
History reminds us what happens when we blindly trust the wrong tools--from leeches and bloodletting to cigarettes prescribed for asthma. Today, the risk is putting too much faith in AI. Even though it's built on data, AI can still be biased or wrong--especially if trained on incomplete information.
In North Carolina and beyond, AI tools like DAX Copilot are helping doctors save time and improve care. New algorithms can even detect rare diseases earlier. But AI must stay just that--a tool. Doctors should lead the way, using AI to support, not replace, their decisions.
Medicine is more than models and code. It’s human. And no matter how advanced AI becomes, healing will always require a human touch.
Summary of Argument
AI has the power to make a real difference in healthcare--especially in rural or underserved communities--but only if it’s used thoughtfully and alongside skilled doctors. It’s not here to replace human care. Instead, AI can act as a second set of eyes, helping doctors catch things they might miss and make faster, more accurate decisions. At the end of the day, healing still depends on the compassion, experience, and judgment of trained medical professionals. In short, the best care happens when technology and human expertise work hand in hand.
Communicate Your Argument
“In Tandem” is a six-part short story I wrote to explore the role of AI in healthcare. It follows a father and husband whose wife, Caroline, dies from a brain tumor that went undiagnosed due to limited rural resources. Later, AI helps doctors catch early signs of multiple sclerosis in his daughter, Molly--something they would otherwise have missed. Thanks to the technology, doctors are able to start treatment before the damage becomes permanent.
This story isn’t about AI saving the day. It’s about what happens when doctors and technology work in tandem--when one helps the other see what might otherwise go unseen. It’s about learning to trust new tools without losing sight of what really matters: the people holding them. Doctors should always lead in healthcare.
Sharing with My Authentic Audience

When I shared my piece with an authentic audience, I noticed a clear pattern. My message resonated most with those who had backgrounds in medicine or technology. Their responses showed a solid understanding of my argument, likely because they were already familiar with the topic and its complexities.
Most audience members expressed discomfort with the idea of AI being involved in their healthcare. That reaction made sense to me. Health is deeply personal, and the idea of trusting a machine with something so fragile can feel unsettling. I addressed this concern in my writing with lines like: “Told me it's not there to replace them. That AI's a tool, not a decision. Said doctors still lead--it just helps them see what they'd otherwise miss” (paragraph 29) and “Still, it's not the AI I thank; it's the man who listened--the one who looked past the code and saw the child in front of him... That's what matters here, in Carolina. Not just having the tools, but knowin' how to hold them. Knowin' when to follow, and when to lead” (paragraph 69). These lines helped clarify that AI in my story is meant to support--not replace--human doctors.
One part of the story that stood out to the audience was the contrast between Caroline and Molly. Many understood that while AI helped diagnose Molly early, it couldn't save Caroline because her illness had progressed too far. That difference highlighted my central argument: AI should be implemented in North Carolina healthcare, but never at the cost of human judgment and presence. Several audience members also emphasized AI's emotional limitations, noting that while machines can detect patterns, they can't offer empathy or form human connections. That feedback confirmed that the emotional core of my story was effective.
Some raised concerns I hadn't considered, such as the possibility of AI providing incorrect information or failing mid-treatment. These points didn't change my perspective, but they made me realize I could have addressed them in the story. Medical AI used in professional settings would ideally be trained on peer-reviewed research to support accuracy, and while any technology can fail, such systems would need safeguards in place--still, the concern was understandable.
One audience member captured the essence of the story’s message: AI can be a powerful tool in medicine, but it isn’t a replacement for doctors. Caroline’s case showed that AI can’t solve everything--especially when it comes to emotional decisions or compassionate care. Molly’s case showed where AI excels--spotting patterns and helping with diagnosis. Together, they illustrated the point I aimed to make: AI belongs in healthcare, but only as a support to human expertise, not a substitute.

Hope
By sharing my product with an authentic audience, I hope to spark a more thoughtful and cautious conversation about the use of artificial intelligence in medicine. I want readers to recognize that while AI truly is a powerful tool, it is not a replacement for human expertise, care, or judgment. Through historical examples of medical errors rooted in ignorance--like bloodletting, the humors, and milk transfusions--I want the audience to see how blind trust in any system, even one based on data, can lead to harm if it is not properly understood and overseen. My goal is to encourage transparency, critical thinking, and responsible integration of AI into healthcare--especially in places like North Carolina, where AI tools are already in use--so that we avoid repeating past mistakes and instead use technology to support, not substitute for, the essential human elements of healing.
What I've Learned
My argument is that AI has the potential to greatly improve healthcare--especially in rural and underserved areas--but only when used responsibly and in partnership with trained medical professionals. AI is not a replacement for human care; it’s a tool that can help doctors see what they might otherwise miss, enabling earlier diagnoses, more accurate treatment, and better outcomes. In the end, healing still depends on the compassion, judgment, and expertise of medical professionals. The most effective care happens when people and technology work together--in tandem.