The ethical case for AI

Somewhere right now, a patient is being harmed by paperwork. Not by a misdiagnosis or a surgical error, but by a phone call that was never returned, an insurance authorization that expired in a queue, a referral that has sat unprocessed for three weeks because the front desk team is underwater. The harm is invisible and undramatic. It will not make the evening news. But it is real, it is cumulative, and it is happening at a scale that should trouble the conscience of every healthcare professional alive. Clinic staffing shortages have reached historic levels. The average tenure of a front desk employee has fallen to less than eighteen months. The administrative burden on clinical staff increases every year with no signs of reversal. The result is a slow, grinding degradation of the patient experience that compounds over months and years. A referral processed a day late. A callback that slips to tomorrow, then the next day, then never. Each failure is small. Together, they constitute a crisis.
Every physician learns the principle of nonmaleficence early in their training. First, do no harm. Most practitioners understand it intuitively when it comes to clinical decisions. But the principle extends further than the exam room. If you are a practice owner or hospital manager, you are responsible not only for the clinical care your patients receive but for the systems that determine whether they can access that care at all. And if those systems are failing, if referrals are being lost and calls are going unanswered and follow-ups are being missed because your administrative infrastructure cannot keep pace with demand, then harm is occurring on your watch. Not because you are negligent or indifferent, but because the tools you are using are no longer adequate for the world you are operating in. We are no longer in an era where the limitations of healthcare administration are immovable constraints. Technology exists today that can answer every patient call, process every referral, verify every insurance eligibility, and ensure that no interaction falls through the cracks. The question is no longer whether it is possible to deliver a higher standard of administrative care. The question is whether you will choose to deliver it.
There is a particular cruelty to administrative failure in healthcare, and it lies in its invisibility. When a surgeon makes an error, it is documented, reviewed, and learned from. But when a patient gives up trying to reach your office after being placed on hold for the fourth time, there is no record of that failure. When a referral sits unprocessed and a patient's condition worsens during the wait, there is no incident report. These are the patients you never see. The elderly patient without a family member to advocate on their behalf. The working parent who cannot spend forty-five minutes on hold during business hours. The immigrant who speaks Spanish or Mandarin and encounters a system that only functions fluently in English. It does not matter how many clinicians you employ if patients cannot get through the door. It does not matter how sophisticated your treatments are if the referral never reaches the scheduler's desk. Operational failure is a barrier to care as real and as damaging as any clinical shortage, and it falls disproportionately on the patients least equipped to fight through it.
I understand the hesitation. I am a physician. I was trained in a culture that valorizes caution, that distrusts hype, that demands evidence before adoption. But I want to challenge the assumption that waiting to adopt AI is the conservative and therefore the responsible choice. Caution is responsible when the risks of action outweigh the risks of inaction. But when the status quo is itself causing harm, when patients are already receiving degraded care and staff are burning out at rates that threaten the viability of the practice, caution ceases to be protective. It becomes inertia dressed in the language of prudence. Every month of waiting is a month of calls going unanswered, referrals processed late, staff shouldering a burden that technology could lift. The cost of inaction is not zero. It is compounding. And it is being paid by the people who can least afford it.
The clinical tools available to physicians have never been more powerful. Gene therapies, precision oncology, robotic surgery, diagnostics powered by machine learning. And yet for millions of patients, the bottleneck to receiving that care is not the science. It is the phone call. It is the fax. It is the insurance form. AI does not solve every problem in healthcare. But it solves the problem that sits between the patient and the care they have already been prescribed. It ensures that when a physician says you need to see a specialist, the distance between that recommendation and the patient sitting in the specialist's chair is measured in days, not weeks. For leaders, the decision to adopt AI is ultimately a decision about what kind of medicine you want to practice. Whether every patient who needs you can actually reach you. Whether the infrastructure of your practice reflects the same commitment to excellence you bring to your clinical work. The patients on the other end of the line are waiting. They have been waiting long enough.