
Trusting the treatment you can’t explain

25 June 2025 · 3 min read · AI in Healthcare · Trust in Technology · Medical Decision-Making · Ethical Dilemmas

Bart Timmers recently reflected on his journey from medical intern to fully licensed physician, emphasising the stark shift from theoretical learning to making real-life critical decisions. Today, healthcare faces another transformational shift, this time driven by generative AI. AI systems now routinely pass medical exams and can provide detailed, often insightful responses to complex clinical questions. Yet healthcare leaders largely focus on potential errors and ethical dilemmas, holding back from embracing AI’s immense promise.

Consider a provocative scenario: an oncology AI proposes a treatment plan that surpasses human comprehension. It isn’t wrong; it’s simply beyond current human understanding. The clinician, traditionally the arbiter of medical decisions, suddenly becomes an interpreter, tasked not with approving routine work but with arbitrating between multiple sophisticated AIs, each offering insights deeper than any human could readily unpack. This shift will transform healthcare providers from supervisors into orchestrators, guiding AI-generated knowledge so that patient care advances safely and effectively.

Yet today, institutions like the NHS approach AI cautiously. Recent NHS guidance classifies any AI summarisation tool, even a straightforward transcription device, as at least an MHRA Class I medical device. This cautious classification effectively stalls adoption and leaves already overstretched clinicians saddled with repetitive tasks the technology could absorb. Ironically, research published in JAMA found that 20% of human-generated clinical notes contain errors, yet human transcription faces nothing like the same oversight. The disparity illustrates a key problem: healthcare remains trapped in debates about AI’s imperfections while largely ignoring human fallibility.

This caution parallels the way single AI mistakes, such as rare but dramatic hallucinations, dominate headlines. The media’s fixation on a solitary Tesla accident reflects the same selective attention: 3,300 daily fatalities in traditional vehicles rarely make the news, yet one autonomous car crash becomes a global story. Likewise, AI errors draw intense criticism while human errors, which are frequent and often just as severe, rarely face the same scrutiny.

Andrej Karpathy, former AI director at Tesla, recently explained at Y Combinator that training neural networks represents a profound shift in computing: “Language is the new binary.” As AI evolves, its outputs increasingly reflect human reasoning patterns and will eventually become superhuman. That points to a fundamental change in medical practice: AI tools could soon generate solutions humans can’t yet fully comprehend.

We stand on the brink of this “reverse Turing test,” where the question isn’t whether AI mimics human intelligence but whether humans recognise their own reasoning within AI’s advanced solutions. As Dror Poleg states, “If AI is superhuman, it’s a waste of energy to use it for tasks humans can do themselves.” The real question becomes: what higher, more strategic roles will humans adopt once AI reliably handles mundane tasks?

Ultimately, healthcare must move beyond endless debates about AI’s risks. Instead, we should urgently prepare for a future where humans and superhuman AI collaborate, enhancing outcomes and saving lives. Would you trust an inscrutable yet safe AI-developed cure? Perhaps soon, our survival (and innovation!) will depend on embracing answers we can’t yet fully understand.

💥 May this inspire you to advance healthcare beyond its current state of excellence.