The Art of Medicine

The art of medicine consists of amusing the patient while nature cures the disease.
— Voltaire

In today’s digital age, even Voltaire’s witty observation is being reimagined. Artificial intelligence is stepping into the GP’s surgery, promising to tackle the drudgery of paperwork and free doctors to focus on what really matters: treating patients and restoring their spirits.

According to a recent Times article, AI could revolutionise the NHS by taking on administrative tasks, giving doctors more time to spend with their patients. However, while this sounds like a dream solution, it comes with challenges. With great power comes great potential for, let’s face it, accidental chaos.


Let’s take a brief detour back to the 1960s. Enter Eliza, one of the first chatbots, created by MIT’s Joseph Weizenbaum. Eliza mimicked a psychotherapist by reflecting users’ statements back to them. While this was meant to highlight how basic AI could simulate human conversation, people began opening up to Eliza as if she were a real therapist. Weizenbaum was horrified; he’d built a simple chatbot and accidentally created a mirror for human vulnerability.

Fast forward to today, and the stakes are far higher. AI systems like Suki, which transcribes consultations, are being introduced to save GPs time. But AI, no matter how advanced, is not infallible. Errors, particularly those stemming from biased or incomplete data, remain a concern. At the recent Royal Academy of Engineering event on AI in society, the conversation centred on how to implement AI responsibly. How do we ensure these tools empower doctors rather than diminish the human element of care?


This is where the IDEA process comes in:

1. Inform: Doctors and patients alike need clear, accurate information about how AI works in healthcare. Transparency builds trust. For example, explaining how Suki processes medical language can reassure GPs about the tool’s reliability.

2. Develop: Technology only works when people know how to use it. Training doctors to work alongside AI tools—and recognise their limits—enhances their capabilities. Imagine hands-on workshops where GPs practise reviewing AI-generated notes to refine their use of the system.

3. Enable: Armed with the right skills, practitioners can make confident decisions. AI might suggest diagnoses or flag anomalies, but the doctor’s expertise remains at the forefront. Enabling doctors to use AI as a complementary tool ensures it supports, rather than supplants, their judgement.

4. Animate: For AI to become a trusted ally, it needs to win hearts as well as minds. To build acceptance, we must tell stories of success: times when AI helped save a life, reduced stress, or improved patient care. By shifting perspectives, AI can be seen as a valuable partner, not an impersonal machine. 

The IDEA process provides a roadmap for integrating AI into healthcare in a way that informs, teaches, empowers, and inspires. Want to know how well your organisation embraces these principles? Take the Presentation Pulse Scorecard quiz to see how effectively you’re communicating, enabling, and animating change in your teams.

Remember, while AI offers immense potential, its true power lies in how we use it. The NHS doesn’t need to replace its doctors with robots—it needs to support them with tools that let them do their jobs better. After all, as Eliza proved all those years ago, technology can be a surprisingly human affair when handled wisely.