Forget robot doctors: what's holding back AI in healthcare

Introduction: Beyond the Hype

We are constantly bombarded with futuristic visions of artificial intelligence in healthcare. Headlines promise AI doctors on our smartphones, robot surgeons performing flawless operations, and algorithms predicting disease years in advance. While the potential is immense, the reality on the ground is far more nuanced, challenging, and in many ways, more interesting than the hype suggests.

A recent panel brought together a diverse group of New Zealand experts—spanning public health, law, research, and data science—to discuss the real-world application of AI in health. Their conversation peeled back the layers of marketing and speculation to reveal the practical hurdles and unexpected opportunities that are actually shaping our medical future. This is a look behind the curtain at the surprising truths that emerged.

1. AI's Biggest Hurdle Isn't the Tech—It's Our Piecemeal Approach

The biggest barrier to AI revolutionizing healthcare may not be a technical limitation, but a human one: our tendency to adopt technology in a scattered, uncoordinated way. Researcher Sharon Sitters explained that new AI tools are often developed in isolation for niche functions and implemented sporadically across different parts of the health sector.

This "ad hoc" approach, where individual clinics or departments adopt small, specific solutions, creates a fragmented landscape that is the silent killer of big data's promise. It prevents the very large-scale learning that AI is supposed to enable. According to Sitters, until we move from isolated projects to a cohesive "pipeline of solutions," AI’s potential for systemic change will remain locked away. A standardized pipeline, she noted, is essential because it allows us to "get really good data of what is happening, big data sets of the improvements that we're seeing in health."

2. What is the Population Health Perspective?

Viewed through a public health lens, Negin Maroufi posited that, if the right foundations are in place, AI's greatest potential may lie in prevention, early warning systems, and health system planning: forecasting disease burden, for example, or detecting outbreaks earlier. At the same time, this raises important concerns around data completeness, equity, and population coverage. AI systems trained on incomplete or biased health data risk reinforcing existing inequities rather than reducing them. A highly accurate model that only benefits people already well served by the system cannot be considered a public health success.
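
To make the "early warning" idea concrete, here is a deliberately simple sketch of one family of techniques: flagging weeks whose case counts sit well above a historical baseline. Everything in it, the counts, the two-standard-deviation threshold, and the flag_outbreak_weeks helper, is invented for illustration; real surveillance systems use far more sophisticated models.

```python
# Toy illustration of an "early warning" signal: flag recent weeks whose
# case counts exceed the historical baseline by more than two standard
# deviations. All counts, names, and thresholds here are invented for
# demonstration; real surveillance systems use far richer models.
from statistics import mean, stdev

def flag_outbreak_weeks(history: list[int], recent: list[int], z: float = 2.0) -> list[int]:
    """Return indices of recent weeks whose counts look anomalously high."""
    baseline, spread = mean(history), stdev(history)
    return [i for i, count in enumerate(recent) if count > baseline + z * spread]

# A stable historical baseline (~11 cases/week), then a spike in week 2.
historical_counts = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
recent_counts = [11, 13, 29]
print(flag_outbreak_weeks(historical_counts, recent_counts))  # -> [2]
```

Even this crude rule makes Maroufi's caveat tangible: if the historical data under-counts cases in some communities, the baseline is wrong for them, and the alarm that should protect them never fires.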

3. We Have a 156-Page AI Policy, But No Foundation to Build On

On paper, New Zealand appears to be ahead of the curve. The country has a comprehensive, 156-page policy document for AI in health that, as Sharon Sitters described it, is "amazing at writing beautiful policy that... says all the right things." The problem? It remains almost entirely "rhetoric" without a practical path to implementation.

The core issue is a foundational one. Sitters argued that AI health policy should be developed in tandem with digital health policy, both to strengthen the policy base for AI implementation and to ensure that questions of data access and implementation support are not overlooked.

This foundational gap is the lack of a national digital health strategy. Without this, even the most beautifully written AI policy has no framework to operate within. Adding to this complexity, legal expert Benjamin Liu noted that the challenge isn't always about creating new laws. Often, he argued, the bigger problem is a failure to properly understand and implement the legal tools and frameworks we already have.

4. Forget an AI Doctor on Your Phone. The Real Goal Is Using AI to Create More Human Doctors.

General Practitioners (GPs) are already stretched: New Zealand has 74 GPs per 100,000 people, compared with 116 in Australia and 122 in Canada, and that ratio is projected to fall to 70 per 100,000 by 2031. With a nationwide GP shortage and an ageing population, it's tempting to see AI as the silver bullet. AI tools can alleviate workload and free GPs for more face-to-face patient care; the opposite is equally possible if such tools inadvertently add to the workload. The point is that AI may help address some of the key issues in NZ's GP crisis, but careless implementation policy could compound the very problems it seeks to solve. This complements an earlier point: when considering which AI tools to use and how, it is essential to focus on the main goal and not be distracted by AI novelty. Ultimately, we still need to train more health workers. If we don't, inequalities will deepen as well-funded practices adopt AI while under-resourced communities fall further behind.

Rather than asking whether AI will replace doctors, we should ask what kinds of jobs society will still need as automation reshapes the workforce. Many roles are being streamlined or eliminated by AI, but healthcare will remain essential, and is likely to become a larger share of employment. Caring for other humans is profoundly meaningful work; it's part of what makes us human. What is the future of meaningful employment and life purpose? What future do our kids have if they're constantly told that AI will take over office jobs? Can AI reduce barriers to becoming a healthcare provider? Why not invest in creating rewarding careers in healthcare, supported by technology?

AI should be a tool for social justice, not a substitute for human connection. Today, Māori life expectancy is 75.8 years, compared to 81.8 years nationally, and Pacific peoples average 76.9 years, while European/Other groups reach 82.8 years. These gaps reflect systemic inequities, including poorer access to care and higher rates of avoidable mortality. By democratizing medical education, AI can make training accessible to people from diverse backgrounds, creating a new generation of doctors, nurses, and lab assistants who reflect the communities they serve. This approach uses technology not to replace healthcare professionals, but to create more of them.

5. The Next AI Supercomputer Might Just Be Your Laptop

The prevailing image of powerful AI involves massive, energy-hungry data centers owned by tech giants. However, a surprising trend is emerging: the most advanced AI models are beginning to run on local, personal devices. This concept, known as "Edge AI," is already a reality.

Richard Dean described his team's recent experience with a portable Apple M3 device, highlighting its incredible power. This isn't just a technical curiosity; it represents a fundamental shift, moving AI's capabilities from the centralized cloud into the hands of local practitioners.

"It allows us to download and run the largest language models, the frontier models, the really big ones... all locally on a device kind of this big," he explained.

The implications are significant. Running sophisticated AI tools locally means they can operate securely within a hospital's network or on a GP's computer, without sending sensitive patient data to the cloud. This trend has the potential to solve healthcare’s most persistent privacy and accessibility dilemmas, making powerful AI available even in remote areas.
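
To ground that point, here is a minimal sketch of what local inference can look like in practice, assuming a locally running Ollama server (a popular open-source tool for hosting models on-device); the model name, prompt, and summarise_locally helper are illustrative placeholders, not anything the panel described.

```python
# Minimal sketch of on-device ("edge") inference: patient text is sent
# only to a model server on localhost, never to an external cloud API.
# Assumes an Ollama server is running locally with a model already
# pulled; the model name and prompt are illustrative placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def summarise_locally(clinical_note: str, model: str = "llama3") -> str:
    """Ask the locally hosted model to summarise a clinical note."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Summarise this clinical note in plain language:\n{clinical_note}",
        "stream": False,  # ask for one complete JSON response, not a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(summarise_locally("Pt presents with 3/7 hx of productive cough and mild fever."))
```

The only network hop is to localhost, so the note never leaves the machine; pointing the same code at a cloud endpoint is exactly the one-line change this deployment pattern is meant to avoid.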

6. Stop Asking if People "Trust" AI. The Better Question Is Whether They're "Comfortable" With It.

"Trust" is a loaded and absolute term. When we ask the public if they "trust AI" with their health, the answer is often hesitant. But the panel suggested we might be asking the wrong question. A more nuanced and useful metric is "comfort."

The moderator, Gill Dobbie, highlighted a key distinction by citing a survey of people aged 65 and over. When asked if they had "comfort in their data being shared to make population type decisions using AI"—in other words, to help doctors understand health trends across the entire population—a perhaps surprising 80% said yes. This reveals a critical insight: people may not be ready to trust an algorithm with a personal, life-or-death decision, but they are broadly comfortable with their anonymized data being used for large-scale public health research. As Benjamin Liu pointed out, public perception of risk is a key factor in adoption, and focusing on comfort may be a more practical path to building acceptance.

Conclusion: The Questions We Should Be Asking

The conversation about AI in healthcare is often dominated by what is technologically possible. But as the panel revealed, the more important questions are about strategy, foundation, purpose, and perception. The real challenges aren't just about building better algorithms, but about creating coordinated systems, establishing foundational policies, and deciding what kind of future we truly want.

As AI becomes more powerful, the most important question isn't what the technology can do, but what we as a society will choose to do with it. What kind of healthcare future will we decide to build?