February 2026

We Can’t Fix Healthcare,
We Have to Rebuild It

Dom Pimenta

For ten thousand years, healthcare has been built on a single assumption: that medical knowledge is scarce.

This was true for almost all of that history. If you got sick in ancient Rome, in medieval England, in 1950s America, the bottleneck was always the same — you needed to sit in front of someone who knew things you didn’t. A shaman, a physician, a specialist. The entire apparatus of modern healthcare — the appointments, the referrals, the waiting lists, the insurance networks, the hospital systems — is downstream of this one fact. Knowledge was rare, therefore the humans who carried it were rare, therefore their time was the scarcest resource in the system.

“That assumption just stopped being true, and almost nobody is acting like it.”

Large language models now hold, within their weights, effectively the entirety of human clinical knowledge. Not approximately. Not “a useful subset.” The whole thing. And for the first time in history, you can have a realistic, sustained, deep medical conversation with a non-human intelligence. Not a chatbot that pattern-matches symptoms to WebMD articles. An actual diagnostic conversation — the kind where context accumulates, where family history matters, where the AI notices that your haemoglobin dropped 30 points over six months even though both readings were technically “normal.”

This changes everything. Not incrementally. Fundamentally.

But here’s what’s actually happening instead: we’re building faster horses.

Henry Ford’s famous insight — “If I had asked people what they wanted, they would have said faster horses” — has become a cliché, but it describes the current state of healthcare AI with painful precision. Every major health system in the world is trying to bolt AI onto existing workflows. Make the EHR a bit smarter. Help the doctor write notes faster. Summarise the discharge letter. Triage the inbox.

These are not bad things. But they completely miss the point. They’re like Blockbuster putting a recommendation engine on their in-store kiosks in 2006.

“The correct question isn’t ‘how do we make the current system faster?’ It’s ‘what would healthcare look like if we designed it today, knowing what we know?’”

Let me tell you what it would look like.

Seventy to eighty percent of correct diagnoses come from what the patient says alone. The history. The symptoms. The family background. The context. Migraine has no imaging, no blood test — it’s pure history. Almost every psychiatric condition is diagnosed entirely through conversation. Much organic pathology is the same: a 20-pack-year smoker with a wheeze and a productive cough has COPD, and there isn’t much ambiguity about it.

This means the most important act in medicine — the diagnostic conversation — is precisely the thing AI can now do. And not just do, but do continuously, contextually, with perfect memory, at any hour, in any language, for any number of patients simultaneously.

Now layer on top of that the ability to integrate diagnostic data — imaging, blood trends, pathology results — and you have something no human doctor has ever had: longitudinal awareness. Not a snapshot every six months. A continuous, contextual understanding of your health. A cancer survivor getting a follow-up CT scan doesn’t just receive a binary “clear” or “not clear” — they can discuss with their AI what the subtle shadows mean, how the findings compare to six months ago, what the radiologist’s uncertainty actually implies for their prognosis. That conversation currently doesn’t happen because no human clinician has time for it.

With AI — Tomorrow

A 55-year-old man, previous smoker. One morning he mentions to his AI that his throat felt scratchy when he swallowed. The AI has been watching. It knows his heart-rate variability has been declining for six weeks. He’s lost 2–3 kilos unintentionally. His father died of a metastatic malignancy at 56. So the AI doesn’t say “give it a week.” It orders bloods. The full blood count comes back with a haemoglobin of 130 — technically normal, but the AI knows this man used to run at 169. A 30-point drop is significant. Faecal occult blood positive. Endoscopy arranged. Stage 1 oesophageal cancer. Surgery within two weeks. Three weeks end to end.
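The key move in this story is comparing a result to the patient’s own baseline rather than to the population reference range. A minimal sketch of that check in Python, where the threshold, field names, and reference range are illustrative assumptions rather than clinical rules:

```python
from dataclasses import dataclass

# Illustrative population reference range for haemoglobin, adult men (g/L).
HB_REFERENCE = (130, 180)

# Illustrative threshold: flag a drop of 30 g/L or more from the
# patient's own baseline, even when both values sit inside the range.
SIGNIFICANT_DROP = 30

@dataclass
class Result:
    value: float       # haemoglobin in g/L
    months_ago: float  # how long before the current reading

def assess_haemoglobin(current: float, history: list[Result]) -> str:
    low, high = HB_REFERENCE
    in_range = low <= current <= high
    # Personal baseline: the patient's best previous reading.
    baseline = max((r.value for r in history), default=current)
    drop = baseline - current
    if drop >= SIGNIFICANT_DROP:
        # The population range says "normal"; the personal trend does not.
        return f"flag: {drop:.0f} g/L below personal baseline of {baseline:.0f}"
    return "normal" if in_range else "flag: outside reference range"

# The scenario above: 169 six months ago, 130 today.
print(assess_haemoglobin(130, [Result(169, 6)]))
# prints: flag: 39 g/L below personal baseline of 169
```

A range check alone would have waved this patient through; the longitudinal comparison is what does the work.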

Without AI — Today

That same man ignores the scratchy throat for months. Maybe years. He loses more weight. Eventually drags himself to his GP, who tells him it’s probably nothing. Twice. Three times. Two years later he has stage IV oesophageal cancer that’s metastasised to the liver. He’ll never return to work. His chemotherapy costs the system 20 to 50 times what that single surgery would have cost. And he still dies.

The difference between these two stories isn’t a technology gap. It’s a systems gap.

Χρόνος   vs   Καιρός
Chronos — Scheduled Time   ·   Kairos — The Opportune Moment

There’s a beautiful Greek distinction between two concepts of time. Chronos is the steady march — the ticking clock, the appointment at 2:30 on Thursday, the six-month follow-up you might or might not attend. Kairos is the opportune moment — the right time, the moment of readiness.

Current healthcare operates entirely in Chronos. You see your doctor when there’s a slot. You get your scan when the waiting list permits. You present with symptoms when they’re bad enough to overcome the friction of booking an appointment, taking time off work, sitting in a waiting room.

An AI healthcare system operates in Kairos. It meets you in the moment you need it. On your walk to work. At 2am when you can’t sleep and you’re worried about that lump. In the accumulated pattern of six months of subtle biosignal changes, long before you feel anything at all.

“This is the Netflix model applied to healthcare. Netflix didn’t just put TV shows on the internet. It fundamentally restructured how visual media is created, distributed, and consumed. Healthcare needs the same transformation.”

From episodes of care to a continuous stream. No one is ever “lost to follow-up.” No letter goes missing. No patient falls through the gaps. A continuous companion, a continuous diagnostician, a continuous preventative health system — always on, always there, as much or as little as you need.

So why isn’t anyone building this?

The honest answer is that the barriers aren’t technical. They’re structural. Healthcare is perhaps the most structurally defended industry on earth. Consider what you’re up against: clinician misalignment, embedded EHR vendors with multi-year contracts, SaaS companies defending their margins, procurement bureaucracies designed to prevent change, regulatory regimes built for a pre-AI world, insurance models that profit from complexity, and governments running health systems with the agility of aircraft carriers.

Every single one of these actors has rational reasons to resist change. And the system is designed so that any innovation has to get permission from all of them simultaneously. This is why every large health system’s AI strategy amounts to “pilot projects.” Nothing that threatens the core operating model.

The Blockbuster Analogy

This is exactly what happened to the entertainment industry before Netflix. Warner Brothers tried to adapt. Blockbuster tried to adapt. They ran incremental experiments within their existing business models. It didn’t work, because the existing model was the problem. You can’t Netflix-ify a video rental store. You have to build Netflix.

The same is true here. You cannot incrementally transform a healthcare system built on the assumption that knowledge is scarce into one built on the assumption that knowledge is abundant. The architecture is wrong at every level. You have to start from scratch.

What does “from scratch” actually mean?

Start with the patient. Not the provider, not the payer, not the regulator. The patient.

Give them an AI-native electronic health record that they own. It ingests their imaging, their blood results, their prescriptions, every interaction with any healthcare professional — all logged and auditable. It knows their family history because it’s been building that picture over years of conversation.

This system is their first point of contact for any health concern. Not a phone queue. Not a receptionist who writes nothing down. An AI that listens, remembers, contextualises, and acts. It can order blood tests to your home. It can arrange imaging. It can escalate to a human specialist when — and only when — a human specialist is actually needed.
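That escalation rule can be made explicit as code. A toy sketch of first-contact routing, where the route names and criteria are invented for illustration:

```python
from enum import Enum, auto

class Route(Enum):
    AI_HANDLES = auto()         # conversation, monitoring, reassurance
    ORDER_DIAGNOSTICS = auto()  # bloods to the home, imaging
    HUMAN_SPECIALIST = auto()   # the scarce resource, used deliberately

def route_contact(red_flags: list[str], needs_workup: bool) -> Route:
    """Route a first contact: human attention only where irreplaceable."""
    if red_flags:
        return Route.HUMAN_SPECIALIST   # genuine red flags go to a human
    if needs_workup:
        return Route.ORDER_DIAGNOSTICS  # the AI arranges tests directly
    return Route.AI_HANDLES

# Most contacts never need a human in the loop.
print(route_contact([], needs_workup=False))
# prints: Route.AI_HANDLES
```

The point of writing it down is that the default branch is the AI, and the human is a deliberate, logged exception rather than the front door.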

“The current NHS model processes patients through a pipeline of scarce human attention. An AI-first model inverts this entirely — abundant AI attention as the default, scarce human attention reserved for where it’s genuinely irreplaceable.”

There’s another argument for AI-first healthcare that doesn’t get enough attention: safety.

We don’t log most of what happens in healthcare. We log what clinicians write down, if they write anything down at all. Receptionists don’t document their interactions. Patients don’t document theirs. Phone conversations, corridor consultations, the GP who glances at a result and moves on — none of this creates an auditable trail.

In the Swiss cheese model of medical error, these undocumented interactions are the biggest holes. Patients fall through them constantly. A missed letter. A result that nobody reviewed. A referral that was never sent. A conversation that was never recorded.

An AI system logs everything. Every interaction, every decision, every recommendation, every piece of context that informed that recommendation. Not because it’s trying to create a surveillance system, but because that’s simply how software works. The audit trail is a natural byproduct, not an additional burden. And that makes the system dramatically safer.

The business model for this is surprisingly straightforward.

Start with direct-to-consumer primary care. Ninety percent of all NHS contacts start and end in primary care. Build something so good that people will pay for it out of pocket — which tells you immediately whether you’ve actually built something patients want, as opposed to something a procurement committee approved.

Add specialty care as a turnkey consultation service. Then vertically integrate. Diagnostics first — blood work, pathology. Then imaging. Bring it all in-house. One operational model, one system, across any region, any scale. What would have taken 20 to 30 years to build as a traditional healthcare company, you build in 2 to 3 years, because AI means a small team can operate at the scale of a large organisation.

And here’s the radical part: make the subscription all-inclusive. Consulting. Diagnostics. Prescriptions. Everything covered in one monthly payment. No hidden fees, no co-pays, no surprise bills. If the barrier to seeking care is friction — financial, logistical, psychological — remove the friction entirely. Make accessing healthcare as thoughtless as opening Netflix.

This isn’t “best intentions” wrapped in a failing experiment, which is what the NHS has become. And it isn’t profit-over-patient dressed up as innovation, which is what the US has always been. It is unbelievably easy access to care, by design. A patient should never have to weigh the cost of asking a question. That calculation — “is this worth bothering the doctor about?” — kills people every single day. Eliminate it.

“The end state is a global healthcare company that doesn’t sell software to health systems. It is the health system.”

I should be honest about what makes this hard.

It will cost an enormous amount of money. Healthcare infrastructure — even AI-native healthcare infrastructure — requires real capital. Regulatory clearance in multiple jurisdictions is slow and expensive. Building trust with patients takes time. The political pressure will be immense, because you’re implicitly arguing that the existing system is failing, which it is, but nobody in power wants to hear that.

And there are genuine clinical safety questions that need rigorous answers. When does the AI escalate? How do you validate diagnostic accuracy at population scale? How do you handle the long tail of rare conditions? How do you ensure the system doesn’t subtly optimise for efficiency at the expense of the edge cases that matter most?

These are serious problems. But they’re engineering problems and operational problems. They’re not “is this possible?” problems. The gap between what AI can do today and what the healthcare system actually delivers to patients is so vast that even a cautious, safety-first AI system would represent a massive improvement over the status quo for the majority of patients.

The deepest reason to build this is moral, not commercial.

My uncle died of a missed cancer diagnosis. Repeated errors. The kind of thing that happens when a system built on scarce human attention fails in the way it was always going to fail — not through malice, but through the accumulated weight of too many patients, too little time, too many cracks to fall through.

That didn’t have to happen. And with the technology that exists today, it doesn’t have to happen to anyone else. But it will keep happening — every day, to thousands of people — as long as we keep trying to patch a system whose foundational assumption is no longer true.

Knowledge is no longer scarce. Time is no longer the binding constraint. The moral imperative is to rebuild — not reform, not optimise, not digitise — rebuild healthcare from the ground up, for the first time in ten thousand years.

We have the tools. We have the understanding. The only question is whether we have the nerve.