LEGAL LIABILITY
Who’s to blame? Law firm Bevan Brittan partner Dan Morris says the emergence of digital healthcare raises important issues about liability when things go awry
Technology is radically changing the way healthcare is delivered. Whether it's patients consulting GPs through online primary care services, the use of clinically applicable artificial intelligence and deep learning in diagnosis and referral pathways, or robot-assisted surgery carried out across continents, rapid technological development is transforming the healthcare landscape. It follows that the law needs to keep pace with these changes, particularly when things go wrong and questions of liability arise. But is it doing so?

The reality is that liability in the digital age is an emerging area with many complex facets. It is a challenge for legal regimes to move at sufficient pace, but ultimately legislators and courts will wish to ensure an equitable distribution of loss, coherence in the law, and effective access to justice. Complexity and blurred lines of responsibility should not result in victims of harm being left uncompensated for their losses.
Existing liability regimes

Broadly speaking, when a patient suffers harm while receiving healthcare, the currently available avenues of redress are via tort, contract (both fault-based systems) or defective product laws such as the Consumer Protection Act 1987 (under which strict liability can apply). In some situations, claimants might pursue an action under a combination of these regimes. For example, where a surgeon implants a prosthetic hip, questions might arise about:

(i) the proper surgical technique (which would be dealt with in tort or contract, depending on whether the operation was performed in the public or private sector); and

(ii) the safety of the hip implant itself (which would be dealt with under the CPA, as was the case in the Pinnacle metal-on-metal hip group litigation1).
Are the existing regimes fit for purpose in the digital age?

The English law of liability has of course developed over centuries and has had to contend with every conceivable technological development along the way, whether that be railways, air travel, organ transplantation, or the latest endovascular stent grafts. It is often said that the great beauty and elegance of the common law, in particular, is its fluidity and its ability to adapt to the mores and problems
of each age. So in principle the law of liability should also be able to cope with emerging digital health technologies.2

But such technologies undoubtedly represent a step change. Because of their complexity, opacity, ongoing self-learning, intelligent adaptability and autonomy, it can sometimes seem almost impossible to determine why a harm has occurred and who should be held responsible for it.

There are also questions about the scale and replicability of harm. If an individual doctor interacting with an individual patient makes a negligent diagnosis or recommends the incorrect treatment, the harm (albeit sometimes catastrophic) will be limited to one individual. The same cannot be said of an algorithm. The widely reported Twitter spat3 between Babylon and @DrMurphy11 exemplifies just this point. Dr Murphy4 raised concerns about the appropriateness of advice given by Babylon's chatbot to a 67-year-old obese, diabetic patient who smoked and was presenting with central chest pain. The point is that if the triage algorithm is flawed, as Dr Murphy claims (and I make no judgment about that), then the same advice could be widely replicated and cause harm to a large number of individuals. It is little wonder, then, that some have questioned the ability of existing legal frameworks to get to grips with these issues.
HealthInvestor UK • April 2020