Emerging Legal Questions: Who’s to Blame When AI Misdiagnoses You?

While AI is cropping up all over the world, across industries, businesses, and consumers' personal lives, one of the most interesting places it's appearing is in the field of medicine. Whether you're looking at your doctor's office or a hospital, AI is being used to diagnose and manage symptoms and conditions.

This only makes sense. With medical error and misdiagnosis being the third leading cause of death in the US, and around one-sixth of all NHS diagnoses being wrong, it's no wonder the field of medicine is turning towards AI technology. However, when a doctor gets your diagnosis wrong, you know it's their fault, but what happens if the AI gets it wrong?

Today, we’re going to find out.

The Success of AI

The introduction of AI into the medical field has proven wildly successful, and we're still in the early days, meaning much of the potential here remains untapped. Of course, it's important to remember that the technology is still young, but as it matures, it will only get better.

Already, there have been many documented cases of AI outperforming doctors across all kinds of diagnoses, from eye and vision conditions to identifying lung cancers. Seemingly, AI can, and will, be able to do it all. These diagnoses are proving to be more accurate, faster, and more personalised, resulting in an overall better experience for everyone involved.

But this doesn’t come without its fair share of problems.

Who’s to Blame?

As we said in the introduction, if you go to the doctor's and they diagnose your illness incorrectly, you know the fault lies with the doctor who saw you, and you know they should be held accountable for their actions. If equipment used during your diagnosis, such as a heart rate monitor, delivered false information, you'd be able to investigate what went wrong and who was to blame.

But when it comes to AI, the question of medical malpractice is still being debated, and opinions vary widely. At the time of writing, doctors are expected to use AI only as a tool to assist them in their diagnostic work.

In other words, AI is not currently meant to replace a doctor's own diagnosis, but rather to help them get it right. This matters because the doctor is still responsible for the final decision and will, therefore, remain accountable.

If this changes and AI becomes responsible for its own decisions, things start to get complicated. Since AI systems operate as black boxes, you can't easily see why the AI made the decision it did.

Sure, you could hold the AI itself accountable, but you can't punish it or seek compensation from it, which would probably leave the people affected unsatisfied. You could hold the developers responsible, but a development team could be made up of hundreds of people, and it would be very difficult to track down an individual and define what role they played.

Lastly, you could hold the organisation that gave you the diagnosis responsible, but this too is yet to be settled.

Summary

As you can see, the question of who to blame is still very much up in the air. While doctors use AI purely as a tool, things remain simple, but as AI becomes more deeply embedded in medicine, this is a decision that will need to be made.