Individuals are becoming much more accustomed to having smart systems such as Siri, Alexa and Cortana provide assistance, guidance or advice on a daily basis. As this technology advances, more healthcare providers are exploring the viability of artificial intelligence and the benefits that machine learning might ultimately have for diagnostic medicine. Reliance on any emerging technology, however, must be tempered with careful attention to both the benefits and the dangers inherent in its use.

The use of intelligent machines to aid in diagnosing complex conditions might provide a workforce benefit – but it might also offer a dangerous shortcut that could result in a fatally incorrect diagnosis.

The Problem of Bias

For most, the term bias carries an inherently negative connotation. At best, bias can serve as a form of diagnostic shortcut. At worst, biases can form the backbone of a discriminatory decision-making process. As has been noted, “bias can be difficult to detect and thus may unintentionally find its way into the logic systems of machine learning products.”

As the machine searches for a diagnostic answer, it may begin to rely on gender, race or cultural similarities among patients. A poorly designed program might fall victim to overreliance on historical decisions and past successful diagnoses. Conversely, a well-designed system can be programmed to track either its own or historical decisions in an effort to recognize patterns that disproportionately favor certain results. While falling into a pattern does not mean a diagnosis is incorrect, it is crucial to recognize the presence of common results and not rely on them for an easy answer.
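The kind of self-monitoring described above can be illustrated with a minimal sketch. This is not any vendor's actual implementation; the record format, the `group_key` field and the `max_ratio` threshold are all hypothetical, chosen only to show how a system might tally its own diagnostic decisions by demographic group and flag disproportionate patterns for human review.

```python
from collections import defaultdict

def diagnosis_rates_by_group(records, group_key="gender"):
    """Tally how often each demographic group receives a positive
    diagnosis, so disproportionate patterns can be surfaced."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        if record["diagnosed"]:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, max_ratio=1.25):
    """Flag for human review when one group's diagnosis rate exceeds
    another's by more than the chosen (hypothetical) threshold."""
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest > 0 and highest / lowest > max_ratio

# Toy decision log: the system's own past diagnostic outputs.
records = [
    {"gender": "F", "diagnosed": True},
    {"gender": "F", "diagnosed": True},
    {"gender": "F", "diagnosed": False},
    {"gender": "M", "diagnosed": True},
    {"gender": "M", "diagnosed": False},
    {"gender": "M", "diagnosed": False},
]
rates = diagnosis_rates_by_group(records)
needs_review = flag_disparity(rates)
```

A flag here does not mean the diagnoses were wrong – only that the pattern warrants the kind of human scrutiny the paragraph above calls for.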

Is it Negligent NOT to Consult With an AI?

Numerous software companies are publishing data showing the increasing accuracy of artificial intelligence. One company, Babylon Healthcare Services, has an AI that correctly answered 81 percent of diagnostic questions on a general practitioner’s exam. By comparison, the average passing grade on the Royal College of General Practitioners exam over the past five years is 72 percent.

As new technologies emerge, the public insists that medical professionals stay on the cutting edge. Opponents of artificial intelligence are quick to forget that, at one time, X-ray technology, MRI and laparoscopic surgery techniques were new. If artificial intelligence is proven to reduce deaths caused by medical error by decreasing the impact of human error, it is conceivable that the failure to include established AI in a diagnostic consultation could be seen as negligent behavior.

In applications ranging from personal assistants such as Alexa to autonomous cars, artificial intelligence is now generally considered an “improving” technology rather than an “emerging” one, and many healthcare professionals already use AI in numerous ways. If you are concerned that your chosen medical professional may have missed or delayed a diagnosis, it is wise to discuss your situation with a trusted malpractice attorney.