These wearable cameras use AI to detect and prevent medication errors in operating rooms

In the high-stress conditions of operating rooms, emergency rooms and intensive care units, medical providers can swap syringes and vials, delivering the wrong medications to patients.

Now a wearable camera system developed by the University of Washington uses artificial intelligence to provide an extra set of digital eyes in clinical settings, double-checking that meds don’t get mixed up.

The UW researchers found that the technology had 99.6% sensitivity and 98.8% specificity at identifying vial mix-ups.
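For readers curious about those figures, sensitivity and specificity are standard detection metrics computed from a confusion matrix. The sketch below shows the arithmetic; the counts are illustrative only, chosen to reproduce the reported percentages, and are not the study's actual data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts (not the study's raw numbers): 249 mix-ups caught, 1 missed,
# 415 correct draws confirmed, 5 false alarms.
sens, spec = sensitivity_specificity(tp=249, fn=1, tn=415, fp=5)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 99.6%, 98.8%
```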

“The thought of being able to help patients in real time or to prevent a medication error before it happens is very powerful,” said Dr. Kelly Michaelsen, an assistant professor of anesthesiology and pain medicine at the UW School of Medicine. “One can hope for a 100% performance but even humans cannot achieve that.”

The frequency of drug administration mistakes, particularly with injected medications, is troubling.

Research shows that at least 1 in 20 patients experience a preventable error in a clinical setting, and drug delivery is a leading cause of the mistakes, which can cause harm or death.

Across healthcare, an estimated 5% to 10% of all drugs given are associated with errors, impacting more than a million patients annually and costing billions of dollars.

To address the problem, researchers used GoPro cameras to collect videos of anesthesiology providers working in operating rooms, performing 418 drug draws. They annotated the videos to identify the contents of the vials and syringes, and used that information to train their model.

“It was particularly challenging, because the person in the [operating room] is holding a syringe and a vial, and you don’t see either of those objects completely,” said Shyam Gollakota, a coauthor of the paper and professor at the UW’s Paul G. Allen School of Computer Science & Engineering. 

Given those real-world difficulties, the system doesn’t read the labels but can recognize the vials and syringes by their size and shape, vial cap color and label print size.
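The published paper describes the full system; purely to illustrate the idea of classifying by appearance rather than reading text, here is a minimal sketch of a frame classifier that maps a cropped image of a vial or syringe to a drug class. The architecture, drug list, and helper names are assumptions for illustration, not the authors' actual model, and a real system would be trained on annotated operating-room footage like that described above.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical drug classes; the real label set comes from the annotated OR videos.
DRUG_CLASSES = ["propofol", "fentanyl", "rocuronium", "ondansetron"]

# A generic image classifier: cues such as vial size, shape, cap color and label
# print size are learned implicitly from pixels; no text/OCR reading is involved.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(DRUG_CLASSES))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_crop(image):
    """Predict the drug class for a cropped vial/syringe image (a PIL.Image)."""
    x = preprocess(image).unsqueeze(0)   # add a batch dimension
    with torch.no_grad():
        logits = model(x)
    return DRUG_CLASSES[int(logits.argmax(dim=1))]
```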

The system could ultimately incorporate an audible or visual signal to alert a provider that they’ve made a mistake before the drug is administered.
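Once the vial and syringe contents have been identified, the mismatch check that would drive such an alert can be simple. A hedged sketch, with hypothetical function and variable names:

```python
def check_drug_draw(vial_drug: str, syringe_label: str) -> bool:
    """Return True if the draw looks correct; otherwise flag a potential mix-up."""
    if vial_drug == syringe_label:
        return True
    # In a deployed system this is where an audible or visual warning would fire,
    # ideally before the drug reaches the patient.
    print(f"ALERT: syringe labeled '{syringe_label}' drawn from a vial of '{vial_drug}'")
    return False

check_drug_draw("propofol", "fentanyl")  # triggers the alert
```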

Michaelsen said the goal is to commercialize the technology, but more testing is needed before large-scale deployment.

Gollakota added that next steps will involve training the system to detect more subtle errors, such as drawing the wrong volume of medication. Another potential strategy would be to pair the technology with devices such as Meta smart glasses.

Michaelsen, Gollakota and their coauthors published their study today in npj Digital Medicine. Researchers from Carnegie Mellon University and Makerere University in Uganda also participated in the work. The Toyota Research Institute built and tested the system.