In a world increasingly concerned about privacy, one in which health care records are both highly protected and a high-value target of cybercriminals, putting cameras in hospital rooms would seem to be a fraught topic.
But they could be just the thing for increasing hospital safety, provided they are connected to computers powerful enough to spot potential problems such as patient falls and bed sores before they occur.
That is the assessment at UC San Diego Health, which plans to hire Chicago-based Artisight Inc. to install its artificial intelligence-enabled camera systems in patient rooms, where they will watch continuously for the movements that experience shows often precede harmful incidents.
Dr. Christopher Longhurst, chief clinical and innovation officer at UC San Diego Health, said the university health system plans to install a “computer vision” system in at least some of its facilities in 2025, though those plans are still in development and will remain so for at least six months.
“We’re very bullish about the opportunities to use AI-enabled machine vision to help provide the safest possible care for our patients,” Longhurst said. “There are a lot of unanswered questions still, and before we move forward, we’ll be working with alacrity on issues of privacy and compliance and patient consent.”
But Artisight has designed a system that many medical providers are already using day in and day out.
Dr. Andrew Gostine, a critical care anesthesiologist and the company’s co-founder and chief executive officer, said that Artisight’s system is designed to have a sort of digital amnesia. While a small camera continuously takes in what’s happening in a patient’s bed, the video stream is never stored in long-term computer memory. Instead, an onboard AI chip analyzes the endless stream of data in real time, looking for patterns of pixel movement, perhaps combined with the frequency of caregiver visits or other observable factors. Prior training has shown that such patterns can indicate a patient is about to try to get out of bed, raising the risk of a fall, or has not been turned recently enough, raising the risk of bed sores.
“The image comes straight into the (graphics processing unit), we run this calculation, and then the image is gone forever, so there is never any storing of the video,” Gostine said.
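Artisight has not published its code, but the ephemeral, on-device pipeline Gostine describes can be illustrated with a minimal sketch. Everything here, from the simulated frame source to the change threshold and alert callback, is a hypothetical stand-in, not the company’s actual software:

```python
import numpy as np

def motion_score(prev, frame):
    """Fraction of pixels whose brightness changed noticeably."""
    diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
    return float((diff > 30).mean())

def watch(frames, alert, threshold=0.25):
    prev = None
    for frame in frames:            # each frame exists only in RAM
        if prev is not None and motion_score(prev, frame) > threshold:
            alert()                 # notify staff; no imagery is attached
        prev = frame                # keep one frame for differencing, no more
        # nothing is ever written to disk; the stream is analyzed and gone

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    still = rng.integers(0, 255, (48, 64), dtype=np.uint8)
    feed = [still.copy() for _ in range(5)]
    feed.insert(3, rng.integers(0, 255, (48, 64), dtype=np.uint8))  # burst of movement
    watch(feed, alert=lambda: print("possible bed-exit motion detected"))
```

Once the loop moves to the next frame, the previous pixels are simply overwritten, which is the whole point of the “digital amnesia” design.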
While cameras at the bedside might seem like a big leap, they are already routinely used in certain situations, such as keeping watch over multiple patients considered potential fall or self-harm risks, allowing one worker to see multiple beds from a single monitoring station.
But the system UCSD is considering would be much more widely used.
The key, Gostine added, is that the system is able to distill “synthetic” versions of what target motion looks like, capturing how pixels change from frame to frame without needing to keep the actual video frames.
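One hedged way to picture such a “synthetic” summary: compress each pair of frames into a handful of numbers describing how much the pixels changed, then discard the frames themselves. The bin edges and frame sizes below are illustrative assumptions, not Artisight’s method:

```python
import numpy as np

def motion_histogram(prev, frame, bins=(0, 5, 20, 60, 256)):
    """Reduce a frame pair to four numbers summarizing pixel change."""
    diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
    hist, _ = np.histogram(diff, bins=bins)
    return hist / diff.size         # fractions of pixels per change band

rng = np.random.default_rng(1)
a = rng.integers(0, 255, (48, 64), dtype=np.uint8)
jitter = rng.integers(-10, 11, a.shape)
b = np.clip(a.astype(np.int16) + jitter, 0, 255).astype(np.uint8)
print(motion_histogram(a, b))       # mostly small changes land in early bins
```

A compact summary like this can be stored and fed to a classifier without any of the original video ever leaving the device.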
There is only one instance, he added, where the company’s system can capture video: the surgical field during operating room procedures. Even there, he said, the system is designed to automatically discard any footage in which a face or other potentially identifying feature, such as a tattoo, is inadvertently captured. These surgical videos are never tagged with a patient’s name and are used by surgeons for reference, so they can review how their work went after a procedure is finished.
“You can’t tell who someone is by looking at that video; nobody has a medical record number or Social Security number tattooed on their gallbladder,” Gostine said.
Artisight’s system also can be used to help make hospital operations more efficient, spotting, for example, when operating rooms are empty and ready for the next patient.
Artisight’s statements about data protection emphasize safeguarding sensitive health care data as required by the federal Health Insurance Portability and Accountability Act, or HIPAA. But many companies have made similar assurances, only for hackers to find a flaw in the underlying software that leads to a data breach. To convince the skeptics, Gostine said, Artisight hired a noted data privacy expert to audit all of its software.
“We meet the requirement for expert determination, which is the highest standard in HIPAA compliance,” Gostine said.
Other hospitals appear convinced. The executive said that about 90 hospitals are already using Artisight systems, and an additional 400 are under contract to follow suit.
Using the latest in machine learning, which many now call artificial intelligence, to increase health care safety was the subject of a recent seminar convened by Longhurst in La Jolla. A small group of top researchers examined the state of the art and what it will take to move closer to the goal of eliminating preventable medical errors expressed in “To Err Is Human,” the seminal report on the topic published in 1999 by the Institute of Medicine.
That widely read and cited report estimated that “as many as 98,000 people die in any given year from medical errors that occur in hospitals.”
Nearly 20 years later, researchers found that while progress has been made in reducing preventable errors, such as conducting surgery on the wrong body part or allowing infection to spread inside hospitals, the goal of eliminating these mistakes has not yet been reached.
Longhurst himself experienced potentially deadly, yet preventable, blood clots in his legs after he was hospitalized following a severe biking accident. Now, finding himself in a leadership role a decade later, he has embraced machine learning as a significant way to prevent harm.
Already, a model has been successful in monitoring the medical data of emergency department patients for early signs of sepsis, a deadly overreaction of the immune system that can be treated with the early application of antibiotics. And there is a growing movement in the industry to have AI systems more involved in everything from helping to fill out paperwork to providing support in making accurate diagnoses.
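The article does not detail how that sepsis model works, but the general idea of continuously screening patient data can be sketched with a simple rule-based check in the spirit of the classic SIRS criteria, a deliberately crude, hypothetical stand-in for the machine-learning model actually in use:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    temp_c: float        # body temperature, Celsius
    heart_rate: int      # beats per minute
    resp_rate: int       # breaths per minute
    wbc_k: float         # white blood cells, thousands per microliter

def sirs_flags(v: Vitals) -> int:
    """Count how many classic SIRS criteria the patient currently meets."""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc_k > 12.0 or v.wbc_k < 4.0,
    ])

def screen(v: Vitals) -> bool:
    """Flag for clinician review when two or more criteria are met."""
    return sirs_flags(v) >= 2

print(screen(Vitals(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_k=9.2)))  # True
```

A real model weighs far more signals over time; the value of running it continuously is that no early warning slips past an overworked human.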
With the help of a $22 million grant from philanthropists Joan and Irwin Jacobs, UCSD is deeply engaged in the race to adapt such technologies to real-world medical situations.
“If we want to make quantum leaps in the safety of the health care we deliver, we’re going to need to think about how we use AI tools,” Longhurst said. “The machine vision system is one potential way of ensuring highly reliable care that isn’t always delivered today.”
The idea is that a computer watching a video feed, unlike a human, never tires; it raises an alarm for nurses to follow up on without, the argument goes, violating patients’ basic right to privacy.
What might an ethicist make of this assertion?
I. Glenn Cohen, a deputy dean at Harvard Law School and faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics, co-authored a viewpoint article in the Journal of the American Medical Association in 2020 that examined the “ethical and legal aspects of ambient intelligence in hospitals.” The piece counsels caution because such technology “not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care takes place.”
Does a system that does not store video and uses AI to examine feeds in real time make an ethical difference? Cohen said in an email that he would need more information on the specifics of the system to make any sort of firm determination. But the idea does seem to be headed in the right direction.
“Certainly, the lack of human review and the fact that feeds aren’t stored is a big help for the privacy issues,” Cohen said.
But he added that in any such system where patients are being recorded, patient notification and consent are table stakes. And these sorts of systems must decide how to handle other information that may pass before their unblinking eyes.
“I would want to know if it has been designed in a way that avoids ‘catching’ some things that might be concerning, such as elder abuse by a patient family member or physician misconduct,” Cohen said. “I would want to know how they balanced concerns about privacy against preventing those kinds of issues.”
Gostine said that the system does not do any facial recognition or tracking of workers. But he added in a follow-up email that the system could be trained in the future to watch for other types of harm.
“We are probably one to two years away from having incredibly powerful AIs watch these cameras 24/7 for dollars a day,” he said. “They will watch for all types of harm that could befall a patient, whether it’s some action that leads to a surgical site infection, or an error made during surgery, or even someone deliberately harming an elder or attempting to bring a gun into the hospital.”
As for prior notification, the executive said that posters are put up in every room explaining the camera system’s purpose. Artisight’s systems actually include two cameras: a small unit that the AI watches continuously, and a larger unit on top that can pan and tilt. The second camera points away from the patient when not in use but can be tapped into by caregivers or loved ones for remote consultations and conversations.