By using an artificially intelligent algorithm to predict patient mortality, a research team from Stanford University is hoping to improve the timing of end-of-life care for critically ill patients. In tests, the system proved eerily accurate, correctly predicting mortality outcomes in 90 per cent of cases. But while the system is able to predict when a patient might die, it still cannot tell doctors how it came to its conclusion.
Predicting mortality is hard. Doctors must consider an array of complex factors, ranging from a patient’s age and family history to their response to drugs and the nature of the affliction itself. To complicate matters, doctors have to contend with their own egos, biases, or an unconscious reluctance to assess a patient’s prospects for what they are.
Sometimes doctors are spot on, but at other times they can be off by several months (if not years), predicting death either too early or too late.
This poses a problem for the accurate scheduling of palliative care. Typically, when a patient is not likely to live beyond a year, their treatment is moved to a palliative care team, who try to make the patient’s last days or months as free from suffering as possible.
To that end, they work to manage a patient’s pain, nausea, loss of appetite and confusion, and provide psychological and moral support, while respecting the social, cultural, and spiritual needs of the patient and their family.
But if a patient is transitioned to palliative care too late, they’re likely to miss out on this important stage of care. And if they’re admitted too early, it places an unnecessary strain on the healthcare system.
“All too often, advanced illness turns to a medical crisis, and patients end up in the ICU. There, events can attain a momentum of their own, resulting in increasingly aggressive interventions that often do not serve patients and their families well,” Ken Jung, a Stanford Medicine research scientist and co-author of the new study, told Gizmodo.
“One of the goals of the palliative care team is to engage in conversations with patients so that they can think through and articulate their preferences before they are in a crisis. Note that this may be appropriate even if the patient is not in danger of dying in the next year — for our purposes, mortality is simply a convenient surrogate for ‘really ill and could possibly benefit from having these talks.'”
Jung says this unmet need was first recognised several decades ago in surveys which showed that 80 per cent of Americans wish to die at home — but only around 35 per cent do so. He says the situation has improved a bit, but we “still have a long way to go.”
It’s important to get the timing just right, which is why Anand Avati and his team from Stanford University developed an AI-based system. The death-predicting algorithm is not meant to replace doctors, but to offer a tool that improves the accuracy of prognoses. In addition to improving the timing of palliative care, the system could also ease the burden placed on doctors when trying to predict patient outcomes, which is a laborious and time-consuming process.
“The problem we address is that only a small fraction of patients who can benefit from palliative care actually receive it — partly due to being identified too late, and partly due to shortage of [human resources] in palliative care services to proactively identify them early on,” Avati told Gizmodo. “We try to solve this problem.”
The system uses a form of artificial intelligence known as deep learning, where a neural network learns from large amounts of data. In this case, the system was fed data from the electronic health records (EHR) of adult and child patients admitted to either Stanford Hospital or Lucile Packard Children’s Hospital. After parsing through 2 million records, the researchers identified 200,000 patients suitable for the project.
The researchers were “agnostic” to disease type, disease stage, severity of admission (ICU versus non-ICU) and so on. All of these patients had associated case reports, including a diagnosis, the number of scans ordered, the types of procedures performed, the number of days spent in the hospital, medicines used, and other factors.
The deep learning algorithm studied the case reports from 160,000 of these patients, and was given the directive: “Given a patient and a date, predict the mortality of that patient within 12 months from that date, using EHR data of that patient from the prior year.” The system was trained to predict patient mortality within the next three to 12 months. Patients with less than three months to live weren’t considered, as that would leave insufficient time for palliative care preparations.
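To make that directive concrete, here is a minimal, hypothetical sketch of how a single training example might be framed from a patient’s record. The feature scheme (simple counts of EHR codes over the prior year) and the function name are assumptions for illustration only; the actual study feeds far richer EHR features into a deep neural network.

```python
from datetime import date, timedelta

def make_example(ehr_events, death_date, ref_date):
    """Frame one (features, label) pair for a patient at a reference date.

    ehr_events: list of (event_date, code) tuples, e.g. diagnosis,
    procedure, and medication codes from the patient's record.
    death_date: date of death, or None if the patient is alive.
    """
    window_start = ref_date - timedelta(days=365)
    # Features: counts of each EHR code observed in the year before the
    # reference date (the "prior year" of data the directive allows).
    features = {}
    for event_date, code in ehr_events:
        if window_start <= event_date < ref_date:
            features[code] = features.get(code, 0) + 1
    # Label: did the patient die within 12 months of the reference date?
    label = (death_date is not None
             and ref_date < death_date <= ref_date + timedelta(days=365))
    return features, int(label)
```

The key point of this framing is that the label is purely about timing, echoing Jung’s remark that mortality here is a surrogate for “really ill and could possibly benefit” rather than a clinical endpoint in itself.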
Armed with its new skills, the algorithm was tasked with assessing the remaining 40,000 patients. It did quite well, successfully predicting patient mortality within the three-to-12-month window in nine out of 10 cases. Around 95 per cent of patients who were assessed with a low probability of dying within that period lived beyond 12 months. The pilot study proved successful, and the researchers are now hoping their system will be applied more broadly.
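As a rough illustration of how those two reported figures might be computed over the held-out patients, here is a hedged sketch. The probability thresholds and the function name are assumptions for illustration, not values taken from the paper.

```python
def flagged_precision_and_low_risk_survival(probs, died, high=0.9, low=0.1):
    """probs: model-predicted probability of death within 12 months.
    died: 1 if the patient actually died within that window, else 0.

    Returns the fraction of high-probability patients who did die
    (the roughly nine-in-10 figure) and the fraction of low-probability
    patients who survived beyond 12 months (the ~95 per cent figure).
    """
    high_risk = [d for p, d in zip(probs, died) if p >= high]
    low_risk = [d for p, d in zip(probs, died) if p <= low]
    precision = sum(high_risk) / len(high_risk) if high_risk else float("nan")
    survival = (sum(1 - d for d in low_risk) / len(low_risk)
                if low_risk else float("nan"))
    return precision, survival
```

Note that both figures depend on where the thresholds are set, which is one reason such a score works better as a triage signal than as a literal countdown.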
“This is a sophisticated triage tool to improve access to palliative care using prognosis as a proxy,” Stephanie M. Harman, Clinical Associate Professor of Medicine at Stanford University and a co-author of the new study, told Gizmodo. “Its intent is not to communicate a time of death,” she said, adding that the system solves the problem of “identifying seriously ill patients who have unaddressed palliative care needs.”
To which Jung adds: “We generally believe that this sort of approach is critical to safe, effective, and ethical use of machine learning in clinical settings. Outside of very niche applications, we think it is almost always better, critical even, to have informed people in the loop.”
During the pilot study, the researchers noticed several shortcomings of the system that will need to be addressed before it can be rolled out for further use.
“For instance, it has turned out that it can be devilishly difficult to find a good time and place for the palliative care doctors to have a conversation with [hospital staff] in a timely manner,” said Jung. “Another detail that surfaced during the pilot study is that we discovered that some of the data that we assumed would be available to the system would not be there — at least in time to be of use.”
Jung says the pilot study was an effort to iteratively work out the kinks and see whether the system as a whole would run smoothly in practice.
Importantly, while the system can make a prognosis and alert healthcare practitioners to the need for end-of-life care, it can’t tell doctors why it came to that prognosis, or what kind of medical treatments the patient may require. This situation is similar to DeepMind’s AlphaGo and AlphaZero systems, which are capable of defeating the world’s best players at Go and chess.
Experts say such systems make moves that are “alien” and unpredictable, leaving the defeated players completely baffled. This is what AI developers call the “black box” problem: a machine comes up with an answer or solution to a problem, but with no obvious account of how it got there.
Thankfully, the decisions reached by the Stanford algorithm can be studied. Writing in the New York Times, physician Siddhartha Mukherjee explains:
Still, when you pry the box open to look at individual cases, you see expected and unexpected patterns. One man assigned a score of 0.946 died within a few months, as predicted. He had had bladder and prostate cancer, had undergone 21 scans, had been hospitalised for 60 days — all of which had been picked up by the algorithm as signs of impending death. But a surprising amount of weight was seemingly put on the fact that scans were made of his spine and that a catheter had been used in his spinal cord — features that I and my colleagues might not have recognised as predictors of dying (an M.R.I. of the spinal cord, I later realised, was most likely signalling cancer in the nervous system — a deadly site for metastasis).
So the good news is that we should be able to learn from this algorithm’s findings; we just have to do the work.
“We believe that a black-box model can lead physicians to good decisions but only if they keep human intelligence in the loop, bringing in the societal, clinical, and personal context,” Nigam Shah, a co-author of the new study, told Gizmodo.
What’s more, the system can only get better; it was only fed data from patients at two hospitals, so it’s limited and a bit biased. Moving forward, the system will parse through more diverse sets of data and use more sophisticated deep learning model architectures that are better suited for the task.
So yes, it’s unsettling that this system will keep getting better at predicting when we might die, but if that results in better end-of-life care, it’s a good thing.
A preprint of the paper, “Improving Palliative Care with Deep Learning,” is available at the arXiv preprint server.