How Machine Learning Will Revolutionize Healthcare: Applications and Challenges

Machine learning and statistics in healthcare have potentially game changing applications, but also pose new challenges for modeling and analysis. Here we describe some of the applications and challenges.

Applications

In this article we describe how machine learning can be used to recommend and improve treatments to achieve desirable health outcomes: for instance, choosing the right treatment, or sequential treatment policy, for a given disease or patient. One might also want to encourage behaviors that help people stay healthy. With this in mind, we often have the following goals in healthcare settings, both clinical and mobile.

Recommend Treatment 1: Prevent an Event


The field of survival analysis deals with modeling the time to a first event. We often would like to prevent or delay this event: for instance, death, or the start of an unhealthy behavior such as smoking. In the death setting, we would like to choose a treatment or sequence of treatments (surgery, drugs, or some combination) that gives a high survival probability across most time windows. For instance, [1] shows survival curves for myocardial infarction (heart attack) for two groups: those treated with angioplasty, a surgical procedure, and those treated with streptokinase, a drug. Over an approximately eight-year period, survival probabilities are higher at every time window for angioplasty.
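To make the survival-curve comparison concrete, here is a minimal Kaplan-Meier estimator in plain Python. The follow-up times and event indicators below are made-up illustrative data, not the numbers from [1]: each subject contributes an observed time and a flag for whether the event (e.g. death) occurred or the subject was censored.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  observed follow-up times (e.g. years)
    events: 1 if the event (e.g. death) occurred at that time, 0 if censored
    Returns a list of (time, survival_probability) steps.
    """
    n_at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:  # event observed: the survival curve drops
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1  # this subject leaves the risk set either way
    return curve

# Hypothetical follow-up data for two treatment arms:
surgery_curve = kaplan_meier([1, 2, 3, 5, 8], [0, 1, 0, 0, 1])
drug_curve    = kaplan_meier([1, 2, 3, 5, 8], [1, 1, 0, 1, 0])
```

Comparing the two returned step functions at matching time windows is exactly the comparison made between the angioplasty and streptokinase groups.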

Similarly, in the setting of smoking, we might have treatments such as the patch, e-cigarettes, or behavioral therapy. We again want to choose a treatment which gives us a high abstinence probability across time windows.

Finally we might be able to perform sequential treatment policies, where at each time step we decide on a treatment, given the current health context or state. For instance, one can have the treatment decision be whether to transplant a liver at each time step, where the goal is to maximize the survival probability [2].
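A sequential treatment policy like the transplant example can be framed as a Markov decision process and solved by value iteration. The sketch below is a toy model loosely in the spirit of [2]: the states, transition probabilities, and rewards are all hypothetical numbers chosen for illustration, not clinical estimates.

```python
# Toy sequential-treatment MDP: at each step, given a health state, choose
# "wait" or "transplant" to maximize expected discounted survival reward.
# All states, probabilities, and rewards are hypothetical.

STATES = ["good", "poor", "dead"]
ACTIONS = ["wait", "transplant"]

# P[action][state] = {next_state: probability} -- illustrative numbers only
P = {
    "wait": {
        "good": {"good": 0.9, "poor": 0.1},
        "poor": {"poor": 0.6, "dead": 0.4},
        "dead": {"dead": 1.0},
    },
    "transplant": {
        "good": {"good": 0.8, "dead": 0.2},  # surgery carries its own risk
        "poor": {"good": 0.5, "poor": 0.2, "dead": 0.3},
        "dead": {"dead": 1.0},
    },
}
REWARD = {"good": 1.0, "poor": 0.5, "dead": 0.0}  # per-step survival reward
GAMMA = 0.95

def value_iteration(n_iters=500):
    """Compute state values and the greedy treatment policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(n_iters):
        V = {s: REWARD[s] + GAMMA * max(
                 sum(p * V[s2] for s2, p in P[a][s].items()) for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS,
                     key=lambda a: sum(p * V[s2] for s2, p in P[a][s].items()))
              for s in STATES}
    return V, policy
```

Under these made-up numbers the resulting policy waits while health is good and transplants once it deteriorates, which is the qualitative shape of policy one hopes such a model recovers.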

Recommend Treatment 2: Control a Biomarker

Example of a biomarker: blood pressure

Often a biomarker can be used as a surrogate or summary of the health state of an individual. A classic example is diabetes: rather than observing the disease state directly, one monitors blood glucose and controls it via insulin shots. A statistics or machine learning question might then be: if you want glucose levels to stay within some range, when do you give an insulin shot? At what dose?
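As a toy sketch of the when-and-what-dose question, the code below runs a simple proportional dosing rule against crudely simulated glucose dynamics. Every number and the dynamics themselves are made up for illustration; this is a control-loop sketch, not a medical dosing rule.

```python
# Toy biomarker-control sketch: decide when and how much insulin to give so
# a (crudely simulated) glucose level stays in a target range.
# All dynamics and constants are hypothetical -- not a medical rule.

TARGET_LOW, TARGET_HIGH = 80, 140   # mg/dL, illustrative target range

def choose_dose(glucose):
    """Simple proportional rule: dose only when above the target range."""
    if glucose <= TARGET_HIGH:
        return 0.0
    return 0.05 * (glucose - TARGET_HIGH)  # insulin units, hypothetical gain

def simulate(glucose=220.0, meals=(0, 30, 0, 0, 50, 0), drop_per_unit=20.0):
    """Each step: observe glucose, choose a dose, apply toy dynamics
    (a meal raises glucose, insulin lowers it). Returns the trajectory."""
    traj = [glucose]
    for meal in meals:
        dose = choose_dose(glucose)
        glucose = glucose + meal - drop_per_unit * dose
        traj.append(glucose)
    return traj
```

A learning approach would replace the fixed proportional gain with a policy estimated from data, but the decision structure (observe biomarker, choose dose, observe the effect) is the same.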

Control Behavior

Running: an example of a behavior one might want to encourage

Often one wants to encourage healthy behavior: examples include encouraging physical activity, reducing portion size, or encouraging meditation. In the physical-activity example, a machine learning question could be: at every time step, given some context for a person (outdoor temperature, summaries of previous activity, number of messages sent, etc.), should we deliver a text-based recommendation to increase activity? [3,4] If so, which recommendation should we deliver? The outcome variable is the cumulative number of steps taken over some duration of time into the future. In some ways this is similar to controlling a biomarker: physical activity in terms of steps is used as a surrogate for health. However, when controlling behavior you often expect only a short-term effect, whereas with many treatments for biomarkers, or those that aim to prevent death, you expect longer-term effects.
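The send-or-stay-silent decision above can be sketched as a simple epsilon-greedy bandit that keeps, for each coarse context, a running mean of the step-count outcome under each action. This is a much simpler scheme than the methods in [3,4]; the contexts and numbers are hypothetical.

```python
import random

# Epsilon-greedy sketch of "should we deliver a suggestion right now?":
# track the average step-count outcome for each (context, action) pair,
# mostly exploit the better action, occasionally explore.

class StepBandit:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.mean = {}    # (context, action) -> running mean of observed steps
        self.count = {}   # (context, action) -> number of observations

    def decide(self, context):
        """Return True to send a suggestion, False to stay silent."""
        if random.random() < self.epsilon:
            return random.choice([True, False])   # explore
        send = self.mean.get((context, True), 0.0)
        silent = self.mean.get((context, False), 0.0)
        return send >= silent                     # exploit the better action

    def update(self, context, action, steps):
        """Fold the observed step outcome into the running mean."""
        key = (context, action)
        n = self.count.get(key, 0) + 1
        m = self.mean.get(key, 0.0)
        self.count[key] = n
        self.mean[key] = m + (steps - m) / n

bandit = StepBandit(epsilon=0.0)   # epsilon 0 here for a deterministic demo
bandit.update("cold", True, 900)   # suggestion sent on a cold day: 900 steps
bandit.update("cold", False, 400)  # silent on a cold day: 400 steps
```

After those two (fabricated) observations, the bandit sends suggestions in the "cold" context. Real mobile-health trials such as [3] use far richer context and randomization schemes.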

Identify Refractory Patients

Many treatments only work on a subset of patients. Generally, patients start with one treatment. If it isn’t effective after a period of time (for instance it doesn’t improve measurements of some biomarker), another treatment is tried, and another, until either all treatment options are exhausted, an effective treatment is found, or a highly negative outcome such as death occurs.

Being able to predict which treatments will be effective on which patients can potentially save several rounds of treatment and drastically reduce patient suffering. For some diseases, such as cancer, finding an effective treatment sooner can mean catching the disease before it spreads.
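The try-until-effective loop described above can be sketched directly; the payoff of prediction is simply a reordering of the treatment list per patient, so effective options come up earlier. The treatments, patient responses, and orderings below are hypothetical.

```python
# Sketch of the treatment-sequencing loop: try treatments in order until one
# works (here, membership in responds_to stands in for "biomarker improved").
# A predictive model's job is to reorder the list so fewer ineffective
# rounds happen first. All data below is hypothetical.

def first_effective(treatments, responds_to):
    """Return (effective_treatment, rounds_tried), or (None, rounds)."""
    for rounds, t in enumerate(treatments, start=1):
        if t in responds_to:
            return t, rounds
    return None, len(treatments)

patient = {"B"}                        # this patient only responds to B
default_order = ["A", "B", "C"]        # standard clinical ordering
predicted_order = ["B", "A", "C"]      # model ranks B first for this patient
```

With the default ordering this patient endures one failed round before reaching an effective treatment; with the model's ordering, none.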

Challenges

While the above applications are high-impact, attacking them poses a number of challenges. Health data and problems differ from the standard setup of independent, identically distributed data found in many other fields.

Irregularly Sampled Time Series

Often in healthcare, measurements are taken irregularly in time. A classic example is appointments: one could have an appointment after one month, then another two weeks later, then another several months later. Further, the time between appointments may depend on one's health; that is, the appointment times are informative [5]. One could still fit a classic time series model to such data, but failing to account for either the irregularly spaced times or the informative observation times will introduce bias into the model.
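One simple way to handle irregular spacing (though not informativeness) is a Gaussian-kernel smoother, which weights each observation by how close in time it is to the query point, so no fixed sampling grid is assumed. The appointment times and readings below are hypothetical.

```python
import math

# Nadaraya-Watson kernel smoother over irregularly spaced measurements.
# Handles the irregular spacing; it does NOT account for informative
# observation times (see [5]).

def kernel_smooth(t_query, times, values, bandwidth=14.0):
    """Estimate the trajectory at t_query (days) from irregular observations."""
    weights = [math.exp(-0.5 * ((t_query - t) / bandwidth) ** 2) for t in times]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Appointments at day 30, then two weeks later, then months later:
times  = [30, 44, 150]
values = [120, 130, 110]   # e.g. systolic blood pressure readings
```

The bandwidth (here 14 days, chosen arbitrarily) controls how far in time an observation's influence reaches; the mixed-model and Gaussian-process approaches discussed next put this kind of smoothing on a proper probabilistic footing.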

Several techniques from stochastic processes and statistics are a natural fit here, each with its own pros and cons. Linear and generalized linear mixed models, as well as Gaussian processes, treat the observations of a health process as noisy observations of a true continuous-valued trajectory, while continuous-time hidden Markov models treat them as noisy observations of a true discrete-valued trajectory. More recently, [6] introduced a neural-network-based model for continuous-time trajectories.

Heterogeneous Data Sources

In many fields where machine learning is applied, one tends to deal with a single type of data source. Computer vision traditionally deals with pixel data without temporal structure (although that has changed a lot over the past few years), while natural language processing deals with unstructured text. In medicine, many types of data can arise; a few are:

  • Text data: free-form clinical notes describing a doctor’s summary of the patient’s health
  • ICU sensor data
  • Lab tests: blood/stool/urine/etc
  • Static covariates: age, weight, income, etc
  • Treatment
  • Event logs
  • And more

This poses many challenges. What conditional independence assumptions between these different types of data should we make? In line with this, do we model evolution of these conditionally (only model how some of these evolve, conditional on the others which we don’t explicitly model) or jointly (model how they evolve together)? These kinds of decisions can affect what kinds of models are easy to fit and/or make good predictions with, but we have to trade this off with what kind of models describe the health processes in a scientifically realistic way.

[1] Zijlstra, Felix, Jan CA Hoorntje, Menko-Jan de Boer, Stoffer Reiffers, Kor Miedema, Jan Paul Ottervanger, Arnoud WJ van’t Hof, and Harry Suryapranata. “Long-term benefit of primary angioplasty as compared with thrombolytic therapy for acute myocardial infarction.” New England Journal of Medicine 341, no. 19 (1999): 1413-1419.
[2] Schaefer, Andrew J., Matthew D. Bailey, Steven M. Shechter, and Mark S. Roberts. “Modeling medical treatment using Markov decision processes.” In Operations research and health care, pp. 593-612. Springer, Boston, MA, 2005.
[3] Klasnja, Predrag, Shawna Smith, Nicholas J. Seewald, Andy Lee, Kelly Hall, Brook Luers, Eric B. Hekler, and Susan A. Murphy. “Efficacy of Contextually Tailored Suggestions for Physical Activity: A Micro-randomized Optimization Trial of HeartSteps.” Annals of Behavioral Medicine (2018).
[4] Greenewald, Kristjan, Ambuj Tewari, Susan Murphy, and Predrag Klasnja. “Action centered contextual bandits.” In Advances in neural information processing systems, pp. 5977-5985. 2017.
[5] Lange, Jane M., Rebecca A. Hubbard, Lurdes YT Inoue, and Vladimir N. Minin. “A joint model for multistate disease processes and random informative observation times, with applications to electronic medical records data.” Biometrics 71, no. 1 (2015): 90-101.
[6] Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. “Neural Ordinary Differential Equations.” arXiv preprint arXiv:1806.07366 (2018).
