Wearable Technology in Healthcare: Detection and Recognition

We want to use wearable technology in healthcare to drive interventions. For example, an app might send a notification encouraging a user to exercise. Before we can intervene, though, an intermediate goal is the detection or recognition of key markers that represent or are associated with health. Here we describe several types of detection that are important.

Note that in mHealth, detection is usually a more general term than recognition: recognition describes activities, while detection can also cover physiological or mental states, where the term recognition is less used. This contrasts with how other fields use these terms; in computer vision, for instance, detection means recognition along with spatial localization.

Detect Activities and Events

Mobile devices and wearables can handle activity and event detection, both passively via sensors and actively by querying the user. One way to categorize these activities and events is by the goal of the intervention: 1) activities or events one wants to prevent, such as smoking or overeating; 2) those one wishes to encourage, such as walking; and 3) those requiring an emergency intervention after they happen, such as a fall.

For smoking, the authors of [1] use a wrist-worn sensor along with breathing patterns to classify smoking vs. not smoking via a support vector machine (SVM). The wrist-worn sensor captures movement patterns, while the breathing patterns help avoid false positives from activities with similar movements, such as eating or drinking. A similar idea was used in [2] to detect overeating.
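
To make this concrete, here is a minimal sketch, in Python with scikit-learn, of this style of classifier: an SVM over windowed features from both a wrist sensor and a respiration sensor. The feature layout and data are synthetic placeholders, not the actual features from [1].

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Each row holds features from one time window: four wrist-motion features
# (e.g., orientation and velocity statistics) and two respiration features
# (e.g., inhalation/exhalation durations). These are placeholders, not the
# features used in puffMarker.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 1 = smoking-puff window, 0 = other activity

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Classify a new window; the respiration features are what help rule out
# look-alike wrist gestures such as eating or drinking.
new_window = rng.normal(size=(1, 6))
print(clf.predict(new_window))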

For physical activities including walking, running, and bicycling, Google provides an Activity Recognition API for Android. This ‘automatically detects activities by periodically reading short bursts of sensor data and processing them using machine learning models. To optimize resources, the API may stop activity reporting if the device has been still for a while, and uses low power sensors to resume reporting when it detects movement.’ More generally, [3] gives a recent survey of the literature on human activity recognition.
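
The API itself is a black box to developers, but the general recipe it describes (window short bursts of sensor data, extract features, classify with a machine-learning model) is easy to illustrate. Below is a toy sketch of such a pipeline, not Google's implementation, using synthetic accelerometer data and a random forest.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, window=50):
    """Split an (n, 3) accelerometer stream into fixed windows and compute
    per-axis mean and standard deviation plus mean magnitude (7 features)."""
    feats = []
    for start in range(0, len(acc) - window + 1, window):
        w = acc[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean()]]))
    return np.array(feats)

rng = np.random.default_rng(1)
acc = rng.normal(size=(5000, 3))  # stand-in for a 3-axis accelerometer stream
X = window_features(acc)
y = rng.choice(["still", "walking", "running"], size=len(X))  # stand-in labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:3]))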

Finally, we have events that require an immediate, emergency intervention after they happen, such as a fall. Apple made headlines when it released fall detection with the Apple Watch Series 4. The underlying science had been studied previously [4,5].
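
A common baseline in the fall-detection literature is a simple threshold rule: look for a brief free-fall phase (low acceleration magnitude) followed shortly by an impact spike. Here is a minimal sketch of such a detector; the thresholds and timings are illustrative, not those of the Apple Watch or the cited papers.

import numpy as np

def detect_fall(mag, fs=50, free_fall=0.5, impact=2.5, window_s=1.0):
    """mag: acceleration magnitude in g, sampled at fs Hz.
    Flag a fall if a free-fall sample (mag < free_fall) is followed by an
    impact sample (mag > impact) within window_s seconds."""
    horizon = int(window_s * fs)
    lows = np.flatnonzero(mag < free_fall)
    for i in lows:
        if (mag[i + 1:i + 1 + horizon] > impact).any():
            return True
    return False

# Synthetic trace: normal motion, then a brief free fall, then an impact.
trace = np.concatenate([np.ones(100), np.full(10, 0.2), [3.0], np.ones(50)])
print(detect_fall(trace))  # True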

Depending on the type of activity or event, detection may be subject to noise, with both false positives and false negatives being an issue. To reduce false positives, the app may query the user to confirm that they did in fact perform the activity. To reduce false negatives, one may send regular queries asking whether the user has performed the activity since the last query. Note that the latter method does not yield an exact timestamp for the activity: one only knows that it occurred at some point since the last query.
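
The sketch below illustrates both query strategies, and in particular why the periodic query only brackets an event's time to an interval. All timings and message wordings are hypothetical.

from datetime import datetime, timedelta

last_query = datetime(2019, 1, 1, 9, 0)
query_interval = timedelta(hours=2)

def confirm_detection(event_time):
    """Strategy 1: on a sensor detection, ask the user to confirm it
    (guards against false positives)."""
    return f"At {event_time:%H:%M} we detected the activity. Did you do it? (y/n)"

def periodic_query(now):
    """Strategy 2: ask at regular intervals whether the activity occurred
    since the last prompt (guards against false negatives). A 'yes' only
    localizes the event to the interval (last_query, now]."""
    return (f"Did you perform the activity between "
            f"{last_query:%H:%M} and {now:%H:%M}?")

print(confirm_detection(datetime(2019, 1, 1, 10, 15)))
print(periodic_query(last_query + query_interval))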

Detect Moods

Mood detection is important, as we may want to use moods as predictors of events and try to control them to help cause or prevent those events. Alternatively, we may want to use interventions simply to improve mood and make people happier.

Traditionally, mood detection was done via user input through ecological momentary assessment (EMA) [6,7]. Here, an app sends users prompts throughout a study (generally every few hours) asking them questions about their emotions, their previous activities, and their environment. This has an advantage over questionnaires administered during clinical visits: EMA minimizes recall bias and gives a higher-frequency picture of when specific moods occurred. However, EMA suffers from high burden: it requires substantial effort on the part of the user, which often leads to lower completion rates than desired [8].
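
For concreteness, here is a minimal sketch of an EMA prompting schedule: prompts anchored every few hours, with random jitter so users cannot anticipate them. The spacing, jitter, and questions are illustrative choices, not a protocol from the cited studies.

import random
from datetime import datetime, timedelta

def ema_schedule(day_start, n_prompts=5, spacing_h=3, jitter_min=30, seed=0):
    """Return prompt times: evenly spaced anchors plus uniform random jitter."""
    rng = random.Random(seed)
    times = []
    for k in range(n_prompts):
        anchor = day_start + timedelta(hours=k * spacing_h)
        jitter = timedelta(minutes=rng.uniform(-jitter_min, jitter_min))
        times.append(anchor + jitter)
    return times

# Example questions covering emotions, activities, and environment.
questions = ["How happy are you right now (1-5)?",
             "What were you doing just before this prompt?",
             "Where are you right now?"]

for t in ema_schedule(datetime(2019, 1, 1, 9, 0)):
    print(f"{t:%H:%M}: deliver {len(questions)} questions")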

There are two trends that help address the burden of EMA: one is microEMA (μEMA), and the other is passive sensing. In μEMA [9], the questions are delivered via smartwatch rather than the phone, and “all EMA questions can be answered with a quick glance and a tap – nearly as quickly as checking the time on a watch.” In passive sensing, one tries to use sensors to predict what an unobserved EMA response would be. For instance, in [10] the authors train an SVM and then use it along with lagged values in a Bayes net to predict responses to stress questions, while in [11] the authors attempt to predict responses to craving questions using a conditional random field (CRF).
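
Below is a rough sketch of this passive-sensing pipeline in the spirit of [10]: an SVM produces a momentary stress score, and a temporal model over the current and lagged scores predicts the EMA response. As a simplification, logistic regression stands in for the Bayes net, and all data are synthetic.

import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
T = 300
X_sensor = rng.normal(size=(T, 8))     # stand-in per-minute sensor features
y_stress = rng.integers(0, 2, size=T)  # stand-in stress EMA labels

# Step 1: momentary stress score from an SVM.
svm = SVC(probability=True).fit(X_sensor, y_stress)
score = svm.predict_proba(X_sensor)[:, 1]

# Step 2: temporal model over the current score and its lagged values
# (logistic regression as a stand-in for the Bayes net in [10]).
lags = 3
X_temporal = np.column_stack([score[lags - k:T - k] for k in range(lags + 1)])
y_temporal = y_stress[lags:]
temporal = LogisticRegression().fit(X_temporal, y_temporal)
print(temporal.predict(X_temporal[:5]))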

One can think of the progression EMA → μEMA → passive sensing as moving from a medium-frequency, high-information, low-noise setting with high burden to a high-frequency, low-information, high-noise setting with low burden.

Conclusion

Detection is an important intermediate step in mHealth that can help us both understand temporal patterns of behavior or moods and drive interventions. Traditionally detection was done via querying the user, but it is increasingly done by passive sensing. Going forward we will also discuss physiological parameter estimation and spatial localization.

[1] Saleheen, Nazir, Amin Ahsan Ali, Syed Monowar Hossain, Hillol Sarker, Soujanya Chatterjee, Benjamin Marlin, Emre Ertin, Mustafa Al’Absi, and Santosh Kumar. “puffMarker: a multi-sensor approach for pinpointing the timing of first lapse in smoking cessation.” In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 999-1010. ACM, 2015.
[2] Zhang, Shibo, William Stogin, and Nabil Alshurafa. “I sense overeating: Motif-based machine learning framework to detect overeating using wrist-worn sensing.” Information Fusion 41 (2018): 37-47.
[3] Morales, Jafet, and David Akopian. “Physical activity recognition by smartphones, a survey.” Biocybernetics and Biomedical Engineering 37, no. 3 (2017): 388-400.
[4] Chen, Zhenyu, Yiqiang Chen, Lisha Hu, Shuangquan Wang, and Xinlong Jiang. “Leveraging two-stage weighted ELM for multimodal wearables based fall detection.” In Proceedings of ELM-2014 Volume 2, pp. 161-168. Springer, Cham, 2015.
[5] Koshmak, Gregory, Amy Loutfi, and Maria Linden. “Challenges and issues in multisensor fusion approach for fall detection.” Journal of Sensors 2016 (2016).
[6] Stone, Arthur A., and Saul Shiffman. “Ecological momentary assessment (EMA) in behavioral medicine.” Annals of Behavioral Medicine (1994).
[7] Shiffman, Saul, Arthur A. Stone, and Michael R. Hufford. “Ecological momentary assessment.” Annu. Rev. Clin. Psychol. 4 (2008): 1-32.
[8] Wray, Tyler B., Jennifer E. Merrill, and Peter M. Monti. “Using ecological momentary assessment (EMA) to assess situation-level predictors of alcohol use and alcohol-related consequences.” Alcohol Research: Current Reviews 36, no. 1 (2014): 19.
[9] Intille, Stephen, Caitlin Haynes, Dharam Maniar, Aditya Ponnada, and Justin Manjourides. “μEMA: Microinteraction-based ecological momentary assessment (EMA) using a smartwatch.” In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 1124-1128. ACM, 2016.
[10] Hovsepian, Karen, Mustafa al’Absi, Emre Ertin, Thomas Kamarck, Motohiro Nakajima, and Santosh Kumar. “cStress: towards a gold standard for continuous stress assessment in the mobile environment.” In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 493-504. ACM, 2015.
[11] Chatterjee, Soujanya, Karen Hovsepian, Hillol Sarker, Nazir Saleheen, Mustafa al’Absi, Gowtham Atluri, Emre Ertin et al. “mCrave: continuous estimation of craving during smoking cessation.” In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 863-874. ACM, 2016.
