A patient’s health state can be characterized by a multitude of signals from many different data modalities. This high-dimensional, personalized data stream aggregated over patients’ lives has spurred interest in developing new clinical AI models. One of the rate-limiting factors in developing AI models that generalize to real-world scenarios is the very attribute that makes the data exciting—their high-dimensional nature.
At DiMe’s #AskMeAnything Journal Club, author Visar Berisha, PhD will lead a discussion on how “the curse of dimensionality” can doom models to failure, even when they seem to work well during development. As we explore the key highlights of his npj Digital Medicine publication “Digital medicine and the curse of dimensionality,” Visar will also offer suggestions on how to develop clinical AI models that are more likely to fare well during prospective validation. Register now to join the discussion!
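The failure mode at the heart of the curse of dimensionality can be seen in a few lines of code. In this illustrative sketch (not from the publication; all data and numbers are synthetic), a model is "developed" by searching 1,000 random features for the one that best predicts 20 random labels. It looks accurate on the development data purely by chance, then collapses to coin-flip performance on new data:

```python
import random

random.seed(0)
n_train, n_test, n_features = 20, 200, 1000

def make_data(n):
    # purely random features and labels: there is no real signal to find
    X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n)]
    y = [random.randint(0, 1) for _ in range(n)]
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

def accuracy(feature, X, y):
    # classify each sample by the sign of one feature
    return sum((x[feature] > 0) == bool(label) for x, label in zip(X, y)) / len(y)

# "develop" a model: pick the feature that best fits the random training labels
best = max(range(n_features), key=lambda f: accuracy(f, X_train, y_train))

train_acc = accuracy(best, X_train, y_train)
test_acc = accuracy(best, X_test, y_test)
print(f"development accuracy: {train_acc:.2f}")  # looks impressive
print(f"prospective accuracy: {test_acc:.2f}")   # near chance (0.5)
```

With far more features than samples, some feature will fit the development data well by luck alone, which is why prospective validation is the only honest test.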
The collaborators on The Playbook have published findings of a systematic review that identifies the need for more research funding related to digital clinical measures.
One finding stands out: of the 295 research studies on digital clinical measures published in the last two years, only one reported cybersecurity research, only one examined data rights and governance, and none reported research into the ethical implications of remote patient monitoring tools.
Join the co-authors of this manuscript in a discussion about the current state of funding for academic research, and help shape an integrated, coordinated effort across academia, its partners, and its funders to establish digital clinical measures as an evidence-based field worthy of our trust.
As digital measures become more complex, both in terms of 1) the technologies, methods, and data used to derive them, and 2) the aspects of health that they address, computer scientists, electrical engineers, and others from data or computing backgrounds are increasingly important members of this village.
What can we do to bridge the gap between a data-oriented background and the clinical application of those skills? Can a unified lexicon aid communication throughout this process?
Ieuan Clay and Jen Goldsack presented their publication, “It takes a village: development of digital measures for computer scientists” and opened up a discussion on the range of challenges and considerations where computer scientists can have a particular impact on the development of a new digital measure.
Why should you report your modeling plan or statistical analysis plan before seeing any data? Why should we all ditch the term ‘statistical significance’ but keep statistical evidence? And how? A fantastic discussion with Eric Daza, Lead Statistician for Digital Health Outcomes at Evidation Health, as he dives into key themes from his recent pieces: Artifice or intelligence? and Ditch ‘statistical significance’.
Our conversation explored two proposed changes to research practice in our field: 1) splitting all gathered data into a small number of random subsamples to test the reproducibility and replicability of results; and 2) for exploratory analyses, continuing to report CIs and p-values, but explicitly framing them as the uncertainty to expect when the analysis is repeated on new data.
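The first proposal can be sketched in a few lines. This is an illustrative example, not Dr. Daza's implementation: the dataset, effect size, and subsample count are all hypothetical. The idea is simply to estimate the same quantity in each random subsample and see whether the results replicate one another:

```python
import random
import statistics

random.seed(1)

# hypothetical dataset: e.g., per-participant changes in a digital measure
# (true mean effect of 250 with large individual variability)
data = [random.gauss(250, 900) for _ in range(300)]

def random_subsamples(values, k):
    """Shuffle the data and split it into k roughly equal random subsamples."""
    shuffled = values[:]
    random.shuffle(shuffled)
    return [shuffled[i::k] for i in range(k)]

# estimate the same quantity in each subsample; if the estimates (and their
# CIs) broadly agree, the finding is more likely to replicate in new data
for i, sub in enumerate(random_subsamples(data, 3), start=1):
    mean = statistics.mean(sub)
    sem = statistics.stdev(sub) / len(sub) ** 0.5
    lo, hi = mean - 1.96 * sem, mean + 1.96 * sem  # rough 95% CI
    print(f"subsample {i}: mean = {mean:6.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```

The cost of this design is wider intervals per subsample (each sees only a fraction of the data), which is exactly the trade Daza proposes: less apparent precision in exchange for a built-in replication check.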
Do we fully understand the sensitivity of gait speed as a potential endpoint for clinical trials? Matthew D. Czech, Isik Karahanoglu, Xuemei Cai, Charmaine Demanuele, and their colleagues aimed to find out through their recent study, “Age and environment-related differences in gait in healthy adults using wearables,” in npj Digital Medicine. Their work shows that a single lumbar-worn sensor can be used to monitor gait under free-living conditions and capture meaningful information about real-world function that might not be attainable in controlled settings. It also shows that, despite higher variability, at-home gait speed captured age-related group differences in healthy volunteers that were not observed during in-lab gait assessments. Furthermore, they present a statistical methodology for deriving the number of monitoring days required to reliably estimate at-home gait speed, which can be used to optimize clinical study design.
Science is catalyzed by new technology, but the process of technology adoption and implementation typically requires more time and ingenuity than is often appreciated. Join us for DiMe’s #AskMeAnything Journal Club with Dr. David Shaywitz as he reviews the life cycle of technology innovations (the Perez model), and then introduces five new books he recently reviewed in the Wall Street Journal that attempt to contextualize where we are in the artificial intelligence (AI) journey.
A physician-scientist by training, Dr. Shaywitz has focused his career on biomedical innovation as an operator and investor. In January 2020, he founded Astounding HealthTech, advising senior executives on digital health and connected fitness. He is a lecturer in the Department of Biomedical Informatics at Harvard Medical School and an adjunct scholar at the American Enterprise Institute in Washington, D.C. He lives in the Boston area with his wife, three daughters, and a clumber spaniel named Roscoe.
Listen in to this #AskMeAnything Journal event and join the conversation with Dr. Shaywitz and your fellow DiMe colleagues. You can check out his review of several AI books here!
The EVIDENCE (EValuatIng connecteD sENsor teChnologiEs) checklist promotes high-quality reporting in studies where the primary objective is an evaluation of a digital measurement product or its constituent parts. The checklist is a product of DiMe’s most recent Tour of Duty and will be published on May 18.
The EVIDENCE checklist is applicable to five types of evaluations: (1) proof of concept; (2) verification, (3) analytical validation, and (4) clinical validation, as defined by the V3 framework; and (5) utility and usability assessments. Using EVIDENCE, those preparing, reading, or reviewing studies evaluating digital measurement products will be better equipped to identify the reporting requirements needed to drive high-quality research. With broad adoption, the EVIDENCE checklist will serve as a much-needed guide to raise the bar for quality reporting in published literature evaluating digital measurement products.
Did you miss it? You can still check out the following:
With the adoption of digital phenotyping in clinical research and patient care, a common vision for the future of these technologies remains unclear. Listen in to this Journal Club recording with author Anzar Abbas, PhD, who opened up a discussion on his work, “Digital Measurement of Mental Health: Challenges, Promises, and Future Directions.” The discussion explored how to classify emerging tools for the digital measurement of mental health and discussed the promises and challenges they face.
Anzar Abbas is a neuroscientist focused on developing technology to improve the measurement of health, increase access to care, and inform clinical decision-making using data-driven insights. He is one of the co-creators of OpenDBM, an open-source library of methods in digital phenotyping.
Davis B., Ahmed A., Elsner N., Miranda W. “Personalized therapies in the Future of Health: Winning with digital medicine products.” Deloitte Insights, March 2021.
Narayan V. A., et al.; RADAR-CNS Consortium. “Using Smartphones and Wearable Devices to Monitor Behavioral Changes During COVID-19.” J Med Internet Res 2020;22.
Manta C., Patrick-Lake B., Goldsack J. C. “Digital Measures That Matter to Patients: A Framework to Guide the Selection and Development of Digital Measures of Health.” Digital Biomarkers 4(3) (2020).
Izmailova E., Godfrey A. A., Vandendriessche B., Bakker J. P., Fitzer-Attas C., Gujar N., Hobbs M., … Zipunnikov V. Clinical and Translational Science (2020).
Gerke S., Stern A. D., Minssen T. “Germany’s digital health reforms in the COVID-19 era: lessons and opportunities for other countries.” npj Digital Medicine 3, 94 (2020).
Pratap A., et al. npj Digital Medicine 3(1), 1–10 (2020).
Bent B., Goldstein B. A., Kibbe W. A., et al. “Investigating sources of inaccuracy in wearable optical heart rate sensors.” npj Digital Medicine 3, 18 (2020).
Mahadevan N., Patel S., et al. “Development of digital biomarkers for resting tremor and bradykinesia using a wrist-worn wearable device.” npj Digital Medicine 3, 5 (2020).
Mueller A., et al. “Continuous digital monitoring of walking speed in frail elderly patients: noninterventional validation study and longitudinal clinical trial.” JMIR mHealth and uHealth 7(11), e15191 (2019).
Bakker J. P., Goldsack J. C., Clarke M., Coravos A., Geoghegan C., Godfrey A., … Ramirez E. “A systematic review of feasibility studies promoting the use of mobile technologies in clinical research.” npj Digital Medicine 2(1), 47 (2019).
Dorsey E. R., Glidden A. M., Holloway M. R., Birbeck G. L., Schwamm L. H. “Teleneurology and mobile technologies: the future of neurological care.” Nature Reviews Neurology 14(5), 285 (2018).
Coravos A., Goldsack J. C., Karlin D. R., Nebeker C., Perakslis E., Zimmerman N., Erb M. K. “Digital Medicine: A Primer on Measurement.” Digital Biomarkers 3(2), 31–71 (2019).