Abstract
Artificial intelligence (AI)-based digital phenotyping, including computational speech analysis, increasingly allows diagnostically relevant information to be collected from an ever-expanding range of sources. Such information usually captures human behaviour, which is a product of the nervous system, so digital phenotyping may be particularly helpful in diagnosing neurological illnesses such as Alzheimer’s disease. As illustrated by computational speech analysis for Alzheimer’s disease, however, neurological illness also introduces ethical considerations beyond the commonly recognised concerns regarding machine learning and data collection in everyday environments. Individuals’ decision-making capacity cannot be assumed. Understanding of analytical results will likely be limited, even as those results carry significance that is both highly sensitive and deeply personal. In a traditional clinical evaluation, there is an opportunity to ensure that information is relayed in a way that is tailored to the individual’s ability to understand results and make decisions, and that privacy is closely protected. Can any such assurance be offered as digital phenotyping technology continues to advance? AI-supported digital phenotyping offers great promise in neurocognitive disorders such as Alzheimer’s disease, but it also poses ethical challenges. We outline some of these risks as well as strategies for risk mitigation.