Mental health apps don't protect your data

Imagine calling a suicide prevention crisis hotline. Do you ask for their data collection policy? Do you assume that your data is protected and kept safe? Recent events may cause you to think more carefully about your answers.

Mental health technologies such as bots and chat lines serve people in crisis. Those people are among the most vulnerable users of any technology, and they should expect their data to be kept safe, protected and confidential. Unfortunately, recent dramatic examples show that extremely sensitive data has been misused. Our own research found that, when collecting data, developers of mental health–based AI algorithms are simply testing whether they work. They generally do not address the ethical, privacy and political concerns about how the algorithms might be used. At a minimum, the same standards of health care ethics should be applied to technologies used to provide mental health care.

Politico recently reported that Crisis Text Line, a nonprofit organization claiming to be a secure and confidential resource for people in crisis, was sharing the data it collects from users with its for-profit spin-off company Loris AI, which develops customer service software. A Crisis Text Line official initially defended the data sharing as ethical and “fully compliant with the law”. But within days, the organization announced that it had ended its data-sharing relationship with Loris AI, while maintaining that the data had been “handled securely, anonymized and scrubbed of personally identifiable information”.

Loris AI, a company that uses artificial intelligence to develop chatbot-based customer service products, had used data generated from more than 100 million Crisis Text Line exchanges to, for example, help service agents understand customer sentiment. Loris AI is said to have deleted all data it received from Crisis Text Line, although whether that extends to the algorithms trained on the data is unclear.

This incident and others like it reveal the growing value placed on mental health data as part of machine learning, and they illustrate the regulatory gray areas through which this data flows. The well-being and privacy of people who are vulnerable or perhaps in crisis are at stake. They are the ones who suffer the consequences of poorly designed digital technologies. In 2018, US border officials denied entry to several Canadians who had survived suicide attempts, based on information from a police database. Let’s think about that. Non-criminal mental health information had been shared through a law enforcement database to flag someone wanting to cross a border.

Policymakers and regulators need evidence to properly govern artificial intelligence in general, let alone its use in mental health products.

We surveyed 132 studies that tested automation technologies, such as chatbots, in online mental health initiatives. Researchers in 85% of studies did not address, either in the study design or in the reporting of results, how technologies might be used in negative ways. This is despite some of the technologies presenting serious risks of harm. For example, 53 studies used public data on social media — in many cases without consent — for predictive purposes, such as trying to determine a person’s mental health diagnosis. None of the studies we reviewed addressed the potential discrimination people might face if this data were made public.

Very few studies included input from people who had used mental health services: in only 3 percent did researchers appear to involve such people in any substantial way in the design, evaluation or implementation. In other words, the research that drives the field sorely lacks the participation of those who will bear the consequences of these technologies.

Mental health AI developers need to explore the long-term and potential adverse effects of using different mental health technologies, whether it’s how the data is used or what happens when the technology fails the user. Scholarly journal editors should require this for publication, as should institutional review board members, funders and others. These requirements should accompany the urgent adoption of standards that promote lived experience in mental health research.

In policy, most US states give special protection to conventional mental health information, but emerging forms of mental health data seem only partially covered. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) do not apply to direct-to-consumer health care products, including the technology used in AI-enabled mental health products. The Food and Drug Administration (FDA) and Federal Trade Commission (FTC) may play a role in evaluating these direct-to-consumer technologies and their claims. However, the FDA’s scope does not appear to extend to collectors of health data such as wellness apps, websites and social media, and therefore excludes most “indirect” health data. Nor does the FTC cover data collected by nonprofit organizations, which was a major concern raised in the Crisis Text Line case.

It is clear that the production of data on human distress involves much more than a potential invasion of privacy; it also presents risks to an open and free society. If people police their own speech and behavior out of fear of the unpredictable datafication of their inner world, the social consequences will be profound. Imagine a world where we must seek out expert “social media analysts” to help us craft content that appears “mentally well”, or where employers routinely screen potential employees’ social media for “mental health risks”.

Data from everyone, whether or not they have used mental health services, could soon be used to predict future distress or impairment. Experiments with AI and big data mine our daily activities in search of new forms of “mental health data”, which may escape current regulation. Apple is currently working with multinational biotech company Biogen and the University of California, Los Angeles, to explore the use of phone sensor data such as movement and sleep patterns to infer mental health and cognitive decline.

Analyze enough data points about a person’s behavior, the theory goes, and signals of poor health or disability will emerge. Such sensitive data creates new opportunities for discriminatory, biased and invasive decision-making about individuals and populations. How will data labeling someone as “depressed” or “cognitively impaired”, or as likely to become those things, affect a person’s insurance rates? Will individuals be able to contest such designations before the data is transferred to other entities?

Things are changing rapidly in the digital mental health sector, and more and more companies see the value of using people’s data for mental health purposes. A World Economic Forum report values the global digital health market at $118 billion and cites mental health as one of its fastest-growing sectors. A dizzying array of start-ups are jostling to become the next big thing in mental health, with “digital behavioral health” firms reportedly attracting $1.8 billion in venture capital in 2020 alone.

This flow of private capital stands in stark contrast to underfunded health care systems in which people struggle to access appropriate services. For many people, cheaper online alternatives to face-to-face support may seem like their only option, but this option creates new vulnerabilities that we are only beginning to understand.


If you or someone you know is struggling or having suicidal thoughts, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988, or use the online Lifeline Chat.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

