Will AI Be the Next Frontier in Behavioral Health?


In 2017, researchers from the World Well-Being Project conducted a study of 1,200 Facebook users to determine if they could create better screening and diagnostic tools for depression by examining users’ social media posts.

An algorithm analyzed more than 500,000 posts from consenting users, picking out linguistic cues (e.g., words related to depressed mood, loneliness and hostility) that might predict depression. The study found that these linguistic markers could predict depression with “significant” accuracy up to three months before a person received a formal diagnosis.
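To make the general idea concrete, consider the minimal sketch below. It is not the World Well-Being Project’s actual model, which trained a statistical classifier on many linguistic features; the cue lexicon and threshold here are hypothetical, for illustration only.

```python
# Minimal sketch of lexicon-based screening for depression-related language.
# NOT the World Well-Being Project's model; the cue words and threshold
# below are hypothetical placeholders, for illustration only.

DEPRESSION_CUES = {  # hypothetical cue lexicon
    "alone", "lonely", "sad", "hopeless", "tired", "hate", "cry", "hurt",
}

def cue_rate(posts: list[str]) -> float:
    """Fraction of words across a user's posts that match the cue lexicon."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in DEPRESSION_CUES) / len(words)

def flag_for_screening(posts: list[str], threshold: float = 0.02) -> bool:
    """Flag a user for clinician follow-up if cue density exceeds a threshold."""
    return cue_rate(posts) > threshold
```

A real system would weigh many features together rather than counting words from a fixed list, but the pipeline shape is the same: extract linguistic signals, score them, and route high-scoring cases to a human.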

Today, other researchers are expanding on that work, assessing the potential of artificial intelligence (AI) in psychiatry and therapy through facial recognition, text analysis software and other tools. The aim is to supplement clinicians’ efforts to spot mental illnesses earlier and to improve treatments for patients. But, as with many issues involving AI, researchers have questions about its effectiveness and remain wary of potential bias in detecting conditions, as well as other ethical issues.

A New Frontier in Detection

Daniel Barron, M.D., a Seattle psychiatrist and author of the new book “Reading Our Minds: The Rise of Big Data Psychiatry,” believes that access to quantitative data about conversations, facial expressions, intonations and other signals would add another dimension to clinical interactions.

If proven effective, AI behavioral health tools could aid in earlier diagnosis, some experts believe. Technology now in development could prove useful, Barron maintains in a recent report in The Guardian. Algorithms, he says, could help spot when a person’s facial expressions subtly change over time or when they are speaking much faster or slower than usual, which might signal a manic or depressive episode.

These technologies could help doctors identify these signs earlier than they otherwise would, Barron notes, because software would gather and organize the data. Between exams, a doctor could review the data, focusing on a clip of a recording flagged by the algorithm. Other information from the doctor’s records could be included as well. Additionally, data from apps, audio recordings and personal wellness or fitness devices could inform the clinician’s choice of the most effective treatment.
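As a rough illustration of how such flagging could work, the sketch below marks sessions whose speaking rate drifts well away from a patient’s own baseline. It assumes recordings have already been transcribed into word counts and durations; the data format and cutoff are assumptions, not any vendor’s API.

```python
# Sketch: flag sessions where a patient's speaking rate deviates sharply
# from their own baseline, as one signal a clinician might review.
# The session format and deviation cutoff are illustrative assumptions.
from statistics import mean, stdev

def speaking_rate(word_count: int, duration_min: float) -> float:
    """Words per minute for one recorded session."""
    return word_count / duration_min

def flag_sessions(sessions: list[tuple[int, float]],
                  z_cutoff: float = 2.0) -> list[int]:
    """Return indices of sessions whose rate is more than z_cutoff standard
    deviations from the patient's mean rate across all sessions."""
    rates = [speaking_rate(words, mins) for words, mins in sessions]
    if len(rates) < 3:
        return []  # too little history to establish a baseline
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(rates) if abs(r - mu) / sigma > z_cutoff]
```

The key design point is that each patient serves as their own baseline, so the flag tracks change over time rather than comparing patients against a population norm.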

Of course, the technologies used to evaluate patients still need to be proven effective, and many questions remain about potential bias and ethical issues. Johannes Eichstaedt, Ph.D., a psychologist at Stanford University, has worked with AI detection-screening systems; in comments to The Guardian, he gave them a C grade for accuracy.

3 Takeaways on AI Use in Behavioral Health

1. AI Data Diagnostic Value Remains in Question

Algorithms can track a sequence of facial expressions or words, but those are only clues to a person’s inner state. The data may help doctors recognize symptoms, but they can’t reveal what’s causing them.

2. AI Effectiveness Needs to Be Measured More Closely

AI can serve as a screening and early-warning system, Eichstaedt says, but as of now it can’t beat traditional patient survey methods.

3. Bias Needs to Be Addressed

If AI systems can be made more effective, researchers will have to pay close attention to unintended biases in the data used to train the technology. AI programs are trained on huge databases of personal information so they can learn to discern patterns, and in those databases white people, men, higher-income earners and younger people often are overrepresented, researchers note. A simple representation check, as sketched below, is one starting point.
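One concrete, if basic, way to surface that kind of skew is to compare the demographic makeup of a training set against a reference population before any model is trained. The field names and reference shares in this sketch are hypothetical.

```python
# Sketch: compare training-set demographics against reference population
# shares to surface overrepresentation before a model is trained.
# Group labels and reference percentages are hypothetical placeholders.
from collections import Counter

def representation_report(records: list[dict],
                          reference: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's share in the data to its reference share.
    Values well above 1.0 indicate overrepresentation."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {group: (counts.get(group, 0) / total) / share
            for group, share in reference.items()}

# Example: flag any group whose share is more than 1.5x its reference share.
# report = representation_report(training_records,
#                                {"18-29": 0.20, "30-49": 0.33, "50+": 0.47})
# overrepresented = [g for g, ratio in report.items() if ratio > 1.5]
```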
