Facebook is one of the biggest social media platforms. Users share posts, comment, send messages, and scroll through their feeds, among many other things. That makes it an excellent place for targeted ads: with as much data as Facebook has, marketers can easily figure out which ads to show you, tailored to your recent searches or liked posts. But some people are also trying to use Facebook for good, or at least to improve the diagnosis of mental illness.
On December 3, a group of researchers reported that they had managed to predict psychiatric diagnoses with Facebook data, using messages sent up to 18 months before a user received a diagnosis. The team worked with 223 volunteers, all of whom gave the researchers access to their personal Facebook messages. Using an artificial intelligence algorithm, the researchers analyzed characteristics extracted from these messages, as well as the Facebook photos each participant had posted, to predict whether they had a mood disorder (like bipolar disorder or depression), a schizophrenia spectrum disorder, or no mental health issues. According to their results, swear words were indicative of mental illness in general, while perception words (like see, feel, and hear) and words associated with negative emotions were indicative of schizophrenia. And in photos, more bluish colors were associated with mood disorders.
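To make that kind of pipeline concrete, here is a minimal sketch of how dictionary-based word counts and a simple photo-color measure might feed a classifier. The word lists, the "blueness" feature, and the logistic regression model are all illustrative assumptions; the article does not specify the researchers' actual features or model.

```python
# Minimal sketch (not the study's actual pipeline): turn a user's messages
# and photos into a small feature vector and fit a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical word categories echoing the article: swear words,
# perception words, and negative-emotion words.
CATEGORIES = {
    "swear": {"damn", "hell", "crap"},
    "perception": {"see", "feel", "hear"},
    "negative": {"sad", "angry", "afraid"},
}

def text_features(messages):
    """Fraction of a user's words falling in each category."""
    words = [w.lower().strip(".,!?") for m in messages for w in m.split()]
    total = max(len(words), 1)
    return [sum(w in vocab for w in words) / total for vocab in CATEGORIES.values()]

def photo_feature(photos_rgb):
    """Blue channel's mean relative to overall brightness, averaged over
    photos (the article links bluish photos to mood disorders)."""
    blues = [img[..., 2].mean() / max(img.mean(), 1e-9) for img in photos_rgb]
    return [float(np.mean(blues))] if blues else [0.0]

# Toy data: two "users", labeled 1 (has a diagnosis) and 0 (control).
users = [
    (["I feel sad and afraid", "damn this day"], [np.random.rand(8, 8, 3)]),
    (["lovely picnic today", "see you soon"], [np.random.rand(8, 8, 3)]),
]
X = [text_features(msgs) + photo_feature(photos) for msgs, photos in users]
y = [1, 0]

clf = LogisticRegression().fit(X, y)  # stand-in model; the study's is unspecified
print(clf.predict(X))
```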
To evaluate how successful their algorithm was, the researchers used a common metric in AI that measures the trade-off between false positives and false negatives. As the algorithm categorizes more and more participants as positive (say, as having a schizophrenia spectrum disorder), it will miss fewer participants who really do have schizophrenia (a low false negative rate), but it will mislabel some healthy participants as having schizophrenia (a high false positive rate). An ideal algorithm would have no false positives and no false negatives at the same time; such an algorithm would be assigned a score of 1. An algorithm that guessed randomly would have a score of 0.5. The research team achieved scores ranging from 0.65 to 0.77, depending on the specific prediction they asked the algorithm to make. Even when the researchers restricted themselves to messages from over a year before the subjects received a diagnosis, they could make these predictions substantially better than would be expected by chance.
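Though the article does not name it, the metric described (a perfect score of 1, random guessing at 0.5, a trade-off between false positives and false negatives) matches the area under the ROC curve, or AUC. A quick sketch with scikit-learn, using made-up labels and scores, shows the behavior:

```python
# AUC illustration on synthetic data: a random scorer hovers near 0.5,
# while a noisy-but-informative scorer lands well above it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)        # 1 = has the diagnosis

random_scores = rng.random(1000)              # uninformative predictor
informed_scores = y_true * 0.2 + rng.random(1000)  # positives score slightly higher

print(round(roc_auc_score(y_true, random_scores), 2))    # ~0.5, chance level
print(round(roc_auc_score(y_true, informed_scores), 2))  # ~0.68, near the study's 0.65-0.77
```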
According to H. Andrew Schwartz, a professor of computer science at Stony Brook University who wasn't involved in the study, these scores are comparable to those achieved by the PHQ-9, a standard, 10-question survey used to screen for depression. This result raises the possibility that Facebook data could be used for mental illness screening, potentially long before a patient would otherwise have received a diagnosis.
Michael Birnbaum, a professor at the Feinstein Institutes for Medical Research in Manhasset, New York, who led the study, believes that this type of AI tool could make a huge difference in the treatment of psychiatric illnesses. Researchers have previously used Facebook statuses, tweets, and Reddit posts to identify diagnoses ranging from depression to attention deficit hyperactivity disorder. But he and his team broke new ground by working directly with patients who had existing psychiatric diagnoses. Other researchers have not, in general, been able to work with clinically confirmed diagnoses; they have taken subjects' word for their diagnoses, asked them for self-diagnoses, or had them take questionnaires like the PHQ-9 as a proxy for diagnosis. Everyone in Birnbaum's study, in contrast, had an official diagnosis from a psychiatric professional. And because the researchers had definitive dates for when these diagnoses were made, they could attempt to make predictions from messages sent before the patients knew about their mental illnesses.
Sharath Guntuku, a professor of computer science at the University of Pennsylvania who wasn't involved in the research, cautions that, even though these algorithms achieve impressive results, they are nowhere near replacing the role of clinicians in diagnosing patients. "I don't think there will be a time, at least in my lifetime, where just social media data is used to diagnose a person. It's just not going to happen," Guntuku says. But algorithms like the one designed by Birnbaum and his team could still play an important role in mental health care. "What we are increasingly looking at is using these as a complementary data source to flag people at risk and to see if they need additional care or additional contact from the clinician," Guntuku says.
There is already precedent for using social media to prevent mental health crises. "Facebook and Google, they're already doing this at some level," Guntuku says. If a user searches for suicide-related terms on Google, the National Suicide Prevention Lifeline number appears before all other results; Facebook uses AI to detect posts that may indicate suicide risk and sends them to human moderators for review. If the moderators agree that the post indicates a real risk, Facebook can send suicide prevention resources to the user or even contact law enforcement. But suicide presents a clear and imminent danger, whereas the mere act of receiving a mental health diagnosis often does not; social media users may be willing to sacrifice more privacy to prevent suicide than to catch the onset of schizophrenia a bit earlier. "Any kind of public, large-scale mental health detection, at the level of individuals, is very tricky and very ethically risky," Guntuku says.
For his own part, Birnbaum sees a less grand, but nevertheless impactful, use case for this research. A clinician himself, he thinks that social media data could not only help therapists triangulate diagnoses but also aid them in monitoring patients as they progress through long-term treatment. "Thoughts, feelings, actions: they're dynamic, and they change all the time. Unfortunately, in psychiatry, we get a snapshot once a month, at best," he says. "Incorporating this sort of data really allows us to get a more comprehensive, more contextual understanding of somebody's life."
Researchers still have a long way to go in designing these algorithms and determining how to implement them ethically. But Birnbaum is hopeful that, within the next five to 10 years, social media data could become a normal part of psychiatric practice. "One day, digital data and mental health will really combine," he says. "And this will be our X-ray into somebody's mind. This will be our biopsy to help support the diagnoses and the interventions that we recommend."
Citations:
Huckins, Grace. "An AI Used Facebook Data to Predict Mental Illness." Wired, Condé Nast, www.wired.com/story/an-ai-used-facebook-data-to-predict-mental-illness/.