The Psychiatric Chatbot Will See You Now!
In a creepy advance on psychiatry’s inability to truly diagnose anything, we are now faced with artificial intelligence “bots” that purport to diagnose and treat mental conditions. This is a double menace: first, such tools encode the incompetence of psychiatry and its inability to do anything worthwhile in the truly holistic domain (whether by electronic interaction or any other approach); second, they wear an apparent veneer of scientific validation in the minds of many people. “Well, there’s an app, so they must know what they are doing”: that kind of faulty reasoning.
I find myself alarmed by the dreary and dangerous future this supposed “development” lays out before us.
It’s a new field of exploration called, would you believe, “computational psychiatry”, which blends technologies like artificial intelligence, virtual reality, and deep learning for the diagnosis and treatment of mental illnesses. Computational it may be; just remember the geeks’ key saying: GIGO. If you put garbage in, you get garbage out! There is no way that computers can improve on the programming they were given if that programming is faulty in the first place.
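To make the point concrete, here is a toy sketch (entirely my invention, not any real app’s code) of how a “diagnostic” model can only parrot back the flawed labels it was trained on:

```python
# GIGO in miniature: a toy "diagnostic" model trained on flawed human labels.
# The observations and labels below are invented purely for illustration.
from collections import Counter

# Suppose the human experts mislabel ordinary sadness as "depression".
training_data = [
    ("ordinary sadness", "depression"),    # mislabeled by the expert
    ("ordinary sadness", "depression"),    # mislabeled again
    ("clinical depression", "depression"),
]

def fit(data):
    """The 'model' just memorizes the most common label per observation."""
    counts = {}
    for observation, label in data:
        counts.setdefault(observation, Counter())[label] += 1
    return {obs: c.most_common(1)[0][0] for obs, c in counts.items()}

model = fit(training_data)
# The computer faithfully reproduces the human error: garbage in, garbage out.
print(model["ordinary sadness"])  # -> "depression"
```

The computer has not become wiser than its trainers; it has merely memorized their mistakes at scale.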
However, ever since Facebook opened its Messenger platform to developers, there has been an explosion of chatbots, several of which are explicitly marketed as mental health tools. Those who espouse this alarming new idea (whose short-term and long-term effects have not been explored to any degree) use the term “virtual therapist.” A very dangerous simplification.
There is the delusory suggestion that advancements in deep learning, virtual reality (VR), and artificial intelligence (AI) may bring an end to issues ingrained within the practice of clinical psychology (such as subjectivity and the difficulty of conducting large-scale studies), perhaps leading us into a new era of diagnosing and treating mental disorders.
This is naïve, as all such bots and apps have to be programmed by the same old inept psychiatrists and therapists who are failing in the first place. It will simply enshrine folly, ignorance and prejudice into the electronic domain.
Switching to an electronic interaction isn’t going to suddenly bring about magical new skills; it will only perpetuate the old, failing lack-of-skills. The gloss of electronic gadgetry will just make the dogma LOOK more attractive.
Computational psychiatry operates on the unproven tenet that researchers can better understand and treat mental illnesses using the aforementioned technologies. Applications vary: some researchers in the field apply mathematical theories of cognition to data mined from long-standing observations in order to diagnose and predict cognition, while others use virtual experiments to enable the pure study of human behavior.
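For the curious, the “mathematical theories of cognition” part usually means fitting a simple learning model to behavioral data. Here is a hedged sketch, with invented choice data and an arbitrary parameter grid of my own, of a standard Rescorla-Wagner style fit:

```python
# A hedged sketch of the model-fitting computational psychiatry relies on:
# a Rescorla-Wagner learning rule fit to invented choice data by grid search.
# All numbers and the parameter grid are illustrative assumptions, not real data.
import math

rewards = [1, 0, 1, 1, 0, 1]   # hypothetical observed outcomes
choices = [1, 1, 0, 1, 1, 1]   # hypothetical observed choices (1 = option A)

def log_likelihood(alpha, beta):
    """How well one (learning rate, choice temperature) pair explains the choices."""
    value = 0.0                # learned value of option A
    ll = 0.0
    for choice, reward in zip(choices, rewards):
        p_a = 1.0 / (1.0 + math.exp(-beta * value))  # softmax choice probability
        ll += math.log(p_a if choice == 1 else 1.0 - p_a)
        value += alpha * (reward - value)            # prediction-error update
    return ll

# Grid-search the parameters; the "diagnosis" is whichever parameters fit best.
best = max(((a / 10, b, log_likelihood(a / 10, b))
            for a in range(1, 10) for b in (1, 2, 4)),
           key=lambda t: t[2])
print("best-fitting alpha, beta:", best[0], best[1])
```

Note what this actually does: it finds the parameters that best satisfy the modeler’s assumed equation. If the equation is the wrong theory of the mind, the “diagnosis” is garbage dressed up as mathematics.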
Sarah Fineberg of Yale University in New Haven has recently published a study that used computational psychiatry to explore borderline personality disorder (BPD), a condition that the National Institute of Mental Health (NIMH) reports includes symptoms such as “ongoing instability in moods, behaviors, self-image, and functioning,” as well as “impulsive actions and unstable relationships.”
What the NIMH isn’t admitting is that we are, all of us, unstable in our moods, often shifting several times an hour. Now ordinary variability risks being elevated to a disease state. Only in extreme degrees is this instability a problem for the patient. Yet the apps or “bots” are trained only to look for the frequency of changes.
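To see how crude that is, consider a toy sketch (my invention; no real app’s threshold is quoted here) of frequency-only mood flagging:

```python
# A toy illustration of frequency-only mood monitoring. The mood log and the
# cut-off are invented; the point is that counting shifts cannot distinguish
# ordinary human variability from pathology.
moods = ["content", "irritated", "content", "anxious", "cheerful"]  # one afternoon

shifts = sum(1 for a, b in zip(moods, moods[1:]) if a != b)
FLAG_THRESHOLD = 3   # an arbitrary cut-off a bot might be programmed with

if shifts >= FLAG_THRESHOLD:
    print("flagged as 'unstable'")   # ordinary variability, now a 'symptom'
```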
But to add to the ridiculousness of this investigation, Fineberg observed only the responses of people with BPD to events in virtual environments. She used a game called Cyberball, in which avatars pass a ball to one another, with the patient controlling one avatar. Though patients believe the remaining avatars are controlled by other people, those avatars’ actions are actually determined by the computer.
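Here is a minimal sketch of how such Cyberball-style pass logic might be scripted (an illustration under my own assumptions, not the study’s actual code):

```python
# A minimal sketch of Cyberball-style pass logic: the computer-controlled
# avatars can be scripted to mostly exclude the participant. The inclusion
# rate below is an assumption of mine, not a figure from the study.
import random

PLAYERS = ["participant", "avatar_1", "avatar_2"]

def choose_receiver(current_holder, inclusion_rate=0.1):
    """Computer avatars pass to the participant only `inclusion_rate` of the time."""
    if current_holder == "participant":
        return random.choice(["avatar_1", "avatar_2"])
    others = [p for p in PLAYERS if p != current_holder]
    if random.random() < inclusion_rate:
        return "participant"
    return [p for p in others if p != "participant"][0]

holder = "avatar_1"
for _ in range(10):
    holder = choose_receiver(holder)
    print("ball passed to:", holder)   # the participant is mostly left out
```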
So people are now judged by their reactions to something that doesn’t even exist and is capable of deceiving the target person. Is anyone else alarmed about this? It gives me the medical creeps!
Fineberg found that BPD sufferers experienced greater feelings of rejection than non-sufferers when they did not receive the ball, and more negative feelings than non-sufferers even when they received the ball more often than the other avatars did.
Yes… so what? I’m waiting for the completion of that remark.
Outrageous and Unproven Claims
Not only can computational psychiatry be used to study the emotions of BPD patients, it can also help researchers understand their use of language, which some have posited differs from that of non-sufferers but which previously produced data too vast to analyze. “We and others have identified language features that mark psychological states and traits,” Fineberg told MIT Technology Review. “Computational models based on word-use patterns can predict which writers have psychosis or will progress to psychosis.”
I utterly refuse to believe this nonsense conclusion, which is opinion and NOT BASED ON FACT.
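For those wondering what a “computational model based on word-use patterns” amounts to in practice, here is a hedged sketch with invented texts and invented clinician-assigned labels:

```python
# A hedged sketch of a word-use classifier: bag-of-words features plus logistic
# regression over invented example texts. The texts and labels are illustrative
# only; Fineberg's actual models and features are not described in this article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "I feel watched all the time",
    "work was fine today",
    "they are reading my thoughts",
    "had a nice walk outside",
]
labels = [1, 0, 1, 0]   # 1 = hypothetical "at risk" label assigned by a clinician

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Whatever bias sat in the human labels is now baked into the "objective" model.
print(model.predict(vectorizer.transform(["they are watching me all the time"])))
```

Notice the catch: the model can only learn from the labels the clinicians supplied, which is exactly the GIGO problem described above.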
But Fineberg is not alone, it seems. These two strands of computational psychiatry (language and instability) are being used by other researchers to study other disorders, using virtual environments as clinical spaces and AI to find patterns in large swathes of data.
The use of AI to diagnose disorders and recommend treatments has gained traction in the world of apps, which are acting as “virtual psychotherapists” to treat a variety of mental disorders.
So now we have the cheesily named “Woebot” (woe bot, or robot spoken with a lisp, get it?), a chatbot that claims to use cognitive behavioral therapy principles to help combat depression. The results from a small test of the app were promising, with the majority of users reporting a significant reduction in depression symptoms. Alison Darcy, a lecturer at Stanford who pioneered the app, told Business Insider, “The data blew us away. We were like, this is it.”
Again, just an opinion.
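To illustrate (and this is NOT Woebot’s actual logic, which is proprietary; it is a deliberately crude sketch of my own), here is how scripted pattern-matching can masquerade as cognitive behavioral therapy:

```python
# A deliberately crude sketch of a rules-based "CBT chatbot". Not Woebot's code;
# it only illustrates how keyword matching plus canned reframing prompts can
# imitate a therapeutic conversation.
DISTORTION_PROMPTS = {
    "always": "That sounds like all-or-nothing thinking. Is it really *always* true?",
    "never":  "'Never' is a strong word. Can you recall one exception?",
    "should": "Watch for 'should' statements. What would you tell a friend instead?",
}

def reply(message):
    for keyword, prompt in DISTORTION_PROMPTS.items():
        if keyword in message.lower():
            return prompt
    return "Tell me more about that."   # the catch-all every chatbot falls back on

print(reply("I always mess things up"))
```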
Due to the novelty of such systems, no one has yet studied whether psychiatric interactions with a computer over an extended period of time are beneficial for patients. Darcy’s study had only 70 participants and lasted just two weeks, likely too short a period to produce any certainty about the app’s impact.
I mean: is that science? If we studied the potential harmful effects of smoking on a group of smokers for just two weeks, what would that tell us?
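Here is a back-of-envelope power check, under my own assumptions (a two-arm split of the 70 participants and a “medium” effect size of d = 0.5, neither of which is stated above):

```python
# Back-of-envelope statistical power for a two-arm trial of 70 participants.
# The 50/50 split and the "medium" effect size d = 0.5 are my assumptions.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=35, alpha=0.05)
print(f"power to detect a medium effect with 35 per arm: {power:.2f}")  # ~0.5
```

In other words, even before we ask about duration, a study this size has roughly coin-flip power to detect a moderate effect.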
In the end, it’s all about deception… and they admit it! Their reasoning runs: the whole idea of psychology is to study how a person’s perception colors empirical data, so if the senses are sufficiently fooled into believing a virtual scenario is “real,” the results of a VR-supported study are just as valid as one conducted in the real world.
This is an arrogant and ignorant assumption, not based on any fact. Where is the proof that experiencing virtual reality is identical to normal everyday living and engagement with fellow human beings? There isn’t any.
I’m sorry, but I don’t buy into this. Not because I am a Luddite who hates technology, but because I am a humanist and holistic expert in mental symptoms and moods. And I HATE MODERN PSYCHIATRY.
Eeeww factor: Did you know Facebook has the Woebot therapist? Maybe Mark Zuckerberg should get it to talk some sense and humanity into him!
Sources:
1. JMIR Ment Health 2017;4(2):e19 doi:10.2196/mental.7785
2. https://www.washingtonpost.com/news/on-small-business/wp/2017/06/08/facebook-now-has-a-chatbot-therapist-to-help-reduce-your-anxieties/?utm_term=.224c620a4224
Conflicts of Interest (be warned): The lead author of the study (Alison M. Darcy) is the founder of a commercial entity, Woebot Labs Inc. (formerly the Life Ninja Project), that created the intervention (Woebot) that is the subject of this trial, and therefore has a financial interest in that company. Woebot Labs Inc. covered the cost of participant incentives.