Technology has the power to surprise us, and the progress of the last ten years offers ample proof of this. Data and artificial intelligence (AI) have changed the way we perceive work and how we take care of ourselves. Today, that progress is beginning to change the way we deal with mental stress and health too.
Chatbots for mental health are among the latest of these innovations. These bots are designed to offer people undergoing mental stress and anxiety a first step: a safe place to share their thoughts. They are built to be conversational, sensitive to keywords so they can detect the person's mood, available 24/7, and able to offer insights drawn from cognitive behavioural therapy (CBT), a structured talk therapy that has proved effective in dealing with anxiety and depression.
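To make the idea concrete, here is a minimal sketch of the keyword-based mood detection described above. The keyword lists and the CBT-style prompts are illustrative assumptions for this sketch, not taken from any real chatbot; production systems would use trained language models rather than word lists.

```python
# Minimal sketch: detect a mood from keywords, then reply with a
# CBT-style prompt that invites the user to examine the thought.
# All keywords and prompts below are illustrative, not from a real product.

MOOD_KEYWORDS = {
    "anxious": {"anxious", "worried", "nervous", "panic", "overwhelmed"},
    "sad": {"sad", "hopeless", "down", "depressed", "lonely"},
}

# CBT-style prompts question the thought rather than give advice.
CBT_PROMPTS = {
    "anxious": "What is the specific thought behind that worry? How likely is it, really?",
    "sad": "Is there another way to look at this? What would you tell a friend who felt the same?",
    "neutral": "Tell me more about how you're feeling.",
}

def detect_mood(message: str) -> str:
    """Return the first mood whose keywords appear in the message."""
    words = set(message.lower().split())
    for mood, keywords in MOOD_KEYWORDS.items():
        if words & keywords:  # any shared word counts as a match
            return mood
    return "neutral"

def respond(message: str) -> str:
    """Pick a CBT-style prompt for the detected mood."""
    return CBT_PROMPTS[detect_mood(message)]
```

Even this toy version shows why the design matters: the bot's helpfulness depends entirely on what its makers put into the keyword lists and responses, a point the article returns to below.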
However, will it be awkward talking to a bot rather than a real person? Not necessarily. While part of the population may think so, the younger generation, especially millennials and Gen Z, may feel otherwise. Digitally savvy, they may be more comfortable opening up to a responsive bot than to a real person. This makes sense when we consider that workplaces are becoming more remote and more of our conversations are moving online, limiting face-to-face interaction.
A WHO survey reveals that depression is one of the leading mental health disorders in the world, affecting over 300 million people. A clear link has been established between depression and suicide among people aged 15–29. Moreover, in day-to-day life, talking about mental health issues still carries a stigma in most places, or is dismissed as a fancy, first-world problem.
In such a scenario, having a first-aid kind of response in a chatbot can be useful. Let us be clear that these bots are not a solution; they are merely intended as a first step in dealing with one's feelings.
Although not yet widely used, they are not new either. In 2017, Dr Alison Darcy founded Woebot, billed as the world's first chatbot dedicated exclusively to mental health, aimed at young adults in college and graduate school. A randomised controlled trial at Stanford found that students aged 18–28 who used the therapeutic bot reported significantly reduced symptoms of depression within two weeks.
Today, there are many other such bots, some of which are available online for free. Kamlesh Dangi, group head-HR, InCred, agrees: "It is certainly a good idea as some from that age and generation are very comfortable with digital tech. It is a preferred form of engagement for them. While others will certainly find it odd to talk to a computer, a bot will certainly appeal to them to approach."
Are chatbots really the answer?
Well, not everyone is convinced, and it is debatable whether one ought to be. Mental health is a complex issue, and articulating how one feels can be extremely difficult even with a trained professional who can read facial expressions, nuanced inflections in tone and even body language. Compared with that, using a bot may seem superficial.
Moreover, there are other issues as well. A chatbot is trained using underlying neural networks and learning algorithms, and can potentially inherit the prejudices of its makers. Prasad Kulkarni, VP-global HR operations, Accelya Group, points out, "The algorithms behind the bot have to be designed in a way that ensures it behaves in a particular manner, which is helpful for the person talking to it on the other side. The people developing the bot need to be well aware of nuances of mental health."
Another area of concern is data security, privacy and trust. Many of these interactions will contain sensitive information that users may not have shared even with family or friends. With no fear of being judged or mocked, they may pour out their emotions without any filter. In doing so, they become vulnerable, for they have no idea how their data will be used.
While chatbots are not a panacea for mental health problems, they may prove useful as a support intervention, provided the technology is transparent. The use of such bots and the ethical issues surrounding them remain debatable, and they are not yet widely used in organisations. Whether they will be in the future remains to be seen.
What needs to be kept in mind is that AI is merely a support, not a solution. If we start relying on AI to resolve all mental health issues, we may be playing a dangerous game with ourselves.