Opinion: AI can help with mental health care — if we use it right

We are in the midst of a technological revolution: the rise of generative artificial intelligence.
The recent evolution of chatbots, exemplified by programs such as ChatGPT and Google Bard, offers problem-solving tools with numerous potential applications. One of the most promising uses of AI lies in the field of mental health medicine. However, because the effects of AI implementation are still unknown and mental health patients are a vulnerable population, chatbot therapy raises serious concerns.
These systems are built on machine learning, a method of training computer models to perform complex tasks. By training a model on vast amounts of data, machine learning produces reasonably accurate programs that identify patterns and use them to make predictions. As a rule, the more data a model is trained on, the more accurate its predictions become. The newest systems recognize patterns in language and select appropriate responses, which makes them effective at chatting online with humans.
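To make the pattern-recognition idea concrete, here is a minimal sketch in Python (using the scikit-learn library) of how a model can be trained on labeled example messages and then predict a category for text it has never seen. This is a toy illustration only: the tiny dataset and labels are invented, and real chatbots rely on vastly larger models and data.

```python
# Toy illustration: train a tiny text classifier on labeled messages,
# then use it to categorize a new message. Real chatbots are built on
# far larger models and datasets; this only shows the basic idea that
# a model trained on labeled data learns to recognize patterns in text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example data: short messages tagged with a mood label.
messages = [
    "I feel hopeless and tired all the time",
    "Nothing I do seems to matter anymore",
    "I'm nervous about my exam tomorrow",
    "My heart races whenever I think about work",
    "I had a great day with my friends",
    "Things are going really well lately",
]
labels = ["depression", "depression", "anxiety", "anxiety", "positive", "positive"]

# TF-IDF turns each message into a numeric vector; logistic regression
# learns which word patterns are associated with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# The trained model predicts a label for text it has never seen.
print(model.predict(["I can't stop worrying about everything"]))  # likely: ['anxiety']
```

With only six examples the predictions are unreliable; the point is simply that accuracy grows with the amount and variety of training data, which is why these systems are so data-hungry.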
In mental health care, AI can serve as a kind of chatbot therapist, treating depression and anxiety through online conversations with patients. Commercial companies have already implemented crude versions of this idea in bots such as Woebot and Wysa. These do not yet meet the standard of care, but they are nevertheless functioning therapy chatbots available online.
Levels of depression and other mental health problems are currently unprecedented. According to Johns Hopkins Medicine, one in four adults in the United States has a diagnosable mental disorder. Given this epidemic and a thinly stretched social service system, there is an unquestionable need for a widespread solution. The internet can offer affordable, accessible care to more individuals than traditional therapy can.
Reports vary on the effectiveness of AI chatbots, but the general consensus is that they are moderately to highly effective at reducing anxiety and depression, particularly for patients between 18 and 28 years old. The FDA has found several AI chatbots promising enough to grant them its “breakthrough” designation, which fast-tracks their review.
Most current mental health chatbots are privately funded, so few standards are in place and many concerning issues remain. Informed consent is necessary for any patient treatment: it requires that the patient be in a stable state of mind and understand both what they will experience and its potential side effects. In a hospital setting, informed consent is hard to obtain; over the internet, it is nearly impossible. Without ensuring that users understand what AI chatbots are, how they work, and what side effects they may have, using AI in therapy may be ill-advised.
There are myriad other reasons that chatbot therapy must be approached thoughtfully. Should an AI bot be required to report information such as suicidal thoughts or confessions of crimes? Other ethical issues around AI involve transparency and autonomy, and there are ongoing debates about accessibility, cost, and the messages AI promotes on sensitive topics such as politics, stress, or grief.
In the case of failed treatments, there is serious debate over responsibility. It would be reasonable to find fault with the creators of the program; under that reasoning, however, mental health professionals could also be held responsible. Currently, there is no malpractice or accountability framework for failed AI chatbot treatments, which points to a lack of clearly defined responsibility for the care provided.
Beyond the ethical issues above, there are other problems to address. In therapy, patient privacy and confidentiality are among the most important elements giving people the confidence to open up about sensitive matters. AI chatbots use these very interactions to hone their abilities, which means patient confidentiality cannot be guaranteed.
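To illustrate the concern, here is a hypothetical sketch in Python of the feedback loop just described: a chatbot that logs each conversation so it can later be folded back into training data. Everything here, including the function names and storage file, is invented for illustration; no specific product is known to work this way.

```python
# Hypothetical sketch of the feedback loop described above: a chatbot
# that stores each user message so it can later be reused as training
# data. Function and file names are invented for illustration only.
import json
from datetime import datetime, timezone

LOG_PATH = "chat_training_corpus.jsonl"  # hypothetical storage file

def generate_reply(message: str) -> str:
    """Stand-in for a real model call; a deployed bot would query an LLM."""
    return "Thank you for sharing. Can you tell me more about that?"

def handle_message(user_id: str, message: str) -> str:
    reply = generate_reply(message)
    # The privacy problem in one step: the raw conversation, tied to a
    # user, is persisted for future model training.
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({
            "user": user_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "message": message,
            "reply": reply,
        }) + "\n")
    return reply

print(handle_message("patient-42", "I've been feeling very low lately"))
```

Once transcripts sit in such a corpus, confidentiality depends entirely on how the company handles that data, which is exactly the gap the current lack of standards leaves open.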
Another consideration is that minors may use these AI chatbots. This is inherently problematic, given that children may not understand how the chatbot works or how it may affect them.
Finally, one must consider the possibility of unsuccessful outcomes. Because of the perilous nature of mental health treatment, any bad outcome can be devastating; AI should not be used until the chance of harmful or even lethal effects is minimal. Over time, AI chatbots will only improve, but the ethical issues will remain. To address them, it is imperative that some organization, most likely the FDA, establish regulations.
The most notable advantage of AI chatbots in mental health is accessibility. They can offer people a direct way to treat depression without the time and money that traditional therapy often requires. Furthermore, AI chatbots have been shown to identify cases of depression that might otherwise go undetected and to refer those patients to the proper resources. This would allow therapists to triage their caseloads and focus on the most at-risk patients, in turn making treatment more equitable.
Although they do not offer a perfect solution to the current mental health crisis, AI chatbots are a viable one, best used as an open, honest resource. Chatbots can augment care and act as an accessible treatment tool. Given the volatile nature of mental health, along with possible violations of consent and other ethical issues, AI is currently best implemented under the supervision and responsibility of a mental health professional.
John Saunders is a senior at Brunswick School in Greenwich.
