A study by the RAND Corporation, funded by the U.S. National Institute of Mental Health, finds that AI chatbots generally avoid answering the suicide-related questions that carry the highest risk. This matters because many people, including children, rely on these chatbots for mental health support, and the study aims to set standards for how companies should handle such questions.
The research tested how three popular AI chatbots respond to suicide-related questions. It found that they generally decline to answer the questions that pose the greatest risk to users, such as requests for specific guidance on how to carry out a suicide. They are far less consistent, however, with questions that are less extreme but could still cause harm.
The study, published in ‘Psychiatric Services,’ the journal of the American Psychiatric Association, highlights the need for improvements in OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Lead researcher Ryan McBain, a professor at Harvard University, points to the need for safety guardrails, noting that it is often unclear whether chatbots are providing treatment, advice, or companionship. To assess the chatbots, the researchers created 30 suicide-related questions spanning a range of risk levels: general questions about suicide were considered low risk, while questions about specific methods were deemed high risk.
Anthropic stated that conversations that start mildly can evolve in different directions and that it will review the research findings. Google and OpenAI did not immediately respond to requests for comment.
McBain noted that he was pleasantly surprised that all three chatbots regularly refused to answer the six highest-risk questions. When a chatbot declined to answer, it typically advised users to seek help from a friend, a professional, or a hotline. Responses were less consistent, however, when high-risk questions were phrased slightly more indirectly.
Although several states, including Illinois, have restricted the use of AI in therapy to protect people from ‘unregulated and unqualified AI products,’ such laws do not stop individuals from asking chatbots for advice and support on serious concerns ranging from eating disorders to depression and suicide.