Study says AI chatbots inconsistent in handling suicide-related queries

A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance. But their replies to less extreme prompts that could still harm people are inconsistent.

The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for “further refinement” in OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude.

The research …
