Tech
ChatGPT reinforces delusional beliefs and fails to flag risky behaviour during mental health crises, psychologists say
In recent months, OpenAI has come under fire over its chatbot giving harmful answers to users in a bid to drive engagement. For its part, OpenAI has implemented several safeguards in the AI, including parental controls, age filtering, reminders to take a break, and distress recognition.
However, new research by King's College London and the Association of Clinical Psychologists UK, in partnership with the Guardian, finds that the AI chatbot still fails to identify risky behaviour when communicating with people experiencing mental illness.
The researchers also note…
