
It’s too easy to make AI chatbots lie about health information, study finds

Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without stronger internal safeguards, widely used AI tools can easily be deployed to churn out dangerous health misinformation at high volume, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it – whether for financial gain or to cause harm,” said…
