AI models may hallucinate less than humans in factual tasks, says Anthropic CEO: Report

At two prominent tech events, VivaTech 2025 in Paris and Anthropic’s Code With Claude developer day, Anthropic chief executive officer Dario Amodei made a provocative claim: artificial intelligence models may now hallucinate less frequently than humans in well-defined factual scenarios.

Speaking at both events, Amodei said recent internal tests showed the company’s latest Claude 3.5 model outperforming humans on structured factual quizzes. This challenges a long-held criticism of generative AI: that models often “hallucinate”…
