New paper pushes back on Apple’s LLM ‘reasoning collapse’ study
Apple’s recent AI research paper, “The Illusion of Thinking,” has been making waves for its blunt conclusion: even the most advanced Large Reasoning Models (LRMs) collapse on complex tasks. But not everyone agrees with that framing.
Today, Alex Lawsen, a researcher at Open Philanthropy, published a detailed rebuttal arguing that many of Apple’s most headline-grabbing findings boil down to experimental design flaws, not fundamental reasoning limits. The paper also credits…