Frontier models are failing one in three production attempts — and getting harder to audit

Key Highlight

AI agents are now embedded in real enterprise workflows, yet they still fail roughly one in three attempts on structured benchmarks.

That gap between capability and reliability is the defining operational challenge for IT leaders in 2026, according to Stanford HAI's ninth annual AI Index report. This uneven, unpredictable performance is what the AI Index calls the "jagged frontier," a term coined by AI researcher Ethan Mollick to describe the boundary where AI excels and then suddenly fails. "AI models can win a gold medal at the International Mathematical Olympiad," Stanford HAI researchers point out, "but still can't reliably tell time."

How models advanced in 2025

Enterprise AI adoption has reached 88%.

Notable accomplishments in 2025 and early 2026: Frontier models improved 30% in just one year on Humanity's Last Exam (HLE), which includes 2,500 questions across math, the natural sciences, ancient languages, and other specialized subfields.

HLE was built to be difficult for AI and favorable to human experts. Leading models scored above 87% on MMLU-Pro, which tests multi-step reasoning across 12,000 human-reviewed questions spanning more than a dozen disciplines.
