DeepMind CEO: AI Not Yet PhD-Level

Demis Hassabis, CEO of Google DeepMind, has pushed back against the growing narrative that today’s leading AI models exhibit PhD-level intelligence. In a recent interview, he called such labels “nonsense,” stating that general intelligence requires consistency, creativity, and reasoning — traits current systems still fall short of delivering.

Advanced, but not general

Hassabis acknowledged that some models display impressive, high-level abilities in narrow domains. However, they often struggle with basic tasks like simple arithmetic or logical reasoning when prompts are rephrased — a limitation that would not be expected from truly intelligent systems. “They have some PhD-level capabilities,” he said, “but they’re not PhD intelligences across the board.”

His comments follow OpenAI’s branding of GPT-5 as a “PhD-level” model, sparking debate across the AI research and industry community.

AGI still years away, with key gaps in reasoning

According to Hassabis, artificial general intelligence (AGI) is still five to ten years away. He identified several gaps — including continual learning, intuitive reasoning, and cross-domain pattern recognition — that must be closed before AGI can become a reality. “Consistency and creativity are fundamental,” he noted, drawing parallels between how great scientists think and how machines must evolve.

Hassabis also emphasized that AGI is not just about scaling existing models. While increasing parameters has brought progress, he argued that new breakthroughs will be needed in areas such as memory, architecture design, and multi-modal interaction.

Not all progress is stalling

In contrast to some recent reports suggesting plateauing performance in language models, Hassabis said DeepMind continues to see strong internal progress. He dismissed the idea of benchmark saturation, arguing that current metrics may not fully capture the complexity and adaptability required for real-world use cases.

The remarks underline the need to recalibrate expectations about today’s AI capabilities — and to separate hype from actual readiness for high-stakes decision-making.