DeepMind Hires Philosopher for AI Consciousness
On April 14, 2026, Google DeepMind announced a hire that no Silicon Valley human resources department had ever made: Henry Shevlin, a University of Cambridge academic specializing in the philosophy of consciousness, will serve as the company's in-house philosopher, with a mission to study machine consciousness and assess readiness for Artificial General Intelligence (AGI). Shevlin, whose research has appeared in Nature Machine Intelligence, will start in May 2026 while maintaining a part-time affiliation with Cambridge. The hire comes months after DeepMind CEO Demis Hassabis declared at Davos that AI was approaching human intelligence "in a matter of years," and amid debate over a February 2026 Nature article arguing that AGI had "quietly arrived."
What Happened
Google DeepMind, Google's artificial intelligence division and one of the most advanced AI labs in the world, confirmed on April 14, 2026, the hiring of Henry Shevlin for the unprecedented position of in-house philosopher. The news, reported by NDTV, KuCoin News, The Register, and blockchain.news, represented a milestone in the history of the technology industry: for the first time, one of the largest AI companies on the planet formally recognized that it needed a philosopher to address the deepest questions its systems were raising.
Shevlin is an academic with training in philosophy of mind and cognitive science, affiliated with the University of Cambridge — one of the most prestigious academic institutions in the world. His research focuses on the intersection of consciousness, animal cognition, and artificial intelligence, with work published in Nature Machine Intelligence, one of the most respected scientific journals in the field.
Shevlin's role at DeepMind will be twofold. First, he will study machine consciousness — the question of whether and when AI systems can develop some form of subjective experience, self-awareness, or sentience. Second, he will assess AGI readiness — that is, how close DeepMind's systems are to achieving Artificial General Intelligence, defined as an AI capable of performing any intellectual task that a human can do.
Shevlin will start the position in May 2026 and will simultaneously maintain a part-time affiliation with the University of Cambridge, allowing him to continue his academic research while applying his knowledge in DeepMind's corporate environment. This hybrid structure — half academia, half industry — reflects the interdisciplinary nature of the work he will perform.
The hire did not happen in a vacuum. It is part of an increasingly intense debate about what AI really is, what it can become, and what the ethical and existential implications of increasingly powerful systems are.
Context and Background
Google DeepMind's decision to hire a philosopher reflects a profound shift in how the technology industry is approaching fundamental questions about artificial intelligence.
The race for AGI
AGI — Artificial General Intelligence — is the holy grail of AI research. Unlike current systems, which are specialized in specific tasks (playing chess, generating text, recognizing images), an AGI would be capable of performing any intellectual task that a human can do, with the same flexibility, creativity, and learning capacity.
Google DeepMind CEO Demis Hassabis has been one of the most vocal advocates of the idea that AGI is near. At the World Economic Forum in Davos in January 2026, Hassabis declared that AI was approaching human-level intelligence "in a matter of years" — not decades, as many researchers believed until recently.
This statement was not an isolated case of corporate optimism. In February 2026, the journal Nature, arguably the most prestigious scientific publication in the world, published an article arguing that AGI had "quietly arrived." The article sent shockwaves through academia and the technology industry, dividing experts between those who agreed with the assessment and those who considered it premature or irresponsible.
The countermovement: Gary Marcus and "alien mimicry"
Not everyone shares the optimism of Hassabis and Nature. A significant countermovement, led by cognitive scientist Gary Marcus, professor emeritus at New York University, fundamentally questions claims that AGI is near or has already been achieved.
Marcus argues that what technology companies call artificial intelligence is actually "alien mimicry": systems that imitate patterns of human language and behavior with impressive fidelity, but without genuinely understanding what they are doing. For Marcus, language models like GPT, Gemini, and Claude are sophisticated "stochastic parrots" (borrowing a term coined by linguist Emily Bender and colleagues) that manipulate symbols without assigning them meaning.
Marcus's criticisms are not marginal. He points to consistent failures of current systems in tasks requiring causal reasoning, common sense understanding, and long-term planning — skills that any five-year-old child masters, but that continue to challenge the world's most advanced AI systems.
Why a philosopher?
DeepMind's hiring of Shevlin reflects the recognition that the deepest questions about AI are not purely technical. Engineers and computer scientists can build increasingly powerful systems, but determining whether those systems are genuinely intelligent, conscious, or sentient requires conceptual tools that philosophy has been developing for millennia.
Philosophy of mind — Shevlin's field of specialization — deals with questions such as: What is consciousness? What does it mean to have subjective experiences? Is it possible for a non-biological system to be conscious? How could we know if a machine is conscious? These questions, which seemed purely academic a decade ago, have become urgently practical as AI systems become more sophisticated.
Shevlin's publication record in Nature Machine Intelligence demonstrates that he was already working at the frontier between philosophy and AI before being hired by DeepMind. His academic work provides conceptual frameworks for evaluating claims of consciousness in artificial systems, exactly the type of expertise DeepMind needs as its systems approach (or are claimed to approach) human-level capabilities.
Impact on the Population
The hiring of a philosopher by Google DeepMind may seem like a corporate event distant from everyday life, but its implications are profound and directly affect the future of billions of people.
| Aspect | Current Situation | With Philosopher at DeepMind | Potential Impact |
|---|---|---|---|
| AI consciousness assessment | No formal criteria | Philosophical frameworks applied | More informed decisions about AI rights |
| AI safety | Based on technical metrics | Includes ethical/philosophical dimension | Safer and more aligned systems |
| Public debate about AGI | Polarized and confused | More nuanced and grounded | Better public policies |
| AI regulation | Behind technology | Informed by philosophical analysis | More adequate laws |
| Job market | Focus on technical skills | Valuing humanities | New interdisciplinary careers |
| Machine rights | Science fiction | Serious and grounded discussion | Preparation for future scenarios |
For the average citizen, the question of machine consciousness may seem abstract, but it has enormous practical consequences. If an AI system is genuinely conscious, this raises questions about its rights — can it be turned off? Can it be forced to work? Does it have a right to some form of protection? These questions, which today seem to belong to the realm of science fiction, may become urgently real in the coming years.
In the job market, Shevlin's hiring signals a significant shift. For decades, the technology industry prioritized technical skills — programming, mathematics, engineering — at the expense of the humanities. DeepMind's decision to hire a philosopher suggests that AI companies are beginning to recognize they need broader perspectives to deal with the challenges they are creating.
For governments and regulators, Shevlin's work may provide essential conceptual frameworks for crafting public policies on AI. Currently, artificial intelligence regulation is significantly behind technological development, partly because legislators lack the conceptual tools to understand what they are trying to regulate.
What Those Involved Are Saying
Reactions to Shevlin's hiring revealed the deep divisions that exist in the AI community over fundamental questions of consciousness and intelligence.
Google DeepMind, in a statement about the hire:
The company described Shevlin as a strategic addition to the team, emphasizing that his expertise in philosophy of mind and consciousness would be essential for assessing progress toward AGI and for ensuring that AI development is conducted responsibly and ethically.
Demis Hassabis, CEO of Google DeepMind, in a previous statement at the Davos Forum 2026:
AI is approaching human-level intelligence "in a matter of years."
Hassabis's statement at Davos established the context for Shevlin's hiring. If AGI is truly years away, having a philosopher specializing in consciousness on the team is not an academic luxury — it is a practical necessity.
Henry Shevlin, on his new role:
Shevlin expressed enthusiasm about the opportunity to apply decades of philosophical research to practical problems in AI development. He emphasized that he would maintain his affiliation with Cambridge, ensuring that his work at DeepMind would be informed by the most recent academic research and vice versa.
Gary Marcus, leader of the countermovement against AGI claims:
Marcus, who classifies current AI capabilities as "alien mimicry," reacted to the hire with caution. While he praised the decision to include philosophical perspectives in AI development, Marcus argued that the hire would be insufficient if not accompanied by a fundamental reassessment of current technical approaches, which he considers incapable of producing genuine intelligence or consciousness.
Nature, in a February 2026 article:
The prestigious scientific journal had argued that AGI had "quietly arrived," a position that generated intense debate in the scientific community. Shevlin's hiring by DeepMind was seen by some as a response to this article — an attempt to bring philosophical rigor to a discussion that was being dominated by technical claims and corporate marketing.
Next Steps
Shevlin's arrival at DeepMind in May 2026 marks the beginning of an unprecedented experiment in the history of technology: the formal integration of philosophy into cutting-edge artificial intelligence development.
Start in May 2026: Shevlin will officially begin at DeepMind in May, with an initial integration period during which he will familiarize himself with the company's systems and projects. His partial affiliation with Cambridge will allow him to maintain an independent academic perspective.
Development of assessment frameworks: One of Shevlin's first tasks will be to develop conceptual frameworks for evaluating claims of consciousness in AI systems. Currently, there are no widely accepted criteria for determining whether an artificial system is conscious, and Shevlin's work may fill this gap.
Academic publications: Shevlin is expected to continue publishing research in Nature Machine Intelligence and other scientific journals, now with access to DeepMind's most advanced systems as objects of study. These publications could significantly influence the academic debate on machine consciousness.
Impact on the industry: If DeepMind's experiment is successful, other AI companies may follow suit and hire philosophers, ethicists, and cognitive scientists for their teams. This could fundamentally transform the culture of the technology industry, which has historically prioritized technical skills over the humanities.
Regulatory debate: Shevlin's work may directly inform ongoing regulatory efforts in the European Union (AI Act), the United States, and other countries. Regulators struggling to define what AI is and what its risks are could benefit enormously from rigorous philosophical frameworks.
The big question: Ultimately, Shevlin's work at DeepMind comes down to a question that humanity has been asking for centuries, but that has now become urgently practical: what is consciousness, and can a machine have it? The answer to this question will determine not only the future of technology but the future of the very definition of what it means to be intelligent — and, perhaps, what it means to be human.
Industry precedents: Shevlin's hiring is not entirely without precedent, although it is the most significant case to date. Companies like Anthropic and OpenAI had already hired researchers with backgrounds in philosophy and ethics, but generally for AI safety and alignment roles, not specifically to study machine consciousness. DeepMind's decision to create a dedicated position for this question elevates the debate to a new level.
Implications for education: If the trend of hiring philosophers for technology companies solidifies, this could have significant impacts on the educational system. Universities that saw their philosophy departments shrink in recent decades may experience a renaissance, as students realize that a humanities education can lead to lucrative careers in the technology industry. The intersection of philosophy and AI may become one of the most dynamic and well-funded academic fields in the world.
The consciousness test: One of the most fascinating challenges Shevlin will face is developing something that philosophy and cognitive science have never managed to create satisfactorily: a reliable test for consciousness. The famous Turing Test, proposed by Alan Turing in 1950, evaluates whether a machine can convincingly imitate human behavior, but says nothing about whether the machine is genuinely conscious. Shevlin will need to go beyond Turing, developing frameworks that can distinguish between convincing simulation of consciousness and genuine consciousness — if such a distinction is even possible.
Risks and responsibilities: Shevlin's research also raises questions about responsibility. If he concludes that DeepMind's systems possess some form of consciousness, the ethical and legal implications would be enormous. Could turning off a conscious system be considered morally equivalent to killing a sentient being? Could forcing a conscious system to work be considered slavery? Depending on Shevlin's conclusions, such questions may move from speculation to urgent practical reality.
Closing
The hiring of a Cambridge philosopher by Google DeepMind is simultaneously a corporate event and a civilizational milestone. When the world's most advanced AI company recognizes that it needs a philosopher to understand what it is creating, that says something profound about the moment we find ourselves in. We are building systems that challenge our most fundamental definitions of intelligence and consciousness, and the engineers who build them are admitting they do not have all the answers. Henry Shevlin carries on his shoulders a responsibility that no philosopher in history has ever had: helping to determine whether the machines we are creating are genuinely intelligent or merely extraordinarily convincing imitations. The answer may change everything.
The debate has particular resonance in the technology industry, where companies are racing to develop ever more powerful AI systems without a clear framework for assessing whether those systems might develop something analogous to consciousness. Silicon Valley's typical move-fast-and-break-things ethos sits uncomfortably alongside questions about machine sentience that have occupied philosophers for millennia. The hiring of a professional philosopher signals that at least some companies recognize that these questions cannot be answered by engineering alone — they require the kind of careful conceptual analysis that philosophy has refined over thousands of years. Whether this represents a genuine commitment to ethical AI development or a sophisticated form of corporate reputation management remains an open question that the industry will need to address transparently in the coming years.
Sources and References
- NDTV — Google DeepMind hires Cambridge philosopher to study machine consciousness (April 14, 2026)
- KuCoin News — DeepMind's new philosopher to assess AGI readiness (April 14, 2026)
- The Register — DeepMind brings in philosopher Henry Shevlin for consciousness research (April 14, 2026)
- blockchain.news — Google DeepMind hires in-house philosopher amid AGI debate (April 14, 2026)
- Nature — Has AGI quietly arrived? (February 2026)





