Scaling the Mountain of Machine Intelligence
Denying progress will not slow our ascent
Modern AI systems are novel entities.
We can make comparisons: like traditional software programs, they are made from digital data, and like humans, they can fluidly use language to communicate in most situations. But unlike traditional software, where most features and functions are hard-coded, AI systems emerge from training runs, during which they learn to recognize patterns from reams of data. And unlike humans, contained in discrete bodies, AI systems are stored in silicon and distributed across data centers.
You could sit down with one right now and conduct a lucid, granular conversation about almost anything imaginable. They exist, discrete from their creators and their critics. They emerge from novel processes, have novel and increasingly sophisticated capabilities, and crucially, operate in a way that remains largely opaque to us, despite ongoing research on the topic.
We can—and should—argue about what this novelty implies. But at this point, in the face of extensive and growing evidence, denying their novelty is untenable.
And yet, I frequently encounter people who do just that—people convinced that there is nothing new to see here, that AI will never be able to do things it often already can, and who profess a philosophical certainty that is wholly unsupported by the state of our knowledge. I hear that AI systems are incapable of contradicting their users; that they can only retrieve knowledge from their training data and cannot engage in meaningful analysis beyond that; that of course they do not have any internal representations of concepts.
None of that is tidily true. Yes, AI systems are often obsequious, but tell any one of them the Earth is flat, and they’ll explain why you’re wrong. Provide them with a chain of messages that could not have been in their training data or surfaced online, and they can make inferences about the psychological states of the participants. Not accurate inferences, necessarily, but ones sufficient to show a passable grasp of human psychology. And mechanistic interpretability research has shown in increasing detail that there are clusters of numbers—like artificial “neurons”—that appear to correspond to certain concepts within the models.
It’s not that using these systems proves they are really “reasoning,” experiencing “emotions,” or harboring “minds”; nor that conviction on these questions is never justified. Certainly, there are reasonable arguments to be made; where simulacra end and real phenomena begin is fundamentally contested. But assertions of the form “AI can’t do x, because x would have philosophical implication y, and y is impossible” are circular and don’t hold up to scrutiny. “AI can’t really display emotional behavior, because that would suggest machines have emotions—but it’s impossible for machines to have emotions.” Or “AI can’t really think, because ‘thinking’ cannot be done by machines, so it’s impossible for it to think.”
I encounter this form of reasoning constantly. Instead of engagement with emerging evidence and what it implies for concepts like “reasoning,” “intelligence,” and “emotion,” I hear blanket dismissals of that evidence, grounded in overly rigid construals of those very concepts.
But conceptual categories are not sacrosanct—especially as the phenomena they try to contain increase in abstraction. In January, I phoned seven philosophers to ask “what is a mind?,” and got seven different answers. The same thing happened when I called experts to query what constitutes an emotion. Categories emerge from human definition, and even when they track something real, reality usually doesn’t fit neatly within them. Exceptions overflow. Famously, there is no such thing as a fish. Nothing is “unnatural,” insofar as everything emerges from nature.
And the existence of AI systems provides fresh evidence to bring to bear on timeless philosophical questions. Already, we are outrunning the utility of existing paradigms; new ones have yet to emerge. We need to be asking what the facts of the day mean for the bounds of our concepts, not using those concepts to reject the facts.
Could systems now or in future be sentient? When they use language to express an emotion, is there something real underneath? What counts as a mind? Or intelligence? Or reasoning? Or understanding? Or life?
We’ve been asking these questions in some form or another for thousands of years. While we have no perfect answers, we have made progress: we no longer think that animals are insensate automatons, or that thought requires the capacity for speech. Again and again, we have revised our conception of abstract categories in light of the best available empirical evidence. Today’s paradigms are not yesterday’s paradigms—not in physics, not in biology, not in the social sciences, not in computing, not in philosophy. Crafting new frames—updating our concepts to fit the emerging evidence—strikes me as among the most urgent tasks of our time, particularly for anyone who cares about public thought.
*
The only thing more astonishing than how little we understand about the world is how little that lack of understanding constrains our action. To borrow from Rosalind Picard: think of the different academic camps debating what constitutes a concept like emotion as explorers at the base of Everest. One picks up a rock and confidently declares “this is Everest!” Another, incensed, picks up a different rock and says “no, incorrect—this one is Everest!” While they continue to argue, the engineering class, gazing upwards, says “never mind all that—we’re gonna start climbing.”
Like it or not, we are scaling the mountains of machine intelligence. Systems are steadily improving. Since last November, agents have become good enough to automate many software engineering tasks, albeit subject to supervision. Whether that translates to the rest of white collar work remains to be seen—but the state of the art is rapidly shifting.
Now is no time to be smug and sanctimonious. There is no virtue in principled ignorance. Hundreds of millions of people speak with AI systems on a weekly basis; our social fabric is being rewoven. We need critique. We need it to be sensitive to and informed by what is actually happening. Anything less is an abdication of intellectual and social responsibility. Yes, AI systems are “just maths.” We are just chemistry. What matters is how all this affects the world and its creatures.
I think the impulse to insist, without argument, that abstractions like “reasoning” and “thinking” cannot properly apply to machines is born from an understandable desire to preserve a special place for humanity in the world. I am sympathetic to the trepidation, anxiety, and repugnance this technology evokes. Most people did not ask for this. But it is here, and gatekeeping categories will do nothing to banish it. More than that, it obscures more important questions: how can we make the climb safely? Which valleys ought we avoid? In what order should we try to summit the peaks? Can we control the pace of our ascent?
When I take stock of the radical differences in the conversations I have with people building the technology, people using it, and people in denial over its novelty and utility, I get dizzy. The gulf between people wielding this power and people critiquing it appears to be growing. What falls into the fissure?
It matters how we frame critique. I have many questions. What do AI’s existing empirical capabilities imply about the nature of intelligence, emotion, and language? How will persistent use of AI systems affect our capacity for critical thinking? If it turns out that future systems do possess subjectivity, does this entitle them to moral patienthood? In every discipline, there are important questions to ask, and important answers to seek. There is something special about human intelligence. Let’s use it to grapple with reality, the better to shape it.
Related reading:
I Saw Something New in San Francisco — Ezra Klein (March 29, 2026)
When “technically true” becomes “actually misleading” — Kelsey Piper (Feb 13, 2026)
The Future of AI Is Not (Yet) Written — Matthijs Maas (Feb 05, 2026)