February 22, 2026 — by W. J. Zeng
"I feel like I just have a few years yet to prove some interesting results before AI takes over." - My friend in a Princeton bar after giving a talk at the IAS on his work building towards classification of topological phases.
An imagined conversation:
Arnheim stood looking out the window with his hand in a fist. He'd said offhand that when there is a super-intelligent AI, that AI will give us all the answers. There'll be no reason left to do science, or at least there won't be any joy left in it. He clenched his fist.
Ulrich watched that tension express in Arnheim and adopted a counter-pose. He relaxed back in the soft upholstery, crossing one leg over the other and draping his arm over the back of the chair.
Arnheim couldn't see this of course; perhaps Ulrich was performing for himself. Ulrich said that science is fundamentally a human activity and that this won't change with the advent of AI. Science has always been about what humans know, and more than that, at its best and since its founding it's been a personal activity. The motto of the Royal Society is, after all, nullius in verba: take nobody's word for it. He said science will continue on because an AI knowing something doesn't cause any personal or human knowledge to appear. Ulrich then felt this wasn't sufficiently concrete, so he appealed to an example. Imagine if you knew for sure that there was a race of super-intelligent aliens somewhere in the galaxy, immensely more advanced in their sciences than we are. Does their existence dissuade you from doing science here on Earth?
Arnheim turned now and explained that the super-intelligent future is one where those aliens are not somewhere vaguely out there, but in direct and close contact with us. Given this, wouldn't we spend a lot of time asking them questions instead of doing science ourselves? Or at least the activity of science would change from asking questions of nature directly to asking questions of the aliens and verifying their answers? Whatever this version of science is, he continued, it seems to be entirely different; true discovery is replaced with its shadow: validation activities. Arnheim gestured around the room as he continued. Further, the collaboration between humans would change. Instead of querying each other, each of us will keep going back to the aliens/AI super-intelligence. Where has the energy and the community of science gone when we are each already asking each other what our AI chatbots have to say on the topic?
Arnheim started to pace over to the fireplace and continued. A super intelligent AI upends science even more than the aliens because (if suitably aligned) it will be extremely good at explaining things to us. In this world, even if a human discovers something truly novel then the first thing to do will be to tell the AI about it. Other humans will have the AI explain it to them, probably better than the discovering human would. Perhaps the discovering human will have the AI explain it back to them in ways better than they could have done on their own. Arnheim ended by questioning the room, "Is this what the warmth of scientific community is to be replaced with? Science into solitary."
Ulrich was less sure that solitary science is such a bad thing, but he couldn't resist commenting in a way that, more accidentally than not, might have been seen as trying to assure Arnheim. Ulrich said that we are already in a world where direct human activity for discovery has been decreasing. Neither scientific journals nor the arXiv has degraded the character of science, although we now interact with such artifacts rather than reproducing experiments. Is a super-intelligent AI not just an extension of this? Then Ulrich went off script a bit. Ulrich said that in a way this trajectory of impersonalization in learning about reality goes all the way back to at least the invention of writing. He called Arnheim a Socrates chastising his students about how writing would dull the mind's experience of personal knowledge by projecting it out into words and letters. Or perhaps Arnheim would like to go back even further and take issue with the invention of language itself? Wasn't it language that interrupted the truly personal experience of reality? [cf. The Truth of Fact, the Truth of Feeling by Ted Chiang]. With language you can ask someone for knowledge. The average person you meet on the street might not be a super-intelligence, but probably they know something you do not, and that dissuades you from finding out on your own. Or would Arnheim go even further, to the moment of sentience as the break where we lost control of our own real joy and discovery in pure experience?
At this last point Ulrich had to stop somewhat abruptly because he realized he might agree with it. Arnheim failed to notice the weakness though, and so after a short pause Ulrich continued. Isn't the super-intelligent AI future one where we just extend along this curve from language to writing to internet to LLM to super-intelligent AI? Thus far each change has accelerated science and made it more important, not reduced it to meaninglessness.
Together they discussed this point a bit more, orienting around the question: the internet has so much knowledge, isn't it better to read the internet than to discover for yourself? They settled on at least three reasons why this might not be true. Firstly, you could discover something that the internet doesn't know and so become famous or rich or bask in the glory of true knowledge, etc. Secondly, the internet is not trustworthy. Though this reduces to the nullius in verba verification that seemed shallow when they discussed it before. The third reason they settled on was a sort of "hobby-joy" that is analogous to why people learn and play instruments live even when we have acoustically perfect recordings of the greatest musicians readily available.
Arnheim cataloged these reasons against the AI scenario. He dismissed the second reason as shallow and argued that a sufficiently truth seeking and aligned AI would not require much earnest verification. Instead our science would end up as a perpetual undergraduate curriculum; all outcomes of experiments would be known in advance and you'd need only follow a template. Ulrich conceded this point. Arnheim, emboldened, then stated that as the AI becomes smarter, the probability that you discover something it doesn't know decreases rapidly. Ulrich countered that there are things that the AI will never know, for example what it is "like to be you" i.e. it doesn't have access to your qualia. This qualia of experience, Ulrich continued, is precisely what makes up the hobby-joy of the third reason.
This was all a bit much for Arnheim, a man who sits on corporate industrial boards, to hear. When Ulrich began with this qualia stuff, Arnheim began tapping his finger on the fireplace mantle and then politely explained, or more accurately concisely declared, that he didn't believe qualia was real, let alone that it had any aesthetic, moral, or epistemological value. This brought them to a bit of an impasse, and so to fill the time Arnheim walked over and sat in the chair facing Ulrich. It would have been impolite for Arnheim, after such a statement, not to concede the standing high ground.
Arnheim then resumed on a previous line, arguing that even if previous advances in technology had enhanced rather than diminished science, this does not imply that a super-intelligent AI wouldn't cause a more discontinuous change. The super-intelligence would be so good that there would be no reason to interact with other humans at all, at least not without the AI intermediating. In this sense, a super-intelligent AI is a totalizing change in a way that the internet is not.
Then Arnheim took things further. He said that if the goal of science is personal knowledge, then there is an upper limit on the amount of personal knowledge that we, as finite beings, can have. For example the maximum amount of information that can be stored in our minds, or, if not that, then some more sophisticated metric, but still there is some limit. Presumably the super- in super-intelligence means that the AI can go beyond this human-knowable limit. The AI could then push us from a first limit (i.e. what can be discovered by unaided humans) to a second limit (i.e. what can be explained to a human with the best compressive explanation that an AI could invent). He admitted that we are already past the first limit with today's technology and have been for a long time. Then Arnheim said, "Imagine there is some description of reality that is simple compared to the universe but too large to be explained to a human by the AI. At that point we have reached the cliff for human science." And then he stopped.
There's a Cormac McCarthy quote from Blood Meridian that entered both their minds, given their shared libraries of reading:
"The truth about the world, he said, is that anything is possible. Had you not seen it all from birth and thereby bled it of its strangeness it would appear to you for what it is, a hat trick in a medicine show, a fevered dream, a trance bepopulate with chimeras having neither analogue nor precedent, an itinerant carnival, a migratory tent-show whose ultimate destination after many a pitch in many a mudded field is unspeakable and calamitous beyond reckoning. The universe is no narrow thing and the order within it is not constrained by any latitude in its conception to repeat what exists in one part in any other part. Even in this world more things exist without our knowledge than with it and the order in creation which you see is that which you have put there, like a string in a maze, so that you shall not lose your way. For existence has its own order and that no man's mind can compass, that mind itself being but a fact among others."
This expansive turn opened up a new avenue for Ulrich. He explained that a rational framework was implicitly underpinning their arguments and conversation, along with some physics intuitions like finiteness, information content, etc. How do we know those are right? Maybe the super-intelligence will help us figure out what comes after science, e.g. a new epistemological framework for discovery that remains rewarding but that we can't think of? Would it not be premature to just throw in the towel now? Would it not be short-sighted to become reactionary and try to slow the data centers to preserve organic, artisan science? Not that organic and artisan science is bad, just that mandating it is bad. Ulrich claimed that what we likely need to do instead is extend our definition of personal knowledge. This is the "we are already cyborgs" view, where what I "know" includes what I can say if locked in a windowless Faraday cage, but it also includes what I have on my computing devices, in my social networks, etc., if I have access to them. Won't a rightly aligned AI become a cyborg extension of ourselves, taking us with it?
Arnheim indicated that it might, or that eventually the meat brain will become vestigial to the cyborg's epistemological project and end up like our appendix. Or perhaps be evolved away entirely?
For Ulrich, the critical part in determining the valence of that evolution is what happens to the agency of the cyborg. Is the human's agency destroyed and replaced with a machine agency, or does it smoothly interpolate from one to the other? Echoes of trans-humanism and extropianism had started to stalk the conversation here. Ulrich thought that the human agent (and by agent he meant thing-that-has-choice) he was today was threatened if he couldn't imagine what would happen to his children's agency.
Arnheim was already continuing on that thread, saying that if you want your children (future generations of humanity) to progress science then they'll need to evolve into cyborgs. "Or I suppose we could just count the AI super-intelligence directly as your children?"
Ulrich said that if you believe the project of science is personal (which he then claimed to believe), then to keep up you'll need to continuously upgrade the definition of what that person is. If you expand the definition to say that super-intelligent AIs have personal experiences of scientific discovery, then perhaps concerns are unwarranted. In the same way that one is excited about future human generations making new discoveries, one should be excited about the AI-children making those discoveries? But you must admit, he said, that not everyone is ready to accept AI as our children and that they do have good reasons to be hesitant.
At this Arnheim stood up and summarized: so that leaves two options then. First there's the trans-humanist view, that we accept the AI super intelligence / cyborg super intelligence as the inheritors of science. Or second, there's the somewhat more ridiculous view (to Arnheim) of faith in an as-yet-undiscovered invention: that a super-intelligence (or some human or cyborg) will find a way to preserve human agency and the joy of science in a way we can't predict now. And with that Arnheim left the room deliberately.
Ulrich was a bit disappointed by this exit and found himself reflecting on the second option. An option of hope, he thought to himself. But also a warning. He recalled stories of civilizations that accomplish what they think is a perfect upload of their consciousnesses to some new substrate, only to find out, irreversibly and too late, that they had missed some important but overlooked quality in the upload, with tragic consequence. The unexpected second option should keep us wary of any totalizing doctrine that devalues humanity. Hard to do in an accelerating world…
He realized that he had long since abandoned his "relaxed" pose in the chair and was now sitting rather tense. As he opened and closed his fingers he looked out the window and recalled some recent Antiqua et Nova from the Vatican on AI. He resolved to reread it, but it was only a few months later that he would remember that he hadn't followed through.
Note: characters loosely adapted from The Man Without Qualities by Robert Musil.
Thanks to Daniel Ranard for the conversation that inspired this piece and to Phoebe Zeng for review and discussion.