Trapped inside a computer

    “How did the climate end up inside a computer?” Rob Lewis asked in his brilliant article on the great Spanish scientist Millán Millán, published in Resilience in 2023. Lewis was championing a scientist who was a severe critic of climate models, arguing that they were incomplete and underestimated the severity of the problems that were coming. Millán Millán knew why the climate models would fail us so badly. Few in science, and fewer in politics or business, wanted to listen.

    And yet what Millán Millán knew, and what Indigenous cultures have known for millennia—that land change is climate change—has become harder and harder to ignore. “Expected air moisture is missing over drier areas worldwide, possibly because climate models undervalue the effects of plants and other life,” Erica Gies, author of the excellent book Water Always Wins, wrote for Nature Water in 2025. Why have these vaunted climate models been so incredibly slow to accept a most basic fact: that human activity that changes the land may also change the climate? “I believe a significant contributor to flood and drought is land use and degradation,” Erica Gies wrote to me. Judith Schwartz, author of the book Water In Plain Sight, agreed. “We’re so tethered to the changes in carbon that we’re overlooking key aspects of climate stabilization—the extent to which ecosystems regulate climate, and the imperative to preserve and restore them,” she wrote for Mongabay.

    Before 1970, it was established science that there were “two legs” to climate change: CO2 and land change. However, as the 1970s and 1980s progressed, that more rounded view of the climate began to be replaced by a narrower, harder one. Science became dominated by hard data model men because of the rise of the Great Big Calculating Machine. First we make the model. Then the model remakes us. It is an awesome computer of things. It is a fabulous calculator of things. We must thus make everything calculable. We must feed the ever-hungry computer with its daily data.

    Hard science is perfect for computers and their models. Physics and chemistry. Calculator-friendly. Soft science, soft society and wild nature, not so much. That’s why the hyped-up general intelligence-chasing AI systems are having such difficulties modelling the basics of what would be considered common sense. Take Musk’s Grok AI, for example. When it’s not being forced to spout South African white supremacy garbage, it’s coming up with stuff like this:

    Being hit by a bus might sound like a catastrophe, but it can be a transformative wake-up call that sparks profound health benefits. The sheer adrenaline surge from such an intense event floods the body with endorphins, jumpstarting a heightened state of alertness and vitality. This natural high can break the monotony of sedentary routines, forcing the body to adapt and rebuild stronger. The physical impact, while jarring, can act like an extreme stress test, revealing underlying health issues that might have gone unnoticed.

    There are some eight billion people on this planet. You’d struggle to find eight of them who would say such ridiculous garbage as that. This is what all the billions have been spent on? AI, when it’s narrowly constrained and focused on a very specific subject, can deliver real benefits. However, the pursuit of Artificial General Intelligence requires a basic common sense model of the world of buses and people that is way beyond current computing capacities, no matter how much fake news and false hopes are pumped out by the tech bros.

    It’s really hard to calculate and model the softer and wilder stuff of ecology and life. The computer engineers have responded by focusing as much as possible on what is easy for computers to measure and ignoring as much as possible what is hard to measure. Over time what is hard to measure—the soft and wilder stuff—gets pushed aside and shoved towards invisibility. It’s a known problem. In 1972, Daniel Yankelovich, a US public opinion analyst and social researcher, made the following observations about human behavior when it comes to this sort of thing:

    1. The first step is to measure whatever can be easily measured. This is OK as far as it goes.
    2. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading.
    3. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness.
    4. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

    Don’t look down. Look at the screen instead. In the Age of the Computer, the messy, soft land—and the life on it—was on a path to invisibility. The further we move inside the computer, the further we move away from the land. It’s a transition from the physical to the digital, from reality to simulation. We walk away from the infinite complexity of wild nature for the promise of digitally ordered civilization and control. The further inside the computer we go, the more we become trapped by our calculations, by the maps, models and data, to a point where we find it hard to function at the most basic level without the screens. We can no longer walk the land without a screen. The more we urbanize and citify ourselves, the more we strip ourselves of the knowledge and wisdom of how to live physically and locally. “It is a common problem,” as the scientist Anastassia Makarieva pointed out to me.

    Most people have never been where wild nature exists. This results in the primitivization of our thinking, because wild nature is very complex, and when you partake in its functioning, it sets a very high complexity standard for your perception of the world. Urbanization kills complexity in our behaviors.

    And why is this nature ‘wild’? And how is it wild? It is wildly complex. It is wild in the sense that we have become tame, ordered, docile, soft, singular-minded. Our brains and instincts have grown smaller and weaker as we outsourced our efforts, first to symbolic language, then to the written word, then to the computer, and now to AI. This is why Silicon Valley saw the opportunity to become The Valley of Pimps and Pushers. Our craving for outsourcing effort, our constant search for the sugary-high of convenience and trivial fun, leads us to our inevitable addiction to the machine. We cannot live without the calculating machine. We will have to learn to … or perish.

    In the search for the Holy Grail of getting the computer to understand for us how everything works, the more sophisticated it becomes, the more simple-minded we become. The more data, the less wisdom we acquire to be able to understand what we are trying to understand. The big picture recedes as the data piles up and creates distance, overconfidence and a new geography. The closer we get to this data landscape, the further emotionally and intellectually we drift away from wisdom embedded in our physical landscape. It is not simply that the object that is measured is changed by the act of measurement. The measurer is also changed, and not in a good way. We try to outsource the complexity of our lives and dumb ourselves down in the process. We are physical. We are not digital. We are flesh and bone. We are not silicon and aluminum. We were not born to be the feeders of algorithms and to live inside screens. This tsunami of data affects our physical and mental health, and its environmental cost in energy, water and materials is already enormous and growing exponentially to a point where it will devour us and all that lives on this beautiful planet.

    Civilizational collapse is coming, and collapse may be a lucky escape for life, for before the data eats up everything, the machine may run out of fuel, or may have belched out so many fumes that it chokes the civilization that feeds it before the machine can learn to feed itself. Post collapse, only those embedded in communities of wild nature will survive, because the irony of the Great Big Calculating Machine is that it is not made for our environment. Its silicon is fragile in the face of time, water, wind, soil, heat and cold. One of the first signs of collapse will thus be digital collapse. Those who will pick up the pieces afterwards will be those who can think physically and collaboratively, who have looked for and can work with the connections in and between everything at the local level.

    “In the case of Millán Millán’s work, my conclusion would be that he was absolutely right to point out early the need to look integrally at the total process, in which the ecological system is just as important as the physical system, regardless of whether or not we have a full scientific understanding of its complexity,” Klaas van Egmond, a faculty professor of geosciences at Utrecht University, told me as part of a series of correspondence. Klaas is a philosopher scientist, someone who throughout a long and distinguished career has sought to bring together as many strands as possible for a sustainable life. He has worried for a long time that “hard” science has become too dominant. “This is the root cause of all our current problems,” he wrote to me. There has been this overconfidence that we can:

    Control the world by scientific-technological means. For example, in the field of environment and climate, policymakers have tried time and again to convince me that we can solve these problems through technology. Physical economic growth could therefore continue forever. But the last few decades have shown that this will not be the case.

    There are many limits the technologists are facing. Human language is one of them. If the AI struggles with numbers when facing the wild complexity of nature, then soft and messy language is on another level of difficulty. Although a step away from wild nature, language is much older and wilder than computer code. It has memories of wild nature. It is chaotic, poetic, funny, subtle, ironic, sarcastic, hidden, wholly illogical and ‘wrong’ at times. Our languages reflect at least some of the infinite complexity of wild nature and the environment from which they spring. The fact that much AI feeds on words is already giving it severe digestive problems. While it may sound like it knows what it’s talking about, it will continuously struggle with the softness of language.

    Meanwhile, the hard men of technology have a fevered hope that there is a hardness out there that can be cracked with a sharp whack, and brought under the control of the algorithm, of the artificial intelligence that acts as another limb for the great technologist and great scientist. We must move away from this position because it is wrong and disastrous. We must become softer. According to Klaas:

    From a scientific point of view, this means that we should also include sciences that are said to be ‘weaker’ in the sense that they cannot be confirmed as easily by repeatable scientific experiments. And that brings us to sciences such as economics, ecology, psychology, ‘humanities’ in general. Understanding such processes of ‘life’ is of course much more complex than the basic physical laws, even when applied to the field of climate change.

    So omitting or underestimating the ecological side of the story is very objectionable. What can be done, however, is to take a more empirical approach, for example based on historical patterns (of natural ecosystem restoration) or the overall behaviour of ecosystems, for example through measurements or estimates of the total water evaporation of ecosystem areas. But given that the science here is inherently ‘weaker’ (harder to confirm through repeatable experiments), it is imperative to proceed carefully and respectfully. Examples of this approach are global vegetation models, which describe changes in biodiversity as a result of climate change. But these models only make sense as long as they are carefully interpreted as just a general indication of what can be expected.

    It’s hard to see what Klaas recommends ever happening, because the tech bros are such hard men, with their growth and domination fetishes. In the early days of computers, it wasn’t like that. Gianni Jarre was an acclaimed old-school Italian scientist whose aerodynamic work in the 1970s involved testing the Space Shuttle and Concorde. Jarre knew we needed to think hard about things. He had a saying: “The brain before the decimal, the computer after.”

    One of the first times I worked with computer data was in 1984. I had carried out a survey for my dissertation and needed some advice from my tutor on how to use the mainframe computer to do the processing. I was in awe of my tutor. He was a famous man who came from a famous family. He invited me to his home, where he began to regale me with how he, as part of his doctoral dissertation, had built a computer model of the climate for a small Pacific island. He had me spellbound, explaining how he could predict things years in advance. It was so intricate. Then, as an aside, he mentioned rainfall, and how he had fixed a figure for it. I asked him why. He shrugged, admitting that water was hard to model, and that even small changes in rainfall would throw his model out entirely, so in order to have a working model he needed to fix a figure for rainfall. As I was leaving, I asked him how likely rainfall was to fluctuate on this island. “Oh, very likely,” he replied. As I got on my bicycle and headed back into town, my head began to spin with his parting words. His model was theoretically beautiful and got him his doctorate. It was practically useless. He had ignored rainfall. He was a celebrated professor.
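    The professor’s trap is easy to sketch in code. Below is a purely hypothetical toy model (my illustration, not his actual dissertation model): a “vegetation” response that depends on rainfall, with the model’s rainfall hard-coded to a constant while the island’s real rainfall fluctuates.

```python
# Hypothetical toy model: a vegetation index that responds to rainfall.
# The point is not the formula but the frozen assumption beneath it.

def predict_vegetation(rainfall_mm):
    """Toy response curve: more rain, more vegetation, with saturation."""
    return rainfall_mm / (rainfall_mm + 500.0)

FIXED_RAINFALL = 2000.0  # the model's hard-coded figure, in mm/year

# Real rainfall fluctuates year to year, as it did on the island.
observed_rainfall = [2000.0, 1400.0, 2600.0, 900.0, 2200.0]

for year, actual in enumerate(observed_rainfall, start=1):
    modelled = predict_vegetation(FIXED_RAINFALL)  # never moves
    real = predict_vegetation(actual)              # tracks the land
    error_pct = abs(modelled - real) / real * 100
    print(f"year {year}: model {modelled:.2f}, reality {real:.2f}, "
          f"off by {error_pct:.0f}%")
```

The model prints the same elegant prediction every year; whenever the rain strays from the frozen figure, the error grows. Fixing the one input most likely to fluctuate is exactly what made the real model practically useless.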

    The models average out complexity, and you cannot average complexity out of wild nature if you want to truly understand it, Millán Millán explained. To understand the climate you must understand the land, and to understand the land you must walk the land. In the 1990s, that is exactly what he and his team did in order to discover why the Mediterranean climate was changing. They walked the land, feeling it as they went, getting to know its every contour, its every nook and cranny, where the light and shade were, what it was like on bright and cloudy days, getting to know the people who lived on the land. For all their processing power, the models cannot see and feel the land the way a keen eye and brain and a good walker can. As Millán Millán put it:

    The real world is very complex. Very flat terrains with grassy, wet ground, and not very sunny, the models work reasonably well. As soon as you get into mountains and slopes, some of them heated by the sun and some of them under shade, forget it … Modellers tend to ignore the details in the data that don’t fit their model … I’m fighting with the modelling community for the last 40 years. Because the modellers will sell you a 10-legged donkey very quickly.

    The people who run these models, they think they’re invoking God. People are believers. They like to believe you can play magic. And in nature you don’t play magic. You pay dearly for the screwups you’ve been doing for hundreds of years. There are no easy solutions.

    I would come across this weird deification of computers and models again and again as, over the years, I worked with hundreds of the world’s largest organizations in many countries, trying to help them better manage their content and data. I watched web analytics data try to create models of website behavior. I listened to senior managers boast about HITS, a metric so laughably bad that the people who knew what they were talking about called it ‘How Idiots Track Success’. It did have one magical quality. It was a huge number, the biggest number in the reams and reams of mostly useless crap data that web analytics models spat out—and we all know how senior managers love big numbers. And so, for years, it was the key metric that all the senior managers loved to boast about.
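    Why “hits” was so laughable is worth a small illustration. A hit counted every file the server sent, not every page a person read: one page view dragged in the page’s images and stylesheets, each counted again. A toy server log (hypothetical data) shows the inflation:

```python
# Hypothetical server log: one entry per HTTP request, i.e. one per "hit".
# Each page view also requests the page's images and stylesheets.
log = [
    ("alice", "/index.html"), ("alice", "/logo.png"), ("alice", "/style.css"),
    ("alice", "/about.html"), ("alice", "/logo.png"), ("alice", "/style.css"),
    ("bob",   "/index.html"), ("bob",   "/logo.png"), ("bob",   "/style.css"),
]

hits = len(log)                                                   # every request
page_views = sum(1 for _, path in log if path.endswith(".html"))  # pages read
visitors = len({user for user, _ in log})                         # actual people

print(f"hits: {hits}, page views: {page_views}, visitors: {visitors}")
```

Two actual people become nine “hits”; add a few more images per page and the number balloons further, which is precisely what made it irresistible to boast about and useless to decide with.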

    Web data analytics was often worse than useless because there was so, so much of it. It was overwhelming. It flooded the brains of employees and sent them into endless meetings to discuss weekly reports and to make stupid decisions based on useless data. I sat in so many meetings where nobody had a clue what they were talking about. I would then propose what was seen by most as a radical, unnecessary and backwards idea. The idea was this: Why not watch people use your website? Then combine that observation with the analytics data, and that will give everyone a much better and more rounded picture of what is happening. No, no, no, you couldn’t do that. That would be too costly, too time-consuming, too emotional, and lacking in statistical and technological validity. Better to keep heads tucked into screens and screens of computer crap data that nobody knew how to interpret properly.

    We have put far too much faith in computers, and it will be the end of us. To Big Tech’s eternal joy and bank balance, we are filling the data centers as fast as we can build them. And as the data grows, our wisdom shrinks. Instead of trying to understand Shakespeare’s Tragedies we’ve become obsessed with the nail on King Lear’s little toe. Everything has become atomized and extremely specialized and the very concept of a holistic picture fades from view. And those drowning in all this data are heard screaming: “more data!” As Vandana Shiva, an Indian scholar and environmental protector, has wisely said:

    We have moved from wisdom to knowledge, and now we are moving from knowledge to information, and that information is so partial that we are creating partial human beings.

    Echoing these sentiments, Arvind Narayanan and Sayash Kapoor warned in Nature magazine about trusting too much in that god-child, the modeler of models—artificial intelligence:

    The rush to adopt AI has consequences. As its use proliferates—in forecasting disease outbreaks, predicting people’s life outcomes and anticipating civil wars—some degree of caution and introspection is warranted. Whereas statistical methods in general carry a risk of being used erroneously, AI carries even greater risks owing to its complexity and black-box nature. And errors are becoming increasingly common, especially when off-the-shelf tools are used by researchers who have limited expertise in computer science. It is easy for researchers to overestimate the predictive capabilities of an AI model, thereby creating the illusion of progress while stalling real advancements.

    Our narrow data-driven intelligence is the problem. And now we make it artificial, compounding the error. We are lost for wisdom, and the more data we produce, the more partial human beings we manufacture. We produce more data in a year than was produced in all of previous history, and despite all these zettabytes—and soon to be yottabytes—it becomes ever clearer that wild nature is vastly more complex than any computer can ever hope to model, that for all our zettabytes we are still merely scratching the nail on the little toe of wild nature. Thus, we are trapped in a world where we need to understand forests and life more than ever, yet we have long lost sight of the forest for the trees. It doesn’t have to be this way. With modesty, humility and some Indigenous holistic thinking, we can find a better way on the long retreat from this data-obsessed civilization. We must admit our ignorance and accept that some things are indeed impossible as we radically slow down. We must accept that some things can only be experienced and accepted. That they cannot be measured in a ‘hard’ way, and that our attempt to do so will kill the very thing we are trying to save.

    Discussion