The Mad Religion of Technological Salvation


Image by Logan Voss.

A science journalist and PhD astrophysicist, Adam Becker spent the last several years investigating the futurological vision of our tech-bro masters – and found it’s a bunch of nonsense.

The targets in his new book More Everything Forever are the names we’ve come to associate with the great leaps forward in the march of what Becker calls the religion of technological salvation: Sam Altman, genius progenitor of artificial intelligence; Elon Musk, technocrat extraordinaire and SpaceX honcho; space-colonization guru and Amazon godhead Jeff Bezos; Marc Andreessen, billionaire software engineer, venture capital investor, and avowed technofascist; and Facebook co-founder Dustin Moskovitz, who has donated hundreds of millions of dollars to so-called “longtermist” think tanks that provide the ideological ground for the religion of tech salvation. He also aims his guns at Ray Kurzweil, grand wizard of software engineering, inventor, planner of the immortalizing digitalization of human affairs called “the Singularity;” Eliezer Yudkowsky, a writer and researcher who attributes fantastical (but as yet non-existent) powers to artificial general intelligence, or AGI; and the crew of longtermist tech apologists at Oxford University on whom Moskovitz and other Valley barons have lavished funding.

What unites these players is lust for power and control based in the seduction that technology will solve all humanity’s problems and overcome the human condition. “These ideas offer transcendence,” writes Becker. “Go to space, and you can ignore scarcity of resources…Be a longtermist, and you can ignore conventional morality, justifying whatever actions you take by claiming they’re necessary to ensure the future safety of humanity. Hasten the Singularity, and you can ignore death itself.”

In Musk and Bezos’s “power fantasies,” space colonization and “AI immortality” will usher in a future of unlimited wealth and resources, beyond the confines of Earth, the solar system, the galaxy. Ray Kurzweil’s dream of the Singularity involves uploading minds into digital simulations so we can live forever. All of this, Becker says, is a divorced-from-reality sales pitch driven by the primordial fear of death. Overarching it is what’s called “engineer’s disease”: the mental derangement of believing that engineering can solve anything and everything.

In Becker’s telling, for example, Kurzweil is an unhinged fantasist manically attempting to resurrect his dead father as an artificial intelligence “Dad Bot.” Like a Christian apocalyptic prophet, the high priest of the church of tech salvation promises that the Singularity will arrive as early as 2045, when AI computing becomes so fast and so powerful that it will transform society, Earth, and the universe itself to overcome “cosmological forces,” including time and aging, the laws of physics and entropy. All existence would become one giant computer spinning forever out across the vastness of space. “The objective is to tame the universe, to make it into a padded playground,” writes Becker. “Nobody would age, nobody would get sick, and – above all else – nobody’s dad would die.”

“The promise of control is total,” he explains, “especially for those who know how to control computers. This is a fantasy of a world where the single most important thing, the thing that literally determines all aspects of reality, is computer programming. All of humanity, running on a computer…”

It’s the ultimate revenge of the nerds, made worse because of our subservience to their immense money and overhyped influence. What to do in answer? Understand the authoritarian nature of these zealots, so we can repulse their attempts at the takeover of society and shatter into bits the armatures of their loony-tune machines. As Becker puts it, channeling Orwell’s 1984: “If you want a picture of [the] future, imagine a billionaire’s digital boot stamping on a human face – forever.”

I spoke with Becker recently via Zoom about his book. Our conversation has been edited for length and clarity.

Ketcham: Let’s start with what inspired you to write this book. Like, why go after Sam Altman, Ray Kurzweil, Bezos, Musk, the whole techno-optimist crowd?

Becker: I’ve been following these sorts of subcultures – longtermists, general techno-optimism, Singularity stuff – for a very long time. I’m a science fiction junkie and first encountered a lot of these ideas in science fiction in high school or earlier. I think I first heard of Ray Kurzweil in college. And I thought, oh, yeah, these ideas are bad, but they don’t seem to be getting a lot of traction. And then the funniest thing happened: tech billionaires took this stuff seriously, giving these people a lot of money. I moved out to the Bay Area about 13 years ago. And of course, this is ground zero. I realized how deep in the culture this stuff is, these things like the singularity and AI hype, the idea that technology is going to solve every single problem, we’ll go to space and that will solve every single problem. I was amazed at how uncritical and ubiquitous the acceptance of these ideas was out here. I thought, you know, this is ridiculous. The other thing is, when I saw people going after these ideas I didn’t see a detailed scientific breakdown of why these things don’t work. There were a lot of people who dismissed people like Yudkowsky or Kurzweil just out of hand, but they would be like, Oh, this is ridiculous. Why? Usually the answer from the analysis was it’s ridiculous because it’s an insane fantasy. Yes, it is an insane fantasy. Why? I thought, well, there’s not enough actual analysis because people are not taking these ideas seriously outside of these communities. What people don’t seem to realize is these communities are becoming bigger and more influential. So even though their ideas are sort of prima facie ridiculous, we have to engage with them because they are gaining more power. Fundamentally, that’s where the impulse for the book came from.

Ketcham: So what drives this zealous acceptance by the technocrats of what you describe as prima facie ridiculous ideas?

Becker: Because it provides all kinds of excuses for them to do and say the things that they already want to do and say, and that makes these ideas really appealing and compelling and persuasive for them. Because that’s the way human psychology works. If something provides an excuse for you to do a thing you want to do anyway, it increases the chances that you genuinely believe it, because it’s so convenient to believe it. It makes the world simple. It provides a sense of direction and meaning. It lets them see themselves as the hero of the story of humanity, that they’re going to save us by taking us all to space and letting us live forever with an AI god. They’re going to be the people who usher in a permanent paradise for humanity. What could be more important than that? And of course, all of that’s nonsense. But it’s like if somebody came down out of the sky and said, you are the chosen one, you are Luke fucking Skywalker, here’s your lightsaber, all you have to do is believe everything that I tell you and you will be seen as a hero. Anybody who told you that would be lying. You and I are used to thinking critically, but tech billionaires don’t think that way. They’re not really in the habit of thinking at all. They don’t have to, because thought is something that requires a lot of effort and critical self-examination. And if you have every need you could ever possibly have taken care of, and the only thing left is this fearful pissing contest of who has the most billions of dollars, then why would you stop to question yourself? There’s no reason to, and everybody around you is going to tell you that everything you’re doing is right, because you’re a billionaire. You surround yourself with sycophants.

Ketcham: What you describe is, of course, a religion, in that it provides all the various salutary, mentally assuaging elements of religion – meaning, purpose, direction, a god of sorts.

Becker: And it even provides, in some cases, a kind of community.

Ketcham: Right. Not an unimportant thing. Let’s talk about the religion of technological salvation. The religion long predates this movement, no? You could almost go back to the Cartesian vision of the world, Enlightenment science, this idea that science and knowledge will lead to the ultimate perfection of the world. Tell me how the current iteration of the religion of tech salvation fits into the history of industrial society.

Becker: That’s a really good question. But I want to be clear. I think science is great. And I think that it is true that science has brought about really amazing things. It’s also brought about horrors. It gave us vaccines, but it also gave us thermonuclear weapons. And I think that that’s about the scale, right? Vaccines are arguably the best thing that science has ever done. And thermonuclear weapons are, I think, pretty indisputably the worst thing that science has ever enabled. But science is ultimately a tool. And just like the rest of technology, it’s a tool that is subject to human choice and contingency. What scientific truths we discover, that’s not really up to us. What technology we build off of the scientific advances that we’ve made, that is up to us. Technology is not preset on some sort of rails, or like a tech tree out of a video game. And so that means that there is no inevitable future of technology. Technology can enable us to do things that we previously couldn’t, but which things it enables is determined by a combination of the constraints placed on us by nature and human choice. The narrative that technology will inevitably lead us to a utopia or inevitably lead us to apocalypse – these are just stories that we tell. The idea that it will lead to a utopia, as you said, is an old one. The specific version of this ideology of technological salvation espoused by the tech oligarchs, their kept intellectuals, and the subcultures that they fund is ultimately something that springs from a mix of early to mid-20th century science fiction and various Christian apocalyptic movements. Because there’s a fairly long history in Christian apocalyptic movements of the idea that technology will bring about the kind of utopia and second coming that you find in Christian apocalyptic writing.

One of the words I learned in the course of doing this book was soteriology, which is the study of doctrines of salvation. There’s a long history of that, going back at least as far as the Russian cosmism concept and then Teilhard de Chardin, as I talk about in the book. And then you’ve also got the technocracy movement, which Elon Musk’s grandfather was involved in, which is a sort of fascist technological pipe dream. The idea behind the technocracy movement was that only the people who build technology actually understand what it’s doing. And technology is what’s going to determine the future. So only the people who build the technology can be allowed to run society. Laying it out that way, it sounds awfully familiar. That sounds a lot like Marc Andreessen and his techno-optimist manifesto. And indeed, in that manifesto, he harks back to Marinetti’s Futurist Manifesto, which is itself a forerunner to the Fascist Manifesto, also co-written by Marinetti. All of this stuff has these early echoes, and you also see it in, like I said, early to mid-20th century science fiction, because they pulled a lot of their ideas from those same places. The idea is that the ideal future is one run by engineers. There was this inevitable march of progress that would make the world a kind of utopia on the backs of space colonization and artificial intelligence. These ideas are all over golden age science fiction.

Ketcham: So why should we beware the rule of engineers?

Becker: Because there’s no democratic accountability. And also, because engineers often suffer from engineer’s disease, which has a couple of different definitions. First, there’s a tendency to just ignore the humanities and ignore anything that’s not in the narrow domain of STEM as fundamentally not important. But engineer’s disease really boils down to the idea that if you are an expert in one technical domain and know how to solve one kind of very difficult problem, that makes you an expert in every domain because you know how to solve all kinds of difficult problems. And that’s just not how the world works. There is not a hierarchy of which problems are the hardest and which domains are the most difficult. And domain expertise is not generally transferable. If you are really, really good at, say, string theory, that does not mean that you’re going to be really, really good at, oh, geopolitics. Or even if we pick another technical discipline, it doesn’t mean that you’re going to be really, really good at, say, computer science or genetics or genomics or ecology or whatever. It’s not like Albert Einstein would have been the world’s greatest psychotherapist if he had just gone into Freudian psychology rather than theoretical physics.

Expertise is not transferable like that. It’s not innate. And this is especially pernicious when talking about computer science in particular. In software engineering, the problems that you learn to solve are fundamentally human problems, because the systems that you’re working with and within are designed and built by humans. There’s a legible logic to them because the systems were designed – and designed such that questions that humans would ask of them would have answers. The problems that show up in software engineering generally do have answers. And yes, some of the problems that you run into in software engineering involve making those systems work with the natural world. And that can be hard, but ultimately you are dealing with a human-built system. An artificial human-built system – not even an evolved human system like natural language, but an artificial language: computer programming. Contrast this with problems in fundamental physics, sociology, linguistics, political science, biology. These are all systems that in one way or another are natural. Even human language, like I said, natural human language evolved. It wasn’t designed. Other systems are even less designed. Nobody designed the ribosomes in your cells. Nobody designed the solar system. The world that we live in is an aggressively non-human world, and the logic that underlies it is not a human logic.

Ketcham: Yeah, I noticed throughout your book a kind of implicit critique of the blinkeredness of anthropocentrism. This idea that humans are the center of the world, that everything we invent in the technosphere that comes flying out of our minds and our opposable thumbs is somehow the defining fact of the universe. The example of Ray Kurzweil seems to take this anthropocentric blindness to absurd heights. He talks about “replicator” bots dispersed throughout the universe to transform all matter into a gigantic computer into which our brains will be uploaded. This on its face is a fantasy, as you note, out of the most extreme idealistic visions of science fiction. How do we take someone like Kurzweil seriously when he proposes something manifestly outside the realm of the physically possible, not based in any known science today?

Becker: In a perfect world, we wouldn’t have to take it seriously. But the problem is that there are powerful and influential people who do take these ideas seriously. Marc Andreessen, one of the most powerful and wealthy people alive, he takes Kurzweil’s ideas very seriously. And unfortunately, that means that we have to take those ideas seriously in order to tear them apart. We have to say, that’s ridiculous, and here’s a detailed explanation of why – you know, in small words, so that you, Marc, can understand it. Not that I harbor any hope that Marc Andreessen will understand and absorb the lessons of my book. I don’t think that he’s capable of that kind of honest, critical self-reflection. Prove me wrong, Marc! Look, these ideas, as I said, are very seductive to very powerful people. They claim that the ideas come from science. But they don’t come from science. Where do they come from? And the answer is they come from science fiction and Christian apocalyptic movements. If you have any familiarity with history, politics, sociology, religious studies, it’s pretty obvious that ideas like Kurzweil’s are echoes of other things. There’s this lovely book called “God, Human, Animal, Machine,” which I reference at the end of my book, which does a nice job of laying out the connections between ideas like Kurzweil’s and those of Christian eschatological movements. These people are very dismissive of analyses like that. And this goes back to your first question, why did I write this? I said, oh, okay, let me do an analysis in a language that they will understand.

Ketcham: I’ve noticed a phrase you like to use: That’s not how the world works. These people, it seems, are divorced from the reality of the world.

Becker: Yeah, they are. They have completely misunderstood how the world works, how science works, how people work. I know I keep hammering away at Andreessen, because he’s my least favorite person in the entire book. He says in that unhinged manifesto of his that he is the keeper of the true scientific method, contrasting himself with academic scientists. Well, buddy, first of all, you wouldn’t need to say it so loud if it were true. And second, the real scientific method is not to have a statement of beliefs about what the world is and how it works, or what the inevitable future of technology is. The real scientific method is to be curious and questioning about the world and be open, constantly open, to the possibility that you’re wrong – in fact, expecting that you’re wrong. And that’s not something these people are capable of.

Ketcham: Inherent throughout this movement is techno-authoritarianism. You mentioned why we should beware the rule of the engineers: there’s no democratic accountability. Talk about that a little bit and tie that in with Andreessen’s embrace of the Italian fascist techno-enthusiast Filippo Tommaso Marinetti – Mussolini’s favorite philosopher.

Becker: So, if you think that the future of technology is the only thing that matters because you suffer from engineer’s disease and that nothing else is really important, and if you think that the future of technology is predetermined on rails, that there’s something inevitable about it because you subscribe to this ideology of technological salvation, then you’re going to think that anybody who doesn’t see the world that way is, you know, irrelevant and in the way of the future, either the glorious future or the apocalyptic future that you are trying to avert. Either way, they’re in the way. And if it is a utopia that you are certain is coming or an apocalypse that you are certain you need to avert, it doesn’t matter how many people tell you you’re wrong or how many people try to stop you. The best thing that you can do is to amass as much power as possible to bring that utopia about or avert the apocalypse. And so, democracy is, you know, just the rule of the uninformed, right? And since they don’t know the secret knowledge that has been vouchsafed to you of what the future holds, they can’t be trusted.

Ketcham: Democracy is an inconvenience to be swatted away. The technological priesthood requires an authoritarian apparatus.

Becker: Exactly. So, this is where you get people like Curtis Yarvin, who is an excellent example of engineer’s disease. Here’s a guy who thinks that he understands how the world works. He doesn’t know anything. His analysis of all of the different texts that he supposedly draws upon in the construction of his blitheringly incoherent philosophy shows he’s just bad at reading and bad at understanding. And like a lot of these guys, just bad at thinking.

Ketcham: Give us a quick take on Yarvin, his worldview, what he represents as part of this movement.

Becker: Curtis Yarvin, a favorite court philosopher of J.D. Vance, is a software engineer who got funded by Peter Thiel – he’s tight with Thiel – and a monarchist. He wants kings, and he wants the monarch to be essentially a tech CEO, because those are the people who actually understand things, according to him. He thinks that the world does not work without a king, that society does not work without a king, and that all of the problems of the world today are proof that we need kings. He says that the problem with society today, and this is a direct quote, is chronic kinglessness, and that democracy is a disease that needs to be stamped out. This guy has also come right to the edge of defending slavery and has certainly said way more positive things about slavery and apartheid than could ever be warranted. He believes there are inherent genetic differences in intelligence and other aptitudes between different races of humans, an idea that has been proposed and dismissed over and over and over again because there is overwhelming evidence that it’s not true. He thinks people like him are better at genetics and genomics than geneticists and genomicists. Again, engineer’s disease.

Ketcham: The engineers must by all means realize their future utopia, and one of the versions of utopia as envisioned by Bezos and Musk is space colonization. You show very clearly that this is, again, divorced from reality. How much of a fantasy are we talking about here? And why go to Mars in the first place? As I told a friend recently talking about this, there’s no wine on Mars, no women, no wildflowers, no running water, no air.

Becker: To pick on another guy who absolutely deserves it, Elon Musk has been very consistent about the vision he has for Mars and the justification for it. He says that we need to become an interplanetary and interstellar species to preserve the light of consciousness, and that specifically what we need to do is go to Mars. His plan is to have a million people living on Mars by 2050 in order to form a self-sufficient colony that will survive even if the rockets from Earth stop coming, as a backup for humanity in the event of a massive disaster here on Earth. This is one of the stupidest ideas that I’ve ever heard in my life. It doesn’t work for so many different reasons. Mars is absolutely terrible. The radiation levels are too high. The gravity is too low. Yes, there’s no air! The dirt is made of poison. It’s a horrible place. Musk talks about wanting to terraform it by nuking the polar ice caps to build a bigger atmosphere. That’s not going to work. It wouldn’t produce enough of an atmosphere to allow for human habitation. It wouldn’t solve the radiation and gravity problems. It wouldn’t solve the toxic dirt problems. Musk talks about Mars as a refuge in the event of an asteroid strike here on Earth. More asteroids strike Mars than Earth. And Earth, even after an asteroid strike like the one that killed off the dinosaurs, was still a nicer place than Mars. We know that because mammals survived, whereas no mammal could survive unprotected on the surface of Mars. And think about what getting a million people to Mars would require. Say that you could somehow cram a hundred people into a single spaceship, into a single rocket. That is more than ten times the number of people that have ever gone up in a single space mission. And also, of course, that mission that sent eight people up, that was just to Earth orbit. They could get back to the ground in a couple of hours. And it took less than a couple of hours for them to even get to Earth orbit in the first place. A mission to Mars takes six to nine months minimum.

Ketcham: And the radiation that would accumulate or that would be absorbed by the passengers during that period, would it not then result in terrible cancers over time?

Becker: Yeah, it would massively increase the cancer risk. It would probably sterilize some of the people on board or at least make it much harder for them to have kids.

Ketcham: Hopefully Bezos and Musk?

Becker: Well, a little too late for Musk. Put all that aside, say that you could get a hundred people in a rocket, say that you somehow could keep them alive for six to nine months of the journey from Earth to Mars. And yeah, okay, some future rocket technology could cut that number down – but by 2050? No, that’s not happening. But say you have those hundred people on each rocket somehow. You want to put a million people on Mars? That’s 10,000 launches with a hundred people each. That’s how many you’d need. And Musk has said, yeah, that’s right. We’re going to have to launch a hundred people a day for years. Buddy, your rockets explode all the time. You know, the failure rate of crewed launches over the history of human spaceflight is something between one percent and five percent. Say that it was 0.1 percent. Say that somehow SpaceX, rather than having a terrible safety record, suddenly had the best space safety record by a factor of 10 or more. Well, 0.1 percent of launches for a million people – let’s see, how many would that be? That would be ten failed launches out of 10,000. So congratulations, you killed a thousand people.
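
[A note on the arithmetic: Becker’s back-of-the-envelope numbers check out. Here is a minimal sketch in Python using only the hypothetical figures he gives above – a hundred passengers per rocket, a million colonists, and an assumed 0.1 percent loss rate, which is his generous hypothetical rather than any real launch statistic:]

```python
# A quick check of the launch arithmetic above, using only the hypothetical
# figures given in the interview: 100 passengers per rocket, a million
# colonists, and a 0.1 percent loss rate (far better than the historical 1-5%).
colonists = 1_000_000
passengers_per_launch = 100
loss_rate = 0.001  # 0.1 percent: the generous hypothetical, not a real statistic

launches_needed = colonists // passengers_per_launch   # 10,000 launches
launches_lost = launches_needed * loss_rate            # about 10 launches
people_killed = launches_lost * passengers_per_launch  # about 1,000 people

print(launches_needed, launches_lost, people_killed)   # 10000 10.0 1000.0
```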

Ketcham: All for the greater good of realizing utopia. So you’ve arrived on Mars, you’re living presumably in an underground community. You never see the sky. It sounds like a nightmare. It sounds like a place for people to go insane.

Becker: It’s hellish. Look, it’s hard enough to find people who want to winter over at the South Pole. If you winter over at the South Pole, you still get to go outside. You still get to see the sky. It’s cold out there. You don’t go outside for long, but they do it. You can’t leave the polar station in the winter because you can’t get planes or helicopters in or out because of the weather and the darkness. You’re stuck there with people. But there’s oxygen to breathe. There’s enough air that you don’t need to have the air piped in. All you need is to have food and to be able to stand staying there with the same people for upwards of six months. And it’s still so psychologically brutal that very few people are willing to do it. I think a lot of people think that they could do it, but they actually can’t because it requires a very particular psych profile. That is a walk in the park compared to Mars. In my book I write that Mars would make Antarctica look like Tahiti. Someone, I don’t remember who, told me that’s actually inaccurate, an understatement. Compared to Mars, being at the South Pole Station in the middle of the polar night is like being in Central Park surrounded by people on a gorgeous summer’s day. It is like sitting in the lap of luxury compared to anything that you could have on Mars at any reasonable point in the future.

Ketcham: You make abundantly clear in the book that this vision of space colonization is really in service of perpetuating growth and substantiating or rationalizing the ideology of growthism. And I commend you for bringing up the 1972 Club of Rome-MIT report, Limits to Growth, because that’s a pivotal 20th century document that has been forgotten by too many people who should know better. You quote Musk and Bezos both saying that if we don’t colonize the solar system and beyond, we will stagnate because the planet is limited in its resources. The ecosphere is finite. The technosphere will not be able to function limitlessly on a finite resource base. What we’re talking about here is this idea that, oh, we’ll go to space, ergo no need to impose any limits now. We can continue business as usual.

Becker: That’s right. One of the things that I actually like about what Bezos said is that he was so explicit and specific. In the same way, I kind of like what Musk said about Mars, precisely because he pinned it down so carefully and it was so easy to tear apart. Because it’s just a delusion. Bezos did something very similar. And you know, Bezos has made fun of Musk, which I kind of love as well – the two of them sniping at each other, right? I love it when they fight each other! Bezos has made fun of Musk for good reason. He said, Musk wants to go to Mars, but Mars sucks. Mars does suck! Instead, Bezos has this idea of giant space stations that he pulled from Gerard O’Neill. Bezos has also been very, very clear that what he wants is continued growth in energy usage per capita – forever. He specifies growth in energy usage per capita. And he’s been very clear that the reason he thinks we need to go to space is so we can have that energy usage per capita continue to grow indefinitely, because resources on Earth – and he is correct on this – would run out within a couple of hundred years.

The problem, of course, is that if you go to space, you still run out, putting aside problems such as it’s hard to live in space. And we might run out of resources a lot sooner than that, because the theoretical limit may be well above the actual practical limit. In other words, there are inherent problems with this ideology of growth that comes along with technological salvation. You still don’t get out of having limited resources by going out into space. Because if you have energy usage continue to grow at the same rate that it has for the past couple hundred years, then in like a thousand years, you’re using all the energy output of the Sun. And then a couple thousand years after that, you’re using the entire energy output of all of the stars in the entire observable universe. A couple hundred years after that, you’re just using all of the energy in the universe. Of course, none of that is possible, if for no other reason than the fact that the speed of light is limited. That is, you can’t get to all of those places in that amount of time. And the laws of physics are not gonna come in and save you, you know? We’re not getting around that speed of light limit. It’s not happening.
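
[To make the scale of this concrete, here is a rough sketch of the extrapolation Becker describes, using conventional ballpark figures that are assumptions for illustration rather than numbers from the book: roughly 18 terawatts of current global energy use, about 2.3 percent annual growth, the Sun’s output of about 3.8×10^26 watts, and a very rough 10^49 watts for the combined output of the stars in the observable universe. The timescales it gives land in the same neighborhood he cites:]

```python
import math

# Ballpark figures, assumed for illustration (not taken from the book):
current_use_watts = 1.8e13     # ~18 TW, roughly today's global energy use
annual_growth = 0.023          # ~2.3 percent per year, roughly 10x per century
sun_output_watts = 3.8e26      # total luminosity of the Sun
observable_stars_watts = 1e49  # very rough total starlight in the observable universe

def years_until(target_watts: float) -> float:
    """Years of compound growth before usage reaches the target."""
    return math.log(target_watts / current_use_watts) / math.log(1 + annual_growth)

print(round(years_until(sun_output_watts)))        # ~1,350 years: the Sun's entire output
print(round(years_until(observable_stars_watts)))  # ~3,600 years: every star we can see
```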

Ketcham: One of the big themes of the book is the all-too-human fear of the ultimate limit, which is death. These people are terrified of death. Things come to an end, people die. And that’s it, that’s the nature of things. It’s how the world works, as you would say. One of the most moving parts of the book was Kurzweil and his ridiculous Dad Bot. Now I mentioned that my father died recently. And you know, I was joking about uploading his consciousness into a computer, obviously nonsense – but you write that Kurzweil really believes it. I read that and I thought, oh boy, this guy’s got some psychological illness.

Becker: There’s nothing wrong with being afraid of death. There’s a problem with denying that it’s gonna happen and letting that fear override and control your entire life. Like, you know, live a little, have a life! If you let your life be controlled by fear, you’re not really living. That’s hardly an original thought. I’m pretty sure I just quoted a platitude that shows up in at least half a dozen tearjerker Hollywood movies just from the nineties alone, right? But these guys don’t get it. They are not able to understand that all the money and power in the world can insulate you from a lot of things, but it can’t insulate you from death. And technology can do a lot of things, and science can discover a lot of things, but it cannot prevent or reverse death. And in fact, the better that we understand science, the better that we understand, you know, thermodynamics, chemistry, biochemistry, biology, the better we understand exactly why death is both inescapable and irreversible. The other thing our best science tells us is that we don’t haunt or inhabit our bodies. We are our bodies.

Ketcham: When the body dies, we die. We cannot be separated from the physical environment in which the brain is operating at any particular moment. Hence the absurdity of the notion of uploading consciousness into a computer. Again, it goes back to Cartesianism, right? This artificial separation of mind and body?

Becker: It doesn’t just go back to Descartes. Descartes is sort of the origin of that idea in the history of rational Western philosophy, or empiricist Western philosophy, but it’s not actually the origin. It goes back to the idea of a soul, an ancient idea across many different human cultures. That there is something immaterial that inhabits the material. If you want to have a worldview based on science, that’s not what’s going on. There’s no good evidence that mind uploading can be accomplished. That’s not happening! And even if it turns out it can be done at some point in time, first of all, there’s a great debate in philosophy about whether that would be you or a copy of you. And second, there are many reasons to think that the technology to do that is not computer technology. The computational analogy for how the brain works is just that. It’s an analogy. And it’s the latest in a long line of technological analogies for how the brain works.

Ketcham: But how the brain works remains – and you remark on this in the book – mostly a mystery. We don’t know what’s going on in there, really.

Becker: No, there’s a lot that we don’t know. We do know that it’s not very much like a computer. Computers are designed and built. Brains are not designed by anybody. Brains are things that evolved in response to a long history in the world. And that’s not to say that you can’t ever get a technological artifact to think and do the things that humans think and do. But we wouldn’t call that intelligence in the same sense as brain intelligence.

Ketcham: What about the hype around artificial intelligence, and what we’re now calling artificial general intelligence?

Becker: The term AI is one of these terms that has kind of deflated over time in meaning. It’s gotten to mean less and less, which is where this term AGI came in. When I was a kid, AI was what we used to describe Commander Data on Star Trek. Now we’d call that AGI, artificial general intelligence, and AI is instead the same thing we were calling machine learning a few years ago. And then before that, we were calling it data science. And before that, we were calling it statistics. Which is not to say that deep learning models like LLMs are doing anything other than statistics: fundamentally, they are doing a lot of linear algebra to find patterns in large data sets and then reproduce those patterns in the new data that they generate. They are trying to predict the next word or, in the case of images, the next pixel. They can do some interesting tricks and actually, you know, do things that we’ve not seen computers do before. But fundamentally, they do not have a model of the world. They do not understand how the world works. They do not bear any relationship to truth or falsehood. They only know how to do one thing, which is predict the next word in a way that will sound plausible. But saying that these machines talk to you, that they make mistakes – these are all different ways of anthropomorphizing the machines. And we are so, so, so prone as humans to anthropomorphize everything, especially things that produce language. There’s a long history of this, even with far less impressive and less capable chatbots, all throughout the history of artificial intelligence research. Eliza was a chatbot from the 1960s, a very transparent algorithm that was made to emulate the style of a particular kind of psychotherapist, where you would say, Eliza, I’m not doing so well today. And Eliza would say, Oh, I’m sorry to hear that. Why aren’t you doing so well today? And then you’d say, Well, you know, I’m really upset because I got into a fight with my partner. And Eliza would say, Oh, I’m really sorry to hear that. Why did you get into a fight with your partner? It had maybe 10 or a couple dozen different canned phrases with blanks. But people said at the time, I talked to Eliza and she really understood me and helped me. But that machine is not thinking.
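
[For readers curious what “canned phrases with blanks” looks like in practice, here is a minimal, hypothetical Eliza-style sketch in Python – a handful of regular-expression templates that reflect the user’s own words back, nothing taken from Weizenbaum’s original program, and deliberately nothing resembling understanding:]

```python
import random
import re

# A hypothetical, stripped-down Eliza-style responder: a few regex templates
# ("canned phrases with blanks"), no memory, no model of the world.
RULES = [
    (r"i'?m (?:really )?(?:not )?(.+)",
     ["Why do you say you're {0}?", "How long have you been {0}?"]),
    (r"i got into a fight with (.+)",
     ["I'm sorry to hear that. Why did you get into a fight with {0}?"]),
    (r"because (.+)",
     ["Is that the real reason?"]),
]
FALLBACK = ["Please, go on.", "Tell me more about that."]

def eliza_reply(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Fill the blank in a canned phrase with the user's own words.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACK)

print(eliza_reply("I'm not doing so well today"))
# e.g. "Why do you say you're doing so well today?" -- pure string matching,
# which is why it even mangles the negation: nothing here understands anything.
```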

Ketcham: Sounds like a conversation Ray Kurzweil has with his Dad Bot. Let’s conclude with the group of people who are, to me, the most offensive protagonists in your book, the “kept intellectuals,” as you describe them, who provide the ideological underpinnings used to justify all these absurd, sometimes nonsensical claims about our glorious tech future. I’m talking about the utilitarian ethicists Toby Ord, William MacAskill, and Nick Bostrom, all of Oxford University and all central figures in the philosophical school called longtermism.

Becker: They have an approach to ethics that, again, fundamentally misunderstands how the world works. They believe that everything can, at least in principle, be quantified.

Ketcham: Happiness can be quantified, as in all utilitarian ethics.

Becker: Happiness can be quantified, the goodness or badness of a world for people can be quantified.

Ketcham: Do you think that’s another expression of engineer’s disease?

Becker: I think so, yeah. There’s this notion that quantification can lead to figuring out what the right answer is for what to do in any given situation. Well, there’s a reason why ethics is hard. And the reason is that the world is a complicated and messy place we did not make. There are all sorts of situations that arise in which it’s not clear what the right thing to do is. Utilitarianism has been around for hundreds of years. Variants on the idea have been around for thousands of years. And part of the reason why it’s not taken seriously by a lot of philosophers is precisely because in order to really be a utilitarian, you need to believe that it is possible in principle, if not in practice, to quantify all the different kinds of human happiness and suffering and collapse them into a single number, positive or negative, across human experience – not just in one life, but in every life. And when these guys turn to longtermism, they say you have to be able to do this over the course of any human life that could ever come to pass over the future history of the universe. That’s absurd. And worse, it leads them to absurd conclusions. Toby Ord says that there’s some number of future people for whom ensuring their happiness would be worth torturing and killing a million people. And he would say that the number of future people is truly astronomical. I don’t think that any such number of people exists, even in principle.

Ketcham: Do you think that these guys know better, but are being paid to present a philosophy that pleases tech billionaires?

Becker: I think they genuinely believe it. This is the nicest thing I can say about them. Of all the people in the book, I am fully convinced that those guys – Ord, MacAskill, Bostrom – are one hundred percent true believers. I definitely think their views have been influenced by the fact that they’re being paid by tech billionaires, but not in a cynical way. They probably have not really examined how that might have subtly or unconsciously influenced them by changing where their biases lie. And again, if earnestly believing something makes it easier for you to get paid, it’s gonna make it more likely that you earnestly believe it rather than less likely.

And note that they also tend to dismiss expertise, which likely issues from unexamined biases that come from getting money from tech billionaires who dismiss expertise. To take Toby Ord again: he said that in his estimation, the chance of human extinction or irrecoverable civilizational collapse from artificial general intelligence – a machine that is ill-defined, that nobody knows how to build, and that doesn’t exist – is greater than the threat posed by climate change and nuclear war by a factor of 50. That’s nuts. It’s a deeply irresponsible, ill-informed thing for someone with the platform and cultural power of an Oxford philosophy professor to say. I mean, this guy advises the UK parliament on AI!

Ketcham: Irresponsible, ill-informed – but well paid.

Becker: Yes. And a true believer.
