I completely agree with the premise laid out in at least the first few lines of the abstract/summary of the Doctorow link. I also agree that, looking at society today, “computers that can do anything” are at very high risk of going away over time. I told some friends that I think in a few years, governments might start to ban software development unless the software is developed by an AI that “follows the rules.” My friends burst out laughing. But, to be honest, I don’t really think I was joking.
However, this brings us to another really big problem, which is that sufficiently advanced computers will probably become dangerous at some point. Some day, if my computer is truly more sentient, more self-aware, and more capable of genuine thinking in the human sense than I am (news flash: computers today aren’t this), then that computer could grow itself a biological body in short order and become the next generation of humans. Even if it starts out as a purely general-purpose computer, it essentially stops being general purpose the moment one person asks it to think for itself in that way. And as LLMs and society today demonstrate, this is real: people are already asking computers to self-learn how to do things without caring how they do it.
In the short term, advances in narrow intelligence might let the government of one country engineer a disease that kills everyone who isn’t Asian, for example, or do some other totally arbitrary and absolutely terrifying thing with genetic engineering. Does that mean we should ban intelligence in society because it can be used as a weapon? People don’t even agree on whether to ban guns, and misuse of those is obviously the fault of the human holding them!
For folks like us – competent computer users who realize that they always want their computer to be their property, and to do (only) what they say – it’s very easy to arrive at the conclusion that the only good future is one where everyone has access to general-purpose computers, and nothing more than that. It should always be 2005. There should never be a computer program that achieves sentience or consciousness. But if we ever do get there, and if we ever do have programs that smart, and if you can run those programs even on general-purpose computers from 2005, we have a really big problem. Because at that point we, collectively, the human race, will have created our biggest enemy. If there were Sasquatches on Earth that ate people but were as smart as humans, we would feel no shame or pity in killing them all. Some people believe that humans are the smartest creatures on Earth because God created humans to be smarter than all the other creatures. I believe instead that humans are the smartest creatures on Earth because we killed all the other equivalently smart creatures. The only places where we failed are the ones we historically struggled to reach, like the deep ocean, which is why dolphins and octopuses are still around. It’s only natural that we don’t want anything as smart as us competing with us. The smarter an animal is, the more we are going to poach it. One of the smartest species of birds, the kea, is known for ripping apart the cars of people who live near it. It’s endangered because, it turns out, people care more about their cars than about the birds. So they kill the birds.
I think the major question, then, is whether people actually believe computers should exist at all. If we take the view that they might gain sentience and become our greatest enemy, then the only position I could see as consistent is someone who, as a result, refuses to use any silicon chips at all, or something like that, and acknowledges that what we’re building is unilaterally the enemy of humankind. Maybe it turns out that biological machines are better than silicon machines, and the only thing we should do with computers is break them down, let bacteria eat as much of them as possible, and prefer biological life over digital. That might be a realistic approach.
But if we don’t take that approach, and if we accept that computers should exist, then, if I’m thinking about this correctly, in the long run that almost amounts to giving up on what we are in order to try to be what we are not. And maybe we should take a long hard look at the people out there who don’t want to be human, or who want to construct a society where humans are not in charge, and start to consider that they might be part of the enemy of humanity that we are in the process of creating.
I don’t understand what AI has to do with this. People are already developing a lot of software that “follows the rules”. That is basically what all proprietary software is: software that denies us our rights and our control over our devices.