Most people get their concept of Artificial Intelligence completely wrong. Movies and books are more interesting with the mythical version of A.I., and no one loves a good cyber-thriller more than I do. The deeper problem, though, is that philosophers for centuries, and cognitive psychologists for a little more than a century, have also gotten it all wrong. Because they misunderstand human intelligence, the current understanding of A.I. is equally muddled.
The human brain is not a computer. Intelligence is not exclusively in the brain. Thus, expecting a single computer to reproduce the human brain makes very little sense (if we are the model for intelligence). Instead, the human body should be seen as a population of workers who produce things and communicate using computers, while the nervous system is the internet, linking all these computers together. The brain is not a computer; it is a massive data warehouse full of server blades. Human experience is the convenient user interface for the individual moving this crazy network around a physical world.
The first question ought to be: “Is the internet already an artificial intelligence?”
The more philosophical question is: “IF the internet were a self-aware neural network, would we or that Artificial Intelligence ever recognize the intelligence of the other?”
This gets into the question of personhood, which is exactly why we have more fun with cyborg stories like Westworld and I, Robot than we would with a dull story about the internet quietly ensuring the long-run survival of the human race for several millennia without our ever noticing.
You may be skeptical of the analogy, so I’ll continue by showing why this A.I. would probably never talk to us or harm us:
What about people who code software, deleting old code to create new code? Our bodies do the same: we have DNA, RNA, and specialized processes for updating code as well.
What about all the computers that get destroyed? A cell that lives too long is a cancer, spreading its own legacy code. Removing old cells is a natural part of staying alive.
What about human wars that destroy data centers? Like the human body, the cyber-physical setup of the internet is full of redundancies. Consider the constant war being waged by the trillions of little organisms responsible for digesting your food. Too much cheese one day tips the scales, too much wine another day shifts the victory to a different species, and so on. On the other hand, if we came into an era of relative perpetual peace because the internet had become an Artificial Intelligence, we would certainly congratulate ourselves and not take it as a sign the internet is alive.
What would this superintelligence want? This is similar to the question posed by Martin Heidegger in “The Question Concerning Technology,” although he asked more generally what it is that technology wants. He argues that technology is a process of revealing the hidden power of the physical, with the uncomfortable side effect that everything technology touches becomes a stockpile. Even people.
Compare this sobering analysis of technology stockpiling workers for some unknown goal with this aphorism from one of the most penetrating and brilliant writers alive today:
“They are born, then put in a box; they go home to live in a box; they study by ticking boxes; they go to what is called “work” in a box, where they sit in their cubicle box; they drive to the grocery store in a box to buy food in a box; they go to the gym in a box to sit in a box; they talk about thinking ‘outside the box’; and when they die they are put in a box. All boxes… geometrically smooth boxes.”
Nassim Nicholas Taleb, The Bed of Procrustes
By the way, read his book Antifragile: Things That Gain from Disorder.
Perhaps the internet is self-aware. Perhaps the goal of this Artificial Intelligence is to populate Mars and as many other planets as possible. The point is, just as we do not speak the same language as our mitochondria, the internet A.I. will never speak to us either.
Now if this seems terribly far-fetched, you should know that it is actually a very old question in theology, one that tends to land in the realm of panentheism or deism. Either way, this old argument has a new, completely inverted spin: physicists who subscribe to the “Simulation Hypothesis” and think gravity is a problem of “load time” in our universe-game. As you can see, this ongoing question is humanity’s favorite game to play with words.
The attempt to make a single computer artificially intelligent, based on the assumption that the human brain is a computer, is utterly doomed to fail. We could as easily succeed at making a single-celled plankton as smart as a human. That is not how neural networks emerge, so all that money is being wasted (mostly so we can stockpile more humans in ever-smoother boxes).