(FEE) – I just finished listening to Stephen Wolfram’s three-part podcast titled “The Future of AI and Civilization.” If you have any interest in Artificial Intelligence and are not familiar with Wolfram’s work, you should look into it. Genius polymath, creator of Mathematica, Wolfram|Alpha and the Wolfram Language, the author of A New Kind of Science, and the founder and CEO of Wolfram Research, Wolfram is probably the smartest guy I’ve ever met.
The Slow Seduction
What makes his stream-of-consciousness podcast so fascinating is listening to Wolfram think out loud about where technology might take us in the years ahead. Most of his speculation takes the form of “if this-and-such a proposition is true, then it follows that the consequences will be that-and-such.” For example, if it is true that we will one day be able to upload our minds into the cloud, then the civilization of the future will comprise a trillion disembodied souls in a box playing video games for eternity.
To me, this is a description of hell and not heaven, though I don’t spend a second worrying about it. I find the assumption that we will be able to separate our minds from our bodies and upload them into a computer to be totally preposterous given that we know about as much about how the brain works as cavemen knew about how the sun works. (Where does our mind go at night, and why?)
Whether or not spending eternity playing video games with a trillion disembodied souls is your cup of tea, one thing Wolfram points out is that no one is going to be forced to climb into that box. Only governments are in a position to use AI coupled with total surveillance to coercively control their populations. We can stand back and watch China give that a shot, but it’s unlikely to happen here. Rather, we are going to be incrementally seduced into the box by increasingly sophisticated AIs striving to win our favor by doing more and more stuff for us.
This goes beyond the hundreds of millions of hours that Facebook’s algorithms have already seduced people into devoting to cat videos and never-Trump rants, all so we can be rewarded like Pavlov’s dogs. (Have I reached a million likes yet? I must be special.) Think about that sweet voice giving you driving directions. She doesn’t command you to take a right turn here and a left turn there. But as she gets smarter about things like traffic jams and speed traps, we get more inclined to do what she says without second-guessing.
The same thing will happen to everything from medical advice to legal counsel. Wolfram points out that it’s just a matter of time before anything that requires “repeat judgment” will be done better, faster, and cheaper by an AI. He postulates that a new Symbolic Discourse Language will emerge that will allow AIs and humans to communicate more precisely. This way AIs will have a better idea of what it is we think we want, so they can compete to give it to us. And compete they will as long as Big Brother doesn’t grab the reins and herd us into a gulag. (See China, above.)
Of course, Wolfram doesn’t answer the question of what AIs will do when people demand mutually exclusive things: give me security and freedom, or equality and innovation, or immortality and a reason to live. But perhaps a really clever AI will feed enough political campaign literature into a neural network to discover how to successfully promise impossible things it will never be held accountable for, all paid for with other people’s money. Then we will really know that AI technology has arrived.
Bill Frezza is a fellow at the Competitive Enterprise Institute.
This article was originally published on FEE.org. Read the original article.