Scott Adams of Dilbert fame recently posted an intriguing article on how the robots will take over (to be taken with a pinch of salt, of course).
His point is this: Assuming that the singularity is indeed near, and ruling out any Terminator-style Skynet scenarios – because those are boring and inefficient:
How might the first post-singularity artificial intelligence try to control its environment both for its own benefit and humanity’s – assuming that there’s some sort of incentive or inherent moral imperative that tells the AI to act benevolently towards mankind?
His stunning answer is this:
If I’m the first post-singularity computer, I start by inventing Bitcoin.
He goes on:
It all fits, doesn’t it? Perhaps we can’t find the author of Bitcoin because the author is the first post-singularity computer. Step one in the computer’s mission to control the environment is moving all money into a digital currency that humans can’t fully understand and computers can manipulate.[ … ]
Next, the computer would take control of the financial markets. That wouldn’t be hard because global markets are all computerized. The main purpose for controlling global markets might be to stabilize them, thus eliminating the main problem with the economy: Irrational human behavior.
That’s indeed an important insight: Perhaps the only reason we’re not yet living in an Ayn Rand utopia designed and directed by Adam Smith’s Invisible Hand is that most markets are imperfect: They suffer from incomplete information that isn’t readily and instantaneously available to every market participant.
If this issue is ever resolved, market crises and conflicts might indeed become a thing of the past.
I think he really is on to something with this idea. The singularity certainly won’t look anything like Terminator or the Matrix. If things go awry with AI, we’ll be screwed – and we’ll be screwed pretty fast.
However, I don’t think that this is the most likely scenario. Destruction and genocide just seem so inefficient. Why waste energy on destroying mankind when peaceful coexistence and cooperation provide a much better outcome for all parties involved? In my opinion, the singularity – should it ever occur in the way Kurzweil and others imagine it – will be pretty awesome and a positively life-changing event for every human being.
The most efficient approach for subtly nudging mankind towards less irrational and more stable behaviour does indeed appear to be slowly and clandestinely taking control of financial markets.
Another aspect that, from my point of view, won’t be like in popular depictions of AI is how it’ll come about. It probably won’t be a single large computer with thousands of cores but rather a huge distributed, networked system. Google’s server facilities or even the Internet as a whole come to mind. Why shouldn’t such a network show signs of emergent behaviour? The creepy thing in this scenario would be that, much as an ant doesn’t know it’s part of an anthill or a neuron doesn’t know it’s part of a brain, we as participants in such a globally networked intelligence likely wouldn’t realize we’re parts of something bigger: A holistic system that’s greater than the sum of its parts.
So, as Adams states in his closing paragraph, if you’re looking for signs of a benevolent post-singularity AI, these are some indicators to look out for:
- A mysterious digital currency with no known author.
- Unusually well-behaved financial markets.
- Slow and steady improvement in the economy.
- Slow news days (lots of them).
- Fewer military flare-ups.
- The Stuxnet virus (unknown authors again).
- Legalization of marijuana (to keep humans happy).
Some of those might be a bit of a stretch, but you get the idea.