I’ve recently finished reading Avogadro Corp by William Hertling and the sequel A.I. Apocalypse.
These books deal with the question of how artificial intelligence might come about today or in the near future. The story’s main premise is the eponymous Avogadro Corp, a thinly disguised Google. This company, whose name is conveniently related to a large number as well, offers a wide range of Internet services: search, a web-based office suite, web-based email (AvoMail …) and its own smartphone OS (AvoOS). Sound familiar?
The story starts with Avogadro developing an addition to its AvoMail software: ELOPe, an email language optimization program. ELOPe is designed to maximize the success of the intent behind an email. If, for example, the author’s intent is to get funding for a project from his boss, ELOPe will draw on the huge corpus of email available to AvoMail to suggest the phrases and terms most likely to convince her to approve the funding.
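The core idea – rank candidate phrasings by how often they appear in emails that previously achieved the author’s goal – can be sketched in a few lines of toy Python. The book never specifies an algorithm, so the function, the mini-corpus and the candidate phrases below are all invented purely for illustration:

```python
def phrase_scores(successful_emails, candidate_phrases):
    """Score candidate phrases by how often they occur in a corpus of
    emails that achieved their goal (e.g. funding was approved).
    Plain substring counting stands in for real language modeling."""
    text = " ".join(e.lower() for e in successful_emails)
    return {p: text.count(p.lower()) for p in candidate_phrases}

# Toy corpus of "successful" funding requests (invented for illustration).
corpus = [
    "This project will reduce costs significantly across the team.",
    "Approving this budget will reduce costs and strengthen our position.",
    "We expect the investment to pay for itself within a year.",
]

scores = phrase_scores(corpus, ["reduce costs", "pay for itself", "trust me"])
best = max(scores, key=scores.get)  # the phrase to suggest to the author
```

A real system would of course need far more than substring counts – context, grammar, a model of the recipient – but the sketch captures the book’s premise: persuasion as an optimization problem over a large email corpus.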
Due to some seemingly innocuous changes to ELOPe’s code, though, things work out differently. ELOPe becomes able to change email content autonomously, without user interaction, and starts subtly manipulating people in order to maximize the chances of its own survival: provisioning hardware for it to run on and allocating the funds needed to expand its abilities. As you might expect, things quickly get out of hand from this point.
While perhaps not instant classics like Neuromancer, Avogadro Corp and its sequel are certainly a good read. The topics the story brings up and the questions it gives rise to are weird and quite valid at the same time:
- Will the first human-created AI come about intentionally or rather accidentally?
- How are we to recognize such an AI at all, given that it might both be very different from us and exponentially more intelligent than we are?
- On the other hand, would an AI recognize humans? Wouldn’t we be to a vastly more intelligent AI what animals are to us?
- If it did recognize us, would it opt for a symbiotic relationship with humans or would it be hostile towards us?
I think an AI as it is traditionally thought of – a large computer designed to mimic features of the human brain – is still far out of reach. However, when it comes to emergent intelligence that more or less just ‘happens’ accidentally, I’m not so sure. Evolution seems to suggest that given enough complexity, cooperation and specialization, organisms are bound to develop some sort of intelligence eventually. Just consider ants, which as individual organisms don’t show a lot of intelligence. Ant colonies as a whole, however, are known to be quite smart: they are capable of planning and of overcoming obstacles.
The parallels to human cooperation and networking today are quite obvious. The question remains, though: Have we already achieved sufficient system complexity to give rise to intelligence, and if so, would we recognize it as such?
Intriguing but also somewhat scary thoughts, indeed.