In Ten Musings on AI Risks, William broaches both pragmatic subjects such as risk mitigation and deeper moral questions. In his (and my) opinion, rather than artificially slowing down the progress of artificial intelligence research, we should invest in implementing better safety measures.
He delves into considerations like: how we treat AI will serve as a model for how AI treats us. If we don’t show empathy and ethical behaviour towards an actually intelligent and perhaps even sentient AI right from the start, that AI might not display a whole lot of ethical behaviour and empathy towards us either.
He also brings up the interesting question of whether we should limit our AI risk mitigation efforts to rule-based frameworks and reason alone, or whether we should take emotions and intent into account as well. After all, emotions not only inform our decision-making process, they also serve as a shortcut — a heuristic categorisation mechanism that allows us to quickly make sense of the world around us based on previous experience. There’s no reason why an AGI shouldn’t develop similar heuristics.
Likewise, it’s instrumental to account for intent rather than just drawing up rules an AI must abide by: something can be legal yet still unethical or dangerous. It’s that capacity for moral consideration we need to instil into AI as well if we want to make it — and keep it — beneficial to humanity.
Anyway, William’s article is an intriguing and thought-provoking piece, very much worth the read.