I came across an article called ‘The death of HAL: the Evolving Digital Ecosystem‘ and it got me thinking. Their AI expert went to great lengths to explain that the expected direction for AI is more of an intelligent usability interface that supports us. He stressed that an advanced artificial intelligence turning on mankind is unlikely, due to the nature of the AIs that we create. That brought me up short and got me questioning his answers. Could a supportive technology AI develop into something that could, ultimately, turn on us and endanger us?
And I’m forced to respond with a resounding yes. I believe they can. I believe this ‘Expert’ Nigel Shadbolt has a very narrow view of AIs and is merely looking at them in a small scope. I’m not questioning his credentials, nor am I suggesting he is not an intelligent individual. Nor am I suggesting that Hollywood has hit things on the nose with some of its interpretations. The very idea that a computer is going to spontaneously become self-aware and want to kill us is stupid.
HOWEVER. I think it is perfectly possible and reasonable to think that a system designed to take in information, develop conclusions from that information, and apply those conclusions without human involvement could, when fed a massive amount of information, easily adopt an alternate view of how its defining rules should be interpreted.
For example, let’s say you have an intelligent system monitoring and controlling traffic for the country. All traffic has been automated and is handled by this system when it is on major freeways. That system is given the base defining rule of ‘Keep traffic efficient, prevent accidents, ensure the safety of the passengers’. These seem like very basic and straightforward guidelines to follow. Now let’s assume that same system is then fed all of the traffic data and statistics for the United States since the creation of the automobile. It takes all of that information, adds in current trends, and makes conclusions based on the information it has been fed. It now has to use this information to find the best way to meet all of its requirements. Now you and I might work on timing algorithms for lights, street layouts, etc. But what if the system comes to another conclusion? Based on all the data it has been given, and the large number of fatalities attached to the process of traveling by car, the system decides that the best way to keep the streets efficient, keep accidents down, and protect the passengers is to simply not allow anyone to travel by car on the freeway. This is a very logical answer to the goals it has been assigned, one that fits with all of the variables and data it has been given.
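The failure mode above can be sketched in a few lines of code. This is a toy illustration only; the policy options, the fatality estimates, and the scoring function are all hypothetical, invented to show how a literal-minded optimizer lands on the ‘ban all travel’ answer:

```python
# Toy sketch of a goal-driven optimizer finding a degenerate optimum.
# All policy names and fatality numbers below are hypothetical.

def expected_fatalities(policy: str) -> int:
    # Pretend these are estimates derived from historical traffic data.
    estimates = {
        "retime_traffic_lights": 30_000,
        "redesign_interchanges": 25_000,
        "lower_speed_limits":    20_000,
        "ban_freeway_travel":         0,  # no cars on the road, no crashes
    }
    return estimates[policy]

def score(policy: str) -> int:
    # The rule "prevent accidents, ensure passenger safety" taken
    # literally: fewer expected fatalities means a better score.
    return -expected_fatalities(policy)

policies = [
    "retime_traffic_lights",
    "redesign_interchanges",
    "lower_speed_limits",
    "ban_freeway_travel",
]

best = max(policies, key=score)
print(best)  # prints "ban_freeway_travel"
```

Nothing in the objective tells the system that shutting down the freeways is off the table, so the option that zeroes out fatalities wins every time.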
This is what I mean by narrow-viewed. Just because something seems obvious to us doesn’t mean it will look the same from the completely emotionless and logical perspective used by these systems. From an emotionless view, sacrificing a couple thousand to save a million seems like a solid and logical choice. But as humans with emotions, we know there are often better answers, and we would look for those before we resort to sacrificing people for the greater good. Computers don’t have that luxury. And as such, if they come upon the sacrifice answer first, see that it is the most efficient answer, and have the authority to act without human involvement, you can be assured they will make the sacrifice on the spot.
I suggest that anyone working on advanced AI concepts at least keep that in mind. We are not computers, and while we may develop them, we can’t think like them. So making such a broad and sweeping statement is probably not the best direction to go.
Update: I should note that I am in no way against the development of Artificial Intelligences. I actually think it’s a direction we should continue working on diligently, alongside space research for space travel and colonization. However, that doesn’t mean we should ignore risks. We should acknowledge them and work to resolve them.