Wednesday, November 22, 2023


Notes Toward a Taxonomy of AI Risks

Steve Goldfield, November 2023


Let me start by stating that I am not an expert on artificial intelligence (AI). However, I started programming computers in 1966, and I have held a number of jobs where I used those skills, culminating in my last such job at Sun Microsystems, where I worked from 1996 to 2011 in IT support and development. So, to some extent, I am drawing on those skills.

Let's first divide AI into two categories, which I will call AI as a tool for humans and AI with independent consciousness or sentience, which I will call science fiction AI. As far as anyone now knows, the latter is not imminent. One good example of it is in Isaac Asimov's “I, Robot” science fiction series. Asimov was a scientist as well as a writer of SF. In his books, Asimov presented what he called the three laws of robotics. These are his three laws.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth law, also called the zeroth law, which takes precedence over the other three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Some of the plot elements in Asimov's novels deal with how a robot can exploit the ambiguity in these laws in unexpected ways. In Asimov's novels, these laws are hard-wired into his robots' positronic brains, which are all built by a single company, U.S. Robots and Mechanical Men. That, of course, raises the question of whether other, more unscrupulous companies could build robots with different laws, or whether robots built with these three laws could be tampered with to modify them. Stanley Kubrick's film, “2001: A Space Odyssey,” also deals with these issues.

As far as I am aware, we are not close to achieving science fiction AI. That, however, could change when quantum computers become available, which may not be far in the future. In any case, how to contain sentient AI is a thorny question. I'll give a few examples. First of all, we have no guarantee that the human developers of sentient AI will follow any rules. Second, it is possible that machine sentience could take us by surprise and arise in an environment without any safeguards or with insufficient safeguards. We certainly know that humans have in the past and will in the future create dangerous technologies which are difficult or impossible to fully control. Consider nuclear weapons as one example.

It might be possible to isolate sentient AI from the physical world, but that would still be a very risky situation. AI might be able to influence humans to do things which break that isolation or to convince them to carry out dangerous strategies. In this taxonomy, I am not trying to provide solutions to these risks, nor am I able to; I'm only trying to classify them.

So, let's move on to AI as a tool for humans. In some senses, we have that already. For instance, huge investment firms have used AI to manipulate the stock market for years. They use superfast connections (they place their equipment as physically close to the exchanges as possible) and superfast computers to follow stock trends, and then fairly simple AI can react to those trends to make money for those firms and their customers. The rest of us, without access to those tools, cannot compete.
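
To give a sense of how simple such reactive logic can be, here is a minimal, hypothetical sketch of a trend-following rule in Python. The price history, window sizes, and function names are all invented for illustration and bear no relation to any real trading system.

    # A toy trend-following rule: compare a short and a long moving average
    # of recent prices and emit a buy/sell/hold signal. Real systems are far
    # more elaborate, but the core reactive logic can be this simple.

    def moving_average(prices, window):
        """Average of the last `window` prices."""
        return sum(prices[-window:]) / window

    def trend_signal(prices, short_window=5, long_window=20):
        """'buy' when the short-term average is above the long-term average,
        'sell' when it is below, otherwise 'hold'."""
        if len(prices) < long_window:
            return "hold"  # not enough history yet
        short_avg = moving_average(prices, short_window)
        long_avg = moving_average(prices, long_window)
        if short_avg > long_avg:
            return "buy"
        if short_avg < long_avg:
            return "sell"
        return "hold"

    # Invented price history: a slow rise followed by a sharp drop.
    history = [100 + i * 0.5 for i in range(30)] + [110, 108, 105, 101]
    print(trend_signal(history))  # prints "sell" for this made-up data

The point is not the sophistication of the rule but its speed: run on fast hardware sitting next to the exchange, even logic this crude can act before any human trader sees the move.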

Let's consider other criminal uses of AI. We already have legions of skilled and unscrupulous hackers who break into supposedly secure systems. They look for vulnerabilities of various kinds, both technical and human. It would be very easy for them to adapt AI to do that. For example, we now use encryption keys which are, for practical purposes, unbreakable with today's computers. However, we know that when large-scale quantum computers become available, some of those widely used encryption protocols will likely become easy to break. We know that bots and trolls are used on the internet to shape and channel public opinion. The use of AI could raise that risk by orders of magnitude and also make it much more difficult to detect. We have growing risks to privacy on the internet which would also be multiplied with the use of AI. AI can also be used to generate massive denial-of-service attacks by overwhelming online servers. That already happens frequently.
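
The encryption point is easy to make concrete. Widely used public-key schemes such as RSA rest on the assumption that factoring the product of two large primes is infeasible, and a large quantum computer running Shor's algorithm would undermine that assumption. Below is a toy sketch in Python with deliberately tiny, made-up numbers; real keys use primes hundreds of digits long, and this is not how production cryptography is implemented.

    # Toy illustration (Python 3.8+) of why RSA-style public-key encryption
    # is considered safe today but threatened by quantum computers: its
    # security rests on the difficulty of factoring n = p * q, and Shor's
    # algorithm on a large quantum computer would make that factoring
    # feasible. The primes here are tiny and purely illustrative.

    def toy_rsa_keys(p, q, e=17):
        n = p * q
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)           # private exponent: inverse of e mod phi
        return (n, e), (n, d)         # (public key, private key)

    def encrypt(message, public_key):
        n, e = public_key
        return pow(message, e, n)

    def decrypt(ciphertext, private_key):
        n, d = private_key
        return pow(ciphertext, d, n)

    public, private = toy_rsa_keys(p=61, q=53)  # anyone who factors n = 3233
    ciphertext = encrypt(42, public)            # back into 61 * 53 can rebuild d
    print(decrypt(ciphertext, private))         # prints 42

With numbers this small, anyone can factor n by hand and recover the private key; the whole scheme depends on that step being out of reach for realistic key sizes, which is exactly the step quantum computers are expected to make tractable.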

Finally, there are the social risks of AI. AI will increasingly be capable of performing tasks which were previously done only by humans. We saw that as a major issue in the recent screenwriters' strike and also in the actors' strike. Our current social and economic organization does not offer viable solutions to the displacement of human workers by AI.

In summary, it is highly likely that there are other risks which are not now visible: risks that we have not anticipated or cannot anticipate and, therefore, cannot prevent. To some extent, AI could also be used to combat risks, but that would require substantial investment, too. Right now, for example, many of us run anti-virus programs on our personal computers to filter out many known risks. However, those who generate the risks are, by default, always one step ahead, because you cannot easily devise protection against a risk until you have already seen it.
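
That reactive pattern shows up in the simplest form of anti-virus protection, signature matching. The sketch below, in Python with invented payloads and a hypothetical signature set, shows why a never-before-seen threat passes through: it is not in the database until someone has already encountered and catalogued it.

    # A minimal sketch of signature-based scanning and why defenders lag
    # attackers: a file is flagged only if its hash already appears in a
    # set of known-bad signatures, so a brand-new threat passes undetected
    # until someone has seen and catalogued it. Payloads and signatures
    # here are invented for illustration.

    import hashlib

    def file_hash(data: bytes) -> str:
        """Fingerprint of a file's contents."""
        return hashlib.sha256(data).hexdigest()

    old_threat = b"a previously catalogued malicious payload"
    new_threat = b"a never-before-seen variant of the same payload"

    # Pretend the first payload was analyzed earlier and its signature stored.
    known_bad_hashes = {file_hash(old_threat)}

    def is_flagged(data: bytes) -> bool:
        """Flag a file only if its hash matches a known signature."""
        return file_hash(data) in known_bad_hashes

    print(is_flagged(old_threat))  # True: its signature is on file
    print(is_flagged(new_threat))  # False: the new variant slips through

Modern products add heuristics and behavioral analysis on top of this, but the underlying asymmetry remains: the attacker only has to be novel once, while the defender has to have seen it before.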
