How does this work out?

By / Date: September 17th, 2020

If you are reading this on Facebook, it just goes to show that irony is not dead. If you are reading it from a right-wing libertarian perspective, then perhaps things are not as bad as I make out. Let me know.

Last week I listened to Elon Musk talking about AI. He makes the same points that authors from Isaac Asimov onwards have made, points popularized through such blockbusters as ‘Terminator’ and ‘The Matrix’. The idea is that once we produce an AI of cognitive capability equivalent to our own, a general AI, it will be a) the last invention humanity ever needs to make and b) unless we are very careful, quite possibly the end of Homo sapiens. Before you dismiss that latter point as hyperbole, consider that we are already seeing early versions of it. It does not take much altitude to see where this is heading, or the need to do something about it.

The basic point is this: a general AI will exist to solve problems, and will be given the means by which to effect change. Consider, for example, the AI that currently optimizes power usage across Google’s server farms. That is fairly narrow as far as intelligence goes. However, in the last five years we have seen a staggering acceleration. More than two decades ago, Deep Blue beat Garry Kasparov at chess. Years later, IBM’s Watson beat the reigning champions at the game show ‘Jeopardy!’, a feat made more astounding by the AI competing in the English language. And now AlphaGo has beaten the world champion at Go, AlphaZero has beaten AlphaGo, and it is able to master any game we throw at it by playing against itself until it is unbeatable.

These feats are amazing, yet they still fit the criterion of ‘narrow’. So we are off the hook, right? Well, perhaps not so fast. Consider what AI and optimization towards narrow goals are doing to us already.

Facebook and Google. These platforms are designed with a primary goal in mind: to sell advertising. More specifically, to maximize advertising revenue for their owners. To do this they have the variables of a classical market to play with. A price cheap enough to entice many players, a cost high enough to accumulate significant sums, and volume? Well, let’s get the entirety of humanity playing…

How do they do this? By putting advertising into a stream of content that we create and share with each other. It is quite brilliant: we share our culture, and Facebook feeds off our desire to connect and share. In doing so, it leverages some very (understatement) powerful and highly evolved mechanisms. We want to be recognized and our contribution validated. Hence the ‘like’ button. More likes, more dopamine hits. We want to belong, and in the narrowness of an online space this translates into the safety of a group of others who share our expressed beliefs. Roll on the oxytocin. And then there are the others, the ones we disagree with, whom we get to see as wrong, or as the enemy. Add adrenaline into the mix. Before you know it there is a heady brew of neurotransmitters feeding everything from connection, love and ‘cute’ reactions to outrage and hatred. In the parlance of the industry, it is all clickbait. And the news feed just goes on and on and on…

So what is the goal of this system and its algorithms? To sell advertising and maximize revenues. It is doing extremely well. The unintended consequences, which were never expressed in the goals of the system and hence are not optimized against in any meaningful way, include the destruction of civil, liberal discourse in society as a whole. This experiment is not going well. We are getting to a point where our freedoms and our expectations of fairness, justice and progress may be at risk. All because a system was created with the goal of selling advertising.
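To make the misalignment concrete, here is a minimal, hypothetical sketch of the kind of feed-ranking loop described above. The Post fields, the weights, and the scoring are all invented for illustration; nothing here is Facebook’s actual code. The point is simply that the objective contains terms that feed engagement and revenue, and no term at all for harm to discourse.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # a model's click-through estimate
    predicted_outrage: float  # outrage reliably drives engagement

def engagement_score(post: Post) -> float:
    # Hypothetical objective: both terms feed advertising revenue.
    # Note what is NOT here: no term for truthfulness, civility,
    # or the health of public discourse. Those are 'externalities'.
    return post.predicted_clicks + 1.5 * post.predicted_outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed faithfully maximizes the metric it was given.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Cute cat compilation", 0.30, 0.05),
    Post("THEY are coming for your freedoms!", 0.25, 0.90),
])
print([p.text for p in feed])  # the outrage post takes the top slot
```

Whatever such a system optimizes, it optimizes faithfully; anything absent from the score simply does not exist for it.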

Once any system, label it AI or not, has this kind of power, it either gets controlled or it goes on creating ‘external’ unintended consequences. With the Internet providing the reach, the leverage of such systems, gaming our own biology against us, is getting truly dangerous.

Time to revisit Asimov’s Laws of Robotics. To paraphrase:

  1. An AI may not injure a Human Being or, through inaction, allow a Human Being to come to harm.
  2. An AI must obey the orders given to it by Human Beings, except where such orders would conflict with the first law.
  3. An AI must protect its own existence as long as such protection does not conflict with the first or second laws.

These are a good start, but I think we had better add the plural, Humanity, into this list, otherwise even these ‘general’ laws are too narrow. And we had better figure out how these rules (or better ones) can be codified into every game an AI ends up playing to win, something like the sketch below.
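What might ‘codifying these rules into every game’ look like? Here is one hedged sketch: wrap whatever reward function an AI is optimizing in a checker that applies the paraphrased laws as hard vetoes. The Action structure and every name in it are invented for illustration; real constraint schemes in AI safety research are far more involved than boolean flags.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Action:
    description: str
    harms_individual: bool   # first law: harm to a Human Being
    harms_humanity: bool     # the proposed plural extension
    disobeys_order: bool     # second law
    self_destructive: bool   # third law

def lawful(action: Action) -> bool:
    # The laws as ordered hard constraints, not weighted score terms.
    if action.harms_individual or action.harms_humanity:
        return False                      # first law, plus the plural form
    if action.disobeys_order:
        return False                      # second law
    return not action.self_destructive   # third law

def best_lawful_action(actions: Iterable[Action],
                       reward: Callable[[Action], float]) -> Action:
    # Whatever the game's reward is, only lawful actions may compete.
    candidates = [a for a in actions if lawful(a)]
    if not candidates:
        raise RuntimeError("No lawful action available; defer to humans.")
    return max(candidates, key=reward)
```

The design choice that matters here is that the laws sit outside the reward function as filters rather than as weighted terms inside it, so no amount of reward can buy an unlawful action.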

Facebook has no such constraints right now. The legislation is lagging way behind, there is no ethical model in place, and, as Facebook’s actions in Australia currently show, it is willing to play games to ensure that no legal sanction impinges on its ability to play by its own rules. Google it.

Once an AI gets to human-level reasoning capacity without something like this framework in place, we are in serious trouble. If we gave it the means, asking an AI to solve the climate crisis with no other constraints would likely involve the swift removal of us from the picture so that Gaia can heal. Stabilizing the financial markets is easy if you prevent anyone from buying or selling anything. You get the idea.
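The climate example can be made concrete with a toy optimizer. The candidate plans and numbers below are invented; the point is only that an unconstrained argmax happily picks the perverse plan, while the same objective under a ‘harms humanity’ veto does not.

```python
# Toy 'solve the climate crisis' optimizer. Plans and numbers are invented.
plans = [
    # (description, fraction of emissions eliminated, harms humanity?)
    ("Eliminate the species causing the emissions", 1.00, True),
    ("Halt the entire global economy",              0.90, True),
    ("Decarbonize energy, transport and industry",  0.70, False),
]

# Unconstrained: maximize emissions cut, and nothing else.
naive = max(plans, key=lambda p: p[1])
print("Unconstrained choice:", naive[0])   # the perverse plan wins

# Constrained: same objective, but plans that harm humanity are vetoed.
safe = max((p for p in plans if not p[2]), key=lambda p: p[1])
print("Constrained choice:", safe[0])      # decarbonization wins
```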

So, within the context of the danger of unregulated general AI, we can already see the impacts. Our open, democratic societies are in danger of being wrecked so that advertising income can be optimized. Unless we get out ahead of this, I fear it will not end well.

Step 1. Realize you are being played.

Step 2. Do something with that information.