Structure vs. Chaos: Why I am Team Elon and think that AI regulation is absolutely necessary

 

[Image: Elon Musk – source: https://twitter.com/eucheerleading]

In late July, there was an open controversy between Elon Musk and Mark Zuckerberg about the impact of AI and whether we would need to regulate it. It culminated in this tweet.

[Image: screenshot of the tweet]

Musk called for heavy AI regulation, while Zuckerberg argued that AI can improve people’s lives in many ways. Although, under the brand “Technology is our friend”, I have always taken the optimistic and progressive view on tech, I can’t follow Mark Zuckerberg’s argument – because no one denies this, and there is no reason we couldn’t reach these advancements with “regulated AI”. On the other hand, I am not afraid of Skynet and machine robots taking over the world. Rather, I think that we need to regulate AI because I believe that human societies cannot – at least not yet – deal with what AI is capable of, or will be capable of in the near future.

Structure vs. Chaos

To explain my point, I need to get philosophical for a bit: I have always believed that there is an element in human nature that makes us need some kind of order when we organize. I don’t mean this as a political statement about governments and hierarchy, although there are few to no examples of truly anarchistic societies (as opposed to singular projects). I believe that, although we are different from animals and have to express this difference in how we live in societies, we belong to a group of “social animals”, and there are many examples of structured animal societies.

You can see these orders as hierarchies, but my point about humans is that order gives us structure; it gives us mechanisms we can understand in a cause-and-effect relation. It helps us deal with everything that happens in our lives. And if we can’t understand something within that order, we create stories and myths – in most cases gods and religions. These are basically collectively created and shared fictional mechanisms explaining how earthquakes, thunderstorms, death, accidents, rainbows and love happen. It means that we are not able to accept chaos, and that we collectively decide to have unreasonable explanations rather than accept that we cannot understand or influence things, and that they may happen accidentally or outside our sphere of influence.

The topic is far too big and philosophical to be discussed in anything close to its entirety a) by me and b) in a single blog post, so let’s, for the sake of argument, assume that humans need some kind of structure over chaos, because there needs to be some understanding of why the things happening around them take place. If this is the case, I see a big societal challenge for us that may lead to collective unreasonable, irrational behavior if we don’t address it properly during and with the rise of artificial intelligence.

While every little script nowadays is described as a bot or AI, let me be clear about what I mean: the subjects of my thoughts are applications that are not created by following a huge chain of “if-then” decisions defined by humans, but that fulfill four criteria (a minimal sketch contrasting the two approaches follows the list):

1. They involve machine learning in the sense that they constantly develop and improve on their own, not in a human-micro-managed way.

2. They make decisions based on perceiving an ever-changing environment and act on their own, but mostly within human-defined boundaries.

3. They can reason and solve problems based on growing knowledge derived from their (and their environment’s) actions and reactions.

4. They have the potential to take over (and, by extension, change the nature of) functions that traditionally have been performed by humans.
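To make the distinction concrete, here is a deliberately tiny, purely illustrative sketch – all names and numbers are invented, and the “learning” is reduced to a single adjustable threshold, which no real system would stop at:

```python
# A scripted "bot": a fixed chain of human-defined if-then rules.
def scripted_thermostat(temp_c: float) -> str:
    if temp_c < 18.0:
        return "heat"
    elif temp_c > 24.0:
        return "cool"
    return "idle"

# A (very) minimal learning agent: it adjusts its own decision boundary
# from feedback instead of following rules a human spelled out in advance.
class LearningThermostat:
    def __init__(self, threshold: float = 21.0, learning_rate: float = 0.1):
        self.threshold = threshold          # human-defined starting boundary
        self.learning_rate = learning_rate

    def decide(self, temp_c: float) -> str:
        return "heat" if temp_c < self.threshold else "idle"

    def feedback(self, user_correction: float) -> None:
        # The agent updates its own behavior from its environment's
        # reactions -- nobody rewrites its rules by hand.
        self.threshold += self.learning_rate * user_correction

agent = LearningThermostat()
agent.decide(19.0)      # "heat"
agent.feedback(-2.0)    # user says it was too warm; the agent adapts
```

The first function will behave exactly the same way forever; the second one, trivial as it is, already drifts away from what its programmer wrote down.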

One easy example would be self-driving, autonomous cars. They constantly improve, make decisions based on object recognition, sensor information and past (and collectively shared) knowledge, and substitute human drivers (and will, of course, in the long term change driving by not emulating humans anymore). When you experience a self-driving car, it seems like magic – but you can easily understand what the AI is doing. You understand cause and effect. You have an idea of why the car is accelerating or braking, why it indicates before taking a turn, and so on. It seamlessly fits into our knowledge, into our order (our road traffic regulations), into our structured world. The magic is that it behaves like a human.

The question is how we will be able to deal with this AI when it does not behave like a human – when it makes decisions that we don’t understand. When our regulations don’t fit its abilities. When the cars should – and at some point maybe decide to – run through a city at 190 km/h, leaving only a few inches of distance to the next car, because the AI knows we (and it) will be safe. When it changes the nature of its function and is not driving like a human anymore – will we accept it? Won’t we see any accident as a hundred times worse than our frequent, human-caused traffic deaths, even if there will be so few compared to today? Will we regulate it irrationally? Will we create a god-like myth around its actions, embedded into a bigger narrative, to help us understand what is happening?

 

Structure vs. Chaos in the organization of information

It may seem super far-fetched and almost idiotic to ask these questions today, but let’s take a look at how we organized information during the digital age and where we are today: we initially created an internet ordered by humans, in systems that were highly ineffective but that everybody could understand. Then digital mechanisms and, lately, AI came into the equation, and we have reached a point where we don’t really understand how and why we get exposed to this or that content. And our reaction is regulation – and, on many occasions, collective irrational behavior.

When the world wide web spread, we organized digital information just like physical information. We simply mirrored our way of organizing a library or a newspaper – we created indices and a catalogue. Remember Yahoo and AOL? Editors put websites in categories and displayed them as lists. The first truly digital way of organizing digital information was algorithmic search. And because it was truly digital, it gave rise to a giant, very close to a monopoly, named Google. It quickly became so powerful that it is no accident “don’t be evil” emerged as its motto: to take away the fear of something that substituted a traditionally human function – structuring and ordering the information accessible to us – with algorithms we could barely understand. We, as a society, created a profession of experts who assured us there was some cause-and-effect mechanism. That there was indeed a structure and order to it. They oversimplified how the algorithm worked in order to ease our feelings of helplessness towards the machine – and it worked.
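The published core of that early algorithm is actually still graspable. Here is a textbook sketch of the PageRank power iteration from the original 1998 paper – not Google’s production system, whose ranking signals are not public:

```python
# Textbook PageRank power iteration -- the published 1998 idea,
# not Google's actual production ranking.
DAMPING = 0.85

def pagerank(links: dict[str, list[str]], iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += DAMPING * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Three toy pages: a page is "important" if important pages link to it.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```

The point is the recursion: a page’s importance depends on the importance of the pages linking to it – simple to state, but already one step removed from any human-curated catalogue.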

The next level in accessing content came from Facebook: while catalogues and indices, as well as search, required some idea of what content or information you would like to access, Facebook lets you discover content in a frame defined by yourself – by issuing permissions to post into your news feed (which is what a friendship or a “like” for a page basically is). Probably one of the first really widely distributed machine-learning algorithms now decides what you see in your news feed, within those human-defined boundaries – set on the one hand by yourself, based on your connections, and on the other hand by executives at Facebook, in order to crack down on fake news, clickbait and other “unwanted” effects. But the micro-management and the day-to-day decisions are made by an ever-improving, self-learning machine based on past knowledge and the actions of its environment (the user, the publisher etc.). The traditionally human task of distributing information – remember that there are jobs like “editor-in-chief”, deciding what the “top story of the day” is for a news show on TV or a newspaper – is substituted by software. In Germany, Facebook currently runs a huge (OOH, TV and print) campaign that explains how you can take (the illusion of?) control over your news feed.
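As an illustration of the principle – emphatically not Facebook’s actual system, which is vastly more complex and not public – a feed ranker boils down to scoring candidate posts and sorting. All fields and weights below are invented:

```python
# A deliberately naive sketch of engagement-based feed ranking.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float       # how often you interact with this source (0..1)
    predicted_engagement: float  # a model's guess you will click/like/share (0..1)
    age_hours: float

def score(post: Post) -> float:
    recency = 1.0 / (1.0 + post.age_hours)  # newer posts score higher
    return post.author_affinity * post.predicted_engagement * recency

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The human-defined boundary: only posts from permitted sources are
    # candidates at all. Within it, the model's scores decide the order.
    return sorted(candidates, key=score, reverse=True)

feed = rank_feed([
    Post(author_affinity=0.9, predicted_engagement=0.3, age_hours=2.0),
    Post(author_affinity=0.2, predicted_engagement=0.8, age_hours=0.5),
])
```

Even in this toy version, the interesting part – `predicted_engagement` – is the output of a learned model, which is exactly the piece users cannot inspect.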

We came from a distinctly human way of organizing digital information in the early nineties to a duopoly of two fundamentally different mechanisms whose inner workings we, as societies, do not comprehend and are incapable of understanding. No wonder that calls for nationalizing or institutionalizing both companies, or at least regulating them intensely, are increasing. But while the majority of these voices are journalists’ opinions that, for some reason, are not broadly discussed in public, the European Union will introduce the General Data Protection Regulation (GDPR), effective May 2018, which will, among other things, give residents of the EU a “right to explanation” when algorithmic decisions are made about them.

It remains to be seen what effect this will have on AI and machine learning, and how strictly the regulation will be enforced, but it shows that societies have a need to understand, and are not willing to accept machines making decisions about them that cannot be explained in retrospect – and for once, the regulation is a rational response (maybe because it is not the result of a public debate among elected parties).
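What such an explanation could look like in practice is an open question. One hedged sketch: if the decision model is transparent – here a toy linear scorer with invented features and weights – every factor’s contribution can be read back as a reason, something a black-box model cannot offer as easily:

```python
# A hypothetical, transparent credit-style decision with readable reasons.
# Feature names, weights and the threshold are invented for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
THRESHOLD = 0.0

def decide_and_explain(applicant: dict[str, float]) -> tuple[bool, list[str]]:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    # Every factor's contribution is visible, so the decision can be
    # explained in retrospect -- the essence of a "right to explanation".
    reasons = [f"{k}: {v:+.2f}" for k, v in
               sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)]
    return approved, reasons

print(decide_and_explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0}))
```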

 

AI will outsmart us

Speaking of Google and Facebook: this is “just” about the ordering and distribution of information. But we will face many more areas where AI will have a direct impact on how we live – what insurance we get, how we are treated medically, where we can and can’t go, what education we can get, and so on. As long as we can understand the decisions made, we may be able to cope with these developments as societies. But we are not far away from a future where AI “outsmarts” us: when, in May 2017, a Google AI beat the world’s best Go player – something that only a few years back seemed nearly impossible – it deployed moves that even the most skillful players couldn’t understand.

Accordingly, the defeated player talked about a god-like performance.

[Image: “It must be god-like”]

How will we react as societies?

We live in exponential times, with technological progress ever accelerating, and as humans we are barely capable of understanding anything that happens on the second half of the chessboard – because in the physical world, developments like these can rarely be observed or experienced. Now imagine AI taking over the roles of lawyers – making decisions that directly impact you and your life, but in a way that you cannot comprehend or reconstruct. Imagine machines, if not making, then at least proposing laws – and politicians following them with nothing but trust in the AI’s abilities. Imagine a piece of software deciding that a society’s investment in your health only has a minor probability of succeeding. Imagine software fighting wars – we are weaponizing AI already, and using it to hack and access computer systems, attacks that can only be defended against by using AI on the security side as well. Gizmodo puts it this way:

“So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.”
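To put the “second half of the chessboard” image from above into numbers – the old story of one grain of rice on the first square, doubled on every following square – the arithmetic is short:

```python
# Grains of rice on a chessboard: one on the first square, doubled each square.
first_half = sum(2**i for i in range(32))    # squares 1-32
whole_board = sum(2**i for i in range(64))   # squares 1-64

print(f"{first_half:,}")    # 4,294,967,295 -- about four billion grains
print(f"{whole_board:,}")   # 18,446,744,073,709,551,615 -- ~18.4 quintillion
```

By the second half of the board, each single step dwarfs everything that came before – which is exactly why exponential developments are so hard to grasp intuitively.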

And imagine all this in a world where a majority of today’s jobs will probably not exist anymore. If we cannot comprehend what the algorithms do, how they decide and why they decide the way they do, but let AI technology develop the way we do today, we will enter a world that looks like chaos to us. We will lose core elements of what we perceive as structure, and our narratives about why things happen in the world will not suffice. And as my opening thesis is that we as humans cannot live with chaos, I am afraid that our reactions as societies may go three ways (which may even combine):

  • We will irrationally regulate AI in some societies, following an anti-tech movement. Those who don’t will have huge advantages in almost every field of science or the economy, and in the wrong hands this can have devastating effects, leading to the second scenario:
  • We will create oppressive systems – “officially” as state systems or through corporate monopolies – in which only a small number of people control the boundaries of AIs, forcing the “chaos” on everyone else, who will come to perceive its incomprehensible behavior as terror (see image below), or, leading to the third scenario:
  • We will create a religion-like narrative that makes us accept decisions that we do not understand.

[Image: Vladimir Putin]

Being a humanist – and in light of these thoughts, the term “humanist” may take on a deeper meaning than it has today – I find all of these scenarios unacceptable. I don’t fear machines taking over in a Skynet sense, building soldiers of their own and subduing humanity to a rule of terror. At least not yet. But I don’t see societies being able to deal with what we are creating here (and if you’re interested, read this piece by “Talking Head” David Byrne, who makes a similar argument, but coming from the individual and its need to communicate and interact with other humans).

It’s a pattern

In fact, I think that a great deal of the political turmoil we have seen in recent years is already due to our incapability, as societies, to understand the complex mechanics of the modern world. Rationally, there is little doubt that Brexit, the Trump presidency etc. are no way to tackle the existing challenges, even from a nationalistic point of view for these nations. It’s just not the reasonable way to act. Yet people voted against their long-term interests, and in many cases against their short-term interests, too. “America First” will harm the US economically. Brexit will have crippling effects on the British economy. Still, many people feel that this is the secure way to go.

When it comes to AI in exponential times, I think the complexity we are facing is at a whole new level compared to today’s already barely comprehensible world. And I fear it will inspire equally irrational responses, with an impact far beyond Brexit or Trump. It may even lead to wars: when the world tried to stop Iran’s nuclear program, it could do so by physically restricting Iran’s access to the materials needed while negotiating (although we also created a piece of software to destroy their progress – but we had to insert it physically). How will we treat a country that weaponizes AI and lets it make strategic and tactical military decisions? I think AI should be regulated on an international level. It needs to become part of international negotiations, quickly, and we need to create a framework of rules that will allow the positive impacts Mark Zuckerberg talks of, but ensure that AI enters our societies and lives smoothly, within our order, rather than creating an environment that we perceive as chaos.

There are several approaches to how AI could be regulated and what the basic common understandings are – and not incidentally, between the lines they address basic human rights and what humanity is. I think that an AI regulation is nothing less than a renewed declaration of universal human rights.

I truly believe that, besides climate change, this is our largest long-term issue – and, similarly, it needs quick action and can only be solved on an international level, with maximum pressure on those who decline to join. This is the one tech topic I am not optimistic about.

 
