Artificial Intelligence, Machine Ethics, and Regulatory Policy

by Inayat Chaudhry

An Introduction to the World of Artificial Intelligence

In today’s world of Internet of Things (“IoT”) connected devices, many of which use artificial intelligence (“AI”) technology, it has become increasingly important to think about the ways in which AI should be regulated in the years to come. I’ve been thinking about AI and its implications (in terms of user privacy, ethics, implicit biases, and regulatory policy) ever since I bought a Siri-integrated iPhone. While I’m going to write a follow-up post on the implications of AI for user privacy, this post is solely going to discuss AI and regulatory policy. In order to do that, I will delve a little into robotic/machine ethics from a political philosophy perspective as background.

There are three main stages of Artificial Intelligence. The first stage is artificial narrow intelligence (ANI), where the AI is specialized and task oriented – for example, being good at chess. We crossed this threshold on February 10, 1996, when IBM’s Deep Blue won its first game against world champion Garry Kasparov.[1] We are currently living in a world that runs on ANI. There is Apple’s Siri, Amazon’s Alexa, Netflix’s algorithm that recommends shows you might like based on your previous viewing history, e-mail spam filters, and Google Translate, among many other examples that we use on a regular basis. There are also sophisticated ANI systems such as IBM’s Watson.

The second stage is artificial general intelligence (AGI), where the AI can do anything a human can. We are quickly approaching this stage with Google DeepMind’s AlphaGo. AlphaGo uses reinforcement learning and neural networks to mimic the learning process of a human brain, and it defeated Chinese Go master Ke Jie in May 2017.[2] Of course, this stage raises the age-old concerns of consciousness. But before I get into consciousness, let me quickly mention the last stage.

The third stage is artificial super intelligence (ASI), where AI ranges from just a little bit smarter than humans to infinitely smarter and thus unknowable to us. I mention this now because if AGI raises concerns about consciousness, ASI does so even more. For the purposes of this thought piece, I’m going to talk about consciousness and robotic ethics in the context of both AGI and ASI, since ASI is hypothesized as a progression from the AGI stage.
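As a quick intuition for what “reinforcement learning” means here: the system improves by trial and error, nudging its behavior toward whatever earns reward. The toy sketch below (in Python, with entirely made-up numbers and nothing to do with AlphaGo’s actual training code) shows an agent learning, from reward alone, which of two actions to prefer.

```python
import random

# Toy reinforcement-learning sketch (hypothetical environment, not AlphaGo's algorithm):
# the agent learns, purely from reward signals, that "right" is the better action.
REWARD = {"left": 0.0, "right": 1.0}   # assumed payoff for each action
q = {"left": 0.0, "right": 0.0}        # the agent's learned value estimates
alpha, epsilon = 0.1, 0.2              # learning rate and exploration rate

for step in range(1000):
    # explore occasionally; otherwise act on the current estimates
    action = random.choice(list(q)) if random.random() < epsilon else max(q, key=q.get)
    reward = REWARD[action]
    # nudge the estimate for the chosen action toward the observed reward
    q[action] += alpha * (reward - q[action])

print(q)  # q["right"] ends up near 1.0, q["left"] stays near 0.0
```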

Let’s talk about Consciousness

When is it that humans attain consciousness? Is it in the womb or soon after being born? And will it work the same way for AGI and ASI? According to a Scientific American article,

“consciousness with its highly elaborate content, begins to be in place between the 24th and 28th week of gestation. Roughly two months later synchrony of the electroencephalographic (EEG) rhythm across both cortical hemispheres signals the onset of global neuronal integration. Thus, many of the circuit elements necessary for consciousness are in place by the third trimester…The dramatic events attending delivery by natural (vaginal) means cause the brain to abruptly wake up, however. The fetus is forced from its paradisic existence in the protected, aqueous and warm womb into a hostile, aerial and cold world that assaults its senses with utterly foreign sounds, smells and sights, a highly stressful event.”

Because there is no “gestation period” for AI, consciousness will not arise in the same way that it does for humans even if AGI and ASI use neural networks like the human brain does.

I can only begin to hypothesize when consciousness arises for AI. Part of it depends on what one believes constitutes consciousness, a question philosophers have long debated. For example, Descartes defined thought in the following way:

Thought. I use this term to include everything that is within us in such a way that we are immediately aware [conscii] of it. Thus all the operations of the will, the intellect, the imagination and the senses are thoughts. I say ‘immediately’ so as to exclude the consequences of thoughts; a voluntary movement, for example, originates in a thought.[3]

After Descartes, three main tenets of the term consciousness emerged – consciousness makes thought transparent to the mind, consciousness involves reflection, and conscious thought is intentional.[4]

And John Locke’s definition of “person” contained within it a definition of consciousness:

[A person] is a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it: It being impossible for any one to perceive, without perceiving, that he does perceive. (Essay 2.27.9)[5]

Will AI ever attain Consciousness? 

Based on the thinking of these philosophers, then, my view is that consciousness will probably begin at or soon after the point when an AI has been programmed with a deep-learning system and is able to process large amounts of information, store it, and use it at a future date, improving as it continually processes, stores, and applies that information to future tasks and refines its neural networks. The use of the stored information would constitute the reflection and intentionality described above. It would also be akin to Locke’s belief that a “person” has reason and reflection. Under Locke’s theory, the person is additionally the same thinking thing in different times and places, but with AI a complication arises, because the same AI (with thought) programmed by a specific person can be deployed on many different devices. Once on different devices, the AI learns and processes information from its users in different ways and can, in principle, report what it has learned back to headquarters, thereby creating an AI that is both the same thinking thing in different times and places (for example, my Amazon Alexa device) and a different thinking thing in different times and places (for example, your Amazon Alexa device). So even though every Amazon Alexa starts out programmed in the same way, each will learn different things depending on who uses it and will hypothetically be able to communicate with its counterparts once AI reaches the AGI stage. This is similar to what happens in the movie Her.

Let’s talk about Machine Ethics

This discussion is important because consciousness is implicitly intertwined with “personhood,” as explained by Locke. And with consciousness and “personhood,” societal morality comes into play, which in turn involves having certain rights and abiding by ethics, which in turn involves the rule of law. So the next question involves thinking about how AI should be regulated once it achieves consciousness and therefore some version of “personhood” that applies to AI. Machine ethics is the field of research concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally. Isaac Asimov considered these issues back in the 1950s in his book I, Robot, where he famously proposed the Three Laws of Robotics to govern AI. Others have posited the same question. There is the Turing test, a “polite conversation” test developed by Alan Turing in 1950 (the same year I, Robot was published!), which measures a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. There is also Searle’s strong AI hypothesis, among others, which proposes:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

How Should We Regulate AI?

I think an important first step in thinking about regulation is to determine which stage the AI is in and go from there. The policies and regulations that will need to be put in place will be very different for each of the three stages.

ANI and its Privacy Concerns

For the ANI stage, which includes the aforementioned Alexa, Apple’s Siri, and Netflix’s algorithms, among others, what needs to be regulated is each corporation’s use of its consumers’ metadata: that data should be encrypted, and the corporation should follow principles of transparency and accountability in the event the data is misused. I would like to provide further details about my conception of these concepts, and I will do so in a separate post discussing IoT connected devices, AI, and privacy in depth.

Consciousness, AGI, and Regulation

For now, I want to focus on and postulate the ways in which we can regulate AGI. In order to do this, let’s envision a specific scenario – one in which AGI robots work among us as our colleagues in the near future. Besides the machine ethics I talked about earlier, Oxford University Professor Nayef Al-Rodhan has mentioned the case of neuromorphic (“brainlike”) chips, which aim to process information the way humans do: nonlinearly and with millions of interconnected artificial neurons. AGI robots embedded with neuromorphic technology could therefore learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would end up developing human ‘weaknesses’ as well: selfishness, a pro-survival attitude, hesitation, etc. I mention this because, in large part, the AI’s decision making will depend on the person who programmed it, and with that comes the risk of the AI having the same implicit biases as the programmer.

AI and Implicit Bias

There has been a lot of talk recently about algorithmic bias already being pervasive in several industries, with no one making an effort to correct it. For example, police departments across the U.S. are implementing “predictive-policing” crime prevention efforts. More specifically, in cities including Los Angeles and New York, software analyses of large sets of historical crime data are being used to forecast where crime hot spots are most likely to occur, and the police are then directed to those areas. This may sound like something right out of a movie (like Steven Spielberg’s 2002 Minority Report), but problems abound with the real systems as currently deployed in these cities. The software risks getting mired in a vicious cycle in which the police increase their presence in the same places they are already policing, thus ensuring that more arrests come from those same areas. In the United States, this has usually meant more surveillance in traditionally poorer, nonwhite neighborhoods. The algorithm will likely store records of where these arrests are being made, which in turn will reinforce the AI’s belief that a higher-than-normal level of crime is taking place in the same neighborhoods. And if this discriminatory algorithm goes unchecked and unaltered, a system like this will not only raise Fourth Amendment concerns but also produce broader constitutional law concerns.
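To make the vicious cycle concrete, here is a minimal toy simulation (the neighborhoods, rates, and patrol numbers are all invented for illustration, not drawn from any real deployment): two neighborhoods have identical true crime rates, but one starts with a skewed arrest history, patrols are allocated in proportion to that history, and the skew never corrects itself because new arrests are only recorded where patrols are sent.

```python
# Toy feedback-loop sketch (all numbers assumed): patrols follow recorded arrests,
# and recorded arrests grow where patrols are concentrated, so an initial skew in
# the data persists even though the underlying crime rates are identical.
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}    # identical underlying crime rates
recorded_arrests = {"A": 120.0, "B": 60.0}  # but the historical record is skewed
POPULATION = 10_000
ARRESTS_PER_PATROL = 0.002                  # hypothetical detection rate per patrol unit
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # the "predictive" step: allocate patrols in proportion to recorded history
    patrols = {n: TOTAL_PATROLS * a / total for n, a in recorded_arrests.items()}
    for n in recorded_arrests:
        # arrests recorded this year depend on patrol presence, not just true crime
        recorded_arrests[n] += POPULATION * TRUE_CRIME_RATE[n] * ARRESTS_PER_PATROL * patrols[n]
    print(year, {n: round(a) for n, a in recorded_arrests.items()})
# Neighborhood A keeps receiving twice the patrols and recording twice the arrests,
# so the data "confirms" the allocation year after year.
```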

Let’s Be Proactive when it comes to AGI! What can we learn from existing law?

I am a big believer in putting proactive rather than reactive policies in place when it comes to AI. Given the previous discussion, then, a big part of regulating ANI and AGI would involve making sure that the people who are programming the AI do not impart their implicit biases to it. Just as the privacy law space advocates for privacy by design, the AI law space should advocate for eliminating implicit biases by design. Obviously this is easier said than done, since everybody has implicit biases. But what will prove key is making sure that a diverse team of individuals (versus just one individual) is programming these AI algorithms. Another safeguard would be to catch whatever biases do leak through by asking a diverse team of engineers to revisit the AI’s learnings on a regular basis and correct any biases found in its algorithms.
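One concrete form such a periodic review could take is a simple disparate-impact check on the system’s decisions. The sketch below is a minimal illustration with hypothetical group labels, data, and the 80% threshold commonly used in disparate-impact analysis; a real audit would, of course, look at many more metrics.

```python
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, where outcome is 1 for a favorable decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose favorable-decision rate falls below `threshold` times the best-off group's rate."""
    rates = positive_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: (demographic group, model decision)
audit = [("X", 1)] * 60 + [("X", 0)] * 40 + [("Y", 1)] * 35 + [("Y", 0)] * 65
print(positive_rates(audit))          # {'X': 0.6, 'Y': 0.35}
print(disparate_impact_flags(audit))  # {'X': False, 'Y': True} -> group Y warrants review
```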

I think we can learn even more about regulating AI from how consumer privacy is regulated by the Federal Trade Commission (FTC). The FTC brings “enforcement actions” against companies that fail to keep their privacy promises to consumers. I think a federal regulatory body akin to the FTC should be put in place for the sole purpose of regulating the different corporations’ AIs if and when they falter, including ensuring that companies keep their AI algorithms bias free.

As for the AGI stage: because AGI robots would be similar to, if not the same as, humans, it would be easier to hold the AI accountable for its actions, since presumably it would employ neural networks like a human brain in its decision making and would be capable of performing actions the way humans can. A good starting point for developing regulatory policy for AGI, then, would be to look at what famous philosophers like Descartes, Plato, Thomas Hobbes, and John Locke, among others, had to say about human thought, consciousness, and societal and political norms. Even more relevant would be the existing international laws, treaties, and documents that concern human rights. The International Bill of Human Rights, which comprises the Universal Declaration of Human Rights (UDHR), the International Covenant on Economic, Social and Cultural Rights, and the International Covenant on Civil and Political Rights, would be a useful set of documents for policymakers and lawmakers to dig through when formulating these laws in the future. On a fundamental level, it is my belief that if AGI robots reach the same level of intelligence as humans, there is no reason for the law not to treat them akin to humans when it comes to attributing rights as well as criminal and civil penalties. When this happens, the law should be adapted to include AGI robots in our statutes and regulations, including judge-made common law.

What the heck do we do about ASI?

Finally, ASI is a stage that many public personalities have already expressed fears about. For example, Elon Musk is famous for his views on how AI could potentially doom human civilization.[6] Stephen Hawking has expressed similar views.[7] However, the good news is that, just as nobody knows whether blockchain is a bubble or here to stay, no one can accurately predict whether ASI will overtake and destroy humanity. There are obviously inclinations that people lean toward, but the very beauty of human intelligence is that the collective intelligence of our brightest and best minds will not let AI get to the level where it overtakes us and uses us as juice for its needs, The Matrix or Oblivion (Hollywood movie references) style. Luckily, OpenAI, the non-profit research company Elon Musk co-founded, is working to build safe AGI (before we even get to the ASI stage) and to ensure that AGI’s benefits are as widely and evenly distributed as possible.[8] Similarly, companies like Google’s DeepMind have helped establish organizations such as the Partnership on AI, whose goals include investing more attention and effort in harnessing AI to contribute to solutions for some of humanity’s most challenging problems.[9]

 

[1] Deep Blue (Chess Computer), https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer).

[2] Paul Mozur, Google’s AlphaGo Defeats Chinese Go Master in Win for A.I., N.Y. Times (May 23, 2017), https://www.nytimes.com/2017/05/23/business/google-deepmind-alphago-go-champion-defeat.html.

[3] Stanford Encyclopedia of Philosophy, Seventeenth-Century Theories of Consciousness (July 29, 2010; Revised Sept. 27, 2014), https://plato.stanford.edu/entries/consciousness-17th/#2.2.

[4] Id. 

[5] Id. 

[6] Maureen Dowd, Elon Musk’s Billion-Dollar Crusade To Stop the A.I. Apocalypse, Vanity Fair (March 26, 2017), https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x; see also Aatif Sulleyman, AI Is Highly Likely To Destroy Humans, Elon Musk Warns, The Independent (Nov. 24, 2017), http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-openai-neuralink-ai-warning-a8074821.html.

[7] Hannah Osborne, Stephen Hawking AI Warning: Artificial Intelligence Could Destroy Civilization, Newsweek (Nov. 7, 2017), http://www.newsweek.com/stephen-hawking-artificial-intelligence-warning-destroy-civilization-703630; see also Arjun Kharpal, Stephen Hawking says A.I. could be ‘worst Event in the history of our civilization,’ CNBC (Nov. 6, 2017), https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

[8] OpenAI, About OpenAI, https://openai.com/about/.

[9] Partnership on AI, Introduction from the Founding Co-Chairs, https://www.partnershiponai.org/introduction/.
