AI and the Frontier of Automation


In January 2015, the Future of Life Institute in Boston, Massachusetts, published an open letter on research priorities for robust and beneficial artificial intelligence [AI], which to this day has been signed by more than 8,000 people, including leading minds in industry and research such as Elon Musk, Stephen Hawking, Jen-Hsun Huang, various members of the Future of Humanity Institute at the University of Oxford, and IBM Watson researchers, amongst many others. The letter discusses the concept of intelligent agents and the increasing progress made in this area, but explicitly states that “our AI systems must do what we want them to do”. Decried by some as an anti-AI movement, the attached research priorities article clarifies the motivation of the signatories: not to stop positive research into artificial agents, but to widen the research spectrum so as to maximise the benefits of success in this area for humanity (Russell et al., 2015).

The outlined concerns include short and long-term aspects, some of which are:

  • Optimising the economic impact of AI
    • Which jobs, and how many, will be automated, reducing the required labour force?
  • Legal and ethical topics, such as
    • Data and personal privacy
    • Liability of autonomous vehicles
  • Safety of intelligent agents
    • How do we verify that the system follows formal properties?
    • How do we avoid unwanted behaviours?
  • Control
    • How do we ensure that autonomously operating systems do not achieve superintelligence through sustained self-improvement outside human control?


Following this open letter, in September 2016 two of the signatories, Eric Horvitz and Mustafa Suleyman, became co-chairs of the newly established industry association Partnership on Artificial Intelligence to Benefit People and Society [Partnership on AI]. The original founding members include IBM, DeepMind, Google, Amazon, Apple, Facebook and Microsoft. The introductory letter of the partnership takes up many of the topics raised by Russell et al. (2015) and pledges to “create a place for open critique and reflection”. This underscores the importance and relevance of the open letter for the overall sentiment and thinking on artificial intelligence.

A recurring theme in publications on AI is the dual-use nature of the technology (Brundage et al., 2018), for which engineers and researchers carry a moral obligation towards all of humanity. This is not a new demand and has been raised on several occasions over the last century: Bertrand Russell, in “The Russell-Einstein Manifesto”, speaks “on this occasion […] as human beings, members of the species Man, whose continued existence is in doubt” (Russell, J., 2012), and Friedrich Dürrenmatt lets his character Möbius declare in “The Physicists”:

“Why play the innocent? We have to face the consequences of our scientific thinking. It was my duty to work out the effects […]. The result is – devastating. […] A sense of responsibility compelled me to choose another course.” (Dürrenmatt, 1964).

Responsibility is of paramount importance in making negative outcomes of AI experiments less likely, and it can only be achieved through an open and public debate. The current approach of industry, research institutes and prominent individuals of carrying this debate into all parts of society is to be welcomed and supported.

While a public discussion about the safety and control of intelligent agents is already taking place, what methods do we already know of to prevent rogue systems such as Cyberdyne Systems Corporation’s Skynet (Wissner-Gross and Freer, 2013) or System Shock’s SHODAN (Weise, 2008)?

Let us look at one particular safeguarding measure that has been discussed since its postulation 77 years ago:

Isaac Asimov codified his Three Laws of Robotics for the first time in 1941 in his short story “Runaround” (Asimov, 1979). They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
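The laws above form a strict priority ordering: a lower-ranked law only applies when no higher-ranked law is violated. As a purely illustrative toy sketch of how such an ordering might be evaluated in software (all names and the simplistic boolean “world model” are invented for this example, not any real safeguarding system):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would doing this injure a human?
    ordered_by_human: bool  # was it commanded by a human?
    risks_robot: bool       # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Check an action against the laws in strict priority order."""
    if action.harms_human:           # First Law always wins
        return False
    if action.ordered_by_human:      # Second Law: obey (First Law already checked)
        return True
    return not action.risks_robot    # Third Law: otherwise, self-preserve

# A human order overrides the robot's self-preservation:
print(permitted(Action("fetch selenium", False, True, True)))   # True
# The First Law vetoes even a direct human order:
print(permitted(Action("push a human", True, True, False)))     # False
```

Even this toy version hints at the difficulty: everything hinges on the robot correctly labelling an action as “harmful”, which is exactly the kind of judgement that is hard to formalise.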

These postulations were hard-coded into the principles of artificial intelligence in Asimov’s writings to ensure that humans and machines would work together harmoniously and coexist peacefully. Unfortunately, it is Asimov himself who exposes some of the weaknesses of the Three Laws directly in “Runaround”: they can contradict each other and result in unexpected behaviour. An AI will behave logically, but it might not do the “right” thing – in the sense of doing what the human instructing it expects it to do. In their original form, the laws alone would probably not be a sufficient safeguarding instrument, but others have built on Asimov’s principles. Murphy and Woods (2009), for example, propose three alternative laws:

  1. A human may not deploy a robot without the human–robot work system meeting the highest legal and professional standards of safety and ethics.

  2. A robot must respond to humans as appropriate for their roles.

  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.

The revised laws create a bridge between Asimov’s laws as an instrument of storytelling and the concept of scientists having to take responsibility for their creations as described in the 2015 open letter. In order to make “AI safe for humans” (Guzman, 2017), though, we all need to partake in the open dialogue and improve our understanding of the human position in life, so that we may relate the position of machines to it.

In doing so, we can step forward into a brighter future for all of us, with

“our common aim being that each of us should have a good time, doing, so far as possible, the things that he or she likes best (some of those things we do together, others we do separately)” (Cohen, 2009).

References

Asimov, I., 1979. In memory yet green: the autobiography of Isaac Asimov, 1920-1954. Doubleday.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B. and Anderson, H., 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228.

Cohen, G.A., 2009. Why not socialism?. Princeton University Press.

Dürrenmatt, F., 1964. The physicists. Grove Press.

Guzman, A.L., 2017. Making AI safe for humans: A conversation with Siri. Socialbots and their friends: Digital media and the automation of sociality. London, UK: Routledge.

Murphy, R. and Woods, D.D., 2009. Beyond Asimov: the three laws of responsible robotics. IEEE Intelligent Systems, 24(4).

Russell, J., 2012. Russell Einstein Manifesto. Book On Demand Limited.

Russell, S., Dewey, D. and Tegmark, M., 2015. Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), pp.105-114.

Weise, M.J., 2008. Bioshock: A critical historical perspective. Eludamos. Journal for Computer Game Culture, 2(1), pp.151-155.

Wissner-Gross, A.D. and Freer, C.E., 2013. Causal entropic forces. Physical review letters, 110(16), p.168702.
