The Ethics of AI: the genie is out of the bottle

This post has been contributed by Dr Andrew Atter, Founder of Pivomo and CIMR Fellow

At the CIMR Annual Strategy meeting, I had the privilege of leading a discussion on the rapidly developing field of AI ethics. Since the alarm was raised by tech titans such as Elon Musk, along with scientists like Stephen Hawking, there has been growing public recognition of the need for clearer ethical guidelines to shape this emerging technological field. Investors, founders and the media are all struggling to get to grips with the implications of what is loosely termed “AI”.

In just the past month, the OECD and the EU have issued ethical guidelines. However, despite lengthy consultations, these remain generic and abstract. To cite one example from the EU guidelines:

Develop, deploy and use AI systems in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles. (ec.europa.eu, 08.04.2019)

Tech founders might be forgiven for being a little perplexed about just how they are meant to operationalise these guidelines!

These early efforts at setting out an ethical framework have drawn fire from a range of critics. In a blog article entitled “From Fake News to Fake Ethics”, Thomas Metzinger, Professor of Theoretical Philosophy at the University of Mainz, describes this phenomenon as

“an example of ‘ethics washing’. Industry organizes and cultivates ethical debates to buy time, to distract the public and to prevent or at least delay effective regulation and policy-making”. (Tagesspiegel, 08.04.2019)

Even the acronym “AI” is problematic. Originally short for “artificial intelligence”, the term is increasingly avoided by industry insiders, who prefer “Augmented Intelligence” or a different label entirely, such as “Deep Learning” or “Machine Learning”. The reason is that the development of human-like intelligence is neither necessary nor remotely practical. We do, however, need systems that improve our decision making and overcome our own cognitive limitations, such as bias. The honest truth is that we simply don’t have a word or phrase that accurately categorises the adaptive, iterative and predictive nature of AI algorithms, supported by big data and increasingly sophisticated voice and sensor technologies.

The CIMR discussion took place at the end of a busy week. On 16 May, the government announced the creation of the AI Council (www.gov.uk, 16 May 2019). Its mission is to “supercharge the UK’s artificial intelligence sector”. I don’t think we should expect much detached ethical reflection anytime soon. Moreover, the announcement might trigger further concerns for ethicists. The statement reads:

“…building on the great British pioneer’s legacy by identifying and overcoming barriers of AI adoption in society, such as skills, consumer trust and ensuring the protection of sensitive data”. (www.gov.uk, 16 May 2019)

Taken at face value, the statement suggests that “consumer trust” and “protection of sensitive data” are “barriers” to be overcome. This might just be poor drafting, but it does suggest the ease with which a government can lapse into boosterism, “supercharged” by the need for economic growth and high-paying jobs, with not a little national competition thrown in for good measure.

But what will the role of government be in shaping this new ethical landscape? Is it just a matter of accepting a laissez-faire approach and hoping that whatever risks arise can be dealt with by the market? What protection, if any, can we expect?

Just the day before, the ethical issues emerging from the AI industry were highlighted by shareholder motions at Amazon’s Annual General Meeting. A group of shareholder activists under the banner of the Tri-State Coalition tried to challenge Amazon’s decision to sell its Rekognition facial recognition software to governments, and specifically the US government.

Mary Beth Gallagher from the Tri-State Coalition for Responsible Investment said:

“It could enable massive surveillance, even if the technology was 100% accurate, which, of course, it’s not. We don’t want it used by law enforcement because of the impact that will have on society – it might limit people’s willingness to go in public spaces where they think they might be tracked.” (BBC, 22 May 2019)

There have also been similar problems with Google’s own facial recognition software and with its approach to setting up an AI ethics committee, which ran into trouble following the resignation of privacy campaigner Alessandro Acquisti (Vox.com, 19 April 2019).

The concerns of the Tri-State Coalition were lent weight by research undertaken by the Massachusetts Institute of Technology and the University of Toronto, which found:

“…that Rekognition had a 0% error rate at classifying lighter-skinned males as such within a test, but a 31.4% error rate at categorising darker-skinned females”. (BBC, 22 May 2019)

Fairness and non-discrimination are a baseline ethic for most people. However, it may not be quite as simple as that. Amazon countered that

“…it had not received a single report of the system being used in a harmful manner” (BBC, 22 May 2019), and that Rekognition is “a powerful tool… for law enforcement and government agencies to catch criminals, prevent crime, and find missing people” (Amazon, 22 May 2019).

In response, Amazon’s Director of Web Services, Ian Massingham, said:

“The one thing I would say about deep learning technology generally is that much of the technology is based on publicly available academic research, so you can’t really put the genie back in the bottle. Once the research is published, it’s kind of hard to ‘uninvent’ something. So, our focus is on making sure the right governance and legislative controls are in place.” (BBC, 22 May 2019)

So the “genie is out of the bottle”.

Over the past week, then, we have seen an intensification of the debate over competing ethical positions in relation to AI, all against a backdrop of high uncertainty about how the industry will develop.

There is clearly a need for a methodical and detailed study of AI ethics in the context of the innovation landscape. Founders, directors, investors and lawmakers all need some idea of where the industry is heading and what risks are being incurred.

To find out more about our work on the leadership of innovation and the ethics of AI, please follow the link:

https://executivedialogue.co/articles/

Bibliography