Civilising AI: Applying AI to solve practical business issues, ethically

In the CIMR’s last Debate in Public Policy of term, we heard from two speakers about how AI can be applied in practical and useful ways in organisations.

The potential of AI to solve business issues is huge, but debate around its implementation often gets bogged down in abstract discussions about the theory of mind, consciousness and ethics.

In the final CIMR Debate in Public Policy of the summer term, Andrew Atter, Innovation Coach at the University of Liverpool and CIMR Visiting Fellow, invited two speakers working to de-risk AI to share insights into how new technologies can solve real-world problems ethically.

Understanding ethical technology use

Charles Radclyffe, co-founder of EthicsGrade, began the discussion by exploring how firms can capture the benefits of technology while avoiding its ‘dark side’.

Since 2000, the image of the technology sector has been transformed: once industry outcasts, technologists are now seen as heroes, partly because technology has become a highly lucrative industry. At the same time, there is deep mistrust of technology, fuelled by scandals such as the well-known Cambridge Analytica case.

In his previous role as Head of AI at the investment manager Fidelity, Charles explored how to mitigate ‘techlash’, discussing the question with big tech companies and large consultancy firms. In both cases, he found the responses, which focused on setting protocols and ensuring regulatory compliance, unsatisfying.

The search for answers led to the publication of the white paper ‘Ethical by Design’, which concluded that the loss of trust in technology stems from big technology firms’ lack of stakeholder engagement and missing feedback loops – technology is ‘done to us’, for example when Apple removed the headphone socket from the iPhone.

Charles argued that a company performing well on ethics is one that engages well with stakeholders and incorporates their feedback into its designs. The white paper highlighted three domains of activity to ensure ethical technology use:

  1. Engineering: controls that address the challenges of bias and discrimination
  2. Public policy: regulatory compliance, lobbying and designing for a better civic relationship between a company, its use of technology and society
  3. Ethics: engaging with stakeholders and incorporating their feedback into design and systems

EthicsGrade helps companies evaluate the impact of their technology in terms of the environment, social justice and corporate governance.

Applying AI to mitigate human biases

Riham Satti, Co-Founder of MeVitae, continued the discussion by demonstrating how companies can use AI to make more ethical decisions.

A former neuroscience researcher, Riham explained that the human brain is sub-optimal for making unbiased decisions: there is a speed/accuracy trade-off, we do not have infinite memory, and the way the brain processes information gives rise to cognitive and emotional biases. Researchers have identified 140 different types of cognitive bias in the human brain, which affect the hiring process and can lead to biased decision-making.

While we can never fully remove biases from the human brain, technology can be used to mitigate their impact in the recruitment process. For example, researchers used heatmaps to explore where employers’ attention is drawn on a CV. The results showed that recruiters spent significant time looking at an applicant’s name, university, former employer, and hobbies and interests, none of which directly indicates suitability for the role. Tools such as blind recruiting, which redacts personal information from an application, can be employed to mitigate such biases.
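To make the mechanism concrete, here is a minimal sketch in Python of the redaction step behind blind recruiting. The field names and data structure are hypothetical illustrations, not MeVitae’s actual implementation:

    # Illustrative sketch of blind recruiting: mask fields that signal
    # identity rather than suitability, before a reviewer sees the CV.
    # Field names and CV structure are hypothetical.
    REDACTED_FIELDS = {"name", "university", "previous_employer", "hobbies"}

    def redact_application(application: dict) -> dict:
        """Return a copy of the application with identifying fields masked."""
        return {
            field: "[REDACTED]" if field in REDACTED_FIELDS else value
            for field, value in application.items()
        }

    cv = {
        "name": "A. Candidate",
        "university": "Example University",
        "previous_employer": "Example Corp",
        "hobbies": "rowing, chess",
        "skills": "Python, statistical analysis",
        "experience": "5 years in data engineering",
    }

    # Only the fields relevant to the role ("skills", "experience")
    # remain visible to the reviewer.
    print(redact_application(cv))

The point of the design is that the reviewer still makes the hiring judgement; the tool simply removes the cues known to draw biased attention.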

For Riham, the ideal situation is one where companies don’t use technology for its own sake, but rather to enhance hiring decisions, responsibility for which ultimately remains within the organisation.

Challenges and opportunities

The presentations were followed by a discussion led by Jo Magnani, a Birkbeck MSc Business with Innovation alumna with four decades’ experience in technology. Jo questioned how human oversight can be maintained in data governance, and how businesses can be held to their commitments to improve transparency. For Riham, this is where policy intervention plays a role, as we have seen with the introduction of GDPR, but ultimately the ethical use of technology must not become a tick-box exercise.

We would like to thank our speakers and the organisers for kickstarting an important discussion on how firms can play their part in civilising AI.

A recording of this discussion is available to watch on YouTube.