Google’s recent Pixel 2 event may have been focused primarily on new products, on smartphones and laptops, but the real story was the sheer pervasiveness of Artificial Intelligence and Machine Learning.

Artificial Intelligence is iterating at a rapid pace. As corporations and governments rush to implement it, to automate tasks at speed and link technical actions to sensory experience, there is both risk and opportunity, and a real danger that the social, cultural and political implications of AI will be neglected.

It’s important to recognise that AI creates a cultural shift as much as it does a technical shift. Technology is never value-neutral; it is never built without an agenda or values attached. So how do we ensure that this risk becomes an opportunity, and that AI is held to the highest ethical standards? Is it time for a robo-Magna Carta?

One of the first challenges is defining the problem, not to mention clearly defining what AI is and is not. This isn’t only about re-training and re-employing the hundreds of thousands who will be displaced by AI; an ethical foundation for AI needs to put morals and social impact at the core of the design process.

Greg Brockman, one of MIT Technology Review’s 35 Innovators Under 35, is one of the few spearheading the charge, having co-founded OpenAI with Elon Musk. They are working backwards from an accepted realisation: that AI must learn crucial, implicitly human emotions as a way of preventing misbehaviour, bias and misrepresentation. The only way to do this, they believe, is to design with one particularly anthropomorphic emotion in mind: shame. Putting shame at the forefront ensures AI recognises that there are consequences to its actions.

Google’s DeepMind has identified six key challenges to ensuring AI design acknowledges social and ethical impact. I’ve highlighted two below:

  1. Privacy, transparency and fairness.

AI is reliant on the data sets it learns from. Not only is this information often personal; AI also has a history of learning and applying existing societal biases. Risk Assessments, an increasingly common AI tool used in US courtrooms to create a ‘probability of re-offending’ score, have been identified as having a racial bias. The formula was found to be “particularly likely to falsely flag black defendants as future criminals”. When this happens, who takes accountability?

The Risk Assessment score is based on 137 questions, some answered individually and some taken from existing records; they include questions on family employment history and education. The questions and their application highlight a wider problem with AI: it is quick to learn and to replicate human bias. Sentencing has long been entrusted to groups of humans who are presented with information but ultimately guided by principle, instinct and bias. If computers could accurately remove these biases, the result would be an incomparably fairer system; instead, they are exacerbating the problem.
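To make the kind of bias described above concrete, here is a minimal, hypothetical sketch in Python. It takes a handful of invented records, each with a demographic group, a “high risk” flag from a scoring tool and an actual re-offending outcome, and computes the false-positive rate per group: the rate at which people who did not re-offend were nonetheless flagged. A tool that is “particularly likely to falsely flag” one group will show a markedly higher rate for that group. The data, field names and numbers are illustrative only.

```python
# Hypothetical sketch: measuring the false-positive disparity described above.
# The records, group names and numbers are invented for illustration only.
from collections import defaultdict

records = [
    # (demographic group, flagged as "high risk"?, actually re-offended?)
    ("group_a", True,  False),
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", False, False),
    ("group_b", False, True),
]

# Count false positives (flagged but did not re-offend) and all non-re-offenders per group.
counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, flagged, reoffended in records:
    if not reoffended:
        counts[group]["negatives"] += 1
        if flagged:
            counts[group]["fp"] += 1

# A scoring tool is "particularly likely to falsely flag" one group if its
# false-positive rate is markedly higher there than for other groups.
for group, c in counts.items():
    rate = c["fp"] / c["negatives"] if c["negatives"] else 0.0
    print(f"{group}: false-positive rate = {rate:.2f}")
```

This is one narrow check among many possible fairness measures; the point is simply that a bias of this kind is measurable, which makes the accountability question above answerable in principle.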

If due process requires that the accused in criminal proceedings be able to confront and cross-examine the witnesses testifying against them, how do we ensure that computer algorithms can be confronted and examined in the same way?

  2. Governance and accountability.

When things go wrong, who is responsible? Now is the opportunity to introduce governance and regulation, possibly through regulatory bodies similar to the FCA, to ensure the deployment of AI is accountable. Many AI applications, and the systems behind them, are so complex that they defy thorough examination. This lack of understanding is already feeding bias, with tangible and widespread results. In your Facebook bubble, your news is filtered by AI, and this method of news dispersal may have had a significant impact on the US election.

Several companies at the forefront of AI, including Microsoft, Amazon and IBM, have formed “The Partnership on AI” in an attempt to advance public understanding and develop shared standards of practice that everyone should adhere to. But if intelligence is built to create competitive advantage, it could lead to important information being withheld.

Increased use of AI promises to bring with it unprecedented economic growth. Efficiency savings and proper utilisation of the wealth of data amassed over the last 15 years present companies with two huge opportunities. The AI in the UK Executive Summary for Government, released on 15th October 2017, estimates that AI could add £630bn to the UK economy by 2035. The report makes 18 recommendations to improve access to data, the supply of skills, the uptake of AI and investment in research, but in the race to become a global AI powerhouse it makes no practical recommendation rooted in moral responsibility.

As we accept new technologies and the AI within them, we as consumers need to ask questions about how these systems work. AI with social implications in particular, like Risk Assessments, needs to draw on philosophy, law, sociology, anthropology, science and technology, just for starters. It must take account of social, political and cultural values and how these have been affected by technology over time. In the rush to make the most of the opportunity, we’re in danger of ignoring the social, moral and cultural implications altogether.

Written by Ben Abbott
Ben is a Business Development Manager for Wazoku with a passion for making work more exciting, more inclusive and more democratic for everyone.