Facing up to ethical complexity
If ethics seems complex now, there’s much more to come. Thanks to cognitive computing — covering artificial intelligence (AI), machine learning, intelligent automation, natural language processing, neural networks, deep learning and more — machines may soon be making qualitative judgements. But right now, the main role of cognitive computing is to augment human intelligence.
With traditional AI, a machine is programmed to explore vast amounts of data to find insights about causality, correlations and other complex relationships that can be expressed as an algorithm. Humans then interpret the findings and determine what action to take. So, people retain control. With machine learning, a computer is given data on inputs, activities and outcomes, then told to analyse it to ‘learn’ how to improve outcomes. Intelligent automation is where the machine takes control, putting the algorithms it has learned into practice and continuing to hone them based on outcomes.
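The machine-learning loop described above can be sketched in a few lines of Python: the machine is given inputs and observed outcomes, derives a rule (a set of weights), and keeps refining that rule as feedback from outcomes arrives. This is an illustrative sketch only; the single linear rule, the toy data and all names are assumptions, not taken from the text.

```python
# Minimal sketch of the machine-learning loop: learn a rule (weights)
# from (inputs, outcome) examples, then hone it based on outcomes.

def train(examples, epochs=20, lr=0.1):
    """Learn weights for a simple linear decision rule."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:          # y is the observed outcome (0 or 1)
            score = b + sum(wi * xi for wi, xi in zip(w, x))
            err = y - (1 if score > 0 else 0)   # feedback from the outcome
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Toy historical data: outcome 1 roughly tracks a high first input.
data = [([1.0, 0.2], 1), ([0.9, 0.8], 1), ([0.1, 0.9], 0), ([0.2, 0.1], 0)]
w, b = train(data)
print(predict(w, b, [0.95, 0.5]))  # -> 1
```

The key point for the ethics discussion is the middle line: the rule is adjusted purely from past outcomes, so whatever shaped those outcomes, including human bias, is what the machine learns.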
Deloitte identifies the ethical risks relating to AI and machine learning as:i
The General Data Protection Regulation (GDPR) is the data protection and privacy regulation in the EU. It came into effect in May 2018 and can lead to fines of up to 4% of global turnover for abusing or failing to take care of customer data.ii
Machine learning improves the accuracy of algorithms’ predictions by weighting the variables that most influence accuracy. If the data was about past applications for jobs or loans, for example, it’s possible that the machine will replicate or exaggerate any bias learned from past decision-making by humans. Such sub-optimal decision-making does not benefit the business, and it can be unfair to customers or employees. To check for potential bias, it is therefore necessary to understand which of an algorithm’s variables are most influential.

This was recognised over 30 years ago by the UK’s Commission for Racial Equality, when a program used to select applicants for interview by a British medical schooliii was found to discriminate against people with non-European names. It achieved 90–95% accuracy in matching human selection, revealing a bias that might not otherwise have been spotted. In 2016, an intelligent Microsoft-developed chatbot called Tay was given a Twitter account. It soon learned from its interactions with the public to post sexist and racist tweets: “Tay in most cases was only repeating other users’ inflammatory statements, but the nature of AI means that it learns from those interactions.”iv

While machine learning can replicate human bias, AI can also help identify it and provide transparency about discrimination. Proper governance is necessary, however, and would include monitoring outcomes to reveal whether any bias in past human decision-making has been codified. Unfortunately, this may become more difficult in future, as data architectures across neural networks grow more complex.
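The outcome-monitoring step that such governance requires can be illustrated with a short Python sketch: compare an algorithm's selection rates across groups to surface any codified bias. The group labels, the toy data and the 0.8 warning threshold (the "four-fifths rule" used in some fairness tooling) are illustrative assumptions, not taken from the text.

```python
# Sketch of outcome monitoring: compare selection rates across groups
# to reveal whether bias from past decision-making has been codified.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, selected in {0, 1}."""
    totals, chosen = {}, {}
    for group, s in decisions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + s
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact(decisions, group_a, group_b):
    """Ratio of group_a's selection rate to group_b's."""
    rates = selection_rates(decisions)
    return rates[group_a] / rates[group_b]

# Toy decision log from an automated screening process.
outcomes = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% selected
            ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% selected
ratio = disparate_impact(outcomes, "B", "A")
print(f"{ratio:.2f}")  # -> 0.33, well below the common 0.8 warning level
```

A ratio well below 1.0 does not prove discrimination on its own, but it flags exactly the kind of pattern, such as the medical-school selection program above, that warrants human review of the algorithm's most influential variables.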
The pace of change is making it hard for regulators to anticipate potential risks. New regulation is normally only enacted following a scandal, but several nations have issued AI plans, road maps or strategies without this trigger.
While their primary concern may be to maintain their nation’s economic competitiveness, there is also a focus on ethical standards and policies. Meanwhile, major technology businesses such as Google and IBM have developed ethical guidelines for governing the use of AI that they make widely available. Some technology vendors have also launched open-source tools to address ethical issues such as bias and transparency. These include Facebook’s Fairness Flow, Google’s What-If Tool, and IBM’s AI Fairness 360 and Watson OpenScale.
The Institute of Business Ethics suggests four principles to consider when deciding how algorithms are used:v
While these guidelines exist, there is currently no ‘gold standard’ for what constitutes ‘ethical AI’.