Amazon has recently been accused of bias in its Artificial Intelligence (AI) facial-recognition technology.

Research found that darker-skinned women were correctly identified only 69% of the time, whereas lighter-skinned men were identified successfully every time, displaying a clear gender and ethnic bias.

However, Amazon’s technology is not the only one to have struggled with bias. A study of facial-recognition systems from Microsoft, IBM and Face++ revealed that all performed better on male faces than on female faces (91.9% vs 79.4%), and lighter faces were recognised better than darker faces (88.2% vs 80.8%). The worst results were for darker-skinned women, who were identified successfully only 65.3% of the time, once again displaying clear gender and ethnic bias.
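By way of illustration only (the records below are invented, not the studies’ actual code or data), a per-group accuracy check of the kind such audits perform might look like the following sketch:

```python
# Toy per-group accuracy audit: the records are invented for illustration.
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, correctly identified?)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
    # ... a real audit would use thousands of labelled faces per group
]

totals, correct = defaultdict(int), defaultdict(int)
for group, identified in results:
    totals[group] += 1
    correct[group] += identified

for group in totals:
    print(f"{group}: {100 * correct[group] / totals[group]:.1f}% identified correctly")
```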

The question arises as to whether this bias stems from the AI’s nature or its nurture, the latter being the data fed to the AI in order to make decisions and, in this case, recognise faces. In response to, and in acknowledgment of, AI bias, IBM published ‘Diversity in Faces’, a new and more diverse dataset intended to combat and reduce bias. With the goal of eradicating bias by changing the way AI learns rather than the AI itself, IBM identifies the need for new training data that is more diverse and offers a wider breadth of coverage, reflecting the distribution of faces seen across the globe.
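The underlying idea can be sketched in a few lines. The group labels, target shares and images below are illustrative assumptions, not IBM’s actual methodology; the point is simply that training data can be re-sampled to match a chosen population distribution rather than whatever the raw collection happens to contain.

```python
# Minimal sketch of re-sampling a face dataset to match target group shares.
import random

def rebalance(samples, target_share, n, seed=0):
    """Draw n training examples so each group appears at its target proportion."""
    rng = random.Random(seed)
    by_group = {}
    for group, example in samples:
        by_group.setdefault(group, []).append(example)
    rebalanced = []
    for group, share in target_share.items():
        k = round(n * share)
        # sample with replacement, so under-represented groups are oversampled
        rebalanced += [(group, rng.choice(by_group[group])) for _ in range(k)]
    rng.shuffle(rebalanced)
    return rebalanced

# Raw data skewed towards one group; target shares reflect a broader population.
raw = [("lighter-skinned", f"img_{i}") for i in range(800)] + \
      [("darker-skinned", f"img_{i}") for i in range(200)]
balanced = rebalance(raw, {"lighter-skinned": 0.5, "darker-skinned": 0.5}, n=1000)
```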

Bias in Law Enforcement


Biased AI can appear in more forms than facial recognition. Used by Durham police, the Harm Assessment Risk Tool (HART) is a form of AI that uses algorithms to assist in assessing eligibility for deferred prosecution by weighing factors that indicate the likelihood of re-offending. Since human behaviour and extraneous variables cannot be predicted precisely, HART remains a tool for assistance rather than for authoritative decision-making.

HART requires evidence as input, and if that evidence is misguided the end decision can be biased. As with facial recognition, the cause of the bias lies not in the AI itself but in the data it relies upon to reach an outcome.
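To make that concrete, the toy score below (the factors and weights are my own illustrative assumptions, not HART’s actual model) shows how skew in a single historical input flows straight through to the output: two otherwise identical individuals receive different risk scores purely because one lives in an area that has historically been policed, and therefore recorded, more heavily.

```python
# Toy risk score: illustrative weights only, not HART's actual model.
def risk_score(prior_arrests, age, postcode_arrest_rate):
    """Higher score means 'more likely to re-offend' in this toy model."""
    return round(0.5 * prior_arrests + 0.3 * postcode_arrest_rate + 0.2 * max(0, 30 - age), 2)

# Identical personal histories; only the area-level arrest rate differs.
print(risk_score(prior_arrests=1, age=25, postcode_arrest_rate=2.0))  # 2.1
print(risk_score(prior_arrests=1, age=25, postcode_arrest_rate=8.0))  # 3.9
```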

A large number of UK police forces have started to implement crime-prediction mapping programmes to identify crime ‘hotspots’. Using historical data, the AI flags areas in which crime is statistically more likely to occur, and the police respond with a higher volume of patrols in those areas.

‘Hotspot’ methods are believed to result in biased policing, which is only exacerbated the more they are used. Once an area has been identified as being at higher risk of criminal activity, increased patrols bring greater police coverage and therefore greater scope for arrests. The resulting rise in arrests in those specific areas means that subsequent runs of the AI continue to identify them as problematic, creating a cyclical pattern of targeting. Once again, the bias lies not in the AI itself but in the data the AI relies upon.
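A toy simulation makes the loop visible. The allocation rule and numbers below are illustrative assumptions rather than any force’s actual system: a single patrol is always sent to the area with the most recorded incidents, and incidents are only recorded where officers are present, so an initially small skew in the records locks in even though both areas have identical underlying crime rates.

```python
# Toy feedback loop: patrols follow the records, and records follow the patrols.
import random

random.seed(1)
true_rate = {"Area A": 0.5, "Area B": 0.5}   # identical underlying crime rates
recorded = {"Area A": 6, "Area B": 5}        # slightly skewed historical records

for day in range(365):
    patrolled = max(recorded, key=recorded.get)   # the 'hotspot' gets the patrol
    if random.random() < true_rate[patrolled]:    # crime is recorded only if seen
        recorded[patrolled] += 1

# Area A ends the year with far more recorded crime than Area B,
# despite the two areas being identical in reality.
print(recorded)
```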

Legislating AI


With no direct legislation on AI, and perhaps at the cost of restraining their own facial-recognition technology, Amazon and Microsoft are amongst those who have called for regulation. Amazon suggests that relevant existing legislation should apply to facial recognition, even if it restricts the technology. By contrast, Microsoft, citing the above study on bias, suggests that new laws, particularly regarding discrimination, should be implemented to protect consumers and the public from the adverse use of AI technology. As the law stands, it may be a case of interpretation to achieve the most equitable outcome.

The UK’s Centre for Data Ethics and Innovation has recently announced a partnership with the Government’s Race Disparity Unit to investigate the potential presence of bias in AI algorithms, focusing on those used in criminal justice, financial services, recruitment and local government. The investigation will highlight the impact on members of the public and address any potential bias to support fairer decision-making. In turn, this will allow appropriate legislation to be proposed and implemented as necessary.

The Bottom Line


The bias present in facial recognition may raise questions over AI’s current ability to replicate human social interaction. More importantly, it has the potential to discriminate inadvertently or, depending on the instructions given, deliberately, with significant ramifications.

When used authoritatively in the justice system, AI has to be flawless, which is not currently the case. Even without biases of its own, AI is subject to some level of human influence, which unfortunately, in some cases, is itself biased. With human liberty potentially at stake, any AI bias within law enforcement can have serious consequences for offenders and victims alike.