Technochauvinism and AI Bias

Technochauvinism is a recent addition to the cyber vernacular. Coined by data journalist Meredith Broussard, it refers to the belief that technology always improves things, and that technological solutions should always be favoured over traditional approaches. How many times have you tried scanning your groceries through the self-checkout with painful results? These chipper-sounding self-checkouts often impede rather than assist customers, and self-service processes can be especially problematic for older, less digitally literate customers.

A recent video from China documents a cardless vending machine transaction in which the machine scans a customer's face to give them access to the stock inside. If ever there was an example of needless technology, look no further.

This is a perfect example of a needlessly “advanced” solution to a daily transaction. If convenience is the end goal, why not opt for contactless payment? What happens to the facial scan data on the back end? And what about customers who do not have their payment information stored and linked in the cloud? How do they access the service?

In her book Artificial Unintelligence: How Computers Misunderstand the World, Broussard documents the many ways technology misses the mark, causing problems and excluding certain demographics. She notes that facial recognition systems often fail to verify faces of colour. This is a growing problem, particularly as the software is becoming a staple in airports the world over. Facial recognition mechanisms are now replacing passport checks, but how convenient one finds this process may depend entirely on the colour of one's skin. In one study, Joy Buolamwini tested the facial recognition abilities of systems from IBM, Microsoft and Face++. She found that the software most easily identified the faces of white men, while it was least accurate in identifying black women.

Technology can only reflect the world view programmed into it by its developers. As has been well documented throughout the tech boom, the majority of developers have been white men. With that lack of diversity come key gaps and oversights, which are then baked into AI's neural networks, resulting in software that struggles to recognise women of colour. When facial recognition software has been trained on data consisting primarily of white faces, it is no wonder these issues exist. It is vital that companies introduce more diversity into their programming teams and, furthermore, ensure that their data sets are representative of all demographics in order to avoid these biases.
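The kind of audit Buolamwini performed can be sketched in a few lines of code: instead of reporting one aggregate accuracy figure, break the results down by demographic group. The function and data below are entirely illustrative (the labels and group names are made up for the example, not taken from her study), but they show how an overall score can mask a large gap between groups.

```python
def accuracy_by_group(predictions, labels, groups):
    """Compute classification accuracy separately for each demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Illustrative, made-up results from a hypothetical gender classifier:
preds  = ["m", "m", "f", "m", "f", "m", "f", "f"]
labels = ["m", "m", "f", "m", "m", "f", "f", "f"]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(accuracy_by_group(preds, labels, groups))
# → {'lighter': 1.0, 'darker': 0.5}
```

Here the aggregate accuracy is 75% (6 of 8 correct), which sounds respectable, yet the disaggregated view reveals the classifier is perfect on one group and no better than a coin flip on the other. This is precisely the kind of gap that a single headline accuracy number hides.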

Questions have also been raised about the validity of the facial recognition programmes used by police. This software is used to identify criminals, and countless reports have revealed that these programmes are far more likely to profile people of colour. Police in the UK have recently spent millions on this software, despite obvious inaccuracies in the systems. Technochauvinism strikes again, as a product that is far from perfect is launched and trusted purely because it is shiny and new. Research like Buolamwini's is essential in establishing more inclusive approaches to AI, and hopefully her work will inform important changes by the tech giants dipping their toes into these new avenues. One thing is for sure: the motivations for introducing facial recognition software are ambiguous at best, and human rights abuses in China have been linked to facial surveillance programmes. Again, as Meredith Broussard asks in her book: is this technology really necessary, and is it improving our lives?
