As humans, we are naturally flawed. Our ability to make mistakes and learn from them is what makes us who we are. It’s where creativity and innovation spring from. Nowhere is this more apparent than in technology.
From systems mis-registering genders based on job titles, to prejudiced photo auto-tagging and facial recognition software struggling to recognise people of colour, our own bias, whether conscious or not, seeps into the technology we create.
Whenever the diversity challenges facing the technology industry come up, examples like these are the ones people reach for. They are, however, only part of the problem. Even if the sector were to become the most diverse and inclusive on the planet, there would still be inherent bias within technology, because it’s subconsciously built in by its creators: humans.
These challenges are compounded as technologies like Artificial Intelligence (AI) and Machine Learning (ML) grow in prominence. Why? Because these technologies automate actions. They can make decisions, and learn and repeat behaviours, faster than humans. But that also means mistakes can now spread faster than humans can comprehend them.
For example, an AI-powered facial ID application tested on a predominantly female, Anglo-Saxon group could struggle to recognise other faces. Suppose it’s being used to control access to hospitals, schools or a new workspace. If it’s powered by an ML platform so that it can scale, the problem will scale with it.
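To make that scaling risk concrete, here’s a toy sketch in Python – all scores, groups and numbers are invented for illustration – of how a verification threshold calibrated on one demographic group can quietly lock out another:

```python
import random

random.seed(42)  # deterministic toy data

# Hypothetical genuine-match scores from a face-verification model.
# The model was tuned on group A, so group B's scores run lower and noisier.
scores_a = [random.gauss(0.80, 0.05) for _ in range(1000)]
scores_b = [random.gauss(0.65, 0.10) for _ in range(1000)]

# Accept threshold calibrated so only ~1% of group A is falsely rejected.
threshold = sorted(scores_a)[10]

def false_rejection_rate(scores, t):
    """Share of genuine users the system refuses to let in."""
    return sum(s < t for s in scores) / len(scores)

frr_a = false_rejection_rate(scores_a, threshold)
frr_b = false_rejection_rate(scores_b, threshold)
# The same door that opens reliably for group A fails far more often for
# group B, and an ML platform that scales the system scales this gap too.
```

In this invented data, group B is falsely rejected dozens of times more often than group A, even though the threshold looks perfectly reasonable when measured only against the group the system was tested on.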
So how do we overcome our own bias? In short: through a two-pronged approach. One part focused on humans, and the other focused on technology.
On the human side, it’s imperative that organisations look at diversity and inclusion. It must be an organisation-wide commitment, to ensure we accurately reflect our global audiences. This isn’t just a coding consideration: it should be ingrained within an organisation’s leadership, as well as its planning, conception and execution processes.
Good leaders encourage their teams to challenge their assumptions and biases every day. It’s also a critical HR consideration: the greater the diversity within a team, the better equipped it will be to check for blind spots.
In addition, humans are able to understand nuance and employ empathy. This allows us not only to identify bias, but also to turn assumptions into positive opportunities that resonate with our audiences.
On the technology side, it’s important that our tools help us catch our biases before they enter the system and proliferate. To do this, we require technology that takes what we think we know and forces us to consider other possibilities.
A classic example is the age-old Google search. A basic text-box search engine doesn’t actually tell us anything we don’t already know: we need to know the question we want to ask. Over time, thanks to algorithms and cookies, the search engine tailors its results to what it thinks we’re asking for.
This might have been introduced to offer a more personalised experience, but it can actually create bubbles where considerations from outside of our personal context will never appear.
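A toy simulation – the topics and the click model are invented for illustration – shows how quickly a click-driven ranker can collapse into exactly this kind of bubble:

```python
from collections import Counter

# Five topics the engine could surface; none is favoured at the start.
topics = ["politics", "science", "sport", "finance", "culture"]
clicks = Counter()

def personalised_results(k=3):
    # Rank topics by past clicks; never-clicked topics sink to the bottom.
    # Python's sort is stable, so ties keep their original order.
    return sorted(topics, key=lambda t: -clicks[t])[:k]

# Assume the user always clicks the top result, as many users do.
for _ in range(10):
    clicks[personalised_results()[0]] += 1

# After ten searches, every click has gone to the same topic: the engine
# now "knows" what we want, and stops surfacing anything else first.
```

One arbitrary early click is enough: the feedback loop rewards whatever happened to rank first, and considerations from outside that narrow context never get the chance to appear.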
Let data define itself
It’s time for a different approach. Allow me to introduce the associative engine. This technology allows data to define itself – as opposed to the relational, structure-first approach, where data is defined by the questions we want to ask. A human still puts the question to the engine, but the engine cuts out the answers pre-filtered by pre-defined data structures. Instead, the data is presented as it stands.
What’s the true benefit of this? Ultimately, it means the connections in the data stay live, allowing the algorithms to find and surface unasked insights to users.
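As a rough illustration of the associative idea – this is a toy sketch with invented data, not any vendor’s actual engine – a selection can surface not just the rows that match, but also the values the selection excludes, hinting at questions we didn’t think to ask:

```python
def associate(rows, **selection):
    """For a selection, return per-field associated and excluded values."""
    matching = [r for r in rows if all(r[k] == v for k, v in selection.items())]
    fields = rows[0].keys()
    associated = {f: {r[f] for r in matching} for f in fields}
    excluded = {f: {r[f] for r in rows} - associated[f] for f in fields}
    return associated, excluded

# Invented sample data: sales rows with no pre-defined query path.
rows = [
    {"region": "EMEA", "product": "X"},
    {"region": "EMEA", "product": "Y"},
    {"region": "APAC", "product": "Y"},
    {"region": "APAC", "product": "Z"},
]

associated, excluded = associate(rows, region="EMEA")
# associated["product"] is {"X", "Y"}; excluded["product"] is {"Z"},
# surfacing the unasked question: why does product Z never appear in EMEA?
```

A query-first system would simply answer “what sells in EMEA?”; the associative view also shows what is conspicuously absent, which is where the unasked insights live.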
It’s part of a concept called augmented intelligence, whereby you take the power of AI and use it to support – not replace – humans. In an era where many people are concerned that AI and machines will become so smart that they replace us, it helps to counter this fear.
With augmented intelligence, you get the speed and scale of AI and ML in a way that enhances what a human is trying to achieve. When we use it to help tackle bias, it can support diverse teams in rapidly reviewing large data sets for instances of bias, and in identifying and removing them before those insights are actioned.
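A minimal sketch of the kind of automated check this could run – the records and the fairness threshold here are invented for illustration – flags skewed outcomes across groups so that a human team reviews them before anything is actioned:

```python
from collections import defaultdict

def selection_rates(records):
    """Approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def flag_bias(records, max_gap=0.2):
    """Flag the data set if group approval rates differ by more than max_gap."""
    rates = selection_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

# Invented decisions: group A is approved 80% of the time, group B only 30%.
records = [{"group": "A", "approved": i < 8} for i in range(10)] + \
          [{"group": "B", "approved": i < 3} for i in range(10)]

flagged, rates = flag_bias(records)
# flagged is True: the gap goes to a human team for review before the
# model's decisions are actioned.
```

The machine does what it is good at – scanning every record at speed – while the judgement about why the gap exists, and what to do about it, stays with the humans.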
Will we achieve a bias-free future?
The mistakes that make us human are what make us valuable. They give us our creativity and our ability to innovate – things that machines can’t yet replicate. But to manage this, particularly as business leaders, we need to be aware of our weaknesses and prejudices – the biases inherent in all of us – and set a strong example for our teams to follow.
Doing this is only possible when you combine the human ability to empathise and find nuance with the scale and speed of technology – in short, augmented intelligence.