Ethics are defined as the “principles that govern a person’s behaviour”. We all live by rules grounded in our ethical and moral obligations to society. We need the rules that guide our technology to work in much the same way.

We must view AI through an ethical lens. That means we have to be mindful of the impact it may have on society when deciding how and when to use it. There are five questions we need to ask ourselves as we adopt AI. 

1. Will it benefit the stakeholders we’re trying to serve? 

At Confused.com, our business model is built around making customers’ lives easier. We pioneered the idea of making it faster and easier to compare insurance, which has saved people billions of pounds over the years. The companies that get it right are the ones that genuinely have their customers’ best interests at heart; authenticity is key, especially in this day and age. Companies should take the same approach to AI. Ask yourself: will this application genuinely do ‘good’?

For us, doing ‘good’ means helping customers find better deals. It also means consistently striving for a better understanding of the insurance industry, so that we can keep improving what we deliver. That’s how we’re using AI.

2. How will it integrate with society? 

There is a fine balance between evolving in line with new technologies and making sure we don’t move so quickly that we collide with the values we hold as a society. AI is still in the early stages of its development, which means there is room for improvement and for better understanding. More importantly, it means there is room for mistakes and misuse. There is still a great deal of volatility around which models organisations should use and, for that reason, we need to move forward with caution.

AI isn’t a phenomenon that only affects the fintech sector; it’s gaining traction in all industries, including public services, government, healthcare, education and charities. From a moral standpoint, most people would agree that reducing costs, enhancing quality and freeing up frontline staff to do work that benefits all of society is a positive thing. It’s about unburdening staff of menial tasks so they can do more meaningful work, rather than seeing AI as a way to replace them.

To help our staff embrace AI, we’ve implemented the ‘School of Tech’ initiative, where non-tech employees team up with our data engineers to understand how AI and machine learning can support them in their day-to-day roles. A few members of our marketing team have been through the programme and have built simple applications that help them process and analyse results, reducing tasks that once took an hour to just minutes. This helps them see the value of AI, understand it better and come up with ideas for how it can help our customers.
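
To give a flavour of the kind of script that comes out of the programme, here is a minimal sketch in Python with pandas. The file name and column names are purely illustrative assumptions, not our actual data model; it simply shows how a repetitive results-summary task can be reduced to seconds.

```python
# Hypothetical example: summarise campaign results from a CSV export.
# The file name and the "channel", "clicks" and "conversions" columns are
# illustrative assumptions, not a real Confused.com dataset.
import pandas as pd

def summarise_results(path: str) -> pd.DataFrame:
    results = pd.read_csv(path)
    summary = (
        results.groupby("channel")
        .agg(clicks=("clicks", "sum"), conversions=("conversions", "sum"))
    )
    # Conversion rate per channel, the figure the team used to work out by hand.
    summary["conversion_rate"] = summary["conversions"] / summary["clicks"]
    return summary.sort_values("conversion_rate", ascending=False)

if __name__ == "__main__":
    print(summarise_results("campaign_results.csv"))
```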

3. Is our data biased? 

AI is only as good as the data behind it. This has proved critical for organisations in mortgage lending, HR and education, where flaws in the data have meant people were treated unfairly. In one example, an AI recruiting engine screened out female applicants because the system had been trained on CVs that were predominantly from men. A similar situation arose in an education system, where minority and female applicants were rejected.

Bias in data can be mitigated by employing a diverse team. We need to be inclusive of different genders, ethnicities, socioeconomic backgrounds, disabilities and sexualities, so that our solutions to problems are multifaceted and the potential for bias in AI systems is reduced.
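
A practical complement to a diverse team is a routine check of the data itself. The sketch below is a hypothetical example in Python: the column names are made up, and the 0.8 threshold is the “four-fifths rule”, a common rule of thumb for disparate impact rather than a legal standard. It compares selection rates across groups and flags any group treated markedly worse than the best-treated group.

```python
# Hypothetical bias check: compare selection rates across groups in training data.
# The "gender" and "selected" columns and the 0.8 threshold (the "four-fifths rule",
# a common rule of thumb for disparate impact) are illustrative assumptions.
import pandas as pd

def flag_disparate_impact(data: pd.DataFrame, group_col: str, outcome_col: str,
                          threshold: float = 0.8) -> pd.Series:
    rates = data.groupby(group_col)[outcome_col].mean()   # selection rate per group
    ratios = rates / rates.max()                          # relative to best-treated group
    return ratios[ratios < threshold]                     # groups below the threshold

if __name__ == "__main__":
    sample = pd.DataFrame({
        "gender":   ["M", "M", "M", "F", "F", "F", "F", "M"],
        "selected": [1,   1,   0,   0,   0,   1,   0,   1],
    })
    flagged = flag_disparate_impact(sample, "gender", "selected")
    print(flagged if not flagged.empty else "No groups flagged")
```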

4. How can we ensure we comply with regulation? 

Another key consideration is the regulatory environment. Given the uncertainty around AI technology and the fast-moving, immature nature of the market, there is more need than ever to be mindful of regulations, whether they are set by a regulatory body or by an internal set of values. But we also need to keep a level of fluidity around these rules so that they can develop in line with the growth of the application. For this reason, it’s crucial that we test an application over its lifespan rather than on a one-off occasion. From the outset, we need to promote a culture of human accountability for AI-powered solutions and the decisions they inform.
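
To make “testing over the lifespan” concrete, here is a minimal monitoring sketch in Python. The baseline figure, tolerance and the shape of the predict function are illustrative assumptions; the point is simply that a deployed model is re-scored on fresh, labelled data at regular intervals and escalated to a human reviewer when performance drifts from the agreed baseline.

```python
# Hypothetical lifecycle check: re-evaluate a deployed model on fresh, labelled data
# and escalate to a human reviewer if accuracy drifts from the agreed baseline.
# The baseline, tolerance and predict() contract are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class MonitorConfig:
    baseline_accuracy: float = 0.90   # accuracy agreed at sign-off
    tolerance: float = 0.05           # allowed drop before escalation

def check_model(predict: Callable[[Sequence], Sequence],
                features: Sequence, labels: Sequence,
                config: MonitorConfig) -> bool:
    predictions = predict(features)
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    if accuracy < config.baseline_accuracy - config.tolerance:
        # In practice this would open a ticket or notify the accountable owner.
        print(f"ALERT: accuracy {accuracy:.2%} below baseline, human review required")
        return False
    print(f"OK: accuracy {accuracy:.2%} within tolerance")
    return True

if __name__ == "__main__":
    # Toy model and data purely to show the monitoring loop running.
    toy_predict = lambda xs: [x > 0 for x in xs]
    check_model(toy_predict, [1, -2, 3, -4], [True, False, False, True], MonitorConfig())
```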

For us as organisations, the successful use of AI relies above all on trust, which is why we must operate with complete transparency. These are the mechanics that will allow us to build trust and ease public concern. We can help this transition by actively collaborating with regulators from an early stage of development, which in turn can help us identify and address any sector-specific issues. This should also be driven by a mindset of ensuring that all data is ethically sourced and GDPR-compliant.

Then there is the pivotal question: 

5. Is AI the best application for the job? 

Despite the hype around the current AI trend, we as fintech organisations have a duty to consider whether AI really is the best solution at hand. Because the technology changes so quickly, it can often be unreliable, so we need to weigh up all the options before rushing to use the newest, most revolutionary application available to us. Naturally, benefitting stakeholders, customers and wider society should be the main driving force behind any decision to employ AI or not. But equally, we need to scrutinise the capabilities of the application and be clear about the limitations of the data, so that its shortcomings can be addressed.

Looking to the future, if implemented correctly, AI will help organisations achieve breakthroughs where there would previously have been blind spots. For Confused.com, that could mean providing customers with a faster, more advanced service. For the healthcare sector, it could mean bringing a new diagnostic product to market. AI will allow organisations to provide better customer service, with shorter response times and better interaction, while freeing up more time and human attention for more complex enquiries.

The possibilities are endless, but we must always keep societal impact at the forefront of our minds. If the way we’re using AI doesn’t benefit our stakeholders, or threatens wider society in some other way, we should stick with the approaches that already work. Just because AI allows you to do something doesn’t mean you should.
