Trust is Key to Future Success of AI

  • Jean Van Vuuren, Associate Vice President - EMEA Commercial Sales at Hyland

  • 06.12.2022 03:06 pm

Across the financial services sector, firms are looking to make their processes faster and less resource-intensive, with increasing numbers turning to artificial intelligence (AI) to automate their decision-making. On the face of it, this is a good thing for consumers, since their applications are dealt with sooner.
 
However, there are concerns that such efficiency comes with a caveat. What if the decisions being made aren't fair and cause financial exclusion by unintentionally discriminating against certain groups or individuals?
 
In other words, there may be inbuilt bias in such decisions, something the European Commission defines as "an effect which deprives a statistical result of representativeness by systematically distorting it".
 
A widespread issue?
 
There is a growing belief that such unintended bias exists and is a major issue for the sector. The UK Centre for Data Ethics and Innovation's AI Barometer, for instance, has described this kind of bias as "the biggest risk arising from the use of data-driven technology" and the Centre has started to work with others to bring about more responsible data sharing and better public sector AI and data use. The European Banking Authority (EBA), the Bank of England and the Financial Conduct Authority are also exploring the potential social impact of AI adoption, along with its challenges and risks.
 
But if algorithmic bias is a real problem, just how big is it?
 
No one is quite sure, though a pilot study being run by the NHS into bias in the allocation and delivery of health and care services may help provide some answers.
 
And if it is a major issue, what can be done to address it?
 
‘Algorithmovigilance’ may be the answer: the systematic monitoring of the algorithms used by financial institutions to ensure that processes such as credit referencing, fraud detection and anti-money laundering (AML) checks aren't unconsciously discriminating against particular individuals or societal groups.
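
To make that a little more concrete, here is a minimal sketch, in Python, of one such check: comparing approval rates across applicant groups in a sample of automated decisions. The group labels, sample data and 10% threshold are purely illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(decisions, max_gap=0.10):
    """Return approval rates plus any group pairs whose rates differ by more than max_gap."""
    rates = approval_rates(decisions)
    groups = sorted(rates)
    flags = [(a, b, round(abs(rates[a] - rates[b]), 3))
             for i, a in enumerate(groups) for b in groups[i + 1:]
             if abs(rates[a] - rates[b]) > max_gap]
    return rates, flags

# Made-up sample: 100 decisions for each of two applicant groups, 'A' and 'B'.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
rates, flags = flag_disparities(sample)
print(rates)   # {'A': 0.8, 'B': 0.55}
print(flags)   # [('A', 'B', 0.25)] -> a gap large enough to investigate
```

In practice a firm would apply several complementary fairness measures and investigate any flagged gap before concluding that it reflects bias rather than legitimate differences between applications.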
 
The source of bias
 
How does this unintended bias happen in the first place?
 
Generally, the starting point is the use of datasets that are incomplete, incorrect, or not up to date, to which programmers, managers and other stakeholders then add their own assumptions and prejudices. It is from this mix that skewed algorithms emerge.
 
And once they become embedded in the fabric of an organisation, these unwanted behaviours are — like bad habits — hard to root out and change.
 
Any AI system must of course be designed to meet all legal, social and ethical standards. So, if financial firms aren't paying sufficient attention to algorithmovigilance, they may expose themselves to legal and regulatory risks that, if not properly managed, could result in long-term reputational damage.
 
And given that consumer trust is a much sought-after commodity, making sure you are doing things correctly and delivering on your promises is of paramount importance. What's needed is real transparency, which means that if you can’t justify your decisions when challenged, you have a problem. In other words, it's all very well the computer saying 'no', but is the computer always right?
 
Rooting out bias
 
Organisations that are truly 'algorithmovigilant' will be better placed, first, to identify the biases that result in unfair decision-making and then to adjust their algorithms accordingly.
 
And given that most banks have already bought into the benefits of artificial intelligence, the potential for algorithmic bias is growing, and the spotlight will increasingly fall on those who are not seen as playing fair.
 
So, senior industry leaders must ensure that algorithmovigilance becomes embedded in every nook and cranny of their corporate and governance processes, and that staff are trained to be far more aware of unintended bias when working with data or building machine learning models. And such training needs to be continuous since there is always potential to drift back towards discrimination.

This requires continuous monitoring of algorithm-generated decisions to spot anomalies, with algorithms adjusted as necessary in response to changing market and societal conditions.
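
As an illustration of what such continuous monitoring might look like, the sketch below assumes decisions arrive as a stream and compares the approval rate in a rolling window against a fixed baseline. The window size, baseline rate and tolerance are illustrative assumptions only.

```python
import random
from collections import deque

class ApprovalRateMonitor:
    def __init__(self, baseline_rate, window=200, tolerance=0.10):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)   # most recent decisions only
        self.tolerance = tolerance

    def record(self, approved):
        """Record one decision; return an alert message if drift is detected."""
        self.window.append(int(approved))
        if len(self.window) < self.window.maxlen:
            return None  # not enough recent decisions yet
        current = sum(self.window) / len(self.window)
        if abs(current - self.baseline) > self.tolerance:
            return f"ALERT: approval rate {current:.2f} vs baseline {self.baseline:.2f}"
        return None

# Simulated stream: decisions at the baseline rate, then a biased batch.
random.seed(0)
monitor = ApprovalRateMonitor(baseline_rate=0.70)
for i in range(2000):
    approved = random.random() < (0.70 if i < 1000 else 0.50)
    alert = monitor.record(approved)
    if alert:
        print(f"decision {i}: {alert}")  # fires once biased decisions dominate the window
        break
```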
 
Without this, firms in the financial services sector won't be able to declare with any real confidence that their application procedures are unbiased.
 
A plan of action
 
Setting up a team of subject experts to create a knowledge centre is a good way to begin, since this will help ensure consistency of approach across the organisation and provide a focus for the regular monitoring of customer data to confirm that it is as complete and accurate as possible.   
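
By way of illustration, such a knowledge centre's routine checks might include something like the following sketch, which measures how complete and how recently verified customer records are. The field names and staleness threshold are hypothetical.

```python
from datetime import date

REQUIRED_FIELDS = ["name", "date_of_birth", "address", "income", "last_verified"]

def data_quality_report(records, max_age_days=365):
    """Count missing required fields and records not verified within max_age_days."""
    missing = {f: 0 for f in REQUIRED_FIELDS}
    stale = 0
    for rec in records:
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                missing[field] += 1
        verified = rec.get("last_verified")
        if verified and (date.today() - verified).days > max_age_days:
            stale += 1
    return {"total_records": len(records),
            "missing_counts": missing,
            "stale_records": stale}

# Example with two made-up customer records.
records = [
    {"name": "A N Other", "date_of_birth": "1980-01-01", "address": "1 High St",
     "income": 32000, "last_verified": date(2019, 6, 1)},
    {"name": "J Smith", "address": "2 Low Rd", "income": None,
     "last_verified": date.today()},
]
print(data_quality_report(records))
```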

Organisations would also do well to work actively with regulators, so they have a better understanding of current best practice - within their industry and beyond.

Technology is there to serve us. But that only happens when it is properly managed, so we must remain continually alert to issues that could cause it to do the opposite.

This is a challenge for all industries but, along with healthcare perhaps, it is felt most acutely in the financial sector, where wrongly made decisions can have a profound effect on individuals.

We certainly don't want to reach a point where consumers feel they are on the wrong end of AI decisions that a bank can't explain and regulators lack the technical expertise to investigate.
 
So, algorithmovigilance is a way to keep technology on track and help ensure that the social, cultural, legal and reputational consequences of discrimination, either implicit or explicit, don't ultimately lead to a permanent erosion of trust.
