Christopher Woolard discussed the UK regulator's views on artificial intelligence in financial services at an Alan Turing Institute event yesterday


Artificial intelligence is increasingly being used in financial services (Credit: Mike MacKenzie/www.vpnsrus.com)

Financial services companies should consider the worst-case scenario when choosing to deploy artificial intelligence, so they can mitigate the potential risks, according to a UK regulator.

Although the technology is still at a “nascent”, exploratory stage across the industry and is used largely for back office functions, its growing influence means banks and other institutions must ensure they have a “solid understanding” of it and the governance issues surrounding it.

The remarks were made by Financial Conduct Authority executive director of strategy and competition Christopher Woolard during a speech at an Alan Turing Institute event in London yesterday.

He discussed the watchdog’s approach to regulating AI in the UK financial services industry and the issues surrounding a technology that is gradually becoming more commonplace in everyday life.

Mr Woolard said: “Finance plays a fundamental role at the heart of daily life for almost every single person, and how AI plays out here may determine how citizens feel about new technologies across the board.

“The use of AI in the firms we regulate is best described as nascent. The technology is employed largely for back office functions, with customer-facing technology largely in the exploration stage.

Christopher Woolard, FCA executive director of strategy and competition (Credit: FCA)

“If firms are deploying AI and machine learning they need to ensure they have a solid understanding of the technology and the governance around it.

“This is true of any new product or service, but will be especially pertinent when considering ethical questions around data.

“We want to see boards asking themselves: ‘what is the worst thing that can go wrong’ and providing mitigations against those risks.”


Artificial intelligence holds ‘enormous promise’ for financial services

Despite this need for caution in implementing AI across the industry, Mr Woolard acknowledged the technology’s “enormous” potential to improve both security standards and customer experience.

He said: “While the widespread use of AI presents us with complex, ethically-charged questions to work through, it also holds enormous promise.

“We’re already seeing its potential play out in areas like financial crime. Distinctive patterns and data typologies are now being identified by today’s machine learning tools.

“And in retail banking – a utility of central importance to consumers that has long been bedevilled by a lack of innovation – AI has the ability to be genuinely transformative.”

The introduction of open banking standards in the UK last year opened up customer data sharing between finance companies and third-party providers – a move in which AI will undoubtedly play a significant role.

Artificial intelligence will play a significant role in open banking

Mr Woolard added: “We all know that with access to the rich data sets facilitated by open banking, the potential for AI for the good of consumers is huge.”

He warned, however, that as valuable customer information continues to proliferate throughout the market, banks will also face temptations to misuse it.

“A key determinant of future competition will be whether data is used in the interests of consumers or used by firms to extract more value from those consumers,” said the FCA director.

“As the market in data grows and machine learning continues to develop, firms will find themselves increasingly armed with information and may be tempted into anti-competitive behaviours.”


Financial services industry must work to build trust in artificial intelligence

Mr Woolard stressed the importance of building and maintaining consumer trust in AI, data sharing and the wider banking industry.

He cited the 2008 financial crisis and the more recent Facebook-Cambridge Analytica privacy scandal as episodes that caused significant reputational damage to large firms and to industries that collect rich data sets on their customers.

He said: “Technology relies on public trust and a willingness to use it. The public needs to see the value data can create for them.

“The Facebook-Cambridge Analytica incident last year struck a heavy blow against consumer trust in data sharing, which is still playing out.

“By and large those who lead financial services firms seem to be cognizant of the need to act responsibly, and from an informed position.

The FCA headquarters (Credit: FCA)

“Rightly, the lessons of the crisis seem to be playing on industry minds.

“Too much faith was put in products and instruments that weren’t properly understood.

“Certainly, there is no desire to reverse progress on rebuilding public trust.

“Perhaps unsurprisingly, the picture varies depending on the firm in question. Some larger, more established firms are displaying particular cautiousness.

“Some newer market entrants can be less risk averse. Some firms haven’t done any thinking around these questions at all – which is obviously a concern.”

But he added that, despite this need for prudence and risk awareness, the FCA doesn’t want to become a “barrier to innovation in the interests of consumers”.


FCA partners with Alan Turing Institute to explore impact of artificial intelligence in financial services

During his address, Mr Woolard announced the FCA has teamed up with the Alan Turing Institute to explore “transparency and explainability of AI in the financial sector”.

The research organisation – named after the celebrated computer scientist who was this week unveiled as the face of the new £50 note – specialises in data science and AI, and will work with the regulator to develop a better understanding of how the technology will influence financial services.

Mr Woolard said: “Through this project we want to move the debate on from the high-level discussion of principles – which most now agree on – towards a better understanding of the practical challenges on the ground that machine learning presents.”

Helen Margetts, public policy programme director at the Alan Turing Institute, added: “The use of AI in the financial sector is characterised by a fascinatingly diverse range of applications.

“It also includes AI use cases with particularly high stakes for consumers and society at large.

“This makes examining the ethical and regulatory questions that arise in this context a rich as well as an urgent task.”