26 October 2016

Smarter than thou

Artificial intelligence is creeping into the back office, but it must be applied efficiently, and its human masters need to maintain a quantum of control

No longer a mere trope of science fiction, artificial intelligence (AI) and machine learning are gradually, but surely, starting to find practical positions in the financial services sector. While they may not be mainstream, they are most certainly up for discussion, and in some cases they’re even in the testing and implementation stages. However, while industry discussion would suggest that the era of AI is very nearly upon us, there are still several schools of thought as to where the technology might be best received, and most beneficial.

In a survey conducted by Finextra and Pelican, focused on AI in the payments space, 37 percent of respondents said their organisation’s payments processes were inefficient. The survey whitepaper noted that this is a relatively high number, considering the industry effort that has gone into improving efficiency in this space.

It said: “This raises the question: with all the investment and technology implemented to solve this problem to date, why can’t payments processing be close to 100 percent efficient?”

While 72 percent of survey respondents said they see potential for AI to tackle the inefficiencies in their payments processing, the majority also highlighted an issue with management buy-in, with 63 percent saying they would struggle to get the support to implement AI in this space.

One respondent, a director based in the US, said: “AI is still in its infancy. For a highly regulated field like banking and payments processing, AI has to reliably and repeatedly prove that its decisioning process falls under the regulatory framework and does not cause operational decisions that can create legal issues resulting in fines.”

Henri Waelbroeck, director of research at trading solution provider Portware, comes at risk management from a different angle.

With a background in physics and expertise in genetic algorithms and chaos theory, Waelbroeck focuses on the predictive capabilities of AI. Human beings perceive the world through “the lens of prediction”, he says, and this is what allows us to act intelligently within it.

“We identify differences between natural events and what we have anticipated, and we use this to trigger a response mechanism,” comments Waelbroeck.

Within financial institutions, risk management typically involves looking backwards and basing analysis on situations that have occurred in the past. AI could potentially allow for a new approach based on more information than simply past correlations.

However, such a model requires harnessing specific machine learning algorithms for specific uses, keeping an open mind about the data collected for the task at hand.

Waelbroeck says: “If you’re in the business of predicting market volume or volatility, then the typical machine learning approach is to look at historical volume and volatility numbers and build a prediction model based on that.”

“These models lend themselves well to that kind of application, until something unexpected happens.”

Using the example of a US Federal Reserve announcement, Waelbroeck explains that while traders know volumes will be quiet before the announcement and surge afterwards, machines have to be programmed to learn the significance of unusual events.

“What is required is to blend in enough domain knowledge for the machine to know which daily events are significant, and then leverage that data to handle that situation.”
He adds: “When picking a model, it’s important not to blindly use a machine learning model that automatically selects features from a broad ensemble, but rather to consider the problem you’re addressing and the key things the machine needs to be aware of.”
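Waelbroeck’s point about blending domain knowledge into a model can be made concrete with a minimal sketch. The example below compares a volume predictor built only on yesterday’s volume against one that also receives an explicit scheduled-event flag (standing in for a Fed announcement day). The synthetic data, feature choice and use of ordinary least squares are illustrative assumptions, not Portware’s actual method.

```python
import numpy as np

# Illustrative only: blend a domain-knowledge feature (a scheduled-event flag,
# standing in for a Fed announcement day) into a simple volume-prediction model.

rng = np.random.default_rng(0)
n_days = 200

event_day = rng.random(n_days) < 0.05          # ~5% of days carry an announcement
base_volume = rng.normal(100.0, 10.0, n_days)  # ordinary daily volume
volume = base_volume + 60.0 * event_day        # volume surges on event days

# Predict today's volume from yesterday's, with and without the event flag
X_naive = np.column_stack([np.ones(n_days - 1), volume[:-1]])
X_domain = np.column_stack([np.ones(n_days - 1), volume[:-1], event_day[1:]])
y = volume[1:]

coef_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)
coef_domain, *_ = np.linalg.lstsq(X_domain, y, rcond=None)

err_naive = np.mean((X_naive @ coef_naive - y) ** 2)
err_domain = np.mean((X_domain @ coef_domain - y) ** 2)

print(f"MSE without event flag: {err_naive:.1f}")
print(f"MSE with event flag:    {err_domain:.1f}")
```

The history-only model treats announcement-day surges as noise; once the machine is told which days matter, the error falls sharply.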

As is often the case, there is also a regulatory angle here. According to Ian Manocha, CEO of Gresham, regulators are putting increasing pressure on institutions to prove they’re in control of their transaction data.

He says: “For every report, they have to prove the lineage of where the information came from, right back down to the raw data, and show that all parties involved have integrity in terms of accuracy, completeness and timeliness.”

At the same time, the introduction of regulations such as the UK’s Senior Managers Regime means that the executives at the top are responsible for what they’re reporting. If there are errors, it is they who will be held accountable.

Manocha suggests that “heuristic algorithms” can take stock of transaction flows and supplement them with a specific set of rules that are relevant to the financial market in question, thereby collecting the correct data, and potentially cutting out months of manual IT work.

The technology not only collates the data, but gradually learns to look for holes in it and to find a way to correct the issue. Through a pilot project using this technology, Manocha claims Gresham “did in four days what the financial institution spent six months on”.
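A minimal sketch of the rules-over-transaction-flows idea Manocha describes might look like the following: each rule inspects a record and reports a gap, so flagged records can be routed for correction rather than discovered months later. The field names and rules here are invented for illustration and do not represent Gresham’s product.

```python
# Illustrative sketch of rule-based transaction validation: each rule inspects
# a transaction record and reports any hole in the data. Field names and rules
# are invented examples, not a real market's rulebook.

REQUIRED_FIELDS = ("trade_id", "counterparty", "amount", "currency", "value_date")

def validate(txn: dict) -> list[str]:
    """Return a list of human-readable issues found in one transaction."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not txn.get(field):
            issues.append(f"missing field: {field}")
    amount = txn.get("amount")
    if isinstance(amount, (int, float)) and amount <= 0:
        issues.append("non-positive amount")
    if txn.get("currency") and len(str(txn["currency"])) != 3:
        issues.append("currency is not a 3-letter ISO code")
    return issues

transactions = [
    {"trade_id": "T1", "counterparty": "ACME", "amount": 250.0,
     "currency": "USD", "value_date": "2016-10-26"},
    {"trade_id": "T2", "counterparty": "", "amount": -10.0,
     "currency": "USDX", "value_date": "2016-10-26"},
]

report = {t["trade_id"]: validate(t) for t in transactions}
print(report)
```

Here T1 passes cleanly, while T2 is flagged on three counts (missing counterparty, non-positive amount, malformed currency). A learning layer, as described above, would sit on top of such rules and propose corrections.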

AI is in fact already in use to aid compliance, specifically with financial crime prevention methods. The Pelican and Finextra survey found that 26 percent of respondents are already using AI and machine learning capabilities for sanctions screening and anti-money laundering (AML) processes, while a further 20 percent said they are exploring options for developing in this area.

When asked whether machine learning would be useful in sanctions screening, AML and fraud prevention, some 63 percent said they expect it to be a significant benefit, and a further 43 percent said it would have some or a little benefit.

The survey report said: “Sanctions screening has been the starting point for many organisations, possibly because the way AI can be applied can easily be understood and checking against sanctions has become an essential activity for most organisations involved in payments.”

It added that natural language processing could allow institutions to scan and understand free-format text much more quickly.

“Using machine learning, a system can learn through experience and understanding of context what can pass through the sanctions filter, and what compliance obligations need to be checked, thus reducing false positive rates,” it said.
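One way to picture the false-positive reduction the report describes is threshold-based name screening: score candidate names against the sanctions list and only escalate those above a similarity cut-off, instead of alerting on every partial hit. The names, list and threshold below are invented for illustration; a production system would use far richer matching and context.

```python
from difflib import SequenceMatcher

# Illustrative sketch of reducing false positives in sanctions screening:
# score names against a (toy) sanctions list and escalate only close matches.

SANCTIONS_LIST = ["Ivan Petrov", "Global Shell Trading Ltd"]
THRESHOLD = 0.85

def screen(name: str) -> list[tuple[str, float]]:
    """Return sanctioned entries whose similarity to `name` exceeds the threshold."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrov"))    # exact name: escalated for review
print(screen("John Smith"))     # unrelated name: passes the filter quietly
```

In practice, the threshold itself is what a machine learning layer would tune from analysts’ past decisions, which is how false positive rates come down over time.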

Of course, speeding up any of these processes reduces not only the time taken to complete a task, but also the resources and cost involved. According to Manocha, over time, these cost savings should not be underestimated, especially in large financial institutions that could have tens of thousands of personnel in the back office.

Manocha says: “The solution is not just about proving an institution is in control of the data; it is about helping them to resolve the issues there. That’s often the more expensive part of the problem.”

Waelbroeck also identifies cost savings that could come out of AI in the trade execution space. For asset managers, he says, trading costs can be a “significant drag” on portfolio performance, and if AI algorithms can improve the efficiency of trade execution schedules and the management of execution, that drag could be reduced.

He says: “Execution algorithms can reduce the friction costs of a portfolio through trading and therefore enhance portfolio performance, especially on a longer timescale when these incremental savings can accumulate to significant levels.”
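The compounding effect Waelbroeck describes is easy to quantify under stated assumptions. The figures below (a 6 percent annual return, 10 basis points of annual execution savings, a 10-year horizon) are invented purely to illustrate the arithmetic.

```python
# Illustrative arithmetic: compounding a small annual execution-cost saving.
# Assumed figures: 6% annual return, 10bp (0.10%) annual saving, 10 years.

base_return = 0.06
savings_bps = 10
years = 10

growth_without = (1 + base_return) ** years
growth_with = (1 + base_return + savings_bps / 10_000) ** years

extra = (growth_with / growth_without - 1) * 100
print(f"Extra cumulative performance after {years} years: {extra:.2f}%")
```

A saving that looks negligible in any single trade accumulates to roughly a percentage point of portfolio performance over the decade, which is the “longer timescale” point Waelbroeck is making.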

However, overhauls of the back office don’t come about overnight. The results of the Finextra and Pelican survey showed some frustration, and resignation, about the industry’s sluggishness at adopting new technologies.

Improving time to market for new technologies and products was considered either challenging or impossible by 53 percent of respondents. The report said: “Without addressing this challenge, they remain slow to deal with the threat from more agile competitors, and slow time to market also means a delay in generating revenue, return on investment and profit.”

Equally, the focus on meeting regulatory requirements means that compliance projects take up the majority of institutions’ time, budget and resources, so potentially transformative projects, such as developing AI, are seldom a priority. They’re also unlikely to become a priority until more use cases are developed, tested and proven to add value.

One respondent, a director based in the UK, commented: “AI or machine learning is an untapped, yet to be tested, but potentially huge benefit to the financial services sector.”
“The challenge is how to best present the value and demonstrate it in a measurable, quantifiable way.”

Waelbroeck notes, however, that innovation doesn’t tend to come from the compliance arena, but rather from the drivers of minimising investment risk, reducing costs and driving efficiency.

While these are part of the regulatory agenda, they’re also in tune with the asset managers’ objectives of improving portfolio performance and generating alpha.

“The better starting point for fintech innovation is when we can identify opportunities for machine learning or artificial intelligence that really impact portfolios from an alpha point of view,” Waelbroeck says. “This will catch the portfolio manager’s attention, and as the portfolio managers adopt more advanced techniques both in minimising execution costs and managing risks, then eventually those concepts will find their way into the compliance arena.”

Waelbroeck is of the opinion that the responsibility here sits with the fintech vendors, which should make a transition to AI systems as seamless as possible. He suggests that it is the responsibility of the technology provider “to physically enable portfolio managers to operate in the same environment they’re operating in today, but also to gain access to higher-level and more intelligent insights”.

From a regulatory perspective, however, it is still important for the humans within the business to understand how the machines have arrived at any particular conclusion.

According to Manocha, it all comes back to the element of control.

He says: “What a financial institution doesn’t want is a black box where information goes in and an answer comes out, but actually nobody knows how that answer was put together.”

“Particularly in a regulated environment, you have to know exactly what is going on,” Manocha adds.

Gresham has applied AI to the “hard yards and smarts”, Manocha says, but he stresses the importance of the machine producing a ‘natural language’ descriptor of the answer.

Over time, this means that supervisory regimes will have a more comprehensive, real-time view of what is going on within an institution, and the institution itself will have clearly documented evidence of what is going on within its processing systems.

The Finextra and Pelican survey showed a clear desire for more specifics and proof points around AI technology and its application.
The report also noted the desire for a distinction between the depiction of AI in popular culture and its actual tangible benefits in the business world, noting: “Because of the connotations of AI in the popular consciousness, it can be prone to hype and fearmongering.”

The vast majority of respondents, 92 percent, agreed, to some degree, that there is a need for more industry awareness on how AI can apply to transaction banking.

Some 88 percent said it could be integral in addressing the inefficiencies that remain in the industry.

Parth Desai, founder and CEO of Pelican, said in the report: “Certainly we are a long way from developing general-purpose intelligent machines that operate on the same scale as human beings, because the computational power needed to use all the knowledge for this will be enormous.”

He added, however: “AI can deliver real benefits right now, by offering the ability to solve practical problems in less time and using the available computational power.”

Black Knight Media