03 June 2015

The end of grace: more pain on the way

As grace periods come to an end and legacy systems are found wanting under the weight of regulatory scrutiny, the pressure will grow to come up with sound, persistent solutions that greatly diminish the threat of crippling fines and levies. Simon Shepherd of MYRIAD reports

Recent regulatory enforcement and hefty fines from the UK Financial Conduct Authority (FCA) came with a stark warning from Georgina Philippou, the then acting director of enforcement and market oversight: “[F]irms with responsibility for client assets should take this as a further warning that there is no excuse for failing to safeguard client assets and to ensure their own processes comply with our rules.”

The FCA’s statement made it abundantly clear that custody rules require firms to “keep entity-specific records and accounts”, because these are required in the event of insolvency. Without them, client assets cannot be safely returned. Additional requirements include the obligations to: conduct entity-specific external reconciliations; maintain an adequate Client Assets Sourcebook (CASS) resolution pack (a requirement since 1 October 2012); and submit accurate client money and asset returns (CMAR) (a requirement since October 2011).
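To make the reconciliation obligation concrete, the sketch below shows one hedged interpretation of an entity-specific external reconciliation: internal records and the custodian’s statement are aggregated by legal entity, client and security, and any difference is flagged as a break. The record layout and the reconcile function are illustrative assumptions, not a description of any particular firm’s or vendor’s system.

```python
# Minimal sketch of an entity-specific external reconciliation, using an
# assumed record layout: each position is keyed by (legal entity, client,
# security) and compared against the custodian's statement.
from collections import defaultdict

def reconcile(internal_positions, custodian_positions):
    """Return the breaks where the firm's books disagree with the custodian."""
    books = defaultdict(float)
    statement = defaultdict(float)
    for entity, client, security, qty in internal_positions:
        books[(entity, client, security)] += qty
    for entity, client, security, qty in custodian_positions:
        statement[(entity, client, security)] += qty

    breaks = []
    for key in set(books) | set(statement):
        diff = books[key] - statement[key]
        if abs(diff) > 1e-9:  # tolerance for rounding only
            breaks.append((key, books[key], statement[key], diff))
    return breaks

# Hypothetical example: a one-share discrepancy for client C1 is flagged.
internal = [("UK-ENTITY", "C1", "XYZ", 100.0)]
custodian = [("UK-ENTITY", "C1", "XYZ", 99.0)]
print(reconcile(internal, custodian))
```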

But in these areas, the fined financial institutions had been found wanting. Specifically, they failed to take the necessary steps to prevent the commingling of safe custody assets with firm assets from 13 proprietary accounts; used safe custody assets held in omnibus accounts to settle other clients’ transactions without consent; and failed to implement CASS-specific governance arrangements sufficient for the nature of their business, or to identify and remedy these failings.

It was not just the operations area at various banks that had come in for scrutiny. The serious rule breaches had typically not been spotted by the banks’ internal compliance teams, either. It is safe to assume that the level of internal communication and information sharing, and the setting of standards internally at many banks, had fallen somewhat short of the FCA’s expectations. The issue is whether they continue to do so.

It is clear that many banks have taken steps to correct these shortcomings. However, it is a worthwhile exercise to look at why and how such situations arose in the first place and, more importantly, what lessons can be learned from this episode for all major financial institutions. A key question has to be: to what extent do the many interim measures put in place over the last six or seven years, sometimes even longer, continue to meet the ever-more demanding needs of their owners?

Part of this examination must necessarily look at the environment within which all financial institutions, and particularly banks, are working. Inevitably, this means that the regulatory regimes around the world will be part of the discussion. Much of the regulation put in place after the financial crisis is starting to bite and is being extended. The Organisation for Economic Co-operation and Development has come up with the common reporting standard (CRS), modelled on the US Foreign Account Tax Compliance Act (FATCA). Fifty-one early adopter nations signed up to the CRS principles in October 2014 and the first information exchanges will take place by September 2017. CRS has significantly increased the scope and complexity of existing FATCA projects. Many tactical solutions are, or will be, unsustainable.

Furthermore, the Basel Committee on Banking Supervision’s 14 principles for effective risk data aggregation and risk reporting (BCBS 239) will progressively affect the industry through 2015 and 2016, and will necessitate radical overhauls of governance and infrastructure, risk aggregation capability, risk reporting, and overall supervisory reviews.

The fact is that a lot of solutions that have been cobbled together since the financial crisis are now coming under intense scrutiny, and many of them are being found inadequate. In-house systems that might have been deemed ‘robust enough’ and put in place as stop-gaps over the past six years lack sufficient depth, breadth, sophistication and persistence to deal with this raft of regulation satisfactorily. Legacy systems that might even be older than this are also suffering from under-investment and can no longer be deemed fit for purpose.

All of these concepts, functional capability and persistence in particular, are key to the definition, design, development and deployment of any solution. Indeed, in-house solutions frequently fail to address all four adequately, if at all. Part of the problem with in-house solutions is that they are designed, indeed destined, to fail from the outset, and it comes down to vision, knowledge and the ability to execute in a timely and cost-effective fashion. ‘Robust enough’ is typically indicative of not being robust at all in times of stress.

Why would a large bank not maintain segregated accounts as required under the FCA’s CASS? The only two reasons can be cost and/or capability. In the past, it was presumably cheaper simply to maintain omnibus accounts, but part of the cost consideration is the supposed difficulty of maintaining and administering segregated accounts.

The mechanics of maintaining segregated accounts are straightforward, if you have the right technology. Building the right technology in-house is slow, costly and perpetually ‘behind the curve’. The main criteria for assessing any project of this nature (functional robustness, timeliness and cost-effectiveness) have consistently not been met by in-house projects. Banks need to focus on definition and deployment, not design and development, and the best use of a bank’s money is to book-end the technology by saying what it wants (requirements) and making sure it ‘goes in’ properly (implementation). Leave the rest to the experts.
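To illustrate just how simple the core mechanics can be, the hedged sketch below keeps each client’s holdings in a separate account and refuses any delivery that would use one client’s assets to settle another client’s trade. The class and field names are invented for illustration; they do not describe MYRIAD’s or any other vendor’s product.

```python
# Minimal sketch of segregated safe custody accounts, using assumed,
# simplified data structures; a CASS-compliant platform is far richer.
class SegregatedLedger:
    def __init__(self):
        # (owner, security) -> quantity; owner is a client id, never pooled
        self.accounts = {}

    def deposit(self, owner, security, qty):
        key = (owner, security)
        self.accounts[key] = self.accounts.get(key, 0.0) + qty

    def deliver(self, owner, security, qty, for_client):
        # Segregation check: assets may only settle the owning client's trades.
        if owner != for_client:
            raise ValueError("commingling: one client's assets cannot settle another's trade")
        key = (owner, security)
        if self.accounts.get(key, 0.0) < qty:
            raise ValueError("insufficient segregated holding")
        self.accounts[key] -= qty

ledger = SegregatedLedger()
ledger.deposit("CLIENT-A", "XYZ", 100)
ledger.deliver("CLIENT-A", "XYZ", 40, for_client="CLIENT-A")    # allowed
# ledger.deliver("CLIENT-A", "XYZ", 10, for_client="CLIENT-B")  # raises: commingling
```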

A good example of leaving design and development to the experts is what needs to be built into a system when requirements call for opening up access to multiple departments, permitting better coordination and collaboration, and supporting reporting and management information systems. Simply declaring that a new nostro database will be built is all well and good, unless it: (i) costs 10 times what is available on the market; (ii) does only 10 percent of what is available on the market; and (iii) takes three years to deliver, when an industry-standard system can be deployed in six months.

Any shareholder would question the value of doing this, and wonder whether their investment might be better spent elsewhere. The value-for-money argument throws into sharp relief any justification for developing in-house.
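Taken at face value, the arithmetic is stark. The short calculation below simply works through the illustrative figures above (the baseline licence cost is an assumed placeholder, not a real price): ten times the cost for a tenth of the functionality is a hundred-fold difference in cost per unit of delivered functionality, arriving 30 months later.

```python
# Crude build-versus-buy arithmetic using only the illustrative figures above.
# All numbers are hypothetical placeholders, not real project costs.
vendor_cost, vendor_months, vendor_coverage = 1_000_000, 6, 1.00  # assumed baseline
inhouse_cost = 10 * vendor_cost            # "costs 10 times what is available"
inhouse_coverage = 0.10 * vendor_coverage  # "only does 10 percent"
inhouse_months = 36                        # "takes three years to deliver"

# Cost per unit of delivered functionality, ignoring time-to-market risk.
vendor_ratio = vendor_cost / vendor_coverage
inhouse_ratio = inhouse_cost / inhouse_coverage
print(f"In-house: {inhouse_ratio / vendor_ratio:.0f}x the cost per unit of "
      f"functionality, delivered {inhouse_months - vendor_months} months later.")
# -> In-house: 100x the cost per unit of functionality, delivered 30 months later.
```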

And here’s the rub: grace periods are running out in the next two to three years. The ‘robust enough’ approach is about to get very expensive. When an FCA fine could have paid for a 1,000-year licence to software that could easily have solved these issues 10 years ago, senior executives need to recast their eyes over empire builders and in-house technologists, and rapidly acquaint themselves with what is available in the wider world.
In-house software systems are, by definition, ‘legacy’ from the outset. They lack a clear upgrade path, they suffer from infrequent release cycles, and the absolute cost of ongoing maintenance means they can rapidly fall into disrepair. The job of a chief information or technology officer should be to determine that ‘best-of-breed’ is the preferred approach, even if it means going external, and that in-house development really is the last resort.

Investment should be made in definition, deployment and integration, not design and development. The economics of in-house development rarely stack up and the opportunity cost of a slow, relatively expensive development project means that long-term returns remain a pipe-dream.

Irrespective of the cost of maintenance and upgrades, the compliance cost of keeping up with regulation often means starting projects anew, yet the inertia of in-house systems means that fresh change and adaptability are difficult to achieve. The very nature of internally developed systems means they are often standalone, further hindering ongoing development. The resource responsible for the original project has almost certainly left the institution, so continuity and persistence become additional problems.

Reasons given for the development of in-house software systems include operational efficiency, business growth, keeping up with regulatory initiatives and both system consolidation and cost reduction—all of which can be much more readily achieved by buying a purpose-built platform.

Indeed, it could be argued that in-house development projects have often actually increased operational risk, rather than reduced it. Given the obvious shortcomings of in-house projects, and given the time-to-market deadlines that a series of end-of-grace periods represent, it must make sense for banks to acknowledge that the functionality exists to address these issues properly. This functionality is constantly evolving. What they need to do is examine what they really want, where it sits and how best to use that functionality within their operations teams.
