12 October 2015

Philippe Chambadal
SmartStream

Small changes can have big consequences, and the only way to stay informed is through a single common model, says SmartStream’s Philippe Chambadal

Would you agree that new regulatory obligations are putting current data processes under strain, and that budget constraints are forcing firms to do more with less?

It’s a difficult and ongoing challenge, and it has been the same for financial firms for the last six or seven years. Various new regulations across the globe have put pressure on budgets all at the same time, even though business volumes have not yet recovered to pre-crisis levels.

The regulations are a must, but we believe that if every firm approaches them alone, building its own solutions, the industry will end up with duplicated services, when a mutualised or utility-based service could do the job once, for the whole industry. For data governance, I think utilities are the only way forward in terms of time-to-market, efficiency, and the ability to comply with regulatory requirements in a timely fashion.

Are financial institutions looking for new approaches to managing their data?

Absolutely. There is an enormous amount of fragmentation in data management. Within a single firm there may be one group that handles fixed income, another for corporate actions, one for ratings, and another for exchange-traded fund data or indices.

It can also be fragmented by business units and geographies. You can end up with dozens, or sometimes hundreds, of sub-systems, all trying to solve their own data management issues.

All the data needed to run clearing, settlement and other post-trade processes should be handled by a single mechanism, managing reference data, pricing, corporate actions, issuer data and everything else as one process.

There is a danger that firms will end up chasing data without understanding the workings of the processes, for example, the ripple effects of a corporate action, or a new issue that could affect all the sub-systems.

If you don’t manage data as a holistic set, you become exposed to applications using wrong or inconsistent data.

If you’re trading with a counterparty and that counterparty happens to be an issuer of debt, it might have a common stock, options trading off that stock, and debt instruments. That means three asset classes to take into account; plus, the stock might be part of an index, adding another data set on top of that. If there is a split in the common stock, all the other data types are going to be affected.

Here, it is important to have a single data model that captures the issuer, the counterparty details such as settlement instructions, the corporate actions of the issuer, and all the instruments it has issued.

This is the only way you can be confident of a net that captures everything. If it only captures 98 or 99 percent, there is still risk exposure.

The interdependency between all these different data types is causing a lot of problems. As long as they are managed separately, depending on asset class or where they are in the trade cycle, there are going to be inconsistencies.
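
To make the stock split example concrete, the sketch below shows what a single, cross-referenced model might look like, with issuer, instruments and a corporate action handled in one place so the split propagates to every dependent record at once. It is an illustration only, written in Python; the class and field names are hypothetical and do not represent SmartStream’s actual schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Instrument:
        isin: str
        asset_class: str                      # "equity", "option" or "debt"
        shares_outstanding: float = 0.0
        index_memberships: List[str] = field(default_factory=list)

    @dataclass
    class Issuer:
        lei: str                              # legal entity identifier
        name: str
        instruments: List[Instrument] = field(default_factory=list)

    def apply_stock_split(issuer: Issuer, ratio: float) -> None:
        # With a single model the split is applied exactly once, and every
        # dependent record (options, index constituents, debt cross-references)
        # can be re-derived from the same place rather than patched in each
        # sub-system separately.
        for inst in issuer.instruments:
            if inst.asset_class == "equity":
                inst.shares_outstanding *= ratio

    # A 2-for-1 split on the common stock touches every linked data set at once.
    acme = Issuer(
        lei="549300EXAMPLE0000001", name="Acme Corp",
        instruments=[
            Instrument("US0000000001", "equity", 1_000_000, ["EXAMPLE-500"]),
            Instrument("US0000000002", "debt"),
        ],
    )
    apply_stock_split(acme, 2.0)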

What does the Reference Data Utility mean for the industry?

We built the Reference Data Utility (RDU) five years ago with the goal of capturing all the external data a firm needs to run its back-office processes, including the cross-referencing, cleansing and enrichment needed to clear and settle trades.

The goal is to improve straight-through processing (STP) rates, as 35 to 40 percent of trade breaks happen because of data mismatches. We are the leading reconciliation solution vendor and have a precise idea of why trades break.

Our utility provides a high-quality, enriched data set that can eliminate these trade breaks proactively. The biggest impact is reducing the downstream ripple effects of bad data and the associated trade repair costs—that is where clients see the biggest return on their investments.

It costs between $50 and $1,500 to repair a broken trade, and, as with most business processes, the earlier you catch the problem, the cheaper it is to fix. We want to be pre-emptive, fixing the problem before it breaks the trade. We can see that a trade is about to fail because the reference data is mismatched, and the software will alert the utility, resolve the problem in real time, and complete the trade, avoiding the fail in the first place.
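
As a minimal sketch of that kind of pre-emptive check, assuming a simple field-by-field comparison of a trade against a cleansed golden copy (the field names and sample records below are hypothetical, not the utility’s actual interface):

    def find_mismatches(trade: dict, golden_copy: dict) -> dict:
        # Compare the fields that most often cause breaks against the
        # cleansed reference record and return whatever disagrees.
        fields = ("isin", "settlement_date", "place_of_settlement", "currency")
        return {f: (trade.get(f), golden_copy.get(f))
                for f in fields if trade.get(f) != golden_copy.get(f)}

    trade = {"isin": "US0000000001", "settlement_date": "2015-10-14",
             "place_of_settlement": "DTC", "currency": "USD"}
    golden = {"isin": "US0000000001", "settlement_date": "2015-10-14",
              "place_of_settlement": "Euroclear", "currency": "USD"}

    mismatches = find_mismatches(trade, golden)
    if mismatches:
        # In practice this would raise an alert for real-time resolution
        # before the trade is released to settlement.
        print("Pre-settlement reference data mismatch:", mismatches)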

We have a lot of data points from a large number of clients, and the efficiency gains have proven themselves many times. Typically, for a data management client, budgets are reduced by 30 to 40 percent in the first year, and in terms of avoiding trade breaks, we’re seeing cost reductions of up to 90 or 95 percent. It makes a very significant difference.

What kind of data management challenges will we see in the future?

Regulators are pushing for a better data governance framework. They want to make sure that whatever an employee of a financial firm does to a data set, that change is properly documented, controlled and auditable. For example, if a data point from a data vendor is modified, it might affect regulatory reporting. Regulators are looking for complete traceability on who changed the value, when, why and if there was supervisory control.

This is exactly what the utility is designed to do. It includes a complete audit function that shows clients, regulators and auditors exactly what has been done and when, and it keeps track of those changes forever.
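
To make the traceability requirement concrete, an audit entry would at a minimum record which value changed, who changed it, when, why, and who approved it. The record layout below is a hypothetical illustration of that idea, not the utility’s actual schema.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)            # immutable once written
    class AuditRecord:
        record_id: str                 # the data item that was changed
        field_name: str
        old_value: str
        new_value: str
        changed_by: str
        approved_by: str               # supervisory control
        reason: str
        changed_at: datetime

    entry = AuditRecord(
        record_id="US0000000001", field_name="coupon_rate",
        old_value="4.25", new_value="4.50",
        changed_by="analyst.jones", approved_by="supervisor.smith",
        reason="Vendor correction applied after issuer notice",
        changed_at=datetime.now(timezone.utc),
    )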

Large banks have hundreds of sub-systems and many segregated lines of business such as custody, prime brokerage, asset servicing, and more, so trying to create a framework like this in-house is an expensive, long-term project. If the reference data management processes are run as a utility outside the firm, the service can be delivered and live in a matter of weeks, yielding much greater return on investment.

On top of this, an in-house system is only ever as good as the systems of the counterparties it has to interact with. Mutualisation of these internal processes is the only way forward: one data model and one high-quality data set, used by all firms at the same time.
