By Steve Barnes, CTO
In this series of blog posts, we explore intelligent regulatory reporting solutions and how the next generation is delivering insights far beyond its original purpose.
This first feature in the series looks at the importance of data innovation, and the hidden value that can be realised from data assets.
Changes to regulatory reporting regimes have focused attention on the process by which data is gathered. Distilling fund data into a format that can be used for regulatory reporting is a challenge for managers, particularly those whose funds are administered by more than one fund administrator. However, once this data is gathered into a golden source, not only can it be deployed for global regulatory filings, it can also be extended, reused and redeployed to provide additional insights, far beyond the original intent.
The ultimate asset – a Golden Source of Data
Since fund data is stored in multiple systems and these source systems rely on multiple formats, it is little wonder that managers yearn for a single, consolidated view, otherwise known as a ‘golden source’. This desire has been amplified by managers’ increased reporting requirements.
Firms are recognising that they are becoming data-driven organisations. Data is being treated as an asset, and optimised like any other company asset. The concept of data as an asset is not only recognised by the firms themselves, it is also acknowledged by the regulatory authorities. In a recent keynote speech on this matter, Central Bank of Ireland Deputy Governor Ed Sibley highlighted that “firms that can harness data effectively can expect competitive advantages”. 1
However, attempts at harnessing data are often hampered by the restrictions and shortcomings of legacy source systems. Firms need to see beyond these constraints. Automated rules for data consolidation, enrichment and normalisation can transform the data into forms ready for regulatory reporting, and also into a format ready for further exploration and insights. These rules, often executed outside the source systems, allow for third-party data enrichment, let data quality issues finally be addressed and, importantly, provide a flexible translation for multiple downstream requirements.
Think of data transformation as a control mechanism to get your data into the required shape for multiple uses. A flexible rule system can easily accommodate this.
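To make the idea concrete, here is a minimal sketch of such a rule system in Python. All the field names, administrator labels and rules below are hypothetical illustrations of normalising two administrators' formats into one golden-source shape, not a real AQMetrics schema.

```python
# Hypothetical records from two fund administrators, each in its own format.
def normalise_admin_a(record):
    """Admin A reports NAV in whole currency units under 'nav'."""
    return {"fund_id": record["fund_code"],
            "nav": float(record["nav"]),
            "currency": record["ccy"].upper()}

def normalise_admin_b(record):
    """Admin B reports NAV in cents under 'net_asset_value_cents'."""
    return {"fund_id": record["id"],
            "nav": record["net_asset_value_cents"] / 100.0,
            "currency": record["currency"].upper()}

# The rule system: one normalisation rule per source, easily extended.
RULES = {"admin_a": normalise_admin_a, "admin_b": normalise_admin_b}

def to_golden_source(source, records):
    """Apply the source-specific rule to every record, yielding one format."""
    return [RULES[source](r) for r in records]
```

Because each rule lives outside the source systems, adding a new administrator, an enrichment step or a data quality check is a matter of registering another function, rather than changing any legacy system.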
Value in data insights beyond regulatory reporting
An additional benefit of the ‘golden source’ approach is that data will not only be optimised for regulatory reporting – it can also be fed into cluster computation engines, such as Apache Spark, to reveal previously hidden business intelligence insights.
However, having insights into your data is only part of the solution. Effective communication of those insights is what makes information actionable. It is important to consider the correct visualisations and level of configurability required by customers to ensure that insights are received clearly, at the right time, to impact decisions in the best way possible.
The insights and competitive advantage that firms can realise from their golden source of data are vast. Often firms will use regulatory reporting data to perform variance analysis against submitted regulatory reports across filing periods and jurisdictions. Increasingly we are encouraging our customers to use this data in configurable dashboards: to perform trend analysis against holdings and exposures across sectors, geographies and asset classes, and to calculate performance metrics against cumulative returns in order to track portfolio performance against appropriate benchmarks. Such measures include, to mention a few, Alpha, Beta, the Sharpe and Sortino ratios, and drawdown-related quantities such as maximum drawdown and recovery rates over given time periods.
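As an illustration, the Sharpe ratio, Sortino ratio and maximum drawdown mentioned above can be computed from a series of periodic returns. This is a minimal pure-Python sketch; annualisation and risk-free rate handling are simplified, and the figures used are illustrative only.

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of excess returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def sortino_ratio(returns, risk_free=0.0):
    """Like Sharpe, but penalises only downside deviation below the target."""
    excess = [r - risk_free for r in returns]
    downside = [min(e, 0.0) for e in excess]
    downside_dev = (sum(d * d for d in downside) / len(downside)) ** 0.5
    return statistics.mean(excess) / downside_dev

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative return series."""
    value, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        value *= 1.0 + r
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst
```

Fed from the golden source, measures like these can be recomputed per fund, per filing period, and compared directly against a benchmark series.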
Enhanced data-driven decision making with Machine Learning
The key point in Mr Sibley’s speech was that the use of artificial intelligence and machine learning can be very powerful. We ask the question: what is your data trying to tell you that you are not even looking for at present?
In the AQMetrics innovation lab, we are applying machine learning to drive new insights beyond the variance analysis that has become the de facto standard.
Using historical data analysis, machine learning algorithms can uncover insightful data items. These algorithms learn what is usual for a set of regulatory reporting data and highlight where something falls outside normal bounds. Both univariate and multivariate outliers can highlight data errors; importantly, in the multivariate case, clusters of outlying data can also be highlighted, which could indicate a more elusive market event, hence the need to contextualise the outlying data for individual firms. Analysis can point out large movements in a manager’s funds, or erroneous data in the reporting of those funds, and, most importantly, before the information is reported to the regulator.
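A minimal sketch of the univariate case: learn what is “usual” for a reporting field from historical filings, then flag values outside normal bounds before submission. The three-standard-deviation threshold below is an illustrative choice, not the AQMetrics algorithm, and the multivariate clustering described above would require a richer model.

```python
import statistics

def fit_bounds(history, k=3.0):
    """Learn normal bounds (mean ± k standard deviations) from past filings."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def flag_outliers(values, bounds):
    """Return the candidate filing values that fall outside the learned bounds."""
    lo, hi = bounds
    return [v for v in values if v < lo or v > hi]
```

Run against each field of a draft filing, a check like this surfaces large movements or data errors while there is still time to investigate, before the report reaches the regulator.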
Peer Group Analysis
Peer group analysis offers new possibilities with the increased use of multi-tenanted cloud platforms. Opt-in, anonymised analysis can compare an individual manager’s funds with other funds that share a similar investment strategy. Rolling up data into a global view across a diverse set of funds can provide exciting ways for managers to evaluate their own funds. Trends and movements across securities, returns, exposure, geographical locations and sectors can highlight efficiencies and areas for improvement in a fund’s own data and workflow processes. It is the increased pressure these processes place on a firm’s data that turns data into diamonds.
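At its simplest, anonymised peer comparison reduces to ranking one fund’s figure against those of its strategy peers. The sketch below is a hypothetical illustration of that idea; the returns and the percentile-rank measure are illustrative, not a description of the AQMetrics platform.

```python
def percentile_rank(own_value, peer_values):
    """Fraction of anonymised peer values that the fund's own value beats."""
    if not peer_values:
        return None  # no peers opted in for this strategy
    beaten = sum(1 for p in peer_values if own_value > p)
    return beaten / len(peer_values)
```

The same ranking can be applied per metric (returns, exposure, sector concentration) to show a manager where their funds sit within the peer group without revealing any individual peer’s identity.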
In the next blog we will look at data as both an asset and a liability. We will explore the processes, workflow and third-party controls that every firm needs to protect its data assets throughout the regulatory compliance journey.