Colm Kincaid, Director of Securities and Markets Supervision, Central Bank of Ireland
Since the 2008 global financial crisis, the regulation of financial services has grown in scale and complexity, with each major legislative initiative bringing with it significant reporting obligations. This has implications for the IT systems and data architecture we need to support this regulatory framework. It is an area where the interests of regulated firms and regulators are aligned – or should be.
Developing our tools for gathering and analysing information has become a core priority in the regulation of financial services, an industry which has traditionally prided itself on the quality of its information. This is no less so in the field of securities markets, where the scale of regulatory reporting and data complexity has grown exponentially. This growth continues to accelerate, with the recently introduced SFTR alone adding further millions of daily reports, each containing a significant volume of underlying data points.
This is where machine learning, machine-to-machine reporting and the innovations of data science come in, using processes and algorithms to scan millions of data points, categorising and clustering the data as necessary. By combining multiple variables to group actors, types of financial instruments and transactions, and comparing these groupings to historic patterns, we can use technology to identify anomalies or other triggers to guide our work. We can also verify such alerts in context and use the outcomes to train machine learning algorithms, improving the future accuracy of the alerts. Coupled with our human interactions with firms, this gives us a powerful blend of technological and human intelligence to deliver more targeted and assertive supervisory challenge.
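The grouping-and-comparison approach described above can be illustrated with a minimal sketch: transactions are grouped by actor and instrument, each group's historic pattern is summarised, and outliers are surfaced as alerts. The field names, the single "notional" variable and the z-score threshold are all illustrative assumptions for demonstration, not the Central Bank's actual methodology.

```python
# Illustrative sketch only: groups transaction reports and flags
# statistical outliers against each group's own history. Field names
# and the z-score rule are assumptions, not a regulatory algorithm.
from statistics import mean, stdev
from collections import defaultdict

def flag_anomalies(reports, z_threshold=3.0):
    """Group reports by (actor, instrument) and flag any report whose
    notional deviates from the group's historic pattern by more than
    z_threshold standard deviations."""
    groups = defaultdict(list)
    for r in reports:
        groups[(r["actor"], r["instrument"])].append(r)

    alerts = []
    for key, txns in groups.items():
        values = [t["notional"] for t in txns]
        if len(values) < 3:
            continue  # too little history to establish a pattern
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no variation, nothing to compare against
        for t in txns:
            z = (t["notional"] - mu) / sigma
            if abs(z) > z_threshold:
                alerts.append((key, t["notional"]))
    return alerts
```

In practice, a supervisor would review each alert in context, and that human verdict could feed back as a training label for a richer model, which is the feedback loop the paragraph describes.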
Of course, this increase in regulatory reporting reflects the massive quantities of data comprised in daily trading on the securities markets we supervise. Here we see not just an increase in the volume of trading activity but also an increase in the use of automated or ‘algorithmic’ trading strategies. It is no longer appropriate to describe these strategies as a novelty; rather, they have been the norm for some time now. Indeed, our own daily scrutiny of securities market activity would indicate that they are even more prevalent than the obvious data would suggest.
Put simply, there is no getting around the fact that, as the data at play in securities markets grows, so too does the information regulators need to ensure a properly and effectively regulated securities market. This means a securities market that provides a high level of protection for investors and market participants and is:
• transparent as to the features of products and their market price;
• well governed, and comprised of firms that are well governed;
• trusted, by both those using the market to raise funds and those seeking to invest; and
• resilient enough to continue to operate its core functions in stressed conditions and to innovate appropriately as markets evolve.
Of course, developments in data and technology have implications for how we supervise financial services that go beyond just the need to invest in enhancing our technological capabilities, or gathering in more reports. Technology is no longer just a backdrop to financial services, but a core feature which we have to risk assess and supervise in its own right. That is why many regulators, like ourselves, have established centres of expertise in the area of technology risk and continue to invest in our broader supervision of operational resilience. We are also working to bring our systems together so that they speak to each other properly and empower our supervisors to slice and dice the information in the manner that best suits their needs. This all reflects a drive towards supervision that is more data driven, so we can monitor and track all aspects of the market while allowing supervision teams to react in as near to real time as possible. It is also changing the skills we need to build as supervisors.
More recently of course, COVID-19 has required us to adjust to working with and managing these large volumes of data remotely. Business continuity and being on top of market information have been two key areas of scrutiny for us in our COVID-related engagements with firms. With further market volatility to be expected in the period to come, it is vital that firms take the necessary steps to enhance their preparedness for future market shocks. Our experience of recent market events has been that, while firms did well overall in the face of extraordinary challenges, these events have also shown a clear and present need for improvement.
So, as a CIO in a regulated financial service provider, what should you be focusing on in the midst of all this activity?
It is impossible (and imprudent) to seek to distil the multi-faceted nature of this challenge down to one topic. Nevertheless, if there is one topic that firms can certainly expect to see regulators place more focus on in the months and years to come, it is data quality. All too often, when we look beneath the data we find errors, basic oversights and poor data quality frameworks. This can be a feature of assigning tasks to colleagues who are too junior or not given enough time to complete the task to a high quality. Or, it can be a symptom of a wider lack of quality control or even a lack of respect for regulatory obligations. You can expect regulators to factor these basic errors into our overall assessment of the risk your firm poses to our objectives – and to deploy our resources accordingly.
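The "errors and basic oversights" described above are often catchable before a report ever leaves the firm. The sketch below shows the kind of simple field-level validation a data quality framework might start with; the field names and rules are illustrative assumptions for demonstration, not any regulator's reporting schema (LEIs are 20 alphanumeric characters ending in two digits, ISINs are 12 characters beginning with a two-letter country code).

```python
# Illustrative field-level data-quality checks of the kind that catch
# basic reporting errors. Field names and rules are assumptions for
# demonstration, not an actual regulatory schema.
import re

LEI_RE = re.compile(r"^[A-Z0-9]{18}[0-9]{2}$")      # 20-char LEI, two check digits
ISIN_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")  # 12-char ISIN, country prefix

def validate_report(report):
    """Return a list of (field, problem) tuples; an empty list means
    the report passed these basic checks."""
    problems = []
    if not LEI_RE.match(report.get("lei", "")):
        problems.append(("lei", "not a well-formed 20-character LEI"))
    if not ISIN_RE.match(report.get("isin", "")):
        problems.append(("isin", "not a well-formed 12-character ISIN"))
    if report.get("notional", -1) <= 0:
        problems.append(("notional", "must be a positive amount"))
    if not report.get("trade_date"):
        problems.append(("trade_date", "missing"))
    return problems
```

Checks like these are cheap to run on every outgoing report, and routing the failures to someone senior enough to act on them is precisely the quality-control framework the paragraph calls for.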
Securities markets regulators rely more and more on the accuracy of the data we receive from the firms and markets we regulate. We have done a lot in the legislative framework to provide for key information to be reported. Now we need to make sure we are using this information to greatest effect, leveraging technology to do so. We have the quantity of data we need to deliver on our respective responsibilities. But to do so effectively, we now need to see more quality in this quantity.