Data Quality, Data Warehouses and Solvency II Compliance

If all processes run on schedule, the insurance industry in the European Union will be facing new regulation effective in 2014. The new directive, Solvency II, is meant to plug gaps in its 1973 predecessor, Solvency I – chiefly by introducing better risk management and by correcting the first directive's failure to harmonize insurer supervision across member countries. The case for Solvency II was made all the more urgent by the events of 2008, when insurance companies were hit just as hard as banks. The American International Group (AIG), for instance, had to be rescued with a $185 billion US government bailout – the largest any US financial institution has received.

In a nutshell, the Solvency directives are to the insurance industry what the Basel accords are to the banking industry. Solvency II is built on three pillars, just as Basel II and III are. One major difference is that Solvency II specifically targets insurance companies operating within the European Union, while Basel has a wider reach. Still, given the considerable size and influence of the European insurance market on the global insurance industry, Solvency II is expected to eventually percolate in some form to other jurisdictions, especially if it leads to a more stable insurance sector in the European Union.

Understanding ‘Data’ in the context of Solvency II

The quality of data can never be separated from any form of financial services regulation or risk management, insurance or otherwise. Solvency II takes data quality requirements a notch higher to reflect how the financial services industry has changed over the past three decades. When referring to data, the Solvency II directive means the information (including assumptions) used in the statistical and actuarial analyses that determine technical provisions.

Ultimately, the quality of risk management and regulatory reporting data is judged against three criteria – accuracy, appropriateness and completeness. Most quality assessment systems evaluate these three criteria using four distinct types of data check – technical tests, general ledger tests, functional tests and business consistency tests – as sketched in the example below.
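
To make the four check types concrete, here is a minimal Python sketch of how such checks might be organized. The record structure, field names and thresholds are illustrative assumptions for this article, not terms prescribed by the directive.

```python
from dataclasses import dataclass

@dataclass
class PolicyRecord:
    # Field names are illustrative assumptions, not Solvency II terminology.
    policy_id: str
    premium: float
    currency: str
    claims_paid: float

def technical_test(rec: PolicyRecord) -> bool:
    """Technical test: identifiers are present and the currency code is recognized."""
    return bool(rec.policy_id) and rec.currency in {"EUR", "GBP", "USD"}

def general_ledger_test(records, ledger_total: float, tolerance: float = 0.01) -> bool:
    """General ledger test: total premiums reconcile with the ledger within a tolerance."""
    booked = sum(r.premium for r in records)
    return abs(booked - ledger_total) <= tolerance * ledger_total

def functional_test(rec: PolicyRecord) -> bool:
    """Functional test: values make business sense (no negative premiums or claims)."""
    return rec.premium >= 0 and rec.claims_paid >= 0

def business_consistency_test(rec: PolicyRecord) -> bool:
    """Business consistency test: related figures are plausible relative to each other."""
    return rec.claims_paid <= 100 * rec.premium  # crude plausibility bound

records = [PolicyRecord("P-001", 1200.0, "EUR", 350.0),
           PolicyRecord("P-002", 800.0, "EUR", 0.0)]
report = {
    "technical": all(technical_test(r) for r in records),
    "general_ledger": general_ledger_test(records, ledger_total=2000.0),
    "functional": all(functional_test(r) for r in records),
    "business_consistency": all(business_consistency_test(r) for r in records),
}
print(report)  # every check passes for this toy data set
```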

As you would expect, there is no single formula for implementing a data warehouse and risk management system that guarantees analysis, reporting and decision-making are based on data of the highest possible quality. It is, however, almost inevitable that assessing and managing data quality requires all relevant information to be held, at some point, in a single repository such as an enterprise-wide data warehouse.

Moving data from source systems to the data warehouse

It is highly unlikely that any insurance company of significant size today will have all of the raw data relevant to Solvency II in a single system. Solvency II, much like comparable risk management frameworks such as Basel III, requires that the data for risk analysis, management and reporting be drawn from numerous sources both within and outside the organization.

A data warehouse installed by an insurance company for risk management and Solvency II regulatory reporting will therefore likely contain data from several source systems. The data from such disparate systems is almost always in different formats, which means it must first be restructured into a standardized format before it is loaded into the data warehouse – all without compromising data integrity. Such conversion functionality may already be present in some data warehouse platforms; alternatively, the insurer's IT department will have to develop, or work with a third party to create, a separate software tool for converting the data.

In summary, data destined for the risk management and regulatory reporting data warehouse moves from the source systems to an intermediary ‘format converter’ before it is relayed to the data warehouse, along the lines of the sketch below.
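
As an illustration only, the following Python sketch shows two hypothetical source extracts being converted to one standard schema and loaded into a staging table. The source formats, field names and table layout are assumptions made for the example; a real implementation would be driven by the insurer's actual systems.

```python
import csv
import json
import sqlite3

# Hypothetical standard schema for the warehouse staging table.
STANDARD_FIELDS = ("policy_id", "premium", "currency")

def from_policy_admin_csv(path):
    """Convert a CSV export from a (hypothetical) policy administration system."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"policy_id": row["PolicyRef"],
                   "premium": float(row["GrossPremium"]),
                   "currency": row["Ccy"]}

def from_claims_json(path):
    """Convert a JSON dump from a (hypothetical) claims system."""
    with open(path) as f:
        for item in json.load(f):
            yield {"policy_id": item["policy"],
                   "premium": float(item["written_premium"]),
                   "currency": item["currency"]}

def load_to_warehouse(records, conn):
    """Insert standardized records into a staging table of the warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS staging_policies "
                 "(policy_id TEXT, premium REAL, currency TEXT)")
    conn.executemany("INSERT INTO staging_policies VALUES (?, ?, ?)",
                     [tuple(r[f] for f in STANDARD_FIELDS) for r in records])
    conn.commit()

# Usage (file names are placeholders for real source extracts):
# conn = sqlite3.connect("warehouse.db")
# load_to_warehouse(list(from_policy_admin_csv("policies.csv")), conn)
# load_to_warehouse(list(from_claims_json("claims.json")), conn)
```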

At what point should you check for data quality?

Having established the general path risk data follows from its source to the data warehouse, the next question is at what point the data should be checked for quality. The obvious answer would be to perform quality checks at every step, but this can be expensive to implement and leads to duplicated effort that ultimately strains precious network, server and human resources. Companies therefore have to determine which point in the data transfer chain is best suited to performing data quality checks.

One option is to perform the data quality checks at the source systems. In this case, responsibility lies with the respective line managers to ensure the data captured is accurate, relevant and complete. The drawbacks of such an approach include inconsistency and duplication of effort between departments. In addition, competing interests and ‘internal politics’ can compromise the quality of the data eventually uploaded to the data warehouse.

A second approach is to embed the data quality checks within the data format conversion tool, before the data is posted to the data warehouse. This is better than the previous option on many levels. Its main shortcoming, however, is that it may filter out data that seems insignificant to the quality of reporting on its own, yet becomes very significant when viewed alongside the results of other data checks. Filtering at this stage ‘robs’ the risk manager of information that, in combination with other checks, could be a major factor in managing risk and complying with Solvency II reporting, as the sketch below illustrates.
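
A minimal sketch, reusing the hypothetical check functions from the earlier example, of why filtering at the conversion stage discards information: records that fail any enabled check never reach the warehouse, so the risk manager cannot later re-examine them against other checks.

```python
def convert_and_filter(records, checks):
    """Apply quality checks during format conversion: records failing any
    check are dropped before loading, which is exactly the drawback above."""
    passed, rejected = [], []
    for rec in records:
        (passed if all(check(rec) for check in checks) else rejected).append(rec)
    return passed, rejected

# Only 'passed' is loaded to the warehouse; the rejected records are no
# longer available downstream for cross-checking against other tests.
# passed, rejected = convert_and_filter(records, [technical_test, functional_test])
```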

Quality checks within the data warehouse – the best solution?

The third route is to perform the quality checks from within the data warehouse itself. Of the three options, this is the best for several reasons. First, risk managers work with structured but relatively raw data whose content is virtually unchanged from the source systems, so they can rest assured that the information in their possession has not been filtered by someone else. Second, access to the complete data leaves more room for back-testing and scenario modelling: individual data checks can be enabled or disabled for what-if risk assessment computations, as in the sketch below.
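
To illustrate the toggling idea, here is a small sketch against the hypothetical staging table used above, in which each check is a query run inside the warehouse and a what-if scenario simply enables a different subset of checks. The check names and SQL predicates are assumptions for the example.

```python
import sqlite3

# Each check is a SQL predicate flagging rows that violate it (illustrative only).
CHECKS = {
    "non_negative_premium": "premium < 0",
    "known_currency": "currency NOT IN ('EUR', 'GBP', 'USD')",
}

def run_checks(conn, enabled):
    """Run only the enabled checks inside the warehouse and count violations."""
    results = {}
    for name in enabled:
        query = f"SELECT COUNT(*) FROM staging_policies WHERE {CHECKS[name]}"
        results[name] = conn.execute(query).fetchone()[0]
    return results

# Baseline run with every check, then a what-if run with one check disabled:
# conn = sqlite3.connect("warehouse.db")
# print(run_checks(conn, enabled=CHECKS.keys()))
# print(run_checks(conn, enabled=["non_negative_premium"]))
```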
