Wolters Kluwer: Taming data duplication to support business transformation

Will Newcomer

VP - Risk & Performance at Wolters Kluwer



15.03.2018 10:15 am

Thanks to CECL, granular data requirements are here to stay. But how can firms tame the new data requirements by adapting their IT and data infrastructure? Will Newcomer, Vice President of Product & Strategy for Wolters Kluwer’s Finance, Risk & Reporting Americas business, explores the options.

The regulatory requirements that emerged from the firestorm that triggered the Great Recession put banks under near-relentless pressure to generate and report increasingly detailed layers of data. The Financial Accounting Standards Board’s new Current Expected Credit Loss (CECL) standard, which extends required loss calculations over the lifetime of loans, and signals from Federal Reserve Chairman Jerome Powell that the stress testing and resolution planning components of Dodd-Frank will be maintained, show that granular data requirements are here to stay. While the current administration’s focus on deregulation might bring some relief, it is unlikely to turn the tide, let alone relieve the data burdens bankers already face.

The most challenging data management burden is rooted in duplication. The evolution of regulations has left banks with various bespoke databases across five core functions -- credit, treasury, profitability analytics, financial reporting and regulatory reporting -- with the same data inevitably stored and processed in multiple places.

This hodgepodge of bespoke marts leads simultaneously to the duplication of data and processes and to the risk of inconsistencies -- which tend to rear their heads at inopportune moments (e.g. when consistent data needs to be presented to regulators). For example, credit extracts core loan, customer and credit data; treasury pulls core cash flow data from all instruments; profitability departments pull the same instrument data as credit and treasury and add ledger information for allocations; financial reporting pulls ledgers and some subledgers for GAAP reporting; and regulatory reporting pulls the same data yet again to submit reports to regulators per prescribed templates.
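The overlap described above can be made concrete with a small sketch. The department names and field lists below are illustrative assumptions, not an actual bank schema; the point is simply that counting which fields each bespoke mart pulls exposes how much raw duplication accumulates:

```python
from collections import Counter

# Illustrative only: fields each bespoke mart extracts from core systems.
extracts = {
    "credit": {"loan_id", "customer_id", "balance", "rating"},
    "treasury": {"loan_id", "balance", "cash_flow_schedule"},
    "profitability": {"loan_id", "balance", "rating", "ledger_account"},
    "financial_reporting": {"ledger_account", "balance"},
    "regulatory_reporting": {"loan_id", "customer_id", "balance", "rating"},
}

# Count how many marts pull each field -- anything above 1 is duplicated
# storage and a potential source of inconsistency at reporting time.
pulls = Counter(field for fields in extracts.values() for field in fields)
duplicated = {field: n for field, n in pulls.items() if n > 1}
print(duplicated)
```

In this toy schema the core balance field is copied into all five marts, so a correction made in one extract but not the others produces exactly the kind of inconsistency regulators notice.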

Relearning the data alphabet

Data storage has evolved from simple flat files to databases to marts to warehouses, and now to ‘lakes’ housed in ‘big data’ environments, where all kinds of structured and unstructured data is tended by data scientists. Proprietary calculating and reporting solutions designed for different requirements complicate the data requirements picture even further.

Just as the complexity of housing data has evolved, so have the data management tools. Data management is typically thought of in three stages: Extract, Transform and Load (ETL). Considering the multiple levels of staging tables between data sources and storage areas, most real-world data management processes consist of much more than three sequential steps (e.g. ELETL or ELELTL) -- and this is just to get the detailed data. Additional steps -- Calculation (C), Aggregation (A) and Presentation (P) -- are needed throughout to meet today’s analytical and reporting requirements.
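The staged pattern above can be sketched in a few lines. This is a minimal illustration, not a real data platform: the record layout, the expected-loss rule and the report format are all assumptions made for the example, chosen only to show each of the E, L, T, C, A and P steps as a distinct stage:

```python
def extract(source):
    """E: read raw rows from a source system."""
    return list(source)

def load(rows, staging):
    """L: land rows in a staging area before transforming."""
    staging.extend(rows)
    return staging

def transform(rows):
    """T: normalize, e.g. coerce balances from strings to numbers."""
    return [{**r, "balance": float(r["balance"])} for r in rows]

def calculate(rows):
    """C: derive a value -- here a deliberately crude expected loss."""
    return [{**r, "expected_loss": r["balance"] * r["pd"]} for r in rows]

def aggregate(rows):
    """A: roll results up, here to a single portfolio total."""
    return sum(r["expected_loss"] for r in rows)

def present(total):
    """P: format the aggregate for a report template."""
    return f"Portfolio expected loss: {total:,.2f}"

source = [{"balance": "1000", "pd": 0.02}, {"balance": "5000", "pd": 0.01}]
staging = load(extract(source), [])                          # E -> L
report = present(aggregate(calculate(transform(staging))))   # T -> C -> A -> P
print(report)  # Portfolio expected loss: 70.00
```

Note that even this toy pipeline loads before it transforms (an EL-then-T ordering); in practice each of these stages may repeat across several staging layers, which is how patterns like ELETL arise.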

When considered end to end, multiple occurrences of E, T, L, C, A and P are embedded in today’s data management processes – many of which are manual or semi-manual and still performed by senior management.



Banks should seize the moment

Fortunately for forward-thinking banks, the window before CECL kicks in provides a golden opportunity to transform the disparate data marts and processes underpinning key departments into a more integrated, future-proof approach benefiting not only compliance but also profitability and competitiveness.

The fact that CECL will effectively force this integration makes it even more important to begin the transformation now. The question is, how?

Some advocate ELT as the solution, but this exacerbates duplication in rules and loses the value of clean, normalized, persistent data. Others advocate throwing everything into a new data lake and letting the data scientists fish out what’s needed. Others sadly continue to build or buy point systems that involve separate databases and management processes, adding another island to the others. But reality shows there is no ‘one size fits all’ approach, no single data ocean or program. Tactical solutions to problems that are both immediate and strategic are not transformational; neither are old processes under new names.

The path to effectively transforming data management is to combine tried and true processes and solutions with the selective deployment of new technologies that remove undesirable duplication of both rules and storage.

A map of such an approach appears below: ETL/ELT is applied to source data that is consolidated into staging tables or a data lake. Required and/or relevant data from the staging tables or lake is then transferred to a permanent, defined data mart where analysis, calculation and presentation can be conducted before the results are transmitted to various business functions and external recipients.

Diagram: A blueprint for the transformation of data management
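The blueprint can be sketched as follows. The names and fields are again illustrative assumptions; the essential idea is that data is ingested and normalized once into a single defined mart, and each business function takes a projection of that one copy instead of maintaining its own extract:

```python
staging = []    # landing zone for raw source data (staging tables or lake)
data_mart = {}  # permanent, defined mart keyed by instrument id

def ingest(source_rows):
    """ETL/ELT applied once: land raw data, then normalize into the mart."""
    staging.extend(source_rows)
    for row in source_rows:
        data_mart[row["loan_id"]] = {**row, "balance": float(row["balance"])}

def view(fields):
    """Each business function reads the fields it needs from the same mart."""
    return [{f: r[f] for f in fields} for r in data_mart.values()]

ingest([{"loan_id": "L1", "balance": "1000", "rating": "BB"}])

credit_view = view(["loan_id", "rating", "balance"])
treasury_view = view(["loan_id", "balance"])

# Both views are projections of the same record -- there is no second
# copy of the balance to drift out of sync before a regulatory filing.
assert credit_view[0]["balance"] == treasury_view[0]["balance"]
```

The design choice this illustrates is that consistency comes from a single normalization step feeding one mart, while flexibility comes from cheap, function-specific views over it.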

By reducing the need for departments to pursue these processes independently, a more unified data management structure is created, minimizing duplication and redundancy and improving efficiency.

The result is that banks have more time to focus on core business goals. Better-managed data helps banks manage risk better, and build a clearer picture of customer behaviors. Data management, risk and finance professionals need to transform or replace their bespoke tools and processes to support this transformation. The institutions that use this brief period of regulatory ‘downtime’ to improve outdated and inefficient processes will be those in the best position going forward.

Note: Will Newcomer has more than 35 years of experience in risk and finance with major and regional banks as well as leading technology firms, making him uniquely qualified to lead clients to the forefront of integrated finance, risk and compliance solutions. In addition, Newcomer uses extensive experience in enterprise-wide management information systems to help financial institutions in the areas of risk-adjusted performance management, budgeting and planning, asset and liability management, incentive compensation, financial reporting and stress testing. He will be speaking at next week’s CECL 2018 Congress in New York, organized by The Center for Financial Professionals and sponsored by Wolters Kluwer.
