Why Banks Need Powerful, Agile Data Preparation Solutions for Accurate and Timely Regulatory Reporting

  • Banking
  • 28.10.2021 09:00 am

Where there is data, there is regulation. Most financial institutions, including banks and insurance companies, deal with a complex web of external, local, and global regulations, all requiring them to submit regular reports about their business to comply with KYC, AML, CFT, and other mandates.

However, due to obsolete IT infrastructure, data silos, a lack of data taxonomies, and manual processes, meeting these regulations proves to be a far more challenging task – and financial institutions are not prepared.

In this detailed post, you’ll get an overview of how banks prepare data today and why that approach is no longer effective. Additionally, you’ll get to know:

  • Expectations in regulatory reporting
  • Data quality challenges that threaten compliance
  • The cost of failing to meet regulatory compliance standards
  • Why Excel is no longer the right tool for compliance data preparation
  • Approaches to remediation – self-service data preparation tools

Let’s get started.  

Expectations in Regulatory Reporting 

The increasing complexity of financial crimes, coupled with multiple financial crises, has led to stricter guidelines for institutions. The banking world has experienced tougher regulations that demand comprehensive capital analysis reviews, liquidity reviews, and supervisory review and evaluation processes, among others. All these regulations, whether BCBS 239, CCAR, Basel III, or MiFID II, are inherently data-centric.

These regulations were introduced for multiple purposes – for instance, Dodd-Frank is meant to ensure transparency in record keeping, while CCAR and BCBS 239 focus on data quality, data lineage, and overall data management, with a special emphasis on proving and improving data governance.

These regulations have imposed a new mode of operation, adding new complexities to regulatory compliance. And there is more to come: as digital transactions and online commerce take precedence over traditional banking, compliance requirements are expected to multiply to counter money laundering and financial crimes carried out over the internet.

Put simply, banks are required to keep up with technology and the evasive world of digital finance. The problem? Financial institutions are not prepared – not for cultural transformation, not for technology transformation, not even for data transformation. The pace is slow and resources are limited, but the pressure to comply is mounting. Failure to demonstrate a compliant solution can result in massive regulatory and reputational risks, including hefty fines, imprisonment of executives, and loss of reputation.

Cost of Failing to Meet Regulatory Compliance Standards  

Compliance with regulatory standards consumes approximately 20% of a financial services provider’s “run-the-bank” cost base and about 40% of “change-the-bank” costs for initiatives presently in progress. Dodd-Frank and BCBS 239 support consumer protection; however, they call for investment to manage at scale, placing a burden on the profits of the financial sector. Failing to comply has resulted in more than 200 billion dollars in fines during the past five years and has heightened concerns about personal accountability for banking executives.

 


Local banks are not exempt either. The American Bankers Association recently surveyed small American banks and found that approximately 50 percent of them have dropped product offerings or reduced staff because of the stress of regulatory compliance, leaving customer support below the level that compliance demands.

Regulatory reporting demands a swift merger of varied data from across the financial organization – an extremely costly and resource-intensive task. Even though financial organizations spend approximately 4.5 billion dollars annually on compliance alone, they are still paying billions of dollars in fines. Managing diverse data for regulatory reporting is a multi-tiered challenge for the modern financial sector.

Key Data Preparation Challenges in Regulatory Compliance  

Experts agree that resources in financial institutions should spend 80% of their time on analytics review (including reviewing data for sanctions compliance) and 20% on data preparation (which encapsulates operations such as data cleansing, data standardization, data profiling, and data matching). Instead, the sheer volume and complexity of data, coupled with limited human and technological resources, have teams spending more time on data treatment and preparation than on analytics. This is mainly because regulatory reporting is still treated as a subsidiary, back-office function handled manually by IT resources working in silos. But there are other hurdles too – hurdles that prevent organizations from establishing a foolproof regulatory system.

Over the years, we’ve worked with several of the largest banks and financial institutions in the US and across the globe to help them with data quality challenges. Almost every client we’ve worked with cited one or all of the following hurdles:

 

Disparate Data Sources: A wide network of vendors and partners and a spread of multiple branches mean banks are dealing with disparate data sources. They struggle to consolidate data from multiple sources, including Excel files, relational databases, and cloud applications. For every report or analytics review, banks need to collect data from these sources, which can take months due to data integration, conversion, and transformation challenges.
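As a rough illustration of what that consolidation involves, the Python sketch below pulls a branch-level Excel extract and a core-banking table into one working set with pandas. The file name, database, table, and columns are hypothetical stand-ins for real sources.

```python
# A minimal sketch of consolidating disparate sources with pandas.
# "branch_customers.xlsx", "core_banking.db", and the column names
# are hypothetical.
import sqlite3

import pandas as pd

# Source 1: an Excel workbook maintained by a branch office
excel_df = pd.read_excel("branch_customers.xlsx")

# Source 2: a table in an on-premise relational database
conn = sqlite3.connect("core_banking.db")
db_df = pd.read_sql("SELECT customer_id, name, dob FROM customers", conn)

# Tag each record with its origin so provenance survives the merge
excel_df["source"] = "branch_excel"
db_df["source"] = "core_db"

# Align on shared columns and stack into one working set
combined = pd.concat([excel_df, db_df], ignore_index=True, join="outer")
```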

Reliance on Outdated Systems: Too often, traditional financial institutions (FIs) are still getting by with the same systems they’ve had in place for the past 20 or 30 years, so it’s hardly surprising that these solutions aren’t well-equipped for today’s digitally focused, omni-channel environment.

Financial data is often stored in obsolete mainframe systems – in fact, an overwhelming majority of the top 100 banks in the world still depend on them. The problems are exacerbated further when data is moved between, or stored across, on-premise relational databases and cloud web applications, as this presents greater data conversion challenges, requiring more person-hours and cost.

Data Exists in Silos: As banks rely on legacy systems, their business data structures are often fragmented, resulting in data silos.

For instance, some banks still don’t have a centralized data management system offering a consolidated view of each stakeholder’s information – customers, vendors, partners, and accounts. This means that at the time of analytics reviews, the organization has a difficult time pulling and consolidating this data from multiple systems. Working with siloed data is one of the most time-consuming activities, as firms struggle to extract it from a host of apps, platforms, and systems.

More importantly, data silos exist independently, each with its own unique identifiers – customer codes, social security numbers, or other proprietary data – that act as a major barrier to accurate record linkage. As a result, end users face the tedious task of reconciling conflicting customer or account information, increasing the risk of poor customer tracking, fraud and identity theft, and regulatory non-compliance.
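To make the linkage problem concrete, here is a minimal sketch of one common workaround: deriving a shared match key from normalized name and date-of-birth fields when no common identifier exists. The silo layouts and values are invented for illustration; real record linkage is considerably more involved.

```python
import pandas as pd

# Two silos describing the same customer under different identifiers;
# all names and values are made up.
silo_a = pd.DataFrame({"cust_code": ["C-17"], "name": ["Jane  Doe"], "dob": ["1980-02-01"]})
silo_b = pd.DataFrame({"ssn": ["123-45-6789"], "name": ["jane doe"], "dob": ["02/01/1980"]})

def match_key(df: pd.DataFrame) -> pd.Series:
    """Derive a crude linkage key from normalized name plus date of birth."""
    name = (df["name"].str.lower()
                      .str.replace(r"[^a-z ]", "", regex=True)
                      .str.replace(r"\s+", " ", regex=True)
                      .str.strip())
    dob = pd.to_datetime(df["dob"], errors="coerce").dt.strftime("%Y%m%d")
    return name + "|" + dob

silo_a["key"] = match_key(silo_a)
silo_b["key"] = match_key(silo_b)

# Join on the derived key to reconcile both views of each customer
linked = silo_a.merge(silo_b, on="key", how="outer", suffixes=("_a", "_b"))
print(linked[["cust_code", "ssn", "key"]])
```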

Poor Data Quality: For most financial institutions, data quality remains an ongoing challenge, with its integrity degraded by inconsistent taxonomies, inaccuracy, incompleteness, and duplication. According to a study conducted by Oracle Financial Services and the Center for Financial Professionals, inconsistent data and poor data quality resulting from siloed systems are two of the barriers to achieving BCBS 239 compliance.

Siloed systems thus bring their own problems – inconsistent file naming conventions, disparate formats, and redundant data – that undermine the reliability of existing data as it is updated with new, incoming information.
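A quick profiling pass is usually the first step in surfacing these inconsistencies. The sketch below computes per-column completeness and uniqueness with pandas; the toy “accounts” extract and its columns are hypothetical.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize completeness, uniqueness, and sample values per column."""
    return pd.DataFrame({
        "nulls": df.isna().sum(),
        "null_pct": (df.isna().mean() * 100).round(1),
        "distinct": df.nunique(),
        "sample_values": df.apply(lambda c: list(c.dropna().unique()[:3])),
    })

# A toy extract showing the format drift described above
accounts = pd.DataFrame({
    "account_id": ["A1", "A2", "A2", None],
    "opened": ["2021-01-05", "05/01/2021", "05/01/2021", "Jan 5 2021"],
})
print(profile(accounts))
print("exact duplicate rows:", accounts.duplicated().sum())
```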

Identifying duplicates becomes an almost impossible task. End users must spend considerable time determining whether two or more records refer to the same entity, as the same values may appear with numbers and letters in a different order or with other variations. Conversely, there can be seemingly similar duplicate records, differing only slightly, that are in fact different entities altogether.
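One simple way to flag such near-duplicates is fuzzy string comparison. The sketch below uses only the standard library’s difflib; the records and the 0.85 threshold are illustrative, and production-grade matching engines use far more sophisticated scoring.

```python
# Flagging likely duplicates despite spelling variations; the records
# and threshold are made up for illustration.
from difflib import SequenceMatcher

records = [
    ("C-101", "Jonathan Smythe, 45 Oak St"),
    ("C-202", "Jonathon Smythe, 45 Oak Street"),  # same entity, varied spelling
    ("C-303", "Jonathan Smith, 12 Elm Rd"),       # similar text, different entity
]

for i in range(len(records)):
    for j in range(i + 1, len(records)):
        score = SequenceMatcher(
            None, records[i][1].lower(), records[j][1].lower()
        ).ratio()
        if score > 0.85:
            print(f"possible duplicate: {records[i][0]} ~ {records[j][0]} ({score:.2f})")
```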

Data Preparation Is Still a Manual Process: There is still a heavy dependence on manual methods to prepare data. Excel sheets and SQL programming are still being employed to aggregate complex data – approaches that break apart beyond a few thousand records and become unstable, or that require significant script refinement to work smoothly across complex datasets. This manual approach prevents financial institutions from keeping up with new demands – both in terms of customer and regulatory expectations.

Regulatory reporting demands data that is clean, accurate, complete, and consistent. But one of the biggest roadblocks to meeting these demands is impaired technology, coupled with a stubborn insistence on sticking to outdated data preparation methods that worked well in the past but are no longer adequate for current data needs.

Either the technology in use doesn’t have the breadth of data quality capabilities – ease of exporting data, data standardization, and data deduplication through deduping software – or it isn’t robust enough to run real-time API workflows that automate data quality tasks with minimal false positives.
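By way of illustration, the automated quality checks such workflows run can be as simple as a table of field-level rules applied to every incoming record. Everything here is a simplified sketch: the field names are hypothetical, and a real IBAN check would also validate the checksum.

```python
import re

# Simplified, illustrative field-level quality rules
RULES = {
    "iban": lambda v: bool(re.fullmatch(r"[A-Z]{2}\d{2}[A-Z0-9]{11,30}", v or "")),
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
}

def validate(record: dict) -> list[str]:
    """Return the fields in a record that fail their quality rule."""
    return [f for f, ok in RULES.items() if f in record and not ok(record[f])]

print(validate({"iban": "GB82WEST12345698765432", "email": "not-an-email"}))
# -> ['email']
```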

Why Excel and SQL Programming are No Longer Effective Tools for Data Preparation  

The Federal Reserve and regulators are now less tolerant of manual solutions and workarounds that are no longer a match for the scope, volume, and granularity of data that must be submitted to regulatory authorities.

Adding fuel to the fire is the counter-intuitive reporting architecture of many firms, which still delivers individual reports by business area, preventing the accurate calculation and reporting of risks across entities or by product mix. Plagued by disparate systems, inconsistent data sets, manual data entry errors, and mounting compliance pressures, professionals spend a significant amount of time and effort on data aggregation and reconciliation via Excel or SQL code.

Common technologies like Excel, first introduced roughly 40 years ago, are highly limited in the face of today’s vast volumes and varieties of data and can no longer meet the speed and demands of regulatory reporting. Some of the main challenges of using these technologies are:

Limited Data Preparation Features: Excel is not intuitive and requires the user to create formulas and rules for every transformation. For instance, it takes multiple formulas and repetitive actions to remove white spaces or accidental punctuation marks from text fields. Moreover, unlike ML-based solutions that evolve over time to address new problems, Excel is still more or less the same as it was 40 years ago. It lacks data preparation features such as integration with other data sources, profiling, and click-based data cleansing. Lastly, it cannot be used to dedupe data – one of the leading challenges professionals face when consolidating data from multiple sources.
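For contrast, the same whitespace-and-punctuation cleanup that takes several Excel formulas collapses into one chained expression in a scripted environment. A minimal pandas sketch, with made-up values:

```python
import pandas as pd

names = pd.Series(["  Acme  Corp.", "acme corp,,", "ACME CORP "])

cleaned = (names.str.strip()                              # trim leading/trailing spaces
                .str.replace(r"\s+", " ", regex=True)     # collapse inner whitespace
                .str.replace(r"[.,;]+$", "", regex=True)  # drop stray trailing punctuation
                .str.title())                             # standardize casing
print(cleaned.tolist())  # ['Acme Corp', 'Acme Corp', 'Acme Corp']
```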

Data Lineage Limitations: One of the key requirements of regulatory reporting is visibility. Stakeholders want to know exactly how data has been transformed before being submitted to a regulator. Excel does not automatically keep records of transformations. Users often have to go back through their work and manually demonstrate the steps they took to reach the desired level of accuracy. Financial institutions must use data preparation software that automatically records all transformations and preserves the structure of this data.  
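As an illustration of what “automatically recording transformations” can mean, the sketch below wraps each transformation so the step name, its parameters, and row counts land in a lineage log. This is a toy mechanism, not any particular product’s implementation.

```python
# A toy lineage log: every wrapped transformation records its name,
# parameters, and before/after row counts.
import pandas as pd

LINEAGE: list[dict] = []

def traced(step):
    """Wrap a DataFrame transformation and log what it did."""
    def wrapper(df: pd.DataFrame, **kwargs) -> pd.DataFrame:
        out = step(df, **kwargs)
        LINEAGE.append({"step": step.__name__, "params": kwargs,
                        "rows_in": len(df), "rows_out": len(out)})
        return out
    return wrapper

@traced
def drop_incomplete(df, required=()):
    return df.dropna(subset=list(required))

df = pd.DataFrame({"id": [1, 2, None], "amount": [10, 20, 30]})
df = drop_incomplete(df, required=["id"])
print(LINEAGE)
# [{'step': 'drop_incomplete', 'params': {'required': ['id']}, 'rows_in': 3, 'rows_out': 2}]
```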

Requiring Expert Users: Both SQL and Excel require expert or advanced users, making regulatory reporting an IT task instead of a business task. Not every financial or regulatory compliance analyst is technically sound in SQL, or in data management for that matter. Data analysts and programmers, on the other hand, are not the owners of compliance data and therefore do not understand its nature as thoroughly as the people who own it. This crisscross between IT and financial analysts is one of the leading causes of siloed data preparation, hampering a progressive approach to data management.

Organizations must acknowledge that common technologies like spreadsheets and SQL are only effective for preparing data at a small scale – for regulatory reporting, which demands accuracy, they are hardly ideal. Today, financial institutions need automated, ML-based solutions that are powerful enough to allow for agile data preparation while letting departments easily consolidate, merge, dedupe, and clean data for regulatory compliance. The platform must be intuitive, allow for easy integration, and offer an easy-to-use interface that does not depend on the expertise and availability of programmers or IT experts.

Approaches to Remediation – ML-Based Self-Service Data Preparation Tools  

While most experts talk about culture change, data transformation journeys, and a complete overhaul of infrastructure, we believe the right approach to remediation lies in first acknowledging core problems with data quality and understanding challenges with regards to processes.  

For instance, firms can start by improving the quality of their data before moving on to bigger transformation initiatives like migrations or new infrastructure implementations. As the saying goes, the devil is in the details – and in this case, it’s not the infrastructure or technology that’s hampering progress, it’s quite literally the details in a bank’s data sources.

The first step to remediation, therefore, is preparing data for compliance. And this can be done by using a top-of-the-line self-service data preparation tool that allows for the following (a minimal sketch of how these steps chain together appears after the list):

  • Integration of data sources into a single, one-stop platform
  • In-depth profiling of data to discover errors and anomalies
  • Cleansing of data according to pre-defined or customized rules, patterns, or logic
  • Merging and deduplicating data with 100% match accuracy
  • Consolidation of data into one single source of truth that can be presented for regulatory reporting
  • Data lineage – showing every transformation as it happens
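Pulling those steps together, a minimal end-to-end sketch might look like the following. The inputs are in-memory stand-ins; a real self-service tool would perform these steps through a visual interface and record lineage automatically.

```python
# End-to-end sketch: integrate, profile, cleanse, dedupe, consolidate.
# The two frames stand in for real connectors to Excel files,
# databases, or cloud applications.
import pandas as pd

branch = pd.DataFrame({"name": ["  jane doe "], "dob": ["1980-02-01"]})
core = pd.DataFrame({"name": ["Jane Doe"], "dob": ["1980-02-01"]})

combined = pd.concat([branch, core], ignore_index=True)      # integrate
print(combined.isna().mean())                                # profile: share missing per column
combined["name"] = combined["name"].str.strip().str.title()  # cleanse
combined = combined.drop_duplicates(subset=["name", "dob"])  # dedupe
combined.to_csv("single_source_of_truth.csv", index=False)   # consolidate
```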

The Bottom Line

Regulatory reporting demands data accuracy and integrity, both of which cannot be achieved via manual processing of data. Financial firms need data preparation tools that can evolve with time and allow them the flexibility of preparing massive volumes and multiple varieties of data as effortlessly as possible. The goal is to minimize repetitive tasks to make time for the core business.  

 
