Are Financial Services Companies Getting Data Management Right?

  • Ezat Dayeh, SE Manager at Cohesity

  • 10.03.2021 01:00 pm

Financial institutions hold large volumes of sensitive information about their customers, yet protecting that data in line with regulations is currently a big challenge for banks and other financial services organisations.

For these organisations, data backups and the ability to recover from them aren’t just about getting the business up and running after a hardware failure, as important as that is. They are also about compliance, security, privacy, innovation and more. Financial institutions are, quite rightly, subject to a huge array of regulations, from those of a general nature such as the General Data Protection Regulation (GDPR) to a myriad of others specific to particular aspects and services, such as MiFID II.

By their very nature, financial services companies need to be up and running continuously. Any unplanned breaks in service, for whatever reason, from a ransomware attack to an accident, a systems failure, or even, potentially, state-sponsored attacks, simply must be avoided. Financial services companies just can’t afford the monetary losses or the reputational damage that would result from downtime of services that allow customers to access their money.

Compliance matters for backups too

When it comes to compliance, there are requirements for backups as well as for live production systems. Consider GDPR: organisations must not keep personal data for longer than it is needed, and data must be regularly reviewed to confirm it is still needed. Individuals also have the right to ask for their personal data to be removed. How this is done varies from application to application, but it is essential to ensure that restoring from a backup does not re-populate an application with data that should no longer be held.
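
One practical pattern is to record fulfilled erasure requests and filter restored data against that record before it is written back into the application. The sketch below assumes a simple ledger of erased data-subject IDs; the file name, record layout and `filter_restore` helper are all hypothetical, standing in for whatever the backup and application tooling actually exposes.

```python
import json

def load_erasure_ledger(path="erasure_ledger.json"):
    """Load the IDs of data subjects whose erasure requests have been fulfilled.

    The ledger file and its format are illustrative; a real deployment would read
    from whatever system of record tracks right-to-erasure requests.
    """
    with open(path) as f:
        return set(json.load(f)["erased_subject_ids"])

def filter_restore(records, erased_ids):
    """Drop records for erased data subjects before re-populating the application."""
    kept = [r for r in records if r["subject_id"] not in erased_ids]
    dropped = len(records) - len(kept)
    print(f"Restore filtered: {len(kept)} records kept, {dropped} suppressed (already erased).")
    return kept

if __name__ == "__main__":
    erased = {"cust-002"}   # in practice: load_erasure_ledger()
    restore_records = [     # stands in for whatever the backup tool hands back
        {"subject_id": "cust-001", "name": "A. Example"},
        {"subject_id": "cust-002", "name": "B. Example"},
    ]
    clean = filter_restore(restore_records, erased)
```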

There is also a requirement under GDPR to respond to individuals’ requests within a month of them being made. That is a fair period of time, but issues such as ransomware attacks can leave an organisation without access to its complete data for considerable periods. And as we have seen recently, backups are not immune from attack; in fact they are now a focus for certain attack types, especially backups stored on network attached storage devices.

The basics of backup and restore

In this context, the National Cyber Security Centre advises organisations to maintain recent offline backups of all their most important files and data. Still, the evidence suggests that not all organisations have the kind of backup systems in place that will allow data recovery. Sophos surveyed 5,000 IT managers in 26 countries for its State of Ransomware 2020 report. It found that just 56 percent of organisations that suffered a ransomware attack got their data back via backups (26 percent paid the ransom, 12 percent used ‘other means’, and 6 percent didn’t get their data back at all).

The implication in all of this is that the backup is the tool of last resort. But even in that role, it isn’t necessarily fulfilling its purpose. You could infer from this research that most enterprise backups are only able to do the job just over half of the time. It doesn’t have to be like this, and for financial services companies that really can’t afford downtime, whatever its cause, there is a strong argument that backups need to assume a much wider role.

Reimagining backups

It is perfectly possible for a backup system to compare the production environment with the data it already holds in order to detect major changes that could signify an attack in progress. A modern system can also scan virtual machines (VMs) for known vulnerabilities even when no attack is under way, so that threats can be prevented rather than merely detected.
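
A minimal sketch of the first idea follows, assuming the backup platform can report how much data changed in each recent run (the change-rate figures and threshold here are purely illustrative). A sudden spike in the change rate relative to the recent baseline is a useful trigger for an alert, since mass encryption or deletion in production shows up as an abnormally large delta between backups.

```python
from statistics import mean, stdev

def is_anomalous(change_rates, latest, threshold=3.0):
    """Flag a backup run whose change rate is far above the recent baseline.

    change_rates: fraction of data changed in each recent backup run (0.02 = 2%).
    latest: change rate of the newest run.
    """
    if len(change_rates) < 5:
        return False  # not enough history to judge
    baseline, spread = mean(change_rates), stdev(change_rates)
    return latest > baseline + threshold * max(spread, 0.01)

# Example: steady ~2% daily change, then a sudden 45% change in the latest run.
history = [0.020, 0.018, 0.025, 0.021, 0.019, 0.022]
print(is_anomalous(history, 0.45))  # True -> alert, pause replication, investigate
```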

As mentioned, to ensure a payout, cyber criminals are no longer just attacking the production environment; they are increasingly targeting backup data and infrastructure as well. This effectively hobbles the “insurance policy” organisations depend upon when disaster strikes. The attackers often exploit weaknesses in legacy backup solutions architected before the advent of the ransomware industry. Before encrypting the production environment, sophisticated malware is known to destroy shadow copies and restore-point data, and because of its underlying architecture, legacy backup infrastructure becomes easy prey for this malware rather than a solid defence against ransomware attacks.

It might seem a little strange to suggest that financial services companies reinvent their approach to data management by paying closer attention to their backups. But it is time to realise that data backups are much more than the ‘necessary evil’ you create as an insurance policy and file away, never to revisit. That is especially true if those backups sit on legacy infrastructure architected many years ago.

Since the financial crisis, there has been a wave of regulation, a significant part of it aimed at ensuring banks have sufficient capital and liquidity.

Today, backups are both a living insurance policy against the times when the worst happens (and in some shape or form it inevitably will), and a part of your data management system that is as relevant to regulatory compliance requirements as your live systems are.

For the financial services industry, and other sectors, hybrid cloud's agility and scale can help strengthen data security and governance initiatives. A hybrid cloud strategy has long promised, and delivered, reduced costs, improved operational efficiencies and a greater ability to accommodate new technological advancements. Yet many organisations won’t fully achieve the agility and cost benefits of cloud, because the move to hybrid cloud can compound the problem of mass data fragmentation.

Legacy products not born in the cloud era are ill-equipped to support modern data management objectives, particularly hybrid cloud backup and disaster recovery. Teams using such solutions, as well as teams trying to force-fit existing point products into doing more than they were designed for, commonly make five big mistakes when it comes to cloud, all of which a modern data management solution will help you to avoid.

1. Assuming your cloud data is backed up

Although adopting a fully hosted and managed cloud service relieves you of many IT management tasks, backup isn’t one of them. Assuming backup will take place by default puts your business at risk, because most cloud providers operate a shared responsibility model: they focus on the high availability of the physical infrastructure (data centres, servers, networking and so on), while your business remains responsible for the data, including access, governance, backup and recovery. Look for a web-scale data management solution that consolidates and protects all your data in the cloud and on-premises, without requiring bolt-on cloud gateways.
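
As a concrete illustration of that shared responsibility, the sketch below copies objects from a production object-storage bucket into a separate backup bucket using boto3. The bucket names are placeholders, and in practice the backup copy would live in a different account or region with its own access controls, versioning and retention settings.

```python
import boto3

def backup_bucket(source_bucket, target_bucket, prefix=""):
    """Copy objects from a production bucket into a separate backup bucket.

    Bucket names are illustrative; the provider keeps the infrastructure running,
    but making and keeping this copy is the customer's responsibility.
    """
    s3 = boto3.resource("s3")
    for obj in s3.Bucket(source_bucket).objects.filter(Prefix=prefix):
        s3.Bucket(target_bucket).copy({"Bucket": source_bucket, "Key": obj.key}, obj.key)
        print(f"Copied {obj.key}")

if __name__ == "__main__":
    backup_bucket("prod-customer-data", "prod-customer-data-backup")
```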

2. Believing you can retrofit point solutions for cloud

Complexity doesn’t have to be a by-product of introducing cloud into your IT vision. Yet that’s exactly what happens when you rely on legacy or disconnected point products for your on-premises and cloud backup. Why? Because each additional tool adds complexity. A bolt-on cloud gateway alone adds a new hardware expense and a management headache when protecting data in virtual machines (VMs), databases, containers, physical servers and the cloud. A modern data management solution works seamlessly across on-premises and cloud environments. It consolidates management of all your data and apps in one place, with global deduplication and Google-like search to make things easy to find and restore.
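
Global deduplication itself is straightforward to picture. The sketch below is a deliberately simplified illustration rather than how any particular product works: it stores each unique chunk of a backup stream exactly once, keyed by its SHA-256 digest, so two backups of largely identical VMs share most of their chunks.

```python
import hashlib

def dedupe_chunks(stream, chunk_size=4 * 1024 * 1024, store=None):
    """Store each unique chunk once, keyed by its SHA-256 digest.

    'store' is a plain dict standing in for a real chunk store; production systems
    typically use variable-length chunking and a distributed index.
    """
    store = {} if store is None else store
    recipe = []  # ordered list of digests that reconstructs this stream
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks add no new capacity
        recipe.append(digest)
    return recipe, store

# Usage: backing up two similar disk images into the same store shares their chunks.
# with open("vm_disk_1.img", "rb") as f:
#     recipe1, store = dedupe_chunks(f)
```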

3. Thinking snapshots are free

In an on-premises environment, snapshots (a quick copy of data) are commonly stored on the same hardware device or storage array. In cloud deployments, snapshots may sit in different tiers, but typically in the same region or data centre. As your organisation moves workloads to the cloud, managing snapshots across multiple cloud users, accounts and providers becomes hugely challenging, as does protecting hundreds or thousands of VMs or storage volumes spread across growing silos. Your organisation needs a unified, policy-based data management solution with global search that lets you easily manage snapshots across accounts and cloud providers, and one that supports unlimited snapshots with best-in-class storage efficiency and cost.
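
For a sense of what policy-based snapshot management involves even within a single account and region, the sketch below lists a provider's block-storage snapshots with boto3 and flags any older than a retention period. The region, retention window and dry-run default are illustrative; a real policy engine would span every account, region and provider in use.

```python
import boto3
from datetime import datetime, timedelta, timezone

def expire_old_snapshots(region="eu-west-2", retention_days=35, dry_run=True):
    """Flag (and optionally delete) self-owned EBS snapshots past their retention."""
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                print(f"Expired: {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
                if not dry_run:
                    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

if __name__ == "__main__":
    expire_old_snapshots()
```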

4. Focusing more on backup than recovery

Backup software is often considered an insurance policy: everything is fine, until it isn’t. And that’s when recovery speed matters just as much as your backup solution’s features. Too many organisations focus on backup windows and dedupe ratios while neglecting the key recovery capabilities: speed, quality, granularity and cloud readiness. Look for a modern data management solution that protects backup data with erasure coding across multi-node clusters so it can tolerate failures. Select one that can recover data and fail over to the cloud of your choice. Make sure it can restore hundreds or thousands of VMs at once, something that is only practical with a file system designed for parallel data access, which significantly improves speed for large-scale recoveries. The right data management solution also ensures you and your team can search for the exact data you need, so you don’t have to scramble to find the right point in time to recover from.
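
The difference parallelism makes is easy to see in outline. The sketch below fans restore jobs out across worker threads instead of recovering VMs one at a time; `restore_vm` is a hypothetical placeholder for a call into whatever restore API the backup platform actually exposes.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def restore_vm(vm_name, point_in_time):
    """Placeholder for submitting a restore job and waiting for it to finish."""
    return f"{vm_name} restored to {point_in_time}"

def mass_restore(vm_names, point_in_time, workers=32):
    """Run many restore jobs concurrently rather than one after another."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(restore_vm, vm, point_in_time): vm for vm in vm_names}
        for future in as_completed(futures):
            print(future.result())

if __name__ == "__main__":
    mass_restore([f"vm-{i:04d}" for i in range(200)], "2020-06-30T02:00:00Z")
```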

5. Going about disaster recovery without a plan

Despite regular reports of natural disasters and cyberattacks, 5 percent of organisations still don’t have an actionable disaster recovery plan. Of those that do, 29 percent have never tested their plans, and 34 percent have experienced outages from improper failover to the cloud. A modern data management solution ensures data recovery preparedness for continuity of operations. It lets your organisation rapidly retrieve and recover data when it’s needed, no matter where it’s stored. It works by replicating data, converting VM formats between clouds, and spinning up VMs in a matter of minutes or hours to meet fast recovery time objectives. To ease every step of your disaster recovery process, from documentation to regular testing, a modern solution includes seamless automation and orchestration tools.
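
Even a simple, regularly exercised runbook beats an untested plan. The harness below is a bare-bones sketch: each step is a placeholder for a call into replication, cloud and monitoring tooling, and the ordered step list doubles as the documented plan that gets tested on a schedule.

```python
# Minimal disaster-recovery test harness; every lambda is a stand-in for real
# tooling (replication checks, image conversion, smoke tests, and so on).
RUNBOOK = [
    ("verify latest replica is healthy", lambda: True),
    ("convert VM images for the target cloud", lambda: True),
    ("spin up workloads in the recovery region", lambda: True),
    ("run application smoke tests", lambda: True),
    ("fail back and clean up test resources", lambda: True),
]

def run_dr_test():
    """Execute the runbook in order and report the first failing step, if any."""
    for name, step in RUNBOOK:
        ok = step()
        print(f"[{'OK' if ok else 'FAIL'}] {name}")
        if not ok:
            return False
    return True

if __name__ == "__main__":
    run_dr_test()
```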

Avoiding these mistakes and adopting modern data management will bring financial services companies and banking systems through the COVID-19-related economic crisis in reasonable shape, without tripping up on something that can easily be avoided: neglecting basic data management good practice and regular health checks. In doing so, it will give them a head start for future data-driven innovation.

Considering the likely brand damage and loss of customers that come with failing to put this into practice, the risk is simply too big for financial institutions not to have this aspect of their systems in place now. Let’s hope it doesn’t take a serious incident before the community realises this and takes data protection more seriously.
