Every sector is under pressure to digitise. However, the mission-critical nature of financial services makes the industry especially accountable, and therefore expected to lead the way: modernising and enhancing services to meet customer expectations and operating successfully in an increasingly digital landscape. As such, the pace of digital change has accelerated over the last two years – a report compiled by PwC revealed that 77% of financial institutions are increasing efforts to innovate, demonstrating the pressure they’re under.
However, despite being the buzzword of the day, digital transformation has presented the finance industry with new challenges, even as it aims to drive efficiency and opportunity. A Gartner report on emerging business risks placed the pace of change and lagging digitalisation at the top of many firms’ worries. This is not surprising when you consider how firms need to piece together multiple technologies – some cutting edge, and some legacy systems holding vital data that simply can’t be ripped out and replaced. Successfully integrating legacy systems with new digital applications while maintaining clear visibility across the entire IT infrastructure is easier said than done. Gartner predicts that for every dollar spent on digital innovation until 2020, organisations will have to spend at least three times as much on modernising legacy portfolios, maintenance, and upkeep. Yet without this integration, any innovation risks failure, plummeting customer satisfaction and disrupted business operations.
The banking sector has felt this burden particularly keenly. A Financial Conduct Authority survey revealed a 187% increase in reported digital incidents during 2018, a rise indicative of the sector’s IT struggles. Worryingly, the FCA has said that these figures are likely to be conservative due to under-reporting, which suggests the true scale of the issue may be worse than thought. Nor is the problem localised to the UK; it is widespread internationally, with both the National Australia Bank and Visa experiencing vast outages that affected millions.
But as this is such a common problem, why change the status quo?
The cost of downtime caused by IT outages amounts to more than lost revenue. Just a few minutes of downtime can completely destroy the customer experience, and if organisations fail to deliver exceptional customer service in today’s fast-moving world, competitors will waste no time trying to steal customers and swallow market share. Just look at Telegram, the messaging platform, which gained millions of new users when WhatsApp went down for a mere 14 hours.
Let’s also not forget that time is money: IT outages are financially detrimental, costing an estimated $300,000 per hour. While this may seem like a huge amount, it is far from a theoretical risk when you examine the fallout from TSB’s digital implosion. In the banking industry, the regulator’s call for a maximum outage time of two days is a step in the right direction; however, customer expectations will likely see this reduced further in the near future. Organisations must adopt the right technology to stay competitive and keep the difficulties of outages at bay.
Prime examples of those doing it well are the modern, disruptive ‘challenger banks’ such as Starling Bank and Revolut. Revolut has garnered four million customers in just four years with its digital-only offering – possible only because these businesses are able to innovate at an extreme pace, leaving more traditional companies steeped in legacy systems one step behind.
In fact, a 2019 Freedom survey of banking decision-makers in the US and UK found that nearly half of respondents thought legacy systems were the biggest barrier to growth. The survey also showed that to address the rise of challenger banks, 44 percent expected their organisation to invest heavily in modernising these systems. This is complemented by an Accenture report which found that as recently as 2016, as much as 50% of all banking IT assets were in “urgent” need of modernisation. Take COBOL, for example: 43 percent of banking systems still run on this language, invented in 1959. It’s no surprise, then, that financial institutions struggle to implement successful digital transformation projects; the root cause is the widespread problem of sprawling IT systems being continuously patched up.
Behind a new breed of innovative customer- and employee-facing digital services lies a mishmash of systems – virtual machines, hybrid cloud accounts, IoT endpoints, physical and virtual networks and much more. These disparate, decentralised systems don’t talk to each other, and they frequently fail. To make things worse, many of these systems are outside the control of IT, adding an extra layer of opacity and complexity. In fact, a recent report from Parliament’s Public Accounts Committee revealed that the Bank of England spends 33.6% more on IT than other central government departments. The bank later acknowledged that this inflated budget was “due to high levels of manual processes and legacy IT systems” and said “integration of these systems took a substantial amount of time to deal with.”
So, what can established financial institutions do to mitigate the threat of legacy systems and thereby reduce outages?
Put simply, they need to adopt best-practice operational activities and processes: running regular threat and vulnerability assessments, conducting configuration reviews, and including operational process validation checkpoints. By increasing visibility into the entire IT network, these practices enable IT teams to anticipate problems and deal with them quickly before they become outages, significantly reducing the chance of a systems failure.
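To make the idea of a configuration review concrete, the check can be sketched as a simple drift detector that compares each system’s reported settings against an approved baseline. The baseline values, setting names and host below are hypothetical, chosen purely for illustration:

```python
# A minimal sketch of an automated configuration review: compare a
# host's reported settings against an approved baseline and flag drift.
# Baseline values, setting names and host names are illustrative only.

APPROVED_BASELINE = {
    "tls_min_version": "1.2",
    "password_rotation_days": 90,
    "audit_logging": True,
}

def review_configuration(host: str, reported: dict) -> list:
    """Return a list of drift findings for one host."""
    findings = []
    for setting, expected in APPROVED_BASELINE.items():
        actual = reported.get(setting)
        if actual != expected:
            findings.append(
                f"{host}: {setting} is {actual!r}, expected {expected!r}"
            )
    return findings

# Example: a (hypothetical) host that has drifted on two settings.
drift = review_configuration(
    "payments-gw-01",
    {
        "tls_min_version": "1.0",
        "password_rotation_days": 90,
        "audit_logging": False,
    },
)
for finding in drift:
    print(finding)
```

Run regularly as a validation checkpoint, a check like this surfaces drift long before it contributes to an outage.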
It’s only by unifying IT operations and monitoring under a single pane of glass that an organisation – financial or otherwise – can hope to get a holistic view of what’s going on across its entire environment, including the health of its legacy systems. A centralised view ensures a single version of the truth, bringing siloed teams together, avoiding duplication of effort and, more importantly, ensuring that IT ops and monitoring finally fulfils its promise to improve service performance, availability, and user experience. Furthermore, when effective monitoring is coupled with automation, the likelihood of outages is reduced even further. When properly configured, automation provides consistency and the ability to fix minor errors before they cascade, saving time, money and potential reputational damage.
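The pattern of monitoring coupled with automated remediation can be sketched in a few lines: a single loop evaluates health checks across all systems, auto-fixes minor breaches, and escalates anything severe to a human. The service names, metrics, thresholds and remediation actions here are hypothetical assumptions, not any specific product’s API:

```python
# Hypothetical sketch: a unified monitoring loop with tiered automated
# remediation. Minor threshold breaches trigger an automatic fix;
# severe ones are escalated to the on-call engineer.

from dataclasses import dataclass

@dataclass
class HealthCheck:
    service: str
    metric: str
    value: float
    threshold: float

def remediate(check: HealthCheck) -> str:
    """Fix minor issues automatically; escalate anything else."""
    if check.metric == "disk_used_pct" and check.value < 95:
        # Minor: reclaim space before the disk fills and cascades.
        return f"{check.service}: rotated logs to reclaim disk"
    if check.metric == "latency_ms" and check.value < 2 * check.threshold:
        # Minor: a restart clears a degraded worker pool.
        return f"{check.service}: restarted worker pool"
    return f"{check.service}: escalated to on-call engineer"

def run_monitoring(checks: list) -> list:
    """One view over all systems: act only on checks that breach threshold."""
    return [remediate(c) for c in checks if c.value > c.threshold]

actions = run_monitoring([
    HealthCheck("core-banking", "disk_used_pct", 91.0, 85.0),  # minor breach
    HealthCheck("mobile-api", "latency_ms", 180.0, 200.0),     # healthy
    HealthCheck("payments", "latency_ms", 900.0, 200.0),       # severe breach
])
for action in actions:
    print(action)
```

The design point is the tiering: automation handles the routine fixes consistently, while the genuinely abnormal cases still reach a person before customers notice.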
Financial institutions have never been under such pressure to surprise and delight the end user. Customer experience no longer centres just on the physical; the digital is just as important, with fast load times and a frictionless experience key. With self-service such an important part of today’s financial landscape, banks can no longer afford to disrupt their customers with outages caused by poorly managed systems. To stay competitive, everything must run smoothly, and at the first hint of trouble, the issue must be identified and resolved before any disruption is noticed.