Why now is the time for pension providers to address the requirements of the Pensions Dashboard project
The average UK worker is expected to have at least 11 jobs over their lifetime, which, in theory, could mean 11 or more pension pots to remember and track over a very long period. Add to that the estimated £400 million already sitting in lost or forgotten pension accounts and there is a clear need for a mechanism that allows workers to see all of their pension details in one place, as they already can in countries such as Denmark and Australia.
Hence the so-called Pensions Dashboard project, a government-backed initiative to allow UK citizens to do just that: do it online or through a mobile app and, according to the latest estimate, do it by the end of 2019.
Whether or not that ambitious target will be met is debatable, with much of the detail still to be thrashed out. That, however, doesn’t mean that pension providers and other interested parties should sit around waiting for the starting gun to be fired. Not least because the seemingly simple task of collating pension details for online consumption is likely to be a lot more complex, costly and time-consuming than might, at first glance, be expected.
The scale of the problem
Scale is one of the biggest hurdles to overcome, with an estimated 65 million individual pension plans already in existence in the UK, a number expected to grow rapidly following the recent auto-enrolment programme designed to bring everyone over the age of 21 into the pension net.
Then there’s the little matter of having to cope with all of the various pension scheme formats in operation, plus others yet to be devised. Not only will the Pensions Dashboard have to handle final salary (defined benefit) schemes run by big companies, for example, but also defined contribution plans, including both personal pensions and company-run schemes, operated by a variety of pension providers, insurance firms and investment companies, either directly or through a whole host of intermediaries. Self-Invested Personal Pensions (SIPPs) will also come under the remit of the Pensions Dashboard, as will the state pension scheme and the so-called closed-book schemes no longer accepting new members, many of which will have been outsourced to third-party administrators or even lost in the mists of time.
So far, there have been no details as to who will be responsible for locating and drawing together all this information. However, as the chief custodians of the data, customer-facing pension providers and insurance companies will inevitably be involved at some point. They should therefore start to take action now because, while many will have toyed with dashboards of their own, these will have been much more limited in scope. The proposed Pensions Dashboard is totally new territory: the first time these organisations will have been tasked with both consolidating all of the pension data they manage and, more importantly, making it available in a standard format to an external agency.
No to the data warehouse
One approach to this data integration challenge would be to create a data warehouse and use conventional ETL (Extract, Transform and Load) tools to extract data from the many sources involved, transform it to meet the needs of the Pensions Dashboard project, and load the results into a new set of consolidated databases.
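In outline, that ETL pipeline amounts to three steps chained together. The sketch below illustrates the shape of the approach only; the table names, column names and record layout are invented for the example and are not any real provider’s schema.

```python
# Minimal ETL sketch over SQLite: extract raw plan records from a source
# system, normalise them into a consolidated shape, and load the result
# into a warehouse table. All names here are hypothetical.
import sqlite3

def extract(source_conn):
    """Pull raw plan records from one source system."""
    return source_conn.execute(
        "SELECT member_id, scheme_name, current_value FROM plans"
    ).fetchall()

def transform(rows):
    """Normalise raw rows into the consolidated dashboard shape."""
    return [
        {"member": member, "scheme": scheme, "value_gbp": round(value, 2)}
        for member, scheme, value in rows
    ]

def load(warehouse_conn, records):
    """Write the consolidated records into the warehouse table."""
    warehouse_conn.executemany(
        "INSERT INTO consolidated_plans VALUES (:member, :scheme, :value_gbp)",
        records,
    )
    warehouse_conn.commit()
```

In a real deployment each of these steps would be repeated for every source system and scheme format, which is precisely where the complexity and cost accumulate.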
On the plus side, this would be standard fare for most providers, but data warehouses take time to build and are costly to run. Added to which, if the final data specification differs markedly from the initial brief, developers may be forced to rethink their models and start over, adding further to the cost and time involved.
Bear in mind also, that the application and data architecture of many pension providers will, over time, have evolved into a complex federated architecture spanning numerous systems, each with their own data models and data access methods. Many will also have been outsourced, making access to the data even more complex and, typically, batch orientated. Data warehousing could, in theory, cope with all this, but the mechanisms required would be complex, cumbersome, error-prone and difficult to manage.
Conventional data warehousing also struggles when it comes to data that is continuously changing. This is a major concern, given that one of the key aims of the Pensions Dashboard project is to provide users with real-time valuations of their pension pots: valuations that could vary according to day-to-day investment returns, interest and exchange rate fluctuations and so on.
Yes to the API and data virtualisation
Add the unsuitability of data warehousing for the needs of the Pensions Dashboard project to the uncertainty over its direction and workings, and pension providers could be forgiven for sitting on their hands. But they will have to do something, sometime, and that sometime could come fairly soon whether they like it or not, with legislation widely expected to compel companies to make pension data available.
There are indeed ways of preparing for that call whenever it comes. And ways of doing so at a much lower cost than with data warehousing and in a flexible manner that allows both for the disparate nature of the data and applications involved and the uncertainty over the detailed requirements.
Crucially, organisations need to look to managed Application Programming Interface (API) tools to shortcut the process of making data available from multiple different sources. TIBCO Software, for example, is already developing the high-level consolidated data models required to fulfil the requirements of the Pensions Dashboard as far as they are known. These are being made available to customers in the form of a quick-start accelerator package, along with additional tools to help build the API endpoints and bind the consolidated data models to API payloads.
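Binding a consolidated data model to an API payload essentially means mapping internal records onto an agreed external JSON structure. The sketch below shows the idea in miniature; the field names, the `schema_version` convention and the payload layout are illustrative assumptions, not the actual Pensions Dashboard schema (which has yet to be finalised).

```python
# Hypothetical sketch of binding consolidated plan records to a JSON API
# payload. Field names and structure are invented for illustration.
import json

def pension_summary_payload(plans):
    """Serialise a member's consolidated plan records as an API response body."""
    return json.dumps({
        "schema_version": "0.1",  # assumed versioning convention
        "total_value_gbp": round(sum(p["value_gbp"] for p in plans), 2),
        "plans": [
            {
                "provider": p["provider"],
                "scheme": p["scheme"],
                "value_gbp": p["value_gbp"],
            }
            for p in plans
        ],
    })
```

Because the external schema is still uncertain, keeping this mapping in one thin layer means a change to the agreed payload format touches only the binding code, not the underlying data models.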
Organisations should also look to data virtualisation and caching technologies that create a single unified view of retrieved data, without the need to move it to a consolidated store. Not only does this sidestep the cost and time required to build and run a data warehouse, but, by abstracting data in real time, it makes for a much more agile solution: one more easily modified to meet whatever data structures and compliance requirements are ultimately decided upon, and one that can be further tuned to cope with any future changes to the pension market.
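The core idea of data virtualisation with caching can be illustrated in a few lines: records stay in their source systems and are fetched on demand into a unified view, with a short-lived cache absorbing repeated queries. This is a toy sketch under stated assumptions; the source callables, record shapes and time-to-live policy are all invented for the example and stand in for whatever connectors a real virtualisation product would provide.

```python
# Toy data virtualisation sketch: a unified, on-demand view over several
# source systems, with per-source caching. Nothing is copied to a warehouse.
import time

class VirtualPensionView:
    def __init__(self, sources, ttl_seconds=60):
        self.sources = sources  # source name -> callable returning live records
        self.ttl = ttl_seconds
        self._cache = {}        # source name -> (fetch timestamp, records)

    def records_for(self, member_id):
        """Return a merged, near-real-time view across all sources for one member."""
        merged = []
        for name, fetch in self.sources.items():
            cached = self._cache.get(name)
            if cached is None or time.monotonic() - cached[0] > self.ttl:
                cached = (time.monotonic(), fetch())  # refresh stale source
                self._cache[name] = cached
            merged.extend(r for r in cached[1] if r["member"] == member_id)
        return merged
```

Tuning the time-to-live per source is one way such a layer balances the dashboard’s real-time valuation goal against the batch-oriented access many outsourced systems impose.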
Ready, set, go
Even where companies opt for the use of managed APIs and data virtualisation, the work required to deliver the data required by the Pensions Dashboard project is far from trivial, making it important to start planning now. Moreover, those taking early action won’t just find it easier to comply with the requirements of the Pensions Dashboard project, but will also gain an advantage in a pensions market that has become increasingly competitive following the so-called ‘pension freedom’ reforms introduced in 2015/16.
Those reforms have led to a huge pent-up demand for access to pensions data, as customers find they need to be much more personally involved in their own pension planning process. This is clearly a demand that the Pensions Dashboard project is aiming to address. At the same time, however, there’s nothing to prevent providers building dashboards of their own, using smart technologies that can be rolled out very quickly and cost effectively to give customers easy access to not only their pension data, but other financial products as well.
The tools required are all there, as is the opportunity to lead from the front. It just needs to be seized and seized now.