Data Virtualisation: Putting an end to conflict in an organisation

Robert Eve

Senior Director at TIBCO Software

01.11.2018 08:00 am

Everyone suffers from having too much information. There is no shame in that. There is only so much you can deal with at any one time, and under bombardment we become confused. This is not a new phenomenon. When it comes to decision making, as the age-old phrase has it, we can’t see the wood for the trees.

In fact, recognition is the first step to recovery. It’s the way we deal with that information, storing it efficiently and processing it accurately, that is the mark of progress.

Two schools of thought about how we deal with masses of data have evolved by stages. These methods, data warehousing with ETL and data virtualisation, both have their pros and cons.

The first is an older generation of technology, although still extensively used. The handling of data, which becomes more vital and more complicated by the day, is still a work in progress. Data virtualisation is the more recent iteration and incorporates some of the lessons learned from previous efforts.

Though data warehousing delivered much in broad-brush strokes, it didn’t give the full picture that everyone needs if they are to work together. Indeed, it gave a picture that was never quite complete, nor fully up to date, and thus open to different interpretations by different departments. To put that in context, a bank’s traders might have been singing from one song sheet, while the compliance department had a different version of the truth. Neither is necessarily wrong; they have simply been briefed differently about the state of the assets within their organisation. This can lead to conflict within the company.

In a bank, for example, data warehousing could lead to complications if used today, because it is relatively unwieldy to adapt and thus less responsive to change. It was fine for a simpler time, when financial services were less complex and less competitive.

Traditionally, a bank’s products were arranged along separate lines of business: Fixed Income, Equities, Futures and Options, Brokerage and so on. Since each had its own organisation, systems and services, they could be analysed separately in distinct warehouses. So the relative lack of a unified view of bank-wide data was less of a limitation, as queries covered each line’s own sales, research, positions management, risk, books and records, clearing, settlements, payments and reconciliations.

Each line of business had its own controlled distribution of its data. However, the increasing complexity of business meant there was ‘too much information’ for even the data warehouse to handle. There are ever more customers, products and mergers. As if that were not complicated enough, there is a huge amount of crossover, with clients increasingly asking for product interactions that straddle different lines of business.

Banks needed a new system that could give a unified and comprehensive version of events, and at a much faster rate.

Data virtualisation improves on traditional methods because the extract, transform and load (ETL) process that feeds a data warehouse did not give the inquirer instant, real-time access to the source system for the data. It couldn’t, and it wasn’t designed to work that way.

This led to data errors. Worse still, it involved extracting data from one place and loading it into another, which meant a massive workload of moving data. Further, this secondary copy now needed to be managed, audited and secured. This was a waste of resources.
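To make the contrast concrete, here is a minimal sketch of that older batch pattern in Python, using sqlite3 purely for illustration; the positions and positions_snapshot tables, the database files and the nightly schedule are all assumptions made for the example, not any particular vendor’s implementation.

    # Illustrative ETL-style batch copy (hypothetical source and warehouse schemas).
    # The warehouse only ever holds a snapshot as fresh as the last run.
    import sqlite3
    from datetime import datetime, timezone

    def nightly_etl(source_db: str, warehouse_db: str) -> None:
        src = sqlite3.connect(source_db)
        wh = sqlite3.connect(warehouse_db)
        try:
            # Extract: read the positions table from the system of record.
            rows = src.execute(
                "SELECT account_id, instrument, quantity FROM positions"
            ).fetchall()

            # Transform: stamp every row with the load time (kept deliberately simple).
            loaded_at = datetime.now(timezone.utc).isoformat()
            rows = [(acct, instr, qty, loaded_at) for acct, instr, qty in rows]

            # Load: replace the secondary copy wholesale; that copy now needs its own
            # management, auditing and security, exactly as described above.
            wh.execute("DELETE FROM positions_snapshot")
            wh.executemany("INSERT INTO positions_snapshot VALUES (?, ?, ?, ?)", rows)
            wh.commit()
        finally:
            src.close()
            wh.close()

Anything read from positions_snapshot between runs reflects the previous night’s extract, which is precisely the staleness the warehouse approach cannot avoid.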

Nor did data warehousing give a single version of the truth, even though it could have done. Data virtualisation can bridge data across all the original data sources as well as any data warehouses, across lines of business. The beauty is that it doesn’t need a whole new integrated physical data system. The existing data infrastructure can continue performing its core functions, while the data virtualisation layer uses the data from those sources.

So, data virtualisation becomes a complement to all the existing data sources. It sits on top of them and, by improving communication between them, makes the organisation’s data more readily available for use.
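A rough sketch of that layered idea, reusing the hypothetical line-of-business databases from the ETL example above: the virtual view reaches back to each source at the moment it is queried and combines the answers, so there is no second copy to manage.

    # Illustrative data-virtualisation layer (hypothetical sources and schema).
    # Data stays in the systems of record; it is fetched and combined at query time.
    import sqlite3
    from typing import Iterator, Tuple

    SOURCES = {
        "Fixed Income": "fixed_income.db",
        "Equities": "equities.db",
        "Futures and Options": "futures_options.db",
        "Brokerage": "brokerage.db",
    }

    def client_positions(client_id: str) -> Iterator[Tuple[str, str, float]]:
        """Yield live positions for one client across every line of business."""
        for line_of_business, db_path in SOURCES.items():
            conn = sqlite3.connect(db_path)
            try:
                for instrument, quantity in conn.execute(
                    "SELECT instrument, quantity FROM positions WHERE account_id = ?",
                    (client_id,),
                ):
                    yield line_of_business, instrument, quantity
            finally:
                conn.close()

Because every call goes back to the systems of record, the answer is only as fresh as the sources themselves, and access control and auditing stay where the data already lives.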

Data virtualisation is inherently aimed at producing quick and timely insights from multiple sources without having to embark on a major data project as required by the ETL/warehouse alternative. In the old days, when staff sought client reference data, they would have to go to each line of business separately, requesting data and tracking their request until it was delivered.

This led to a sort of black market of second-hand data, with distribution teams emerging who would take original copies of data from the systems of record and add their own information for their own local needs. They were acting almost as brokers. This practice increased risk and led to audit problems over data quality. There was a terrible price to pay, through lost productivity, compliance infractions and very high data integration costs.

The great thing about data virtualisation is that it can bring everything together, drawing data from all sources and straddling every line of business.

From multiple sources, data is aggregated into a single logical entity which can be read by everyone, whether they are coming in from a portal, a report, an application or a search.
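Carrying on the sketch above, every consumer reads that one logical view rather than keeping a private extract; client_positions and the account identifier are the same illustrative assumptions as before.

    # Hypothetical usage: a report, a portal or an application all call the same view.
    for line_of_business, instrument, quantity in client_positions("ACME-001"):
        print(f"{line_of_business:>20}  {instrument:<12}  {quantity}")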

What does all this mean in terms of the ‘cold hard cash’ alluded to in The Wolf of Wall Street?

Typically, a data virtualisation deployment can deliver a 300 per cent return on investment. The cost of data distribution will fall by 83 per cent and, finally, there will be a 25 per cent improvement in data quality. Given the value of financial data, that is a substantial dividend indeed.
