Keeping up with the exchanges – the ETD challenge
- Bill Blythe, Global Business Development Director at Gresham Computing
Trading volumes in exchange-traded derivatives (ETDs) continue to rise (up 30% on pre-crash volumes). During a recent pilot study at a large securities clearing house, Gresham CTC was set a minimum requirement of processing 20 million ETD trades in 14 minutes. I’m pleased to say that we passed with flying colours, but the exercise highlights just how far volumes have climbed.
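To put that in perspective, here is a quick back-of-the-envelope calculation using the figures quoted above (the sustained-rate framing is mine, not part of the pilot’s specification):

```python
# Rough throughput implied by the pilot requirement:
# 20 million trades processed within a 14-minute window.
trades = 20_000_000
window_seconds = 14 * 60  # 840 seconds

rate = trades / window_seconds
print(f"Sustained rate: {rate:,.0f} trades/second")  # ~23,810 trades/second
```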
But at the same time, many firms have failed to make a corresponding investment in their post-trade processing systems, instead stretching legacy technology to accommodate ever more complex instruments. More often than not this is a false economy. Exchange-traded contracts are often complex and time-consuming to onboard, forcing firms through a shoe-horning exercise with expensive extract-and-transform data technologies just to get the data into a format that can be reconciled.
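To make that shoe-horning concrete, here is a minimal sketch of the transform step. The field names, layout and mapping are all hypothetical; real exchange formats vary far more, which is precisely the problem:

```python
# Hypothetical transform: squeeze one exchange's raw trade record into
# the fixed schema a legacy reconciliation engine expects.
RAW_RECORD = {
    "TrdRefNo": "EX123456",
    "InstrClass": "FUT",
    "Qty": "250",
    "Px": "101.375",
    "SettleDt": "20240614",
}

# Per-exchange mapping from the feed's field names to the engine's columns.
# Every exchange, and every format change, means revisiting a map like this.
FIELD_MAP = {
    "TrdRefNo": "trade_id",
    "Qty": "quantity",
    "Px": "price",
    "SettleDt": "settlement_date",
}

def transform(raw: dict) -> dict:
    """Rename mapped fields; anything unmapped is silently dropped."""
    out = {dst: raw.get(src) for src, dst in FIELD_MAP.items()}
    # Fields with no slot in the fixed schema (e.g. InstrClass) are lost
    # here: one of the compromises described above.
    return out

print(transform(RAW_RECORD))
```

Multiply that maintenance burden by every exchange and every format revision, and the cost becomes clear.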
Some vendor firms have tried to make this easier by supplying ‘exchange templates’, but these are not easy to use and can add significant software licence and maintenance costs.
I’ve come across trading organisations whose legacy reconciliation technology takes an average of 220 hours every time a change is mandated by a single exchange. If a bank has partnerships with 40 different exchanges and each of those exchanges issues a new set of file formats every six months, that’s a whopping 8,800 hours of activity with every round of changes. To try to keep up, firms have been throwing sheer manpower at the problem and inventing complex workarounds. But as well as introducing the very operational risk that reconciliation was meant to remove, the approach is simply not viable or cost-effective in the long term.
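The arithmetic behind that figure, with a rough staffing equivalent added (the figure of roughly 1,800 productive hours per full-time employee per year is my assumption, not from the example above):

```python
hours_per_change = 220    # average effort per mandated format change
exchanges = 40            # exchange partnerships
changes_per_year = 2      # a new set of file formats every six months

hours_per_cycle = hours_per_change * exchanges       # 8,800 hours per round
hours_per_year = hours_per_cycle * changes_per_year  # 17,600 hours per year

fte_hours_per_year = 1_800  # assumed productive hours per full-time employee
print(hours_per_cycle, hours_per_year,
      round(hours_per_year / fte_hours_per_year, 1))  # 8800 17600 9.8
```

On those assumptions, keeping pace consumes the equivalent of nearly ten full-time staff doing nothing but format maintenance.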
Regulations such as Dodd-Frank and EMIR have been pushing firms away from OTC derivatives towards centrally cleared ETDs, which are more transparent and carry less risk. Data files coming from market infrastructures such as DTCC, CME or Eurex can include hundreds (500+) of attributes. More often than not, legacy reconciliation systems cannot accommodate such wide data files. Because these incumbent systems are largely built on fixed data models, adapting them usually means making compromises with the data along the way.
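A sketch of the difference in practice: a fixed-schema loader drops (or breaks on) columns it doesn’t know about, while a loader that keys attributes by header name carries new fields through untouched. The file layout below is invented for illustration:

```python
import csv
import io

# A wide, delimited exchange file; the header row names the attributes.
# Real ETD files can carry 500+ columns; this is a toy example.
SAMPLE_FILE = """\
trade_id,product,qty,price,margin_ccy,new_reg_field
T001,FGBL,10,131.25,EUR,ABC
T002,FESX,5,4921.0,EUR,XYZ
"""

# Fixed data model: only these columns exist in the legacy schema.
FIXED_COLUMNS = ("trade_id", "product", "qty", "price")

def load_fixed(text: str) -> list[tuple]:
    # Unknown columns (margin_ccy, new_reg_field) are silently lost.
    rows = csv.DictReader(io.StringIO(text))
    return [tuple(r[c] for c in FIXED_COLUMNS) for r in rows]

def load_flexible(text: str) -> list[dict]:
    # Attribute map keyed by header: new columns flow straight through,
    # so a format change does not force a schema change.
    return list(csv.DictReader(io.StringIO(text)))

print(load_fixed(SAMPLE_FILE))
print(load_flexible(SAMPLE_FILE))
```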
Instead, firms need flexible, adaptable and robust controls that allow them to on-board new asset classes quickly and easily, regardless of their complexity. Firms need to prove the integrity of their operational functions (hence the expansion of operational risk departments), whilst also improving clarity and governance across the post-trade environment. Real-time consolidated views (total equity reconciliation) of cash, positions, trades, portfolios, margin and collateral are all required in today’s derivatives environment.
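One way to picture such a consolidated view: fold the separate cash, position and margin feeds into a single per-account snapshot that can be reconciled in one pass. The record shapes here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-feed records, all keyed by account.
cash      = [{"account": "A1", "ccy": "USD", "balance": 1_000_000}]
positions = [{"account": "A1", "product": "FGBL", "net_qty": 25}]
margin    = [{"account": "A1", "initial_margin": 240_000}]

# One record per account spanning every feed: the raw material for a
# total-equity style reconciliation across cash, positions and margin.
consolidated: dict[str, dict] = defaultdict(dict)
for feed_name, feed in (("cash", cash), ("positions", positions), ("margin", margin)):
    for record in feed:
        consolidated[record["account"]].setdefault(feed_name, []).append(record)

print(dict(consolidated))
```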
Some firms have been tempted to use spreadsheets as a workaround or add-on to their legacy solutions. But with regulators cracking down on such practices, there is ultimately no shortcut to putting in place the robust controls that come with the next generation of reconciliation technology: integrity, clarity, flexibility, adaptability and scalability.
I’m seeing more and more firms finally accept that a ‘one-size-fits-all’ approach to their own reconciliation utility rarely works. A solution that remains suitable for Nostro/Depot processing, for example, may not be the best fit for handling derivatives or T+0 matching. The smart thinking involves a combination of complementary technologies, all deployed within the internal reconciliation utility: continuing to use legacy technology for commoditised processes (Nostro/Depot) to make the most of existing investments, while deploying new, flexible and adaptable technology to meet the tough new regulatory landscape and the controls needed in the ETD world.