Leveling the Playing Field: Harnessing AI Without Big Tech Dependency
- Dean Clark, Chief Technology Officer at GFT
- 15.10.2025 03:00 pm #ArtificialIntelligence #TechInnovation
AI has become an integral part of day-to-day operations at financial services organisations over the last few years. From driving productivity gains to automating manual tasks, the technology is being leveraged across different workstreams to enhance operational efficiency. However, the persistent challenge has been integrating AI with complex, pre-existing legacy systems.
AI integration also comes with significant regulatory considerations, spanning data privacy to compliance. For instance, the Financial Conduct Authority has cautioned against over-reliance on AI in financial services, especially given the concentration of the technology amongst only a few large providers.
This challenge extends beyond integration into access and deployment of the technology. Until recently, the prevailing belief was that only hyperscaler technology giants had the resources to develop and host sophisticated AI platforms. That perception is quickly being disproven by the proliferation of large language models (LLMs). Accessibility has long been a barrier to AI adoption, which is why widely available LLMs are levelling the competitive playing field.
Banks now have the option to harness LLMs to build specialised, sovereign AI capabilities that can strengthen their competitive positioning for the next decade. At the other end of the spectrum, banks can remain dependent on hyperscalers, which carries its own risk of vendor lock-in.
The constraints of a hyperscaler-only approach
Traditionally, hyperscaler APIs have provided banks with access to ready-made AI tools without heavy upfront investment. It is an attractive avenue for companies seeking to embrace AI quickly and remain competitive as the technology evolves. However, that convenience comes with strategic compromises.
The first obstacle is that, given the narrow pool of hyperscaler APIs serving financial services, banks have very little wiggle room to negotiate when costs rise or terms shift against them. Such concentration also means that innovation is driven by the agendas of the hyperscalers rather than the priorities of the banks. The tool ends up dictating the solution when it should be the other way around.
Additionally, financial organisations need to take regulatory implications into account. Sending sensitive customer data through external APIs creates ongoing compliance challenges, especially given the tighter data protection rules being implemented worldwide. Even with contractual safeguards in place, regulatory scrutiny of banks intensifies when sensitive financial data is handled by third-party vendors.
Furthermore, the products that external AI providers offer are typically aimed at broad market use rather than tailored to the needs of each individual bank or institution. As a result, banks end up leveraging similar capabilities and miss opportunities to create meaningful differentiation through further innovation in tools such as fraud detection, risk assessment and compliance solutions.
Developing in-house AI capabilities
In recent years, AI adoption avenues have widened. One of these is the creation of in-house capabilities independent of third-party vendors. What once made self-hosted AI technically and financially prohibitive has become far more manageable. Innovations in model compression and distillation mean that AI models that once required extensive GPU infrastructure can now run efficiently on smaller, cost-effective hardware. Open-source progress demonstrates that sophisticated models can achieve performance on par with leading proprietary systems while operating on standard servers.
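As a rough illustration of how accessible self-hosting has become, the sketch below loads an open-weight model with 4-bit quantisation using the Hugging Face transformers and bitsandbytes libraries. The model name is a placeholder rather than a recommendation, and the hardware savings will vary by model; this is a minimal sketch, not a production deployment.

```python
# Sketch: loading an open-weight LLM with 4-bit quantisation so it fits on
# modest, bank-controlled hardware. The model name below is a placeholder;
# any open-weight causal LM from the Hugging Face Hub could be substituted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "some-open-weight-model"  # hypothetical placeholder identifier

# 4-bit NF4 quantisation roughly quarters the memory footprint of the weights,
# which is what allows larger models to run on a single commodity GPU or server.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across whatever hardware is available
)

# Inference stays entirely within the bank's own environment.
prompt = "Summarise the key risks in this transaction history:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```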
Recent advances in training techniques have also made it far easier for banks to customise existing AI models without the heavy investment required for full-scale retraining. Rather than rebuilding an entire system from scratch, banks can now adapt only the necessary components using Low-Rank Adaptation (LoRA), further enhanced by Weight-Decomposed Low-Rank Adaptation (DoRA). These approaches enable faster, more cost-effective fine-tuning of AI to meet the specific needs of the financial organisations implementing them, as the sketch below illustrates.
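To make this concrete, the following sketch attaches LoRA adapters to an existing model using the Hugging Face peft library. The base model name, the target module names and the use of the DoRA flag are assumptions that depend on the chosen architecture and library version; treat this as a minimal illustration of parameter-efficient fine-tuning, not a prescribed recipe.

```python
# Sketch: parameter-efficient fine-tuning with LoRA via the Hugging Face peft
# library. Only small low-rank adapter matrices are trained; the base model's
# weights stay frozen, which keeps the cost far below full retraining.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

BASE_MODEL = "some-open-weight-model"  # hypothetical placeholder identifier

base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; varies by architecture
    use_dora=True,                         # Weight-Decomposed LoRA (DoRA) in recent peft releases
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# The wrapped model can then be fine-tuned on proprietary data (for example
# fraud patterns or risk annotations) with a standard transformers Trainer
# loop, and the adapter weights saved separately from the base model.
model.save_pretrained("bank-specific-adapter")
```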
Bringing AI in-house offers clear benefits for financial institutions. Keeping customer data within the bank’s own environment strengthens privacy and compliance, thereby reducing regulatory exposure and reinforcing customer trust. In addition, in-house AI enables banks to develop specialisations that hyperscalers cannot match. By fine-tuning models on proprietary data such as transaction histories, risk profiles or fraud patterns, banks can create capabilities that are unique to their organisation and difficult for competitors to replicate. This depth of domain expertise turns AI from a standard tool into a competitive differentiator.
Clients are drawn to services that pair advanced AI capabilities with the assurance that their data remains within the bank. That added trust and differentiation lays the path for deeper customer relationships. Self-hosted models also give banks the flexibility to experiment further with AI and deploy solutions on their own timelines, rather than waiting on third-party providers to introduce new updates.
While building customised AI capabilities is now both technically feasible and economically viable, it does not completely rule out working with third-party AI vendors. However, it does give banks the flexibility to select partners who support the development and maintenance of in-house capabilities, rather than depending solely on solutions that have been designed for a broader market.
Turning potential into advantage
AI capabilities that once required billions in investment and massive data centres can now be developed with significantly lower capital. This makes it realistic for banks to consider building their own AI capabilities rather than relying exclusively on hyperscaler APIs.
Financial institutions have multiple paths to AI integration. Some may continue to leverage the efficiencies of hyperscalers, while others may gain confidence in developing AI capabilities in-house. The solution doesn’t need to be binary. Strategic partnerships can provide the guidance required for designing and deploying internal AI, combining external expertise with internal control, which enables banks to innovate efficiently, protect sensitive data and tailor solutions to their specific needs.






