Eyeing Applied AI, Automation’s Next Job

- Steven Teasdale, Senior Director and General Manager UK&I Sales at Nutanix
- 10.06.2025 11:00 am #AI #AItools #transformation
Artificial Intelligence (AI) is everywhere, yet AI is also nowhere at the same time. To be clear, while there are AI tools and services surfacing at every level, organisations are yet to see widespread functional intelligence being applied to the working commercial parts of their operation. This reality puts us at an important inflexion point. As AI is now rolled out inside operating systems, database functions, through application layers and outward to every user endpoint, making it accessible, consumable and eminently usable is the challenge at hand.
A key step in that process is agentic AI and the use of the AI agents we can now build by engineering non-deterministic reasoning and inference into our increasingly smart services. While a lot of our AI up to now has been engineered to provide us with pattern-recognition insights for humans to make decisions on and execute corresponding business actions, AI agents take on the responsibility for answering questions, discovering solutions to problems and automating those human actions and tasks themselves.
Autonomous Adaptability
Because agentic AI systems have an autonomous level of adaptability, they can operate with a degree of self-awareness that elevates their usefulness and applicability to businesses by an order of magnitude. Because agentic AI systems appear to “act like humans”, they can be architected into modern workflows inside organisations. Because agent-based AI systems can understand context and learn from experiences, they become first-class citizens in the march towards the upper tier of applied AI that businesses must now aspire to.
Agentic AI solutions aim to mimic human behaviour more closely and they do so by using large language models for planning. They also use additional data tools for performing various deterministic tasks as well as memory, for context retrieval.
All of which brings us up to today, the point where modern businesses have a relatively short implementation window if they are to take advantage of generative and agentic AI ahead of their competitors. Even when an organisation decides on the AI services that are right for it, the post-implementation and management aspects of a rollout at this level can be daunting. Although we use the term Day 2 operations for a reason to describe the reality of post-deployment software services, it should perhaps be called Day 2 to infinity operations; there’s a lot to shoulder here.
Architectural Longevity
This is the point where the IT systems planners need to start thinking about architectural conformity and longevity. The IT team needs to be able to redefine its technology stack with enough broad operational scope to serve agentic services today and well into tomorrow. That typically means grasping services that are capable of simplified AI model deployment across various different computing environments. Because those environmental differences will span different hybrid clouds, different hybrid application models and different (also very often hybrid) use cases, identifying composable, compatible, containerised microservices-based computing principles ensures operational readiness and accelerated time to value.
Fundamentals for successful generative and agentic deployment also centralise around the need for a unified compute, connectivity and storage platform. Businesses need a unified platform to run IT services at this level that is capable of simplifying the deployment, operation and management of large AI models at scale. This unified substrate for computing must also act as a consolidated data platform for the ingestion, processing, and archiving of AI data.
Organisations embracing AI should now look for services capable of combining models by use case with secure endpoints and APIs into a single shared service for multiple applications to access. As agentic workflows progress, the need to reuse multiple models and endpoints is key for achieving efficiency and performance across applications. But security controls will always be needed because malicious prompt injections can lead to a complete compromise of an language model by removing its core guardrails.
Execution inside a unified AI platform
Adopting a unified platform to underpin AI services enables an organisation to explore the full breadth of connected AI services as they get to work at the back end. As agentic AI workflows span multiple LLM-type models, a hyper-hybrid platform is essential for cross-pollination between services. To illustrate this process in action, imagine a Retrieval-Augmented Generation (RAG) data workflow that needs to send context to a language model (large… and small as well) to provide it with reasoning logic. That same connection also needs reranking and safety guardrails to determine the “best” (i.e. safest, most accurate and free from bias) answer. We can also envisage an embedding model in this mix, which will be in place to integrate vector databases.
That somewhat heady mix of technologies needs a unified platform environment to execute inside of to be efficient. If that same unified platform can also offer preset blueprints for the AI workflow (a sort of reference architecture for AI, if you will), then AI can be put to practical good use inside any business. With so much discussion across the technology industry aligned towards the shiny end of new AI functions that are rarely applied inside real world businesses, now it’s time for applied AI to get to work.