Across industries, we see business functions developing their own AI use cases independently. Sales might use AI to score leads, finance might automate invoice matching, and HR might explore AI-driven recruitment tools. These efforts are often driven by what the application vendor offers, not by a shared enterprise vision. The result is often a fragmented landscape of disconnected solutions that don’t talk to each other.
This siloed approach limits the value of AI. When each function builds its own intelligence in isolation, cross-functional insights become difficult, if not impossible. The intelligence generated reflects only what is possible within a specific system, yet strategic decisions require a broader view, which application-specific AI rarely delivers.
Another issue is that built-in AI features are typically designed for generic use cases. They may work well for standard processes and can be useful in many scenarios, but they often lack the flexibility to adapt to the unique needs of a business. And because these models are proprietary, they can’t be reused across platforms. Organisations may end up duplicating effort, training similar models in different systems, and managing multiple AI environments, each with its own limitations.
Then there’s the cost. Each application with embedded AI may require a separate licence tier or add-on. Some vendors charge based on usage — for example, the number of predictions made, or the volume of data processed. Without centralised oversight, these costs can escalate quickly and unpredictably.
A particularly important concern is the indirect access licensing clause found in some ERP contracts. In some cases, ERP vendors have demanded licences for users who access ERP data indirectly, for example through reporting tools or custom AI agents. Even if a user never logs into the ERP system, they may still be considered a “user” under the licence terms. If not carefully analysed, this can lead to unexpected audit findings and significant cost exposure.
To avoid these pitfalls, organisations need to shift their focus from isolated use cases to a strategic AI architecture. Rather than letting each function build its own AI in isolation, companies should consider centralised AI governance: an approach that orchestrates intelligence across applications and data domains. This allows AI efforts to generate insights that span business functions and applications.
Central orchestration also supports unified data governance, consistent model monitoring, and compliance with regulatory requirements. It enables organisations to build reusable models that serve multiple purposes, reducing duplication and improving scalability. Most importantly, it gives the business control over its AI development, rather than leaving it to rely solely on what vendors choose to offer.
In today’s environment, where AI is often pursued for its novelty or its value to individual processes and functions, it’s important to pause and reflect. Developing AI without a plan may deliver short-term wins, but it will almost certainly lead to long-term challenges in cost, complexity, and control. Organisations must recognise that AI is not just a feature to be built, or switched on in an existing tool. It is a capability that needs to be planned, governed, and aligned with business objectives.
Therefore, AI should be a strategic capability, not a collection of disconnected experiments. It should grow with your organisation, not become a patchwork of features. The question is not whether to build AI capabilities in-house or to use vendors’ built-in AI solutions. It is how to choose the right mix so that AI capabilities scale, integrate, and deliver lasting value.
Kati Kolehmainen, Senior Manager, CFO Advisory