With the recent explosion of innovation in the AI industry, companies are facing an overwhelming choice of models, AI chat/search services, automation agents, and vendors. These new technologies present compelling investment cases driven by the potential for incredible improvements in productivity and customer service, and as a result, adoption is taking off at breakneck speed!
Part of the promise of Generative AI is that anyone can create new content, automate using agents, or write software code, so it shouldn’t be a shock that many people across the enterprise are making use of these tools. A recent poll by Gallup shows that use of AI tools in the workplace nearly doubled in the last 12 months, with 40% of white-collar workers using AI regularly, including 8% of workers who use it daily. Rapid adoption by individuals and at the departmental level is, however, creating a new challenge of AI Fragmentation, where the speed of organic adoption of AI tools introduces new risks around the quality control, consistency, and accuracy of the responses AIs provide.
Without centralized oversight, this fragmentation can produce inconsistent outputs, weakened quality control, and inaccuracies in AI-generated responses. As AI becomes more embedded in daily workflows, those failures can undermine the very productivity and customer service gains that drove the initial enthusiasm for adoption.
Does the proliferation and variety of AI tools really matter?
Putting aside the governance and security concerns of unchecked adoption, this fragmentation presents additional challenges for IT teams driving AI transformation initiatives.
As companies start to adopt agents into their daily workflows, businesses need to recognise, monitor and mitigate quality control issues that could introduce commercial, security or reputational risks.
Blurred boundaries
Few people will disagree with the old adage “choose the right tool for the job”, but with AI vendors competing in a land grab for emerging use cases, the boundaries of which tool to use for which job are getting blurred.
In many ways, “citizen developers” are spearheading the uptake of AI tools, incorporating chat tools into how they complete tasks and day-to-day work. This is happening both with IT-validated, company-supplied tools and through “dark adoption”, where personal accounts and free-to-use tools are leveraged outside of IT governance, introducing both quality control and data leakage risks.
With AI vendors in “land grab” mode, they are all too keen to facilitate the creation and sharing of chat agents, which individuals can build by uploading a couple of files or pointing to a website or two, then sharing the link with colleagues.
However, these consumer-focused tools are not always the right solution for use cases where responses are sent to the business’s customers, where mistakes can create legal liabilities or CSAT issues.
Quoting pricing or answering product support questions are solid use cases for AI chatbots, but consumer tools that cap the volume of content that can be uploaded to ground responses, or that lack the means to benchmark and monitor response accuracy, are inherently unsuitable for enterprise-wide use.
“A key AI concern is silver bullet workarounds that become production dependencies putting the company at risk.”
Shelly Seewald, CIO – Tungsten Automation
Garbage in, garbage out
One way to think about Generative AI is that it is a giant, intelligent text generation engine. The text you get back is a function of the prompt you provide, the model behind the service, and the data used to ground the response. This in turn means the quality and appropriateness of those inputs directly affect the quality and appropriateness of the response from the model.
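To make that relationship concrete, here is a minimal, vendor-agnostic sketch in Python. The `call_model` wrapper is a hypothetical stand-in for whichever approved chat API a team uses; nothing here refers to a specific product.

```python
def call_model(model: str, messages: list[dict]) -> str:
    """Hypothetical wrapper around whichever approved chat-completion API a team uses."""
    raise NotImplementedError("Wire this to your approved AI service")


def generate_answer(prompt: str, model: str, grounding_docs: list[str]) -> str:
    """The response is a function of three things: the prompt, the model, and the grounding data."""
    context = "\n\n".join(grounding_docs)  # approved content only
    messages = [
        {"role": "system", "content": "Answer using only the supplied context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {prompt}"},
    ]
    return call_model(model, messages)
```

Change any one of those three inputs and the output changes with it, which is exactly why ungoverned variation across tools and teams matters.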
For ad-hoc interactions with an AI, where a user is having a turn-based conversation, the “prompting” process is iterative – if a user doesn’t get the desired response, they refine what they are asking the AI to do, gradually building to the outcome they want. By definition, this process is “human in the loop” (HITL) so the AI is getting constant feedback to improve results.
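As an illustration of that feedback loop, the sketch below (reusing the hypothetical `generate_answer` helper from above) keeps a human reviewing and refining every turn until the answer is accepted.

```python
def refine_until_accepted(model: str, grounding_docs: list[str]) -> str:
    """Keep generating, reviewing and refining until the human accepts the answer."""
    prompt = input("What do you need? ")
    while True:
        answer = generate_answer(prompt, model, grounding_docs)  # hypothetical helper from the earlier sketch
        print(answer)
        feedback = input("Accept this answer? (y / describe what to change): ")
        if feedback.strip().lower() == "y":
            return answer
        # The human's correction feeds straight into the next prompt.
        prompt = f"{prompt}\n\nRevise the previous answer: {feedback}"
```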
However, with agents created by citizen developers, the exact way the system has been prompted, the model behind the service, and the appropriateness and amount of the data supplied will determine how effective the agent is at fulfilling its intended purpose.
Small changes in any of these variables can have big implications for the trustworthiness and accuracy of the results, which can mean the difference between the success and failure of an AI project. In fact, Gartner predicts that 60% of AI projects will be abandoned due to a lack of suitable data.
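One way to catch that drift is a simple regression check: re-run a small “golden” set of questions whenever the prompt, model or grounding data changes. The sketch below again reuses the hypothetical `generate_answer` helper; the questions, answers and pass threshold are placeholders, not real data.

```python
# Illustrative golden set; every value here is a placeholder.
golden_set = [
    {"question": "What is the list price of Product A?", "expected": "$120 per seat"},
    {"question": "Which regions does support cover?", "expected": "EMEA and North America"},
]


def accuracy(model: str, grounding_docs: list[str]) -> float:
    """Fraction of golden answers the agent reproduces (naive substring check)."""
    hits = 0
    for case in golden_set:
        answer = generate_answer(case["question"], model, grounding_docs)
        hits += case["expected"].lower() in answer.lower()
    return hits / len(golden_set)


# Run whenever the prompt, model or grounding data changes.
score = accuracy("approved-model-v1", ["...approved pricing and support docs..."])
assert score >= 0.9, f"Accuracy fell to {score:.0%}; review the prompt, model or data"
```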
The answer to addressing AI fragmentation isn’t necessarily to have fewer tools. Making sure every model and agent is grounded in the same approved content and data, with consistent policies and monitoring, goes a long way towards addressing quality control concerns without compromising flexibility or choice. When assistants pull from a single, continuously updated source of truth, with permissions, versioning, and quality checks, teams can move fast without risking accuracy, compliance, or brand trust. Standardize the source and let tool choice vary.
Make this practical: centralize where approved content and policies live; sync it to systems of record; enforce access controls and lineage; surface citations; and measure answer quality across channels and use cases. Platforms like TotalAgility can orchestrate multiple models and agents around that shared content and governance, reducing fragmentation and enabling AI that’s reliable, consistent, trustworthy and aligned to business goals.
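To show what a shared source of truth might look like in code, here is a deliberately simplified sketch of a permissioned, versioned content store that any model or agent can ground on and cite. The class and field names are illustrative assumptions, not the API of TotalAgility or any other product, and it again relies on the hypothetical `generate_answer` helper from the earlier sketch.

```python
from dataclasses import dataclass, field


@dataclass
class ApprovedDocument:
    # Illustrative fields: identifier, version, body text, and who may see it.
    doc_id: str
    version: int
    content: str
    allowed_roles: set[str] = field(default_factory=set)


class ContentStore:
    """A central, permissioned, versioned store of approved content."""

    def __init__(self) -> None:
        self._docs: dict[str, ApprovedDocument] = {}

    def publish(self, doc: ApprovedDocument) -> None:
        current = self._docs.get(doc.doc_id)
        if current and doc.version <= current.version:
            raise ValueError("A new version must supersede the published one")
        self._docs[doc.doc_id] = doc

    def retrieve(self, role: str) -> list[ApprovedDocument]:
        """Only content the requesting role is allowed to see ever reaches a model."""
        return [d for d in self._docs.values() if role in d.allowed_roles]


def answer_with_citations(question: str, role: str, store: ContentStore, model: str) -> dict:
    """Every agent, whichever vendor supplies it, grounds on the same store and cites it."""
    docs = store.retrieve(role)
    answer = generate_answer(question, model, [d.content for d in docs])
    return {"answer": answer, "citations": [f"{d.doc_id}@v{d.version}" for d in docs]}
```

The point of the design is that vendors and models can vary, but permissions, versions and citations always flow from the same governed store.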