Cost and model complexity remain barriers to enterprise AI, IBM finds
Exclusive: New report from IBM reveals concerns for enterprise AI usage and provides guidance on a path forward.
There is no one large language model (LLM) to rule them all, at least not according to enterprise IT leaders surveyed by IBM.
That finding is part of a new report released today by the IBM Institute for Business Value, titled “The CEO’s Guide to Generative AI: AI Model Optimization.” The report is based on a survey of U.S.-based executives conducted in collaboration with Oxford Economics. According to IBM, the report aims to provide CEOs with actionable insights to make informed decisions about AI implementation and optimization within their organizations. It also provides its fair share of interesting views on how enterprise AI adoption is actually rolling out in the real world.
Key findings from the report include:
- Model specialization: The study debunks the myth of a universal AI model, emphasizing the need for task-specific model selection.
- Model diversity: Organizations currently use an average of 11 different AI models and project a 50% increase within three years.
- Cost barriers: 63% of executives cite model cost as the primary obstacle to generative AI adoption.
- Model complexity: 58% cited model complexity as a top concern.
- Optimization techniques: Fine-tuning and prompt engineering can improve model accuracy by 25%, yet only 42% of executives consistently employ these methods.
- Open model growth: Enterprises expect to increase their adoption of open models by 63% over the next three years, outpacing other model types.
“From what I see, enterprise technology leaders are very well educated about the types of models available today and understand that for their specific use cases, each model would have their strengths and limitations,” Shobhit Varshney, VP and senior partner at IBM Consulting, told VentureBeat in an exclusive interview. “But other C-suite leaders are still catching up and learning what LLMs can do and can’t do, and generally think of one large gen AI model that can handle different tasks.”
How enterprises can optimize AI cost efficiency
Cost is always a concern for any enterprise IT effort, and that’s certainly the case with gen AI models.
Varshney noted that many factors affect the cost efficiency of enterprise AI models. He explained that enterprises can host models internally, paying for the underlying compute and storage, or have cloud providers host the models, typically charging based on input and output tokens consumed.
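To make the trade-off concrete, here is a minimal sketch comparing the two billing models Varshney describes. All prices are hypothetical placeholders; actual per-token rates and GPU costs vary widely by provider and model.

```python
def api_cost(input_tokens, output_tokens,
             price_in_per_m=3.00, price_out_per_m=15.00):
    """Cost of a cloud-hosted model billed per million tokens (illustrative rates)."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

def self_hosted_cost(gpu_hours, gpu_hourly_rate=2.50):
    """Cost of hosting a model internally on metered compute (illustrative rate)."""
    return gpu_hours * gpu_hourly_rate

# Example workload: 10M input tokens and 2M output tokens per month
monthly_api = api_cost(10_000_000, 2_000_000)   # 30.00 + 30.00 = 60.00
# vs. one dedicated GPU running around the clock for a 30-day month
monthly_gpu = self_hosted_cost(24 * 30)         # 1800.00
```

The crossover point depends entirely on volume: at low usage, per-token billing is far cheaper, while sustained high-volume workloads can favor self-hosting.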
The report advocates for a nuanced approach. The recommendation is to deploy large models for complex, high-stakes tasks requiring broad knowledge and high accuracy. Enterprises should then consider utilizing niche models for specialized, efficiency-critical applications.
“Enterprises can get great performance out of the box from larger models but could also invest a bit in fine-tuning a small model to get to similar performance,” Varshney said. “Before embarking on their gen AI use case, enterprises need to quantify the business impact that use case would deliver and the incremental cost of leveraging the LLM vs. other traditional AI alternatives.”
Why open models matter for enterprise AI deployment
A key finding in the study is a desire by most enterprise IT leaders to use open models rather than closed models for gen AI.
That finding isn’t all that surprising, given the forward momentum and progress of open models. With Meta’s recent release of Llama 3.1 and Mistral’s Large 2, benchmarks now place open models ahead of proprietary rivals.
Varshney highlighted the value of community and security when it comes to open models for enterprise AI deployment.
“With open, you get a wider community to review and fortify AI systems,” he said. “Enterprises can adapt these models to their specific domain, data and use cases.”
While enterprises increasingly prefer open models, Varshney noted that companies should start with an AI strategy, not the models.
He explained that IBM Consulting helps its clients look across the enterprise and determine the processes and use cases where AI can have the biggest impact — customer service, IT operations and back office processes like HR and supply chain are some of the best places to start. Once a use case is prioritized, IBM can break the workflow down into steps and surgically insert the right technology for the task, whether it’s automation, traditional AI or generative AI.
“If generative AI is the right technology for the task, you have to look at a variety of factors and constraints to help you choose the right model, like the task complexity, cost envelope, how accurate it needs to be, latency of response, auditability for compliance, context window,” he said. “You fit the model to the task and the constraints of the business process itself, and overall you’ll have the right mix of models for your AI strategy.”
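The fit-the-model-to-the-task idea Varshney outlines can be sketched as a simple constraint filter. The model entries, accuracy figures, and prices below are entirely hypothetical, chosen only to illustrate the selection logic.

```python
# Illustrative candidate models -- names and numbers are invented.
MODELS = [
    {"name": "large-general", "accuracy": 0.92, "cost_per_1k": 0.030,
     "latency_ms": 900, "context_window": 128_000},
    {"name": "small-tuned",   "accuracy": 0.88, "cost_per_1k": 0.002,
     "latency_ms": 150, "context_window": 16_000},
    {"name": "niche-domain",  "accuracy": 0.90, "cost_per_1k": 0.005,
     "latency_ms": 300, "context_window": 32_000},
]

def fit_model(min_accuracy, max_cost_per_1k, max_latency_ms, min_context):
    """Return the cheapest model meeting every task constraint, or None."""
    candidates = [
        m for m in MODELS
        if m["accuracy"] >= min_accuracy
        and m["cost_per_1k"] <= max_cost_per_1k
        and m["latency_ms"] <= max_latency_ms
        and m["context_window"] >= min_context
    ]
    return min(candidates, key=lambda m: m["cost_per_1k"], default=None)

# A latency-sensitive customer-service task that tolerates slightly lower accuracy:
choice = fit_model(min_accuracy=0.85, max_cost_per_1k=0.01,
                   max_latency_ms=500, min_context=8_000)
```

In this toy run the large general-purpose model is filtered out on cost and latency, and the cheapest remaining candidate wins, which mirrors the report's point that the right mix of models emerges from the constraints of each business process rather than from a single model choice.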