How to find the business value in AI and ML
There’s no doubt that, when applied effectively, machine learning (ML) and artificial intelligence (AI) have proven potential to deliver significant value and cutting-edge technological innovation.
But many organizations are struggling with the “effectively” part, according to a new survey.
Although businesses are increasingly undertaking initiatives to leverage ML and AI, many tools and projects lack appropriate resources, are far less productive than they should be, lag in deployment and, more often than not, fail or are abandoned.
In short, business value is rarely captured – and very often falls short of expectations – because significant time, resources and budgets are being wasted, according to a 2021 survey of ML practitioners, “Too Much Friction, Too Little ML.”
“Building AI is hard,” said Gideon Mendels, CEO and co-founder of Comet, the enterprise ML development platform company that commissioned the survey. “ML is often a slow, iterative process with many potential pitfalls and moving parts. Adding to that challenge, the tools and processes for ML development are still being developed. Most companies are still trying to figure out their processes and stack.”
More than 500 enterprise ML practitioners across the U.S. took part in the online survey, which Comet performed with research company Censuswide. The survey's questions about ML development experiences, and about the factors that affect teams' ability to deliver expected business value, revealed that many tools and processes are very often "nascent, disconnected, and complex," according to Mendels.
Meeting the potential of ML and AI
“There has been so much enthusiasm around AI, and ML specifically, over the past several years based on its potential, but the realities of generating experiments and deploying models have often fallen well short of expectations,” said Mendels. “We wanted to look deeper into where the friction lies so that issues can be addressed.”
Notably, 68% of respondents said they scrapped anywhere from 40% to 80% of their experiments.
As such, there is a serious lag in model deployment:
- Just 6% of surveyed teams reported being able to make a model live in less than 30 days.
- 43% said they required up to three months to deploy a single ML project.
- 47% said they required four to six months to deploy a single ML project.
This was due to breakdown and mismanagement of data science lifecycles beyond the normal iterative process of experimentation. Reported impediments included lack of infrastructure, API integration errors, reproducibility failures, and debugging failures.
It’s true that running, adjusting and re-running experiments is integral to the model development process, Mendels said – this can involve changing the model itself, tweaking its hyperparameters, using different datasets, or changing code to evaluate how each change affects the algorithm.
“All these changes happen repeatedly, sometimes with only minute differences each time,” he said. Yet this integral process can make it difficult to determine which experiments and parameters produce which results, whether that has to do with runtime environments, configuration files, data versions, or a multitude of other factors.
Poor experiment management can further exacerbate this because results can’t be reproduced accurately or consistently. “It can throw an entire project off the rails, wasting countless hours of a team’s work,” Mendels said.
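The bookkeeping Mendels describes can be sketched in plain Python. This is a toy stand-in for a tracking platform, not Comet's actual API: the idea is simply that every run records its hyperparameters, a fingerprint of the data it saw, and its metrics, so any result can later be traced back to the exact configuration that produced it.

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical log file for this illustration.
LOG_FILE = Path("experiments.jsonl")
LOG_FILE.unlink(missing_ok=True)  # start fresh for a deterministic demo

def data_fingerprint(rows):
    """Hash the dataset so each run is tied to the exact data version it saw."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def log_run(params, metrics, rows):
    """Append one experiment record; JSON Lines keeps runs easy to compare."""
    record = {
        "timestamp": time.time(),
        "params": params,
        "data_version": data_fingerprint(rows),
        "metrics": metrics,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def best_run(metric="accuracy"):
    """Replay the log to find which configuration produced the best result."""
    runs = [json.loads(line) for line in LOG_FILE.read_text().splitlines()]
    return max(runs, key=lambda r: r["metrics"][metric])

# Two hypothetical runs that differ only in learning rate:
data = [{"x": 1, "y": 0}, {"x": 2, "y": 1}]
log_run({"lr": 0.1, "epochs": 10}, {"accuracy": 0.81}, data)
log_run({"lr": 0.01, "epochs": 10}, {"accuracy": 0.86}, data)
print(best_run()["params"])  # the winning hyperparameters
```

Even this minimal version shows why manual tracking breaks down: without the data fingerprint and parameter record written at run time, the question "which settings produced that number?" becomes unanswerable once runs differ only minutely.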
Meanwhile, once models were deployed, nearly one-quarter of them failed in the real world at more than half (56.5%) of the companies surveyed.
One reason for all this is that budgets are “woefully inadequate”: 88% of respondents have an annual budget of less than $75,000 for ML tools and infrastructure.
Manual and ML don’t mix
Without the right financial support, ML teams must track experiments manually: 58% of respondents reported doing so. This in turn places enormous strain on workers, creates challenges for team collaboration and model lineage tracking, causes projects to take far longer to complete, hinders model auditability, and leads to unintentional mistakes, Mendels pointed out.
All this said, companies are not intentionally withholding budgets or misallocating ML resources: 63% of respondents said their organizations would increase ML budgets in 2022. Yet many still “don’t know what to do” with that funding.
“ML is a fairly new paradigm and as such companies are still learning what is required to realize ROI,” said Mendels. Many companies first focus on recruiting talent – then preparing the right datasets. Yet substantial investment in correct infrastructure is critical, he said.
Before companies allocate more money and resources to ML programs, they must first address core operational issues and consider extensibility and customizability – this is the only way they will see positive ROI, Mendels said. If teams are maxed out and struggling with visibility, reproducibility and cost-efficiency, they will struggle to add models, experiments and deployments.
“If an organization is using ML, they will achieve more value – faster – by taking a closer look at their tools and processes and budgeting appropriately for ML development,” Mendels said. “The best way for businesses to be productive with their AI initiatives is to apply people, processes, and tools strategically across the ML lifecycle.”
Data science teams can improve efficiency and build models faster with platforms such as Comet’s, Mendels said. The New York City-headquartered company’s platform manages and optimizes the entire ML development workflow, from early experimentation to production. It offers both standalone experiment tracking and model production monitoring, and it can run on any infrastructure and within existing software and data stacks.
The company supports a community of tens of thousands of users and academic teams who use its platform for free, and some of its high-profile enterprise customers include Ancestry, Cepsa, Etsy, Uber and Zappos.
Ultimately, Mendels emphasized that tools for building ML have evolved dramatically in recent years, and the field continues to expand and improve to help solve the problems identified in the survey.
“Leading edge companies that have implemented modern AI development platforms are realizing the benefits, full potential, and value from their machine learning initiatives,” he said, “which is quite exciting.”