Computer vision has become one of AI’s most promising applications, combining ever-improving cameras with faster and smarter automated object recognition. During today’s Transform 2020 digital conference, Intel VP Brian McCarson spoke with VentureBeat CEO Matt Marshall about computer vision’s role in the growing industrial internet of things (IIoT) market. The conversation highlighted a particularly compelling emergent use case: hugely improved product defect detection that promises to make everything from computer screens to cars more reliable.
Manufacturers seeking to eliminate product defects haven’t historically lacked staff or defect screening expertise, McCarson said — they have been held back by limitations of the human eye. In modern consumer products, defects can be microscopic or near-microscopic, such as bad screen pixels or surface issues in aluminum car transmission components. While people are great at detecting motion and changes in patterns, they can’t always spot tiny details like these, so as computer vision evolved, Intel saw an opportunity.
Working with Alibaba as a cloud service provider, Intel developed a computer vision solution that improved a vehicle metal fabricator’s positive defect detection rate from roughly 20% to over 99%. In real-world terms, that’s the difference between missing 4 out of 5 defects and missing fewer than 1 in 100, a sea change that translates to substantially more reliable cars, at least for components screened with computer vision.
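To make the scale of that jump concrete, here is a minimal sketch of the arithmetic. The 20% and 99% detection rates come from the article; the batch size of 10,000 defective parts is a hypothetical figure chosen only for illustration.

```python
def missed_defects(total_defects: int, detection_rate: float) -> int:
    """Number of defective parts that pass inspection undetected."""
    return round(total_defects * (1 - detection_rate))

# Hypothetical batch of 10,000 defective parts, for scale:
before = missed_defects(10_000, 0.20)  # 20% detection: 8,000 slip through (4 in 5)
after = missed_defects(10_000, 0.99)   # 99% detection: 100 slip through (1 in 100)
print(before, after)  # → 8000 100
```

On the same batch, the upgrade cuts escaped defects by a factor of 80, which is why even a few-thousand-dollar add-on to a production line can pay for itself within days.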
McCarson called the solution both affordable and “very scalable,” as manufacturers can add it to a million-dollar production line for only a few thousand dollars — and without additional manufacturing changes. “So for literally a return on investment that’s measured in days,” he said, “[manufacturers] were able to go deploy what turned out to be one of the world’s most advanced defect quality control implementations.” Intel is now working with hundreds of additional factories on similar implementations. Adopters will be able to improve their production yields, cut product returns, and increase operating margins, all while reducing negative manufacturing and return-related impacts on the environment.
Another major boon for industrial IoT, McCarson explained, is the availability of open source software that helps companies without deep AI experience develop performant computer vision solutions. Intel’s distribution of OpenVINO, a free toolkit for optimizing and deploying neural network inference, includes ready-to-go visual inference models already adapted to various use cases; these give companies computer vision solutions that are 80-90% effective out of the box and can be tweaked to reach 96-97% performance.
McCarson suggested open source is part of an industry trend away from proprietary, walled-garden devices like BlackBerrys toward platforms whose foundations support future innovation. Perhaps unsurprisingly given the flexible rather than purpose-limited nature of Intel’s edge AI solutions, the company wants customers to think about future-proofing — buying the flexibility to adapt to upcoming AI needs — instead of just getting something good enough for today’s applications.
When Marshall asked about the computer vision market opportunity, McCarson described it as “pretty astounding.” Within the consumer products manufacturing industry, the industrial automation segment alone is a half-trillion-dollar annual business globally, and it is already proving open to adopting AI and computer vision to efficiently solve manufacturing problems that were nearly insurmountable just a few years ago. Over the next 2-5 years, McCarson expects a “very massive shift” toward both traditional time series data analytics and modern computer vision to detect defects, track inventory, and improve machine uptime, with edge-based computer vision and data analytics growing in tandem.
During Q&A, McCarson was asked about future trends in AI hardware, and replied that one big theme is offering hardware that’s ready for future change: Companies now want to seamlessly update hardware using software, rather than relying on the archaic practice of having to send out trucks to do updates. He also noted that AI is at this stage limited less by hardware performance than by the availability of models to perform certain tasks, suggesting that the onus is now on software developers to create models that make great use of available technology.