Freedom of choice? How recent Zoom AI policy changes betrayed consumer trust




Video conferencing and messaging provider Zoom is facing severe backlash for changes it quietly made to its Terms of Service (TOS) back in March related to AI — raising new questions about customer privacy, choice and trust. These questions apply to every company grappling with AI at a time of growing debate around how large language models (LLMs) are trained on individuals’ data, but they are particularly concerning for the reputation of a company like Zoom, which has become ubiquitous for everything from office meetings to remote school.

Yesterday, reports spread widely that Zoom had made changes to its TOS which clarified that the company can train AI on user data, with no way to opt out. The news appeared to begin with a post on X (formerly Twitter) from author Ted Gioia — which now has over 2 million views.

According to Katie Gardner, a partner at international law firm Gunderson Dettmer, it’s common for companies to frequently update their Terms of Service as their practices change, and some privacy regulations, such as the CCPA, require companies to update their Privacy Policies annually. “Companies need to notify users of material changes to their practices if they want the changes to be legally enforceable against them,” she told VentureBeat in a phone interview. “At least in the case of Zoom, if done quietly, it was likely because the change wasn’t material – it was just stating more explicitly something it already retained the rights to do.”

That said, she pointed out that tech companies are currently making these updates because they’re seeing backlash from regulators. “The methods by which companies are collecting consent for using user data for training purposes are targets of enhanced regulatory review,” she said, citing the FTC’s recently announced resolutions of actions against Ring and Amazon related to the transparency and accuracy of notices to users about the use of their data for training models.


“In addition to fines, the outcome in both was to require the companies to delete proprietary models – a penalty that will be meaningful for any company investing heavily into training their own models,” she said.

Zoom responded to the uproar this morning

This morning, in response to the uproar, Zoom posted on X, saying that “as part of our commitment to transparency and user control, we are providing clarity on our approach to two essential aspects of our services: Zoom’s AI features and customer content sharing for product improvement purposes. Our goal is to enable Zoom account owners and administrators to have control over these features and decisions, and we’re here to shed light on how we do that.”

In a linked blog post that seemed to only raise further confusion, Zoom said “to reiterate: we do not use audio, video, or chat content for training our models without customer consent.” However, the company’s highly complex, lengthy Terms of Service are difficult to decipher — a quick glance does not make the AI policies clear to any regular user.

In addition, Zoom’s own generative AI features are bewildering: For example, the company explained that it recently introduced Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose “on a free trial basis to enhance your Zoom experience.” These features offer automated meeting summaries and AI-powered chat composition. “Zoom account owners and administrators control whether to enable these AI features for their accounts,” the company wrote.

When those services are enabled, “you will also be presented with a transparent consent process for training our AI models using your customer content,” the company blog post says. “Your content is used solely to improve the performance and accuracy of these AI services. And even if you chose to share your data, it will not be used for training of any third-party models.”

However, the blog post does not clarify that the service is turned on by default with a small checkbox. When a call begins, other participants are notified that “Meeting Summary has been enabled.” The popup says: “The account owner may allow Zoom to access and use your inputs and AI-generated content for the purpose of providing the feature and for Zoom IQ product improvement, including model training.”

Participants can either click “Leave Meeting” or “Got it.” That means if users don’t leave the call, they automatically agree to allow Zoom to collect data to build and improve its AI — but is there really that kind of freedom of choice in a work meeting or a remote classroom?

In addition, the reality for the average end user, said Gardner, is that today’s web makes it all but impossible for most people to understand how companies are using their data, even when they are given a way to exercise the choices they are presented with. Yet with video and audio, especially in scenarios involving children, there may be even more consumer discomfort around the use of personal data.

“When it comes to U.S. regulation, there is a focus on risk scenarios that cause the most harm, such as to children,” she explained. “And video is an area that seems private, so this idea that other people are listening, it gives people more discomfort, for sure.”

Zoom is no stranger to AI controversy

Zoom is no stranger to controversies around the use of AI in its products. In April 2022, the company came under fire after saying it might soon include emotion AI features in its sales-targeted products. A nonprofit advocacy group, Fight for the Future, published an open letter to the company, calling Zoom’s possible offering a “major breach of user trust,” “inherently biased” and “a marketing gimmick.”

Over the past few months, Zoom has gone all in on generative AI. In March, it announced a partnership with OpenAI, and recently said it is teaming up with AI startup Anthropic to integrate Anthropic’s Claude AI assistant into Zoom’s productivity platform. The company has also made an investment of an undisclosed amount in Google-backed Anthropic through its global investment arm.

But the current controversy, which is going viral today not just across social media but also in mainstream media, comes at a particularly precarious time for Zoom’s business. The company benefited from the shift to remote work during the Covid-19 pandemic, but shares plummeted in late 2022 as people began to resume their normal routines and work commutes. Even Zoom itself has reportedly been telling employees to come back into the office. The last thing Zoom needs now is a backlash that further alienates users.

It is a conversation that, of course, goes far beyond Zoom to all tech companies and publishers: How will corporate America take advantage of AI while also holding onto customer trust, privacy and consent? And what can companies learn from what appears to be Zoom’s epic PR fail?

Companies may intend to minimize or mitigate the risk of regulatory scrutiny, said Gardner — but they should consider the current environment as well. “If you’re a company that is under the microscope, people are going to pay attention to these minor changes,” she said. “In this current environment, where everyone is very attuned to what companies are doing with user data, there’s this balance and this line that companies need to walk — between avoiding regulatory scrutiny and maintaining trust with their end users.”
