
Alexa, does the Echo Dot Kids protect children’s privacy?

A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.

The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the ecommerce giant is violating the Children’s Online Privacy Protection Act (Coppa) — including by failing to obtain proper consents for the use of kids’ data.

As with Amazon’s other Echo smart speakers, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is that it’s intended for children to use, which makes it subject to US privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which can be read in full via the group’s complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under Coppa to provide notice to parents and obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).

A number of other concerns are also being raised about Amazon’s device with the FTC.

Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.

There are differences in the software, though. In parallel Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).

A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family, and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).

But while the company was ploughing resources into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the Coppa complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.

Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third party services and skills.

Instead the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They are also objecting to how Amazon is obtaining parental consent — arguing its system for doing so is inadequate because it merely asks for a credit or debit card number to be inputted.

“It does not verify that the person ‘consenting’ is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult, because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite Coppa requiring that children’s data be held for no longer than is reasonably necessary.

They found that additional data (such as transcripts of audio recordings) was still retained even after audio recordings had been deleted. To remove that residue, a parent must contact Amazon customer service and explicitly request deletion of their child’s entire profile, which also removes the parent’s access to parental controls and the child’s access to content provided via FreeTime. The complaint therefore argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen to or read every recording of their child to understand what has been stored.

The complaint further highlights that children’s audio recordings can of course include sensitive personal details — such as if a child uses Alexa’s ‘remember’ feature to ask the AI to remember personal data such as their address and contact details, or personal health information like a food allergy.


The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a playdate and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of Coppa. In a statement a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.

At the time of writing the FTC had not responded to a request for comment on the complaint.

Over in Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied”, and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns the UK privacy regulator launched a consultation on a draft Code of Practice for age appropriate design last month, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults.

The UK government has also recently published a white paper setting out a policy plan to regulate internet content, which has a heavy focus on child safety.

Facebook accused of blocking wider efforts to study its ad platform

Facebook has been accused of blocking the ability of independent researchers to effectively study how political disinformation flows across its ad platform.

Adverts that the social network’s business is designed to monetize have — at the very least — the potential to influence people and push voters’ buttons, as the Cambridge Analytica Facebook data misuse scandal highlighted last year.

Since that story exploded into a major global scandal for Facebook the company has faced a chorus of calls for increased transparency and accountability from policymakers on both sides of the Atlantic.

It has responded with lashings of obfuscation, misdirection and worse.

Among Facebook’s less controversial efforts to counter the threat that disinformation poses to its business are what it bills “election security” initiatives, such as identity checks for political advertisers — even as these efforts have looked hopelessly flat-footed, patchy and piecemeal in the face of concerted attempts to use its tools to amplify disinformation in markets around the world.

Perhaps more significantly — under amped up political pressure — Facebook has launched a searchable ad archive. And access to Facebook ad data certainly has the potential to let external researchers hold the company’s claims to account.

But only if access is not equally flat-footed, patchy and piecemeal, with the risk that selective access to ad data ends up being just as controlled and manipulated as everything else on Facebook’s platform.

So far Facebook’s efforts on this front continue to attract criticism for falling way short.

“The opposite of what they claim to be doing…”

The company opened access to an ad archive API last month, via which it provides rate-limited access to a keyword search tool that lets researchers query historical ad data. (Researchers first need to pass an identity check process and agree to the Facebook developer platform terms of service before they can access the API.)
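For a sense of what that rate-limited keyword access looks like in practice, here is a minimal sketch of a query against the ad archive via the Graph API. The API version, field list, search term and placeholder token are illustrative assumptions based on Facebook’s published Graph API conventions, not confirmed details of any researcher’s setup:

```python
import requests

# Hypothetical placeholder: obtaining a real token requires passing
# Facebook's identity check and accepting its developer platform terms.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

resp = requests.get(
    "https://graph.facebook.com/v3.2/ads_archive",
    params={
        # Keyword search is mandatory: there is no "return everything" query,
        # which is exactly the comprehensiveness problem Mozilla flags.
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",
        "fields": "page_name,ad_creative_body,ad_delivery_start_time",
        "limit": 25,  # responses are paged, and queries are rate-limited
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("ad_creative_body", "")[:80])
```

Note what the sketch cannot do: there is no bulk download, and without stable unique identifiers for ads a researcher cannot even tell whether repeated keyword queries are returning the same ads twice.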

However a review of the tool by not-for-profit Mozilla rates the API as a lot of weak-sauce ‘transparency-washing’ — rather than a good faith attempt to support public interest research which could genuinely help quantify the societal costs of Facebook’s ad business.

“The fact is, the API doesn’t provide necessary data. And it is designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation,” it writes in a blog post where it argues that Facebook’s ad API meets just two out of five minimum standards it previously set out — backed by a group of sixty academics, hailing from research institutions including Oxford University, the University of Amsterdam, Vrije Universiteit Brussel, Stiftung Neue Verantwortung, and many more.

Instead of providing comprehensive political advertising content, as the experts argue a good open API must, Mozilla writes that “it’s impossible to determine if Facebook’s API is comprehensive, because it requires you to use keywords to search the database”.

“It does not provide you with all ad data and allow you to filter it down using specific criteria or filters, the way nearly all other online databases do. And since you cannot download data in bulk and ads in the API are not given a unique identifier, Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing),” it adds.

Facebook’s tool is also criticized for failing to provide targeting criteria and engagement information for ads — thereby making it impossible for researchers to understand what advertisers on its platform are paying the company to reach; as well as how effective (or otherwise) these Facebook ads might be.

This exact issue was raised with a number of Facebook executives by British parliamentarians last year, during the course of a multi-month investigation into online disinformation. At one point Facebook’s CTO was asked point blank whether the company would be providing ad targeting data as part of planned political ad transparency measures — only to provide a fuzzy answer.

Of course there are plenty of reasons why Facebook might be reluctant to enable truly independent outsiders to quantify the efficacy of political ads on its platform and therefore, by extension, its ad business.

Including, of course, the specific scandalous example of the Cambridge Analytica data heist itself. That heist was carried out by an academic, Dr Aleksandr Kogan, then attached to Cambridge University, who used his access to Facebook’s developer platform to deploy a quiz app designed to harvest user data without (most) people’s knowledge or consent, in order to sell the info to the disgraced digital campaign company (which worked on various U.S. campaigns, including the presidential campaigns of Ted Cruz and Donald Trump).

But that just highlights the scale of the problem of so much market power being concentrated in the hands of a single adtech giant which has zero incentives to voluntarily report accurate metrics about its true reach and power to influence the world’s 2BN+ Facebook users.

Add to that the fact that, in a typical crisis PR response to multiple bad headlines last year, Facebook repeatedly sought to paint Kogan as a rogue actor — suggesting he was not at all a representative sample of the developer activity on its platform.

So, by the same token, any effort by Facebook to tar genuine research as similarly risky rightly deserves a robust rebuttal. The historical actions of one individual shouldn’t be used as an excuse to shut the door to a respected research community.

“The current API design puts huge constraints on researchers, rather than allowing them to discover what is really happening on the platform,” Mozilla argues, suggesting the various limitations imposed by Facebook — including search rate limits — means it could take researchers “months” to evaluate ads in a particular region or on a certain topic.

Again, from Facebook’s point of view, there’s plenty to be gained by delaying the release of any more platform usage skeletons from its bulging historical data closet. (The ‘historical app audit’ it announced with much fanfare last year continues to trickle along at a disclosure pace of its own choosing.)

The two areas where Facebook’s API gets a tentative thumbs up from Mozilla are in providing access to up-to-date and historical data (the seven-year availability of the data is badged “pretty good”); and in the API being accessible to and shareable with the general public (at least once they’ve gone through Facebook’s identity confirmation process).

Though in both cases Mozilla also cautions it’s still possible that further blocking tactics might emerge — depending on how Facebook supports/constrains access going forward.

It does not look entirely coincidental that the criticism of Facebook’s API for being “inadequate” has landed on the same day that Facebook has pushed out publicity about opening up access to a database of URLs its users have linked to since 2017 — which is being made available to a select group of academics.

In that case, access goes to 60 researchers, drawn from 30 institutions, chosen by the U.S. Social Science Research Council.

Notably the Facebook-selected research dataset entirely skips past the 2016 U.S. presidential election, when Russian election propaganda infamously targeted hundreds of millions of U.S. Facebook users.

The UK’s 2016 Brexit vote is also not covered by the January 2017 onwards scope of the dataset.

Though Facebook does say it is “committed to advancing this important initiative”, suggesting it could expand the scope of the dataset and/or who can access it at some unspecified future time.

It also claims ‘privacy and security’ considerations are holding up efforts to release research data quicker.

“We understand many stakeholders are eager for data to be made available as quickly as possible,” it writes. “While we remain committed to advancing this important initiative, Facebook is also committed to taking the time necessary to incorporate the highest privacy protections and build a data infrastructure that provides data in a secure manner.”

In Europe, Facebook committed itself to supporting good faith, public interest research when it signed up to the European Commission’s Code of Practice on disinformation last year.

The EU-wide Code includes a specific commitment that platform signatories “empower the research community to monitor online disinformation through privacy-compliant access to the platforms’ data”, in addition to other actions such as tackling fake accounts and making political ads and issue based ads more transparent.

However here, too, Facebook appears to be using ‘privacy-compliance’ as an excuse to water down the level of transparency that it’s offering to external researchers.

TechCrunch understands that, in private, Facebook has responded to concerns raised about its ad API’s limits by saying it cannot provide researchers with fuller data about ads — including the targeting criteria for ads — because doing so would violate its commitments under the EU’s General Data Protection Regulation (GDPR) framework.

That argument is of course pure ‘cakeism’. Aka Facebook is trying to have its cake and eat it where privacy and data protection is concerned.

In plainer English, Facebook is trying to use European privacy regulation to shield its business from deeper and more meaningful scrutiny. Yet this is the very same company — and here comes the richly fudgy cakeism — that elsewhere contends personal data its platform pervasively harvests on users’ interests is not personal data. (In that case Facebook has also been found allowing sensitive inferred data to be used for targeting ads — which experts suggest violates the GDPR.)

So, tl;dr, Facebook can be found seizing upon privacy regulation when it suits its business interests to do so — i.e. to try to avoid the level of transparency necessary for external researchers to evaluate the impact its ad platform and business has on wider society and democracy.

Yet it argues against the GDPR when the privacy regulation stands in the way of monetizing users’ eyeballs by stuffing them with intrusive ads targeted by pervasive surveillance of everyone’s interests.

Such contradictions have not at all escaped privacy experts.

“The GDPR in practice — not just Facebook’s usual weak interpretation of it — does not stop organisations from publishing aggregate information, such as which demographics or geographic areas saw or were targeted for certain adverts, where such data is not fine-grained enough to pick an individual out,” says Michael Veale, a research fellow at the Alan Turing Institute — and one of ten researchers who co-wrote the Mozilla-backed guidelines for what makes an effective ad API.

“Facebook would require a lawful basis to do the aggregation for the purpose of publishing, which would not be difficult, as providing data to enable public scrutiny of the legality and ethics of data processing is a legitimate interest if I have ever seen one,” he also tells us. “Facebook constantly reuse data for different and unclearly related purposes, and so claiming they could legally not reuse data to put their own activities in the spotlight is, frankly, pathetic.

“Statistical agencies have long been familiar with techniques such as differential privacy which stop aggregated information leaking information about specific individuals. Many differential privacy researchers already work at Facebook, so the expertise is clearly there.”

“It seems more likely that Facebook doesn’t want to release information on targeting as it would likely embarrass [it] and their customers,” Veale adds. “It is also possible that Facebook has confidentiality agreements with specific advertisers who may be caught red-handed for practices that go beyond public expectations. Data protection law isn’t blocking the disinfecting light of transparency, Facebook is.”
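Veale’s reference to differential privacy is worth unpacking. The idea is to add calibrated random noise to an aggregate statistic before publishing it, so the published figure stays useful while revealing almost nothing about any one individual. Here is a minimal sketch of the Laplace mechanism applied to a single count; the reach figure is invented for illustration and nothing here reflects Facebook’s internal tooling:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Return an epsilon-differentially-private version of a count.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    suffices for this single query.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical aggregate: users in one demographic bucket reached by an ad.
true_reach = 48_213                  # illustrative number, not real data
print(round(dp_count(true_reach)))   # e.g. 48211 -- noisy, safe to publish
```

With noise on that scale the published total is off by only a handful of people, which is Veale’s point: aggregate targeting statistics could be released without picking any individual out.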

Asked about the URL database that Facebook has released to selected researchers today, Veale says it’s a welcome step but points to further limitations.

“It’s a good thing that Facebook is starting to work more openly on research questions, particularly those which might point to problematic use of this platform. The initial cohort appears to be geographically diverse, which is refreshing — although appears to lack any academics from Indian universities, far and away Facebook’s largest userbase,” he tells us.

“Time will tell whether this limited dataset will later expand to other issues, and how much researchers are expected to moderate their findings if they hope for continued amicable engagement.”

“It’s very possible for Facebook to effectively cherry-pick datasets to try to avoid issues they know exist, but you also cannot start building a collaborative process on all fronts and issues. Time will tell how open the multinational wishes to be,” Veale adds.

We’ve reached out to Facebook for comment on the criticism of its ad archive API.

Facebook hit with three privacy investigations in a single day

Third time lucky — unless you’re Facebook.

The social networking giant was hit Thursday by a trio of investigations over its privacy practices following a particularly tumultuous month of security lapses and privacy violations — the latest in a string of embarrassing and damaging breaches at the company, much of it its own doing.

First came a probe by the Irish data protection authority looking into the breach of “hundreds of millions” of Facebook and Instagram user passwords that were stored in plaintext on its servers. The company will be investigated under the European GDPR data protection law, which allows fines of up to four percent of global annual revenue for the infringing year — a sum that would run into billions of dollars for Facebook.

Then, Canadian authorities confirmed that the beleaguered social networking giant broke its strict privacy laws, reports TechCrunch’s Natasha Lomas. The Office of the Privacy Commissioner of Canada said it plans to take Facebook to federal court to force the company to correct its “serious contraventions” of Canadian privacy law. The findings came in the aftermath of the Cambridge Analytica scandal, which vacuumed up more than 600,000 profiles of Canadian citizens.

Lastly, and slightly closer to home, Facebook was hit by its third investigation — this time by New York attorney general Letitia James. The state chief law enforcer is looking into the recent “unauthorized collection” of 1.5 million user email addresses, which Facebook used for profile verification, but inadvertently also scraped their contact lists.

“It is time Facebook is held accountable for how it handles consumers’ personal information,” said James in a statement. “Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.”

Facebook spokesperson Jay Nancarrow said the company is “in touch with the New York State attorney general’s office and are responding to their questions on this matter.”

You might think a trifecta of terrible news would be crushing for the social network. Alas, its stock is up close to 6 percent at market close, adding some $40 billion to its value.

Facebook broke Canadian privacy law, joint probe finds

The latest damning assessment of Facebook’s trampling of user privacy comes from the privacy commissioners of Canada and British Columbia — which have just published the results of an investigation kicked off in the wake of the Cambridge Analytica data misuse scandal last year.

They found the social network company committed serious contraventions of local laws and failed generally to take responsibility for protecting the personal information of Canadians.

Facebook has disputed the findings and refused to implement the watchdogs’ recommendations — including refusing to voluntarily submit to audits of its privacy policies and practices over the next five years.

The Office of the Privacy Commissioner of Canada said it therefore plans to take Facebook to Federal Court to seek an order to force the company to correct its deficient privacy practices.

Both watchdogs have also called for local privacy laws to be beefed up so that regulators have stronger sanctioning powers to protect the public’s interest.

“Facebook’s refusal to act responsibly is deeply troubling given the vast amount of sensitive personal information users have entrusted to this company,” said Daniel Therrien, privacy commissioner of Canada, in a statement. “Their privacy framework was empty, and their vague terms were so elastic that they were not meaningful for privacy protection.

“The stark contradiction between Facebook’s public promises to mend its ways on privacy and its refusal to address the serious problems we’ve identified – or even acknowledge that it broke the law – is extremely concerning.”

“Facebook has spent more than a decade expressing contrition for its actions and avowing its commitment to people’s privacy. But when it comes to taking concrete actions needed to fix transgressions they demonstrate disregard,” added B.C. information and privacy commissioner, Michael McEvoy, in another supporting statement. “The ability to levy meaningful fines would be an important starting point.”

“It is untenable that organizations are allowed to reject my office’s legal findings as mere opinions,” added Therrien.

We’ve reached out to Facebook for comment.

The privacy watchdogs combined their efforts to investigate Facebook and the Cambridge Analytica-linked data company AggregateIQ last year — setting out to determine whether the companies had complied with local privacy laws.

More than 600,000 Canadians had their data extracted from Facebook via an app whose developer was working with Cambridge Analytica to try to build profiles of U.S. voters.

Among the privacy-related deficiencies the two watchdogs are attaching to Facebook’s business are what they dub “superficial and ineffective safeguards” of user data that enabled unauthorized access by third party apps on its platform; a failure to obtain meaningful consent for the use of users’ friends’ data; a lack of proper oversight of the privacy practices of apps using Facebook’s platform, with a reliance on contractual terms and “wholly inadequate” monitoring of compliance.

All familiar stuff if you were following the twists and turns of the Cambridge Analytica data misuse saga last year. (Aleksandr Kogan, the third party app developer at the centre of the Cambridge Analytica data misuse scandal, also accused Facebook of not having a valid developer policy.)

“A basic principle of privacy laws is that organizations are responsible for the personal information under their control. Instead, Facebook attempted to shift responsibility for protecting personal information to the apps on its platform, as well as to users themselves,” the watchdogs write, further accusing Facebook of an overall lack of responsibility for the personal data of users.

They also point out that their findings are of particular concern given an earlier 2009 investigation of Facebook by the federal commissioner’s office — which found similar contraventions with respect to Facebook seeking overly broad, uninformed consent for disclosures of personal information to third-party apps, as well as inadequate monitoring to protect against unauthorized data access by apps.

“If Facebook had implemented the 2009 investigation’s recommendations meaningfully, the risk of unauthorized access and use of Canadians’ personal information by third party apps could have been avoided or significantly mitigated,” they add.

(Oh hai, deja vu… )

The commissioners are calling for not only the power to levy financial penalties on companies that break privacy laws — as equivalent watchdogs in Europe already can — but also broader authority to inspect the practices of organizations to independently confirm privacy laws are being respected.

“This measure would be in alignment with the powers that exist in the U.K. and several other countries,” they note.

“Giving the federal Commissioner order-making powers would also ensure that his findings and remedial measures are binding on organizations that refuse to comply with the law,” they add.

The UK’s data protection watchdog levied the maximum possible fine on Facebook last year — although it’s ‘just’ £500,000 (and Facebook is appealing, claiming there’s no evidence that UK users’ data was misused).

But an updated pan-EU privacy framework, GDPR, which came into force after the Cambridge Analytica-related data misuse occurred, has massively upgraded the maximum possible fines that European data watchdogs can hand down for privacy violations. (And the Irish DPC, the lead privacy regulator for Facebook’s European business, has a very long list of open probes against Facebook and Facebook-owned platforms. So watch that space.)

Earlier this year a U.K. parliamentary committee, which spent multiple months last year investigating Facebook and Cambridge Analytica as part of a wider inquiry into online disinformation, called for Facebook’s use of user data to be investigated by the privacy watchdog.

The committee also urged the UK’s Competition and Markets Authority to undertake an antitrust probe of Facebook’s business practices, and recommended that the social media ad market face a comprehensive audit to address concerns about its lack of transparency.

Facebook agrees it will be liable for future Cambridge Analyticas

Facebook has agreed to a number of major terms of service changes in the fallout of the Cambridge Analytica scandal.


Some big changes are coming to Facebook’s terms of service in the fallout of the Cambridge Analytica scandal.

The European Commission announced this week that the world’s biggest social networking website has agreed to a few major changes to its terms and conditions. Facebook confirmed that these changes would go into effect for users worldwide and not just in the EU.

Most of these Facebook terms changes stem directly from issues which arose during the Cambridge Analytica scandal, where a third party company inappropriately mined Facebook user data.

One of the biggest changes is an about-face for the social media platform. Facebook will amend its terms on “limitation of liability” when it comes to future Cambridge Analytica-esque issues. The company “now acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties.” Previously, Facebook’s position was that it was not liable for third-parties misusing its platform.

Another major change is that Facebook will update its terms to explicitly say “that it does not charge users for its services in return for users’ agreement to share their data and to be exposed to commercial advertisements.” While Facebook does already disclose to its users the reason they’re seeing advertisements, the company has agreed to now “clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.” Basically, it’s going to make it clear as day to users that the site is free because they are the product.

Facebook has also agreed to update its data retention policies. When a Facebook user deletes their content, Facebook will now store the removed content for no more than an additional 90 days for “technical reasons” or if law enforcement makes a request.

Additionally, Facebook “amended its power to unilaterally change terms and conditions by limiting it to cases where the changes are reasonable also taking into account the interest of the consumer” and has amended “the language clarifying the right to appeal of users when their content has been removed.”

“Today Facebook finally shows commitment to more transparency and straightforward language in its terms of use,” said EU Commissioner for Justice, Consumers and Gender Equality Věra Jourová. “A company that wants to restore consumers’ trust after the Facebook/Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data.”

“Now, users will clearly understand that their data is used by the social network to sell targeted ads,” she said.

The European Commission expects Facebook to implement these changes by June and “will closely monitor the implementation.” The Commission has also reached out to other social networks, such as Twitter, seeking similar terms of service updates.
