All posts in “regulation”

Why commerce companies are the advertising players to watch in a privacy-centric world

The unchecked digital land grab for consumers’ personal data that has been going on for more than a decade is coming to an end, and the dominoes have begun to fall when it comes to the regulation of consumer privacy and data security.

We’re witnessing the beginning of a sweeping upheaval in how companies are allowed to obtain, process, manage, use and sell consumer data, and the implications for the digital ad competitive landscape are massive.

Against the backdrop of evolving privacy expectations and requirements, we’re seeing the rise of a new class of digital advertising player: consumer-facing apps and commerce platforms. These commerce companies are emerging as the most likely beneficiaries of this new regulatory privacy landscape — and we’re not just talking about e-commerce giants like Amazon.

Traditional commerce companies like eBay, Target and Walmart have publicly spoken about advertising as a major focus area for growth, but even companies like Starbucks and Uber have an edge in consumer data consent and, thus, an edge over incumbent media players in the fight for ad revenues.

Tectonic regulatory shifts

Image via Getty Images / alashi

By now, most executives, investors and entrepreneurs are aware of the growing acronym soup of privacy regulation, the two most prominent ingredients being the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act).

The startups creating the future of RegTech and financial services

Technology has been used to manage regulatory risk since the advent of the ledger book (or the Bloomberg terminal, depending on your reference point). However, the cost-consciousness internalized by banks during the 2008 financial crisis combined with more robust methods of analyzing large datasets has spurred innovation and increased efficiency by automating tasks that previously required manual reviews and other labor-intensive efforts.

So even if RegTech wasn’t born during the financial crisis, it was probably old enough to drive a car by 2008. The intervening 11 years have seen RegTech’s scope and influence grow.

RegTech startups targeting financial services, or FinServ for short, require very different growth strategies — even compared to other enterprise software companies. From a practical perspective, everything from the security requirements influencing software architecture and development to the sales process is substantially different for FinServ RegTechs.

The most successful RegTechs are those that draw on expertise from security-minded engineers, FinServ-savvy sales staff, and legal and compliance professionals from the industry. FinServ RegTechs have emerged in a number of areas due to the increasing directives emanating from financial regulators.

This new crop of startups performs sophisticated background checks and transaction monitoring for anti-money laundering purposes pursuant to the Bank Secrecy Act, Office of Foreign Asset Control (OFAC) requirements and FINRA rules; tracks supervision and retention requirements for electronic communications under FINRA, SEC and CFTC regulations; and monitors compliance with information security and privacy rules from the EU, the SEC and several US state regulators, such as the New York Department of Financial Services (“NYDFS”).

In this article, we’ll examine RegTech startups in these three fields to determine how solutions have been structured to meet regulatory demand as well as some of the operational and regulatory challenges they face.

Know Your Customer and Anti-Money Laundering

FCC passes measure urging carriers to block robocalls by default

The FCC voted at its open meeting this week to adopt an anti-robocall measure, but it may or may not lead to any abatement of this maddening practice — and it might not be free, either. That said, it’s a start towards addressing a problem that’s far from simple and enormously irritating to consumers.

The last two years have seen the robocall problem grow and grow, and although there are steps you can take right now to improve things, they may not totally eliminate the issue or perhaps won’t be available on your plan or carrier.

Under fire for not acting quickly enough in the face of a nationwide epidemic of scam calls, the FCC has taken action about as fast as a federal regulator can be expected to. There are two main parts to its plan to fight robocalls, one of which was approved today at the Commission’s open meeting.

The first item was proposed formally last month by Chairman Ajit Pai, and although it amounts to little more than nudging carriers, it could be helpful.

Carriers have the ability to apply whatever tools they have to detect and block robocalls before they even reach users’ phones. But it’s possible, if unlikely, that a user may prefer not to have that service active. And carriers have complained that they are afraid blocking calls by default may in fact be prohibited by existing FCC regulations.

The FCC has said before that this is not the case and that carriers should go ahead and opt everyone into these blocking services (one can always opt out), but carriers have balked. The rulemaking approved today basically just makes it crystal clear that carriers are permitted, and indeed encouraged, to opt consumers into call-blocking schemes.

That’s good, but to be clear, Wednesday’s resolution does not require carriers to do anything, nor does it prohibit carriers from charging for such a service — as indeed Sprint, AT&T, and Verizon already do in some form or another. (TechCrunch is owned by Verizon Media, but this does not affect our coverage.)

Commissioner Starks noted in his approving statement that the FCC will be watching the implementation of this policy carefully for the possibility of abuse by carriers.

At my request, the item [i.e. his addition to the proposal] will give us critical feedback on how our tools are performing. It will now study the availability of call blocking solutions; the fees charged, if any, for these services; the effectiveness of various categories of call blocking tools; and an assessment of the number of subscribers availing themselves of available call blocking tools.

A second rule is still gestating, existing right now more or less only as a threat from the FCC should carriers fail to step up their game. The industry has put together a sort of universal caller ID system called STIR/SHAKEN (Secure Telephone Identity Revisited / Signature-based Handling of Asserted information using toKENs), but has been slow to roll it out. Pai said late last year that if carriers didn’t put it in place by the end of 2019, the FCC would be forced to take regulatory action.

Why the Commission didn’t simply take regulatory action in the first place is a valid question, and one some Commissioners and others have asked. Be that as it may, the threat is there and seems to have spurred carriers to action. There have been tests, but as yet no carrier has rolled out a working anti-robocall system based on STIR/SHAKEN.

Pai has said regarding these systems that “we [i.e. the FCC] do not anticipate that there would be costs passed on to the consumer,” and it does seem unlikely that your carrier will opt you into a call-blocking scheme that costs you money. But never underestimate the underhandedness and avarice of a telecommunications company. I would not be surprised if new subscribers get this added as a line item or something; watch your bills carefully.

Google Play cracks down on marijuana apps, loot boxes and more

On Wednesday, Google rolled out new policies around kids’ apps on Google Play following an FTC complaint claiming a lack of attention to apps’ compliance with children’s privacy laws and other rules around content. However, kids’ apps weren’t the only area being addressed this week. As it turns out, Google also cracked down on loot boxes and marijuana apps, while expanding sections detailing prohibitions around hate speech, sexual content and counterfeit goods, among other things.

The two most notable changes include a crackdown on “loot boxes” and a ban on apps that offer marijuana delivery — while the service providers’ apps can remain, the actual ordering process has to take place outside of the app itself, Google said.

Specifically, Google will no longer allow apps offering the ability to order marijuana through an in-app shopping cart, those that assist users in the delivery or pickup of marijuana, or those that facilitate the sale of THC products.

This isn’t a huge surprise — Apple already bans apps that allow for the sale of marijuana, tobacco, or other controlled substances in a similar fashion. On iOS, apps like Eaze and Weedmaps are allowed, but they don’t offer an ordering function. That’s the same policy Google is now applying on Google Play, too.

This is a complex subject for Google, Apple, and other app marketplace providers to tackle. Though some states have legalized the sale of marijuana, the laws vary. And it’s still illegal according to the federal government. Opting out of playing middleman here is probably the right step for app marketplace platforms.

That said, we understand Google has no intention of outright banning marijuana ordering and delivery apps.

The company knows they’re popular and wants them to stay. It’s even giving them a grace period of 30 days to make changes, and is working with the affected app developers to ensure they’ll remain accessible.

“These apps simply need to move the shopping cart flow outside of the app itself to be compliant with this new policy,” a spokesperson explained. “We’ve been in contact with many of the developers and are working with them to answer any technical questions and help them implement the changes without customer disruption.”

Another big change impacts loot boxes — a form of gambling popular among gamers. Essentially, people pay a fee to receive a random selection of in-game items, some of which may be rare or valuable. Loot boxes have been heavily criticized for a variety of reasons, including their negative effect on gameplay and how they’re often marketed to children.

Last week, a new Senate bill was introduced with bipartisan support that would prohibit the sale of loot boxes to children, and fine those in violation.

Google Play hasn’t gone so far as to ban loot boxes entirely; instead, games now have to disclose the odds of getting each item.

In addition to these changes, Google rolled out a handful of more minor updates, detailed on its Developer Policy Center website. 

Here, Google says it has expanded the definition of what it considers sexual content to include a variety of new examples, like illustrations of sexual poses, content that depicts sexual aids and fetishes, and depictions of nudity that wouldn’t be appropriate in a public context. It also added “content that is lewd or profane,” according to Android Police, which compared the old and new versions of the policy.

Definitions that are somewhat “open to interpretation” are a tactic Apple commonly uses to maintain editorial control over its own App Store. By adding a ban on “lewd or profane” content, Google can opt to reject apps that aren’t covered by other examples.

Google also expanded its list of examples around hate speech to include: “compilations of assertions intended to prove that a protected group is inhuman, inferior or worthy of being hated;” “apps that contain theories about a protected group possessing negative characteristics (e.g. malicious, corrupt, evil, etc.), or explicitly or implicitly claims the group is a threat;” and “content or speech trying to encourage others to believe that people should be hated or discriminated against because they are a member of a protected group.”

Additional changes include an update to the Intellectual Property policy that more clearly prohibits the sale or promotion for sale of counterfeit goods within an app; a clarification of the User Generated Content policy to explicitly prohibit monetization features that encourage objectionable behavior by users; and an update to the Gambling policy with more examples.

A Google spokesperson says the company regularly updates its Play Store developer policies in accordance with best practices and legal regulations around the world. However, the most recent set of changes errs on the side of getting ahead of increased regulation — not only in terms of kids’ apps and data privacy, but also other areas now under legal scrutiny, like loot boxes and marijuana sales.

What does ‘regulating Facebook’ mean? Here’s an example

Many officials claim that governments should regulate Facebook and other social platforms, but few describe what that actually means. A few days ago, France released a report that outlines what France — and maybe the European Union — plans to do when it comes to content moderation.

It’s an insightful 34-page document with a nuanced take on toxic content and how to deal with it. There are some brand new ideas in the report that are worth exploring. Instead of moderating content directly, the regulator in charge of social networks would give Facebook and other social networks a list of objectives. For instance, if a racist photo goes viral and is distributed to 5 percent of monthly active users in France, the regulator could consider that the social network has failed to fulfill its obligations.

This isn’t just wishful thinking, as the regulator would be able to fine the company up to 4 percent of its global annual turnover in case of a systemic failure to moderate toxic content.
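The objective-plus-penalty mechanism boils down to simple arithmetic. Here is a minimal Python sketch of how such a check might work — all function names and figures below are illustrative assumptions, not taken from the French report:

```python
# Toy sketch (illustrative only; the thresholds mirror the article's
# examples: a 5% reach objective and a fine capped at 4% of turnover).

def breaches_reach_objective(users_reached: int,
                             monthly_active_users: int,
                             threshold: float = 0.05) -> bool:
    """True if toxic content reached more than the allowed share of
    monthly active users (the article's example uses 5 percent)."""
    return users_reached / monthly_active_users > threshold

def max_fine(global_annual_turnover: float, rate: float = 0.04) -> float:
    """Upper bound of the fine: 4 percent of global annual turnover."""
    return global_annual_turnover * rate

# Hypothetical platform: 30M monthly active users in France, and a
# viral racist photo that reached 2M of them (about 6.7% > 5%).
print(breaches_reach_objective(2_000_000, 30_000_000))  # True
# A hypothetical 1B annual turnover would cap the fine at 40M.
print(max_fine(1_000_000_000))  # 40000000.0
```

The point of the design, as the report describes it, is that the regulator audits outcomes against objectives like these rather than reviewing individual posts itself.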

The government plans to turn the report into new pieces of regulation in the coming months. France doesn’t plan to stop there. It is already lobbying other countries (in Europe, the Group of 7 nations and beyond) so that they could all come up with cross-border regulation and have a real impact on moderation processes. So let’s dive into the future of social network regulation.

Facebook first opened its doors

When Facebook CEO Mark Zuckerberg testified before Congress in April 2018, it felt like regulation was inevitable. And the company itself has been aware of this for a while.