All posts in “Social”

Interfaith social network raises $14M Series A to add new features to its mobile app

The company behind an interfaith social networking app for members of religious communities has raised a $14 million Series A led by TPG Growth. Previous investors Science Inc. and Greylock Partners also returned for the round, which brings the Santa Monica-based startup’s total funding, including a seed round announced last June, to $16 million.

Founded in 2016, the app was created to give faith communities, which had previously relied on Facebook groups, group texts or email chains to stay in touch, a tailor-made place to chat, request prayers and donate money to non-profits and religious organizations. Religious leaders also get access to analytics to gauge how many people they reach, so they can grow their community’s membership.

The newest funding will be used for product development, including the addition of live audio and video, which can be used to broadcast sermons and music performances, and features to help communities fundraise for causes and events. Founder and chief executive officer Steve Gatena told TechCrunch in an email that the startup also wants to build “the world’s largest directory of faith organizations.”

Since its seed round, Gatena says the app has signed up users in more than 185 countries and now has more than 20,000 religious communities on the platform. It also participated in a recent hackathon hosted by the Vatican.

Even though one might expect the app’s users to be mostly younger people who already rely on social networks to stay connected with almost everyone in their lives, Gatena says it covers a wide range of demographics because many people invite their families and friends to join groups on the app. So far, users have created groups dedicated to youth sports, mission trips, addiction recovery, cancer treatment and mental health issues like depression, among other topics.

“While the youth might have been some of the first to find us, we see our most vibrant activity from women across the country and around the world who are active in their local communities and want to strengthen offline connections through digital prayer requests, praise reports and words of encouragement,” Gatena said.

In a press statement, Science Inc. chief executive officer Michael Jones said: “We’re blown away by the continued impact [the app] has had on its members and are excited to see how the platform will scale to inspire more people to connect with their faith leaders, unite and heal through prayer and give back to their communities.”

YouTube to add Wikipedia background info on conspiracy videos

YouTube is taking action on the proliferation of conspiracy videos found on its platform: YouTube CEO Susan Wojcicki told an SXSW panel Tuesday that the company would be introducing so-called “information cues” sourced from relevant Wikipedia articles on videos that talk about popular conspiracy theories.

These will appear as text boxes that can present alternative perspectives on subjects including chemtrails and the supposedly fake Moon landing, both of which were used as examples during the panel to show how this would work in practice. The info pop-up appears below the video but above the title and description, giving it a certain amount of prominence in the interface.

The YouTube CEO didn’t go into detail about how many conspiracy theories will be covered by the feature, but praised the format’s extensibility, suggesting that it could expand to cover as many as needed, and that the company could also introduce alternate information sources in addition to Wikipedia.

Some critics are pointing out that this looks less like a solution to YouTube’s role in perpetuating and legitimizing batshit crazy ideas, and more like a way for it to absolve itself of the responsibility of taking a more critical look at the problem. In fact, the examples YouTube itself provided on stage seem to back up this criticism: the Moon landing video contained only a brief couple of sentences (one cut in half) visible on the video itself, the content of which doesn’t even necessarily counter the info shared by the conspiracist who posted the video.

The bottom line is that all social platforms relying on user-generated content will eventually become completely co-opted and unusable.

UN says Facebook is accelerating ethnic violence in Myanmar

The United Nations has warned that Facebook’s platform is contributing to the spread of hate speech and ethnic violence in crisis-hit Myanmar.

It’s yet another black mark against social media at a time when the tech industry’s reputation as an accelerator of false information is attracting criticism from the highest places.

This week the government of Sri Lanka also sought to block access to Facebook and two other of its social services, WhatsApp and Instagram, in an attempt to stem mob violence against its local Muslim minority — citing inflammatory social media posts.

“These platforms are banned because they were spreading hate speeches and amplifying them,” a government spokesman told the New York Times.

India has also struggled for years with false information being spread by social media platforms like WhatsApp then triggering riots, communal violence and even leading to deaths.

While humans telling lies is nothing new, the speed at which misinformation and disinformation can now spread, thanks to digitally networked communities linked on social media, is.

Moderating that risk is the challenge big tech platforms stand accused of failing to meet.

UN human rights experts investigating a possible genocide in Rakhine state warned yesterday that Facebook’s platform is being used by ultra-nationalist Buddhists to incite violence and hatred against the Rohingya and other ethnic minorities.

A security crackdown in the country last summer led to around 650,000 Rohingya Muslims fleeing into neighboring Bangladesh. Since then there have been multiple reports of state-led violence against the refugees, and the UN has been leading a fact-finding mission in the country.

Yesterday, chairman of the mission, Marzuki Darusman, told reporters that the social media platform had played a “determining role” in Myanmar’s crisis (via Reuters).

Darusman said Facebook has “substantively contributed to the level of acrimony and dissension and conflict” within the public sphere. “Hate speech is certainly of course a part of that,” he continued, adding: “As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media.”

In Myanmar, Ashin Wirathu, an ultranationalist Buddhist monk who preaches hate against the Rohingya, has been able to build up large followings on social media — using Facebook to spread divisive and hate-fueling messages.

Speaking to reporters yesterday, UN investigator Yanghee Lee described Facebook as a huge part of public, civil and private life in Myanmar, noting it is used by the government to disseminate information to the public.

However she also flagged how the platform has been appropriated by ultra-nationalist elements to spread hate against minorities.

In the case of Wirathu, Facebook has sometimes removed or restricted his pages — but does not appear to have done enough.

“Everything is done through Facebook in Myanmar,” said Lee. “It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities.”

“I’m afraid that Facebook has now turned into a beast, and not what it originally intended,” she added.

We reached out to the company with questions but at the time of writing Facebook had not responded.

For years Myanmar’s military dictatorship entirely controlled and censored the press but in 2011 it began what was billed as a gradual democratic transition — which included opening up to new media services such as Facebook. And the platform essentially went from ground zero to becoming the most important information source in Myanmar in a handful of years.

Local Facebook users are now thought to number over 30 million.

But as uptake ballooned, human rights groups sounded alarms over how Facebook was being used to spread hate speech and stoke ethnic violence.

Last year New York Times reporter Paul Mozur also warned that government Facebook channels were being used to spread anti-Rohingya propaganda — implying the platform has also been appropriated as a citizen-control tool by the state seeding its own propaganda.

And while states maliciously misappropriating social media to foster hate against their own citizens may not be a problem in every country where the tech industry operates, social media platforms amplifying hate speech is certainly a universal concern — from Asia, to Europe, to America.

Featured Image: Nur Photo/Getty Images

Report calls for algorithmic transparency and education to fight fake news

A report commissioned by European lawmakers has called for more transparency from online platforms to help combat the spread of false information online.

It also calls for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

The High-Level Expert Group (HLEG), which authored the report, was set up last November by the European Union’s executive body to help inform its response to the ‘fake news’ crisis which is currently challenging Western lawmakers to come up with an effective and proportionate response.

The HLEG favors the term ‘disinformation’ — arguing (quite rightly) that the ‘fake news’ badge does not adequately capture “the complex problems of disinformation that also involves content which blends fabricated information with facts”.

‘Fake news’ has also of course become fatally politicized (hi, Trump!), and the label is frequently erroneously applied to try to close down criticism and derail debate by undermining trust and being insulting. (Fake news really is best imagined as a self-feeding ouroboros.)

“Disinformation, as used in the Report, includes all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit,” says the HLEG’s chair, professor Madeleine de Cock Buning, in a report foreword.

“This report is just the beginning of the process and will feed the Commission reflection on a response to the phenomenon,” writes Mariya Gabriel, the EC commissioner for digital economy and society, in another foreword. “Our challenge will now lie in delivering concrete options that will safeguard EU values and benefit every European citizen.”

The Commission’s next steps will be to work on coming up with those “tangible options” to better address the risks posed by disinformation being smeared around online.

Gabriel writes that it’s her intention to trigger “a free, pluralistic democratic, societal, and economic debate in Europe” which fully respects “fundamental EU values, e.g. freedom of speech, media pluralism and media freedom”.

“Given the complexity of the problem, which requires a multi-stakeholder solution, there is no single lever to achieve these ambitions and eradicate disinformation from the media ecosystem,” she adds. “Improving the ability of platforms and media to address the phenomenon requires a holistic approach, the identification of areas where changes are required, and the development of specific recommendations in these areas.”

A “multi-dimensional” approach

There is certainly no single button fix being recommended here. Nor is the group advocating for any tangible social media regulations at this point.

Rather, its 42-page report recommends a “multi-dimensional” approach to tackling online disinformation, over the short and long term — including emphasizing the importance of media literacy and education and advocating for support for traditional media industries; at the same time as warning over censorship risks and calling for more research to underpin strategies that could help combat the problem.

It does suggest a “Code of Principles” for online platforms and social networks to commit to — with increased transparency about how algorithms distribute news being one of several recommended steps.

The report lists five core “pillars” which underpin its various “interconnected and mutually reinforcing responses” — all of which are in turn aimed at forming a holistic overarching strategy to attack the problem from multiple angles and time-scales.

These five pillars are:

  • enhance transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
  • promote media and information literacy to counter disinformation and help users navigate the digital media environment;
  • develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
  • safeguard the diversity and sustainability of the European news media ecosystem;
  • promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.

Zooming further in, the report discusses and promotes various actions — such as advocating for “clearly identifiable” disclosures for sponsored content, including for political ad purposes; and for information on payments to human influencers and the use of bot-based amplification techniques to be “made available in order for users to understand whether the apparent popularity of a given piece of online information or the apparent popularity of an influencer is the result of artificial amplification or is supported by targeted investment”.

It also promotes a strategy of battling ‘bad speech’ by expanding access to ‘more, better speech’ — promoting the idea that disinformation could be ‘diluted’ “with quality information”.

Although, on that front, a recent piece of MIT research investigating how fact-checked information spreads on Twitter, studying a decade’s worth of tweets, suggests that without some form of very specific algorithmic intervention such an approach could well struggle to triumph against human nature: information that had been fact-checked as false was found to spread further and faster than information fact-checked as true.

In short, humans find clickbait more spreadable. And that’s why, at least in part, disinformation has scaled into the horribly self-reinforcing problem it has.

A bit of algorithmic transparency

The report’s push for a degree of algorithmic accountability by calling for a little disinfecting transparency from tech platforms is perhaps its most interesting and edgy aspect. Though its suggestions here are extremely cautious.

“[P]latforms should provide transparent and relevant information on the functioning of algorithms that select and display information without prejudice to platforms IPRs [intellectual property rights],” the committee of experts writes. “Transparency of algorithms needs to be addressed with caution. Platforms are unique in the way they provide access to information depending on their technological design, and therefore measures to access information will always be reliant on the type of platform.

“It is acknowledged however that, more information on the working of algorithms would enable users to better understand why they get the information that they get via platform services, and would help newsrooms to better market their services online. As a first step platforms should create contact desks where media outlets can get such information.”

The HLEG is itself made up of 39 members — billed as representing a range of industry and stakeholder points of view “from the civil society, social media platforms, news media organisations, journalists and academia”.

And, yes, staffers from Facebook, Google and Twitter are listed as members — so the major social media tech platforms and disinformation spreaders are directly involved in shaping these recommendations. (See the end of this post for the full list of people/organizations in the HLEG.)

A Twitter spokesman confirmed the company has been engaged with the process from the beginning but declined to provide a statement in response to the report. At the time of writing requests for comment from Facebook and Google had not been answered.

The presence of powerful tech platforms in the Commission’s advisory body on this issue may explain why the group’s suggestions on algorithmic accountability come across as rather dilute.

Though you could say that at least the importance of increased transparency is being affirmed — even by social media’s giants.

But are platforms the real problem?

One of the HLEG’s members, European consumer advocacy organization BEUC, voted against the report — arguing the group had missed an opportunity to push for a sector inquiry to investigate the link between advertising revenue policies of platforms and the dissemination of disinformation.

And this criticism does seem to have some substance. As, for all the report’s discussion of possible ways to support a pluralistic news media ecosystem, the unspoken elephant in the room is that Facebook and Google are gobbling up the majority of digital advertising profits.

Facebook very deliberately made news distribution its business — even if it’s dialing back that approach now, in the face of a backlash.

In a critical statement, Monique Goyens, director general of BEUC, said: “This report contains many useful recommendations but fails to touch upon one of the core causes of fake news. Disinformation is spreading too easily online. Evidence of the role of behavioral advertising in the dissemination of fake news is piling up. Platforms such as Google or Facebook massively benefit from users reading and sharing fake news articles which contain advertisements. But this expert group choose to ignore this business model. This is head-in-the-sand politics.”

Giving another assessment, academic Paul Bernal, IT, IP and media law lecturer at the UEA School of Law in the UK, and not himself a member of the HLEG, also argues the report comes up short — by failing to robustly interrogate the role of platform power in the spread of disinformation.

His view is that “the whole idea of ‘sharing’ as a mantra” is inherently linked to disinformation’s power online.

“[The report] is a start, but it misses some fundamental issues. The point about promoting media and information literacy is the biggest and most important one — I don’t think it can be emphasized enough, but it needs to be broader than it immediately appears. People need to understand not only when ‘news’ is misinformation, but to understand the way it is spread,” Bernal told TechCrunch.

“That means questioning the role of social media — and here I don’t think the High Level Group has been brave enough. Their recommendations don’t even mention addressing this, and I find myself wondering why.

“From my own research, the biggest single factor in the current problem is the way that news is distributed — Facebook, Google and Twitter in particular.”

“We need to find a way to help people to wean themselves off using Facebook as a source of news — the very nature of Facebook means that misinformation will be spread, and politically motivated misinformation in particular,” he added. “Unless this is addressed, almost everything else is just rearranging the deckchairs on the Titanic.”

Beyond filter bubbles

But Lisa-Maria Neudert, a researcher at the Oxford Internet Institute, who says she was involved with the HLEG’s work (her colleague at the Institute, Rasmus Nielsen, is also a member of the group), played down the notion that the report is not robust enough in probing how social media platforms are accelerating the problem of disinformation — flagging its call for increased transparency and for strategies to create “a media ecosystem that is more diverse and is more sustainable”.

Though she added: “I can see, however, how one of the common critiques would be that the social networks themselves need to do more.”

She went on to suggest that negative results following Germany’s decision to push for a social media hate speech law — which requires valid takedowns to be executed within 24 hours and includes a regime of penalties that can scale up to €50M — may have influenced the group’s decision to push for a far more light-touch approach.

The Commission itself has warned it could draw up EU-wide legislation to regulate platforms over hate speech. Though, for now, it’s been pursuing a voluntary Code of Conduct approach. (It has also been turning up the heat over terrorist content specifically.)

“[In Germany social media platforms] have an incentive to delete content really generously because there are heavy fines if they fail to take down content,” said Neudert, criticizing the regulation. “[Another] catch is that there is no legal oversight involved. So now you have, basically, social networks making decisions that used to be with courts and that often used to be a matter of months and months of weighing different legal [considerations].”

“That also just really clearly showed that once you are thinking about regulation, it is really important that regulators as well as tech companies, and as well as the media system, are really working together here. Because we are at a point where we have very complex systems, we have very complex levers, we have a lot of information… So it is a delicate topic, really, and I think there’s no catch-all regulation where we can get rid of all the fake news.”

Also today, Sir Tim Berners-Lee, the inventor of the world wide web, published an open letter warning that disinformation threatens the social utility of the web, and making the case for a direct causal link between a few “powerful” big tech platforms and false information being accelerated damagingly online.

In contrast to his assessment, the report’s weakness in speaking directly to any link between big tech platforms and disinformation does look pretty gaping.

Asked about this, Neudert agreed the topic is being “talked about in the EU”, though she said it’s being discussed more within the context of antitrust.

She also claimed there’s a growing body of research “debunking the idea that we have filter bubbles”, and counter-suggesting that online influence sources are in fact “more diverse”.

“I oftentimes do feel like I live in my own personal social bubble or echo chamber. However research does suggest otherwise — it does suggest that there’s, on the one hand, much more information that we’re getting, and also much more diverse information that we’re getting,” she claimed.

“I’m not so sure if your Facebook or if your Twitter is actually a gatekeeper of information,” she added. “I think your Facebook and your Twitter on some hand still, more or less, give you all of the information you have on the Internet.

“Where it gets more problematic is then if you also have algorithms on top of it that are promoting some issue to make them appear larger over the Internet — to make them appear at the very top of the news feed.”

She gave the example — also called out recently in an article by academic and techno-sociologist Zeynep Tufekci — of YouTube’s problematic recommendation algorithms, which have been accused of having a quasi-radicalizing effect because they surface ever more extreme content in their mission to keep viewers engaged.

“This is where I think this argument is becoming powerful,” Neudert told TechCrunch. “It is not something where the truth is already dictated and where it is set in stone. A lot of the outcomes are really emerging.

“The other part of course is you can have many, many different and diverse opinions — but there’s also things to be said about what are the effects of information being presented in whatever kind of format, providing it with credibility, and people trusting that kind of information.”

Being able to distinguish between fact and fiction on social media is “such a pressing problem”, she added.

Less trusted sources

One tangible result of that pressing fact or fiction problem that’s also being highlighted by the Commission today in a related piece of work — its latest Eurobarometer survey — is the erosion of consumer trust in tech platforms.

The majority of respondents to this EC survey viewed traditional media as the most trusted source of news (radio 70%, TV 66%, print 63%) vs online sources being the least trusted (26% and 27%, respectively for news and video hosting websites).

So there seem to be some pretty clear trust risks, at least, for tech platforms becoming synonymous with online disinformation.

The vast majority of Eurobarometer survey respondents (83%) also said they viewed fake news as a danger to democracy — whatever fake news meant to them in the moment they were being asked for their views on it. And those figures could certainly be read — or spun — as support for new regulations. So again, platforms do need to worry about public opinion.

Discussing potential technology-based responses to help combat disinformation, Neudert’s view is that automated fact-checking tools and bot detectors are “getting better” — and even “getting useful” when combined with the work of human checkers.

“For the next couple of years that to me looks like the lowest fruitful approach,” she said, advocating for such tools as an alternative and proportionate strategy (vs the stick of a new legal regime) for working across the vast scale of online content that needs moderation without risking the pitfall of chilling censorship.

“I do think that this combination of technology to drive attention to patterns of problems, and to larger trends of problem areas, and that then combined with human oversight, human detection, human debunking, right now is an important alley to go to,” she said.

But to achieve gains there she conceded that access to platforms’ metadata will be crucial. That access, it must be said, is most certainly not the rule right now, and has frequently not been forthcoming even when platforms were reasonably pressed regarding specific concerns.

Despite platforms’ historically closed-door responses to access requests, Neudert nevertheless argues for “flexibility” now, “more dialogue” and “more openness”, rather than heavy-handed German-style content laws.

But she also cautions that online disinformation is likely to get worse in the short term, with AI now being actively deployed in the potentially lucrative business of creating fakes, such as Adobe’s experiments with its VoCo speech editing tool.

Wider industry pushes to engineer better conversational systems to enhance products like voice assistants are also fueling developments here.

“My worry is also that there are a lot of people who have a lot of interest in putting money towards [systems that can create plausible fakes],” she said. “A lot of money is being devoted to artificial intelligence getting better and better and it can be used for the one side but it can also be used for the other side.

“I do hope with the technology developing and getting better we also have a simultaneous movement of research to debunk what is a fake, what is not a fake.”

On the lesser known anti-fake tech front she said interesting things are happening too, flagging a tool that can analyze videos to determine whether a human in a clip has “a real pulse” and “real breathing”, for example.

“There is a lot of super interesting things that can be done around that,” she added. “But I hope that kind of research also gets the money and gets the attention that it needs because maybe it is not something that is as easily monetizable as, say, deepfake software.”

One thing is becoming crystal clear about disinformation: This is a human problem.

Perhaps the oldest and most human problem there is. It’s just that now we’re having to confront these unpleasant and inconvenient fundamental truths about our nature writ very large indeed — not just acted out online but also accelerated by the digital sphere.

Below is the full list of members of the Commission’s HLEG:

Featured Image: Thomas Faull/Getty Images

TV Time, the TV tracking app with over a million daily users, can now find your next binge

With TV programming now spread out across a variety of services beyond traditional network TV, it can be hard to know what to watch next and what’s popular, given how much great content there is to choose from. An app called TV Time is helping with that, by allowing TV fans to track shows they’re watching, discover new programs, and socialize with fellow fans following each episode. Now the company is doubling down on its ability to help you find your next show with the launch of personalized recommendations.

Your recommendations are based on the app’s understanding of dozens of signals, including things like what shows you watched, which you binged through, those your friends watch, those where you engaged in the show’s community, those you’ve favorited, and more.

You can additionally filter your recommendations by network, status (ended, upcoming, etc.), genre, service where the show is available, and other factors.
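TV Time hasn’t published how these signals are actually combined, but a recommender of the general shape described above – blend several normalized viewing signals into one match score, with UI-style filters narrowing the candidate pool first – can be sketched as follows. Every weight, field name and threshold here is hypothetical, purely for illustration:

```python
# Illustrative sketch only: blends viewing signals of the kind TV Time
# describes (binged shows, friends' viewing, community engagement,
# favorites). None of these weights or fields are TV Time's actual system.

def recommendation_score(show, profile):
    """Blend several normalized signals into a single 0..1 match score."""
    weights = {
        "genre_overlap": 0.35,         # shares genres with shows you binged
        "friends_watching": 0.25,      # how many friends watch it (capped)
        "community_engagement": 0.20,  # how active its episode communities are
        "favorite_overlap": 0.20,      # shares genres with your favorites
    }
    genres = set(show["genres"])
    signals = {
        "genre_overlap": len(genres & profile["binged_genres"]) / max(len(genres), 1),
        "friends_watching": min(show["friend_watchers"] / 5.0, 1.0),
        "community_engagement": min(show["community_posts"] / 1000.0, 1.0),
        "favorite_overlap": 1.0 if genres & profile["favorite_genres"] else 0.0,
    }
    return sum(weights[k] * signals[k] for k in weights)

def recommend(shows, profile, network=None, status=None):
    """Apply app-style filters (network, status), then rank by score."""
    candidates = [
        s for s in shows
        if (network is None or s["network"] == network)
        and (status is None or s["status"] == status)
    ]
    return sorted(candidates, key=lambda s: recommendation_score(s, profile),
                  reverse=True)
```

Filtering before scoring mirrors the app’s interface, where the network, status and genre filters narrow the pool that the personalized ranking is drawn from.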

(Above: TV Time identifies me as a sci-fi nerd who has yet to watch Stargate. I know, I know!) 

What makes TV Time’s recommendations unique is that they’re based on your viewing behavior across all of television, not just a single service like Netflix.

“There’s an incredible amount of quality TV being made today – some of the budgets are insane – and because all these different platforms are doing it, it becomes more confusing for the consumer to find out where to watch, to remember where they left off, and to remember when the premiere is,” says TV Time COO Dan Brian. “It’s hard to wrangle it all.”

Similarly, figuring out what to watch next is also difficult because there are so many ways to track what content you like, but none of that data is currently available across platforms. That is, Netflix doesn’t know what you like on Amazon or HBO, and so on.

“We took the 8 billion episodes of TV that had been tracked in the app – episodes that are from every platform that exists,” Brian explains. “And because we have that 360 degree view of what people watch, we should be able to make the best recommendations to you as to what to watch next.”

In practice, the recommendation algorithm leaned a bit too hard on picks from television’s back catalog, I found in testing, and didn’t take favorited shows into account as heavily as I’d like. But the promise of machine learning is that its recommendations will get better over time, the more the app is used to find and follow new shows.

TV Time began its life as WhipClip, a source for a legal collection of GIFs from favorite shows, before pivoting to become a social TV community.

This is an area startups tried to enter in the past, as with GetGlue (acquired by i.TV half a decade ago), or social TV pioneer Miso, which shut down in 2014. Arguably, these companies arrived too soon – before the cord-cutting trend gave rise to dozens of streaming services and a la carte options galore for building out your own personal bundle of TV.

While in the past all of America seemingly watched the most popular TV shows together at the same time, finding someone today who likes a show you watch is less common. And finding them watching it at the same time as you is even rarer, thanks to the end of “appointment television” for almost everything except a small number of breakthrough hits like “Game of Thrones.”

That makes an app like TV Time feel like a place where you can really find “your people” – whether it’s fellow sci-fi lovers, or reality show junkies, or whatever else you’re into.

Beyond its new recommendations, you can use the app to track favorite shows, find out when shows are returning, or discover what’s popular among the community, and more.

On your profile, you can set shows as favorites, track your personal TV viewing data, and see how your posts to TV Time’s show communities are doing. You can also follow friends to see what shows people you know are watching.

After marking an episode as “watched,” you can then hop into the community to view the reactions, which are shared in the form of GIFs, photos, videos, memes, and more. The app makes meme-building easy, thanks to an included set of screencaps that can be mixed with other content, then shared to the community.

You can also record a video reaction – something that’s a popular activity on YouTube, and available in a more short-form format on TV Time.

There’s a surprising amount of community engagement, too.

A show may have a hundred or more reactions posted by fans, many with hundreds of likes. And a buzzy show like “The Walking Dead” may have hundreds of video reactions alone after a new episode premieres.

The company says that there are now more than a million people using the app daily, where they’re able to track any one of over 60,000 shows and more than 8 billion TV episodes. Users check in with TV Time some 45 million times per month, and engage over half a million times by posting comments, photos, GIFs, videos, and more.

As TV Time improves these recommendations in the weeks ahead, it will also begin to show your “TV Time score” – how well your interests match with a given piece of content – in all the show pages in the app.

With its newfound ability to make personalized show suggestions, TV Time hopes to attract more casual TV fans, and eventually be able to sell its data on what people are watching to help inform TV producers and networks about what shows to fund next, as well as provide competitive insights, among other things.

“But we’ll never sell individual profiles of people,” Brian stresses. “We’re working with partners on aggregate, anonymized data about what the trends are,” he says.

Today TV Time generates revenue through a premium tier, which has tens of thousands of paying users. But this will likely be dropped in time as its data business scales up.

TV Time is a free download on iOS and Android.