All posts in “AI”

The ‘Godfathers of AI’ win Turing Award

"Godfathers of AI" Geoffrey Hinton, Yann LeCun, and Yoshua Bengio have won the 2018 Turing Award for their work on neural networks.
“Godfathers of AI” Geoffrey Hinton, Yann LeCun, and Yoshua Bengio have won the 2018 Turing Award for their work on neural networks.


The winners of the 2018 Turing Award have been announced.

Geoffrey Hinton, Yann LeCun, and Yoshua Bengio — sometimes referred to as the “godfathers of artificial intelligence” — have won the 2018 Turing Award for their work on neural networks. The three artificial intelligence pioneers’ work basically laid the foundation for modern AI technologies.

In the 1980s and early 1990s, artificial intelligence experienced a renewed popularity within the scientific community. However, by the mid-90s, scientists had failed to make any major advancements in AI, making it harder to secure funding or publish research. Hinton, LeCun, and Bengio remained undeterred and continued with their work.

In 2004, in an effort to revive the field, Hinton put together a new research program with “less than $400,000 in funding from the Canadian Institute for Advanced Research.” The program would focus on “neural computation and adaptive perception.” Bengio and LeCun joined Hinton in the program.

By 2012, the Hinton-led program came up with a deep learning neural network algorithm that performed more than 40 percent better than what came before. 

Self-driving cars, voice assistants, and facial recognition technology are just a few of the advancements made possible by Hinton, LeCun, and Bengio’s work.

The award, named after British mathematician Alan Turing, carries a $1 million prize, which the trio will split. Previous Turing Award winners include Tim Berners-Lee, best known for inventing the World Wide Web.

Hinton is currently a top AI researcher at Google. LeCun is now at Facebook, working as the company’s chief AI scientist. Bengio has remained in academia but has worked with companies such as AT&T, Microsoft, and IBM.


McDonald’s to customize its drive-throughs with AI

Going on a late night McDonald’s food run may be a lot different in the future. McDonald’s has purchased Dynamic Yield Ltd., an Israeli artificial intelligence company. McDonald’s said it plans to use Dynamic Yield’s technology to change the drive-through experience. As long as they don’t mess with our McGriddles it should be cool though. 

Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed less than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on its platform in real time apparently reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2 million of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though, as we pointed out in our earlier report, those stats are cherry-picked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern, told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

A further detail it chooses to dwell on in the update is how the AIs it uses to aid the human content review process of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.
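Facebook hasn’t published how that ranking works, but the idea it describes (a harm classifier score deciding what human reviewers see first) can be sketched in a few lines. Everything below, from the names to the scores, is hypothetical illustration rather than Facebook’s system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    # Higher model score means earlier review; negated because heapq is a min-heap.
    priority: float
    stream_id: str = field(compare=False)

class ReviewQueue:
    """Toy priority queue: streams a harm-detection model scores highly
    jump ahead of other flagged content awaiting human review."""
    def __init__(self):
        self._heap = []

    def add(self, stream_id: str, harm_score: float):
        # harm_score stands in for a hypothetical 0..1 output of a
        # "likely suicidal or harmful acts" classifier, as the post describes.
        heapq.heappush(self._heap, ReviewItem(-harm_score, stream_id))

    def next_for_human_review(self) -> str:
        return heapq.heappop(self._heap).stream_id

queue = ReviewQueue()
queue.add("cat-video-live", harm_score=0.02)
queue.add("flagged-live-stream", harm_score=0.91)
print(queue.next_for_human_review())  # "flagged-live-stream" surfaces first
```

The point of the sketch is simply that anything the model scores low sits behind everything it scores high, which is exactly where an unrecognized massacre would land.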

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first-person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first-person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover, AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — as TV stations do — wouldn’t help catch inappropriate real-time content.)

At the same time Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: Aka Facebook users taking the time and mind to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.
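Taken together, the gating Facebook describes is simple enough to write down. The sketch below is an interpretation only, with the time window, category names, and function shape all invented for illustration; Facebook hasn’t published the actual rules.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

RECENTLY_LIVE_WINDOW = timedelta(hours=3)  # "the past few hours" -- assumed value

def should_accelerate(report_category: str,
                      is_live: bool,
                      stream_ended_at: Optional[datetime],
                      now: datetime) -> bool:
    """Toy version of the review-acceleration gate the post describes."""
    if is_live:
        # Any report attached to a still-live stream is accelerated,
        # because there may still be a chance to alert first responders.
        return True
    if stream_ended_at is not None and now - stream_ended_at <= RECENTLY_LIVE_WINDOW:
        # Recently-live streams were only accelerated for suicide reports,
        # which is why a video reported for other reasons fell through.
        return report_category == "suicide"
    return False

now = datetime.now(timezone.utc)
recently_ended = now - timedelta(minutes=30)
print(should_accelerate("graphic_violence", False, recently_ended, now))  # False
print(should_accelerate("suicide", False, recently_ended, now))           # True
```

Under rules like these, a video reported for graphic violence shortly after the stream ended would simply miss the fast lane.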

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
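As a loose illustration of that two-stage idea (check the visuals first, fall back to the soundtrack when the visuals have been re-edited), here is a toy sketch. The hash values, the Hamming-distance threshold, and the matching scheme are all stand-ins, not Facebook’s actual technology.

```python
# Stand-in 64-bit signatures for known banned variants; real systems use
# far more robust perceptual hashing and audio fingerprinting.
BANNED_VIDEO_HASHES = {0x9F3A66D201C4B7E5}
BANNED_AUDIO_PRINTS = {0x1B72C08E5A94FD33}

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def is_banned_variant(video_hash: int, audio_print: int,
                      max_distance: int = 6) -> bool:
    """Match on visuals first; if re-editing defeats the visual hash,
    an unchanged soundtrack can still give the copy away."""
    visual_match = any(hamming_distance(video_hash, h) <= max_distance
                       for h in BANNED_VIDEO_HASHES)
    audio_match = any(hamming_distance(audio_print, a) <= max_distance
                      for a in BANNED_AUDIO_PRINTS)
    return visual_match or audio_match

# A re-cropped, re-filtered copy: the video hash drifts, the audio print doesn't.
print(is_banned_variant(video_hash=0x1234567890ABCDEF,
                        audio_print=0x1B72C08E5A94FD33))  # True
```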

In a section on next steps Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its priority.

But audio matching clearly won’t help if malicious re-sharers re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.

coParenter helps divorced parents settle disputes using A.I. and human mediation

A former judge and family law educator has teamed up with tech entrepreneurs to launch an app they hope will help divorced parents better manage their co-parenting disputes, communications, shared calendar, and other decisions within a single platform. The app, called coParenter, aims to be more comprehensive than its competitors, while also leveraging a combination of A.I. technology and on-demand human interaction to help co-parents navigate high-conflict situations.

The idea for coParenter emerged from the personal experiences of co-founders Hon. Sherrill A. Ellsworth and Jonathan Verk, an entrepreneur who had been through a divorce himself.

Ellsworth had been a presiding judge of the Superior Court in Riverside County, California for 20 years and a family law educator for ten. During this time, she saw firsthand how families were destroyed by today’s legal system.

“I witnessed countless families torn apart as they slogged through the family law system. I saw how families would battle over the simplest of disagreements like where their child will go to school, what doctor they should see and what their diet should be — all matters that belong at home, not in a courtroom,” she says.

Ellsworth also notes that 80 percent of the disagreements presented in the courtroom didn’t even require legal intervention – but most of the cases she presided over involved parents asking the judge to make the co-parenting decision.

As she came to the end of her career, she began to realize the legal system just wasn’t built for these sorts of situations.

She then met Jonathan Verk, previously EVP of Strategic Partnerships at Shazam and now coParenter CEO. Verk had just gone through a divorce and had an idea about how technology could help make the co-parenting process easier. He already had on board his longtime friend and serial entrepreneur Eric Weiss, now COO, to help build the system. But he needed someone with legal expertise.

That’s how coParenter was born.

The app, also built by CTO Niels Hansen, today exists alongside a whole host of other tools built for different aspects of the coparenting process.

That includes those apps designed to document communication like OurFamilyWizard, Talking Parents, AppClose, and Divvito Messenger; those for sharing calendars, like Custody Connection, Custody X Exchange, Alimentor; and even those that offer a combination of features like WeParent, 2houses, SmartCoparent, and Fayr, among others.

But the team at coParenter argues that their app covers all aspects of coparenting, including communication, documentation, calendar and schedule sharing, location-based tools for pickup and dropoff logging, expense tracking and reimbursements, schedule change requests, tools for making decisions on day-to-day parenting choices like haircuts, diet, allowance, use of media, etc., and more.

Notably, coParenter also offers a “solo mode” – meaning you can use the app even if the other co-parent refuses to do the same. This is a key feature that many rival apps lack.

However, the biggest differentiator is how coParenter puts a mediator of sorts in your pocket.

The app begins by using A.I., machine learning, and sentiment analysis technology to keep conversations civil. The tech will jump in to flag curse words, inflammatory phrases, and offensive names to keep a heated conversation from escalating – much like a human mediator would do when trying to calm two warring parties.

When conversations take a bad turn, the app will pop up a warning message that asks the parent if they’re sure they want to use that term, allowing them time to pause and think. (If only social media platforms had built features like this!)
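coParenter hasn’t said how its filter is built, but the pattern it describes (spot a flagged term, then prompt before sending) might look something like the toy sketch below, with an entirely invented word list.

```python
# Invented example list and logic -- coParenter hasn't published its model.
INFLAMMATORY_TERMS = {"liar", "idiot", "useless", "always", "never"}

def screen_message(text: str) -> list:
    """Return any flagged terms so the app can ask the sender
    'are you sure you want to use that?' before delivery."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & INFLAMMATORY_TERMS)

draft = "You NEVER pick him up on time, you liar!"
flags = screen_message(draft)
if flags:
    print(f"Heads up: this message contains {', '.join(flags)}. Send anyway?")
```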

When parents need more assistance, they can opt to use the app instead of turning to lawyers.

The company offers on-demand access to professionals as either a monthly subscription ($12.99/month for 20 credits, enough for two mediations) or a yearly subscription ($119.99/year for 240 credits). Both parents can subscribe together for $199.99/year, with each receiving 240 credits.

“Comparatively, an average hour with a lawyer costs upwards of $250, just to file a single motion,” Ellsworth says.

These professionals are not mediators, but are licensed in their respective fields – typically family law attorneys, therapists, social workers, or other retired bench officers with strong conflict resolution backgrounds. Ellsworth oversees the professionals to ensure they have the proper guidance.

All communication between the parent and the professional is considered confidential and not subject to admission as evidence, as the goal is to stay out of the courts. However, all the history and documentation elsewhere in the app can be used in court, if the parents do end up there.

The app has been in beta for nearly a year and officially launched this January. To date, coParenter claims it has already helped resolve over 4,000 disputes, and over 2,000 co-parents have used it for scheduling. The company says 81 percent of disputing parents resolved all of their issues in the app without needing a professional mediator or legal professional.

CoParenter is available on both iOS and Android.