All posts in “AI”

A predictive keyboard wrote a ‘Game of Thrones’ script and it’s hilariously perfect

April 2019 is just too damn long to wait for the eighth and final season of Game of Thrones. I mean, you can only rewatch the Lannister Drogon BBQ episode from Season 7 so many times while holding out for another full year.

But don’t worry. A hero has emerged from the flames to cast a light and end our long, dark night. And it’s an algorithm.

The folks at Botnik Studios, a team of artists and developers, heard our pleas and put their technological magic to use. They created a script for Episode 1 of Season 8 using predictive text AI that learned everything from previous Game of Thrones scripts. And honestly, the result is not only eerily spot-on, but possibly more entertaining than any other Game of Thrones episode to date.
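(For the curious, a predictive keyboard like Botnik's works by learning which words tend to follow which in a training corpus and then suggesting likely next words. Below is a minimal sketch of that idea in Python, assuming a toy word-level bigram model; the `scripts` list is a hypothetical stand-in corpus, and Botnik's actual tool is a human-assisted predictive keyboard, so treat this as an illustration of the concept rather than their implementation.)

    import random
    from collections import defaultdict

    # Hypothetical stand-in corpus; Botnik trained on real Game of Thrones scripts.
    scripts = [
        "jon looks hard at things and says nothing",
        "jon looks at the wall and drinks",
        "dany wants a pretty chair and a dragon",
    ]

    # Count which word tends to follow each word (a simple bigram table).
    next_words = defaultdict(list)
    for line in scripts:
        words = line.split()
        for current, following in zip(words, words[1:]):
            next_words[current].append(following)

    def predict(seed, length=8):
        """Suggest a continuation by repeatedly picking a plausible next word."""
        out = [seed]
        for _ in range(length):
            candidates = next_words.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(predict("jon"))  # e.g. "jon looks hard at things and says nothing"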

Here’s the full “script,” and we dare you to try and find a single flaw in this masterpiece:

Image: Botnik Studios

It’s hard to pick favorites, but here are some of the standout moments from this revealing episode:

  • Varys’ characteristically candid assessment that, “Morals are like balls…. I find myself utterly without them.”

  • Euron, lord of prick, taking up hair-braiding for Yara.

  • We’re calling it now: “We’re just fating around like cats in a castle!” is the new “I drink and I know things.” Pre-order your t-shirts today.

  • Predictably, Bran’s ability to see all continues to be useless, as he gazes into the future to assure everyone that, “we are all still friends then.”

  • [Jon is being beautiful and looking hard at things] pretty much sums up the entirety of Kit Harington’s acting over the past seven seasons.

  • Dany adds another title to her name, but all she really wants is a pretty chair.

  • Let’s call it a day and replace all of Jon’s future lines with: “I am guilty of never having fun.”

  • Cersei’s extremely on-brand new pet: a bird skeleton.

  • Qyburn licking his lips and touching the ground “in a sex way” is more true to his character in the books than anything we’ve seen on screen.

  • SPOILER ALERT: Davos’ true connection to onions isn’t smuggling, but conversation.

  • Dany, a few episodes too late, realizes, “I will ride a dragon even if you don’t want me to!”

  • SPOILER ALERT: Tormund doesn’t make it after the White Walkers destroy the Wall, but still refuses to bend the knee as an undead zombie because FREE FOLK ARE METAL AS HELL.

Phew. What a wild ride. Can’t wait for next Sunday’s episode. Until then, we’ll be poring over Reddit theories proving that Tyrion is secretly the Night King in the future.

H/T Winter is Coming


No, I don’t want to have long conversations with Alexa

Amazon’s Alexa is great for many things. I love walking through my door saying “Alexa, turn on all lights.” I like asking Alexa for the weather and telling it to play music. I really appreciate how Amazon is constantly improving its capabilities with new skills.

But what I don’t want is to have any kind of extended conversation with Alexa. I’m the boss and Alexa’s my assistant. I’m sorry, but there’s no room for anything deeper in this relationship.

This rant is inspired by a Carnegie Mellon University project in which a team of students is trying to make Alexa more chatty.

The 11-member Team Tartan is one of eight from around the world who have been awarded $250,000 to compete in the Alexa Prize Challenge.

These teams are to “create socialbots that can converse coherently and engagingly with humans on a range of current events and popular topics such as entertainment, sports, politics, technology, and fashion.”

The specific goal is to make Alexa hold a conversation for 20 minutes, which if you stop and think about it is a very long time to be talking with a digital assistant. Most queries with Alexa (or any digital assistant such as Siri or Google Assistant) last seconds, not the span of a Seinfeld episode.

KDKA, the CBS-affiliated local news station in Pittsburgh that interviewed the Tartan team, posed a good question: Do Alexa users even want to have conversations with it?

One Alexa user they spoke to said it’d depend on the technology, specifically on the accuracy of Alexa’s ability to hear your words. 

But I’m not convinced anyone really does want this. I was one of the first ever to use an Echo with Alexa voice controls and I’ve been using it daily for over three years. Not once have I wanted to have a more engaging conversation with it.

I also live alone, with no roommates and no pets (not even a fish), and I admit it does get lonely sometimes, but I still never feel the urge to chitchat with Alexa.

As digital assistants get smarter, it’s natural some people might want to converse with them for more than just the occasional “what’s the weather?” and “who’s leading the Eastern Conference playoffs in the NBA?” 

The Google Assistant is the best digital assistant there is because it understands context. If you ask it one thing, and then another, it understands that they’re probably related.

But this whole “digital assistants being our friends” thing reminds me of that one episode of Mr. Robot where FBI agent Dominique DiPierro lies in bed talking to Alexa. As she skulks under her sheets talking to what is essentially a robot, she realizes she has no friends and no relationships. You can see an illustrated reenactment of the entire scene below:

[embedded content]

“Alexa, wake me up at um…,” Dom tells Alexa.

“Sorry, I didn’t understand the question you were asking,” Alexa replies.

“Because I wasn’t asking a question you dumb bitch,” retorts Dom.

Alexa has no answer to this. Their conversation continues with an increasingly depressed Dom asking it “Are we friends?” This continues for a few more questions with her asking Alexa’s favorite color, the color of its eyes, and whether or not it loves her.

This. Is. All. So. Depressing.

It makes you realize that despite technology making life more convenient, it’s also made us all much more lonely. A conversation with Alexa might be a short-term cure for loneliness, but it won’t dig you out of your darkness over time. For that, nothing will ever be as good as social interactions with humans.

No matter how smart digital assistants get, they can never replace real exchanges between friends and family. They just can’t because at the end of the day, they’re pre-programmed to say and respond to a set of commands. 

Even if machine learning helps expand that, the vessels used to deliver digital assistants (an Echo, HomePod, smartphone, or whatever) are still machines incapable of true emotions. They know no sympathy or empathy. They’ll leave you feeling just as empty as before your phony conversation started.

Even talking to a dog or cat is more valuable than a digital assistant. Though they can’t speak back in human language, they still emote with facial expressions and body language. Alexa can’t do either of those things.

When you’re on your death bed, you’ll remember the conversations you had with those closest to you. Who’ll remember the lame chit-chat you had with Alexa? Not me. 

Alexa won’t sit in the park with me on a nice summer day. Alexa won’t go to the bar and be my shoulder to lean on when life keeps kicking my ass.

So no, I don’t need or want to have any convo with digital assistants. Turn on my lights, play my music, and read me the news. That’s all I need from them. But long, fake convos? No thanks. Let’s remember we’re humans. Go out and make friends in the real world. Let’s stop living like introverted hermits hiding behind our tech. 


Sergey Brin shouts out Ethereum and warns of AI in annual founders’ letter

When Sergey Brin talks, people listen. And right now Brin is talking about AI and cryptocurrency. 

As the president of Alphabet, Google’s parent company, Brin is uniquely positioned to not only see what technological developments are coming down the pike but to influence them as well. That makes the company’s annual founders’ letter an important indicator for the tech world. According to Brin, who wrote this year’s, the signs are mixed. 

He opens the letter by quoting from Charles Dickens’s A Tale of Two Cities, and proceeds from there to dive into the huge growth in computing power that he’s witnessed during his tenure at Google and Alphabet. 

“There are several factors at play in this boom of computing,” he writes. “First, of course, is the steady hum of Moore’s Law, although some of the traditional measures such as transistor counts, density, and clock frequencies have slowed. The second factor is greater demand, stemming from advanced graphics in gaming and, surprisingly, from the GPU-friendly proof-of-work algorithms found in some of today’s leading cryptocurrencies, such as Ethereum.”

That’s right, cryptocurrency, even if it turns out to not be good for anything else, has in Brin’s mind at least pushed computing forward. So there’s that. 

Like I said, the future.

Image: Kim Kulish/Getty

The main focus of the letter, however, is artificial intelligence. Brin highlights the Google products and services that benefit from neural networks, and the list is quite extensive. 

  • understand images in Google Photos;
  • enable Waymo cars to recognize and distinguish objects safely;
  • significantly improve sound and camera quality in our hardware;
  • understand and produce speech for Google Home;
  • translate over 100 languages in Google Translate;
  • caption over a billion videos in 10 languages on YouTube;
  • improve the efficiency of our data centers;
  • suggest short replies to emails;
  • help doctors diagnose diseases, such as diabetic retinopathy;
  • discover new planetary systems;
  • create better neural networks (AutoML);
  • … and much more.

But it’s not all roses. Brin brings us back down to earth by asking “how might [powerful tools] manipulate people” and whether or not they’re safe. 

And much like Elon Musk has done before him (albeit in a more subdued manner), Brin acknowledges that artificial intelligence could pose some not-quite-yet-understood risks. 

“Most notably, safety spans a wide range of concerns from the fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars.”

So there you have it. According to one of the most influential people in the world of tech, the future is bright — thanks in small part to cryptocurrency — and it will continue to be bright as long as AI doesn’t ruin it all for us. 

Fingers crossed. 


How this AI platform is taking over the business world

Despite the pop culture fiction that artificial intelligence is a far-away puzzle just waiting to be unlocked, the truth is that AI is already a tool that most of us use often, some of us every day.

Just think about it: every time you stop to ask Siri or Google Home or any of our friendly neighborhood pocket assistants for directions or help finding the nearest coffee spot, you’re interfacing with an elegant example of easy-to-use AI. 

For businesses, using chatbots as the first line of defense against customer complaints is now common, providing a handy way to triage queries and customer needs. Again, AI is already an efficient solution to an everyday problem. And while it’s easy to imagine moments where the lives of everyday people intersect with artificial intelligence, and even find examples of our interactions with AI within smaller businesses, for global enterprises the reality of large-scale AI has yet to unfold.

Enter Genpact. This global professional services firm looks to do just that: leverage artificial intelligence, at potentially massive scale, to ultimately bring us closer to making that future a reality. In this effort they’ve unveiled Genpact Cora, an interconnected platform of best-in-class technologies that span from robotic automation to advanced data visualization to artificial intelligence.  

What global business hasn’t dreamed of a future where artificial intelligence can realistically help solve challenges at scale? Just think about it: in the same way that we use virtual assistants in our daily lives to help us navigate a busy schedule or find quick solutions to our everyday problems, there could be tech just on the horizon that helps businesses operate more smoothly, in much the same ways.

There’s a key difference between Genpact Cora and the helpful virtual assistant in your pocket, however. Nitin Bhat, Senior Vice President at Genpact, explains that because Cora is built on AI and other advanced technologies, it can learn and prescribe what actions are needed for business clients to improve their processes and improve their competitive position in the market.

The implications of this are huge. For businesses eager to bring AI into the fold, it’s not about flashy new products and dazzling tech, but more importantly about making those tools work harder to meet businesses’ goals and prove out the value inherent in the tech itself.

While AI continues to fundamentally change the way businesses at the enterprise level engage with both customers and other businesses, the truth is that it is largely an unmet need. Genpact estimates that more than 70 percent of enterprise processes can be automated, for example, by robotic process automation or through machine learning and intelligent automation.

More than 70 percent of enterprise processes can be automated.

While this type of technology is readily available, adoption depends a lot on a company’s culture and DNA. Some functions may be ready for AI-driven automation, while others would benefit more from robotic automation, for example.

“With the explosion of new digital technologies and solution options, leaders are struggling to determine how to best exploit these disruptive digital innovations in a pragmatic, industrialized (at scale) and risk-mitigated manner,” Bhat says. Just like with the virtual assistants we use in our daily lives, businesses too must rely on the harmony between machine and humans to ensure that things run smoothly. 

“Math, science and data analytics will still have a huge role to play, but as machines do more of that work, it will have little value without the human connection of creativity, emotion, judgment, and relationships,” Bhat says. Ultimately, a command and control center that is led by humans and powered by AI can reduce the risk of errors in how these tools are used.

There’s still so much remaining to be discovered in the future of AI, but Genpact believes it is on the cutting edge of the industry, being the first to combine automation, analytics, and AI engines in one platform designed to bring humans and machines together. As with many new tech tools, there’s still a lot of hype and unrealistic expectations of what the technologies can do, but AI adoption at the large-scale enterprise level is now more real than ever, and Genpact is leading the charge in that effort.

“Digitization, AI, and other emerging technologies are forcing companies to refresh the way they serve their customers. Just like what we’re seeing in the Twitter universe with real time, the most successful companies are the ones who adjust their strategies in real time to reflect current needs and upcoming trends,” Bhat says. 

Find out more about the implications of AI and Genpact Cora here.


Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA, two types of custom silicon that companies can gear toward specific use cases — particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. The whispers about Facebook’s customized hardware vary depending on who you talk to, but generally center on operating on the massive graph of personal data Facebook possesses. Most in the industry speculate that it’s being optimized for Caffe2, the deep learning framework deployed at Facebook, which would help it tackle those kinds of complex problems.

FPGA is a more flexible and modular design, which Intel has championed as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGA is that it is a niche piece of hardware that is complex to calibrate and modify, as well as expensive, making it less of a cover-all solution for machine learning projects. ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design.

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could eventually push Intel out of its line of computers. Microsoft is also diving into FPGA as a potential approach for machine learning problems.

Still, that Facebook is looking into ASIC and FPGA does seem to be just that — dipping its toes into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there are also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic, and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this looks like.

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGA as a solution is that it is hard to find developers who specialize in it. And while these kinds of problems are becoming much more interesting, it’s not clear whether this is more of an experiment than a full commitment to custom hardware for Facebook’s operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set is becoming so large that, if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook did not immediately return a request for comment.