Why it makes sense for Twitter to take on Snapchat Discover

Facebook may not be the only competitor Snap needs to worry about. 

Twitter is reportedly working on a new Snapchat Discover-like feature that emphasizes photos and videos tied to specific events, according to CNBC.

The news follows an earlier report from Bloomberg that Twitter was working on a Snapchat-like camera feature.

Details are scarce, and CNBC notes that the feature may never end up seeing the light of day, but the “camera first feature” is apparently meant to encourage people to share more photos and videos associated with news that often breaks via Twitter.

Such a feature would directly compete with elements of Snapchat Discover, as well as Snap Maps — both of which curate Stories-based current events. A Twitter spokesperson declined to comment on the report, but it’s not surprising Twitter would want to build such a feature.

Increasingly, Snapchat has become a go-to source to get a first-person view of breaking news. Time and time again, the best place to get a raw, unfiltered look at what’s happening during a major event like the Olympics — or, more tragically, during a news event like a natural disaster or a mass shooting — is not Twitter but Snapchat.

If you’re Twitter, which prides itself on being the platform for “what’s happening,” this is a worrying trend. The company wants to ensure that its platform stays top of mind when news happens. Incorporating a way for users to more easily share photos and videos — which could then be pulled into Moments or other features that highlight breaking news — would make a lot of sense.

Whether or not it would have any measurable impact on Snapchat is another matter. But it’s a good way to ensure that Twitter retains its reputation as a go-to source for news.

Now we know why Siri was so dumb for so long

Seven years after Siri launched on the iPhone 4S, it’s still not as smart as it should be.

Image: Jhila Farzaneh/Mashable

It’s no secret that Siri is way behind other voice assistants like the Google Assistant and Amazon’s Alexa when it comes to comprehension and total number of skills. 

Apple has drastically improved Siri over the years, adding new features and upgrading its voice to sound more human-like, but its ongoing shortcomings really revealed themselves in the recent launch of the HomePod, the company’s first product that’s almost entirely controlled by the voice assistant.

So how did Apple screw up Siri so badly when it was released so far in advance of the competition? A new report from The Information reveals how years of missteps left Siri eating dust.

According to the report, after acquiring the original Siri app in 2010 for $200 million, Apple proceeded to quickly integrate the digital assistant into the iPhone 4S in 2011. There was so much potential for Siri, and Apple promised to bring voice controls to the masses just as it did multi-touch on the original iPhone.

Except the voice-controlled computing revolution never quite happened the way Apple predicted. iPhone users quickly realized that Siri couldn’t do a lot of things. And even after Apple opened Siri up with SiriKit in 2016, it still isn’t as intelligent as the Google Assistant or Alexa.

So what the heck happened?

According to The Information, it all went downhill after Steve Jobs died in 2011. His death marked the beginning of Siri’s decline.

Instead of continuously updating Siri so that it would get smarter faster, Richard Williamson, one of former iOS chief Scott Forstall’s deputies, reportedly wanted to update the assistant only annually, to coincide with new iOS releases.

This is, of course, not how a digital assistant should be treated. As Google and Amazon have demonstrated, digital assistants need to be constantly updated in the background in order to keep up with the ever-changing demands of their users.

Williamson denies the accusation that he slowed Siri’s development, and instead casts blame on Siri’s creators.

“It was slow, when it worked at all,” Williamson said. “The software was riddled with serious bugs. Those problems lie entirely with the original Siri team, certainly not me.”

Other problems over the years included layering new elements on top of Siri using technologies culled from new acquisitions. For example, the Siri team had issues integrating new search features from Apple’s acquisition of Topsy in 2013 and natural language features from the VocalIQ acquisition in 2015.

“Members of the Topsy team expressed a reluctance to work with a Siri team they viewed as slow and bogged down by the initial infrastructure that had been patched up but never completely replaced since it launched.”

Frustrated by all the patching they were doing to Siri, engineers reportedly considered starting over from scratch. Instead of building on top of Siri’s reportedly bad infrastructure, they would rebuild Siri from the ground up — correctly the second time around. Of course, when you’re serving hundreds of millions of users across all of Apple’s devices, that’s a tall task.

The most revealing part of the report exposes how Apple didn’t even have plans to integrate Siri into HomePod until after the Amazon Echo launched:

In a sign of how unprepared Apple was to deal with a rivalry, two Siri team members told The Information that their team didn’t even learn about Apple’s HomePod project until 2015—after Amazon unveiled the Echo in late 2014. One of Apple’s original plans was to launch its speaker without Siri included, according to a source.

Right now, it looks like Siri won’t be blown up and rebuilt. And if Apple wants to transform its assistant into a true competitor to the Google Assistant and Alexa, it’ll need to sort out its internal management issues and decide what it really wants Siri to be. For users’ sake, we hope that means more intelligence and deeper integration with third-party apps and services.

Little Caesars patents a pizza-making robot

A robotic waitress delivers a pizza at a restaurant in Pakistan.

Image: SS Mizra/AFP/Getty Images

Robots can already complete a wide variety of tasks for their human overlords, but they may soon conquer the final frontier: making pizzas.

As first reported by ZDNet, Little Caesars has received a new patent for an “automated pizza assembly system,” or what is essentially a robot that makes pizza.

The patent describes it as “a robot including a stationary base and an articulating arm having a gripper attached to the end is operable to grip a pizza pan having pizza dough therein.”

Little Caesars’ patented robot from the side.

Image: Screenshot: Monica Chin/Little Caesars

The robot will then rotate the pizza pan through “the cheese spreading station” and the “pepperoni applying station.” The patent claims that the robot and its stations will “properly distribute the cheese and pepperoni on the pizza.” 

This patent isn’t all that surprising when you consider how quickly the entire fast-food industry has moved toward automation. Establishments like McDonald’s and Wal-Mart already have robots heavily involved in their most basic procedures. Even the smaller burger chain CaliBurger has a burger-flipping robot of its own, though it’s currently on unpaid leave. It’s worth noting that CaliBurger’s robot worker also requires humans to prepare buns and place patties on its grill.

This new Little Caesars patent doesn’t necessarily mean a pizza-making robot is coming to your neighborhood anytime soon, or even that it will come at all. Still, it’s an exciting sign for anyone who hates to cook but loves to eat pizza. Its widespread use could mean a more efficient kitchen, and free up time for employees to focus on customer service — plus maybe it will lower the cost of making an already dirt-cheap $5 hot-and-ready pizza.

Equifax exec who sold nearly $1 million in shares charged with insider trading

Equifax’s former chief information officer has been indicted for insider trading, making him the first executive to face criminal charges following the company’s massive data breach that exposed the personal data of more than 145 million Americans.

Jun Ying, who was CIO at the time the company was hacked last summer, will be arraigned in federal court this week on charges of insider trading, according to the Department of Justice.

For a CIO at a financial company, Ying didn’t exactly do a great job at covering his tracks. 

According to a DOJ statement, following a meeting on a Friday, he texted a coworker: “Sounds bad. We may be the one breached.” The next Monday morning, he searched the web to see how a data breach had affected the stock price of competitor Experian. Later that same morning, he exercised all the stock options available to him.

He then sold the shares — a move that netted him $950,000 before Equifax’s data breach was made public. Had he sold after the breach, he would have lost $117,000, according to a statement from the SEC.

Stunningly, Ying is not the only executive who has faced scrutiny for selling shares ahead of Equifax’s public disclosure of the breach. Three other top executives — including its chief financial officer, president of workforce solutions, and president of U.S. information solutions — also dumped hundreds of thousands of dollars in shares just days before the company alerted the public to the breach.

Neither the SEC nor the DOJ has commented on those cases.

Meet the man whose voice became Stephen Hawking’s

A man and a voice who will be missed.

Image: Karwai Tang/Getty Images

Stephen Hawking’s computer-generated voice is so iconic that it’s trademarked — the filmmakers behind The Theory of Everything had to get Hawking’s personal permission to use the voice in the biopic.

But that voice has an interesting origin story of its own.

Back in the ’80s, when Hawking was first exploring text-to-speech communication options after he lost the power of speech, a pioneer in computer-generated speech algorithms was working at MIT on that very thing. His name was Dennis Klatt.

As Wired uncovered, Klatt’s work was incorporated into one of the first devices that translated text into speech: the DECtalk. The company that made the speech synthesizer for Hawking’s very first computer used the voice Klatt had recorded for computer synthesis. The voice was called ‘Perfect Paul,’ and it was based on recordings of Klatt himself.

In essence, Klatt lent his voice to the program that would become known the world over as the voice of Stephen Hawking.

Hawking passed away on Wednesday at the age of 76. The renowned cosmologist lived with amyotrophic lateral sclerosis, or ALS, for 55 years. His death has prompted an outpouring of love, support, and admiration for his work and his inspirational outlook on life. It’s also prompted reflection on how he managed to have such an enormous impact on science and the world, when his primary mode of communication for the last four decades was a nerve sensor in his cheek that allowed him to type, and a text-to-speech computer. 

Though Hawking had used the voice for only a short time, it quickly became his own. According to Wired, when the company that produced the synthesizer offered Hawking an upgrade in 1988, he refused it. Even recently, as Intel worked on software upgrades for Hawking over the last decade, its engineers searched through the dusty archives of a long-since-acquired company so they could use the original Klatt-recorded voice, at Hawking’s request.

Klatt was an American engineer who passed away in 1989, just a year after Hawking insisted on keeping ‘Perfect Paul’ as his own. He was a member of MIT’s Speech Communication Group, and according to his obituary, had a special interest in applying his research in computational linguistics to assist people with disabilities.

Hawking was known to defend and champion his voice. During a 2014 meeting, the Queen jokingly asked the British Hawking, “Have you still got that American voice?” Hawking, like the sass machine that he was, replied, “Yes, it is copyrighted actually.”

Hawking didn’t actually consider his voice fully “American.” In a section of his website entitled “The Computer,” Hawking explained his voice technology:

“I use a separate hardware synthesizer, made by Speech Plus,” he writes. “It is the best I have heard, although it gives me an accent that has been described variously as Scandinavian, American or Scottish.”

It’s an accent, and a voice, that will be missed.

You can find Hawking’s last lecture, which he gave in Japan earlier this month, on his website. It’s called ‘The Beginning of Time.’
