All posts in “Big Tech Companies”

Google adds food delivery to Maps and search results

You can now get dinner without leaving Google Maps.

Image: Interim Archives / Getty Images

Google just added yet another reason to never have to leave its services.

The company is now adding food delivery to the lineup of things you can do directly in Maps and Search without switching to a separate app. Now, when you search for restaurants in either Maps or Search, you can place an order with a new “Order Online” button.

In some ways, it’s similar to the way Google added rideshare services to Google Maps. Like those integrations, you can get a look at multiple delivery services available for each restaurant, along with info about relevant delivery fees. The feature will include DoorDash, Postmates, Delivery.com, Slice, and ChowNow to start, with more services being added in the future.

But while you still need to switch apps to actually hail a ride, you can complete your full food order without leaving Maps or Search. (Payment will be handled in the app via Google Pay.)

The company is also adding food delivery to its Assistant app, so you can place orders with your voice, or re-order a previous meal. Google says the Assistant functionality is limited to its mobile app for now. But it seems like a feature it could eventually bring to its smart speakers, especially now that Google is opening up more functionality of its smart displays to third parties.


Amazon is reportedly making a wearable device meant to read your emotions

Amazon is at the forefront of selling speakers that listen to people’s conversations. However, this rumored development reaches a whole new level of creepy.

Bloomberg reported on Thursday that the e-commerce giant is developing a device to be worn on the wrist that would recognize its users' emotional state through their voice commands. The report is based on internal documents Bloomberg acquired, as well as Amazon patent filings from recent years that line up with the device's supposed feature set.

Internally called Dylan, the wearable will supposedly work with a smartphone app to analyze your voice and figure out how you’re feeling. From there, it could do all sorts of things, such as recommend a specific meal or product. Bloomberg’s report also suggested the device could help wearers get better at interacting with others, but it’s unclear how exactly that would work.

To be clear, this thing may never see the light of day. Companies develop and scrap ideas without publicly revealing them all the time. When contacted by Mashable, Amazon declined to comment. 

Still, this fits into Amazon’s larger plan to be a part of customers’ lives as much as possible. The former online bookstore now has a line of voice-activated speakers with their own privacy controversies, as well as a growing brick-and-mortar retail operation. It’s even rumored to be working on an AirPods competitor.

Emotional analysis might seem a little over-the-top for Amazon, but the company has been active when it comes to facial recognition and body scanning tech.

Google had to reconfigure Glass for business purposes partly because it was creepy.


Image: Robert Couto Photography / Google

If it does get a public release, it will be fascinating to see how it’s received by consumers. Google Glass had to be converted into an enterprise product partially because people found it creepy. Have things changed enough since 2015 for an emotion-reading wearable to succeed?

Regardless, we might be doomed if we need a watch to tell us how to talk to other people.


Mark Zuckerberg reportedly made a fake, racist social media profile in Cameron Winklevoss’ name

So funny.

Image: David L. Ryan / The Boston Globe / Getty

If you believe Ben Mezrich's reporting, young Mark Zuckerberg was a huge asshole. Like, the make-a-fake-and-racist-social-media-profile-for-someone-you-don't-like kind of asshole.

The author of The Accidental Billionaires, the 2009 basis for Aaron Sorkin’s The Social Network, is back at it again with some wild claims about the Facebook CEO’s early days. Specifically, while promoting his latest book on The Jim Rome Show, Mezrich alleged that Zuckerberg once made a fake online profile for Cameron Winklevoss that just so happened to be sexist and racist. 

The book, Bitcoin Billionaires: A True Story of Genius, Betrayal, and Redemption, focuses on both Bitcoin and the Winklevoss twins. In it, Mezrich claims to document an episode in which Zuckerberg hacked the Winklevosses' ConnectU website (previously known as HarvardConnection, and the supposed inspiration for Facebook) in order to make a fake account for Cameron. He bases the accusation on instant messages that were shared after The Social Network was written.

“Zuckerberg lied to [the Winklevoss twins], he planned on screwing them over,” Mezrich explains. “He actually hacked into their program and made a fake profile of Cameron Winklevoss through of all this racist, like sexist, crazy stuff. And all this stuff never came out.”


Importantly, while there’s no doubt that young Zuckerberg was a piece of work, it’s worth taking Mezrich’s latest reporting with a grain of salt. While Mashable has not had a chance to review the book ourselves, the New York Times has — and one specific line from said review sticks out. 

“And then there is Mezrich’s jarring disclosure at the outset that some details and settings described in the book are ‘imagined,'” writes the Times‘ David Enrich. “It is hard to overcome the impression that large swaths of the book fall into that fictional zone.”

So, yeah. We don’t know if this little tidbit falls into that “imagined” category or not. We do, however, have one definitely not imagined piece of evidence that Zuckerberg was contemptuous of his collegiate contemporaries. In blunt IMs he sent to a friend during the early days of Facebook, which were published by a Silicon Valley gossip blog, he wrote about the personal information Harvard students provided him.

“People just submitted it,” he wrote. “I don’t know why. They ‘trust me.’ Dumb fucks.”


Amazon shareholders shut down proposal to limit facial recognition sales

I love your face, alive girl.

Image: Saul Loeb / Getty

Amazon will sell its facial-recognition technology to whomever it damn well pleases, thank you very much.

That message was loud and clear Wednesday morning at the company’s annual shareholder meeting, where two proposals meant to regulate the sale and investigate the use of Amazon’s Rekognition technology were opposed by the company and voted down by shareholders. The failed effort to limit the sale of its controversial product to law enforcement comes at a time when Rekognition is increasingly being criticized for biases and false positives.

Amazon confirmed to Mashable via email that both proposals failed. Our questions regarding the specific vote tallies on each, as well as the company’s response to criticism over Rekognition, were not answered.  

The first of the two proposals, if passed, would have at least temporarily stopped the sale of Rekognition to governments around the world. 

“[Shareholders] request that the Board of Directors prohibit sales of facial recognition technology to government agencies unless the Board concludes, after an evaluation using independent evidence, that the technology does not cause or contribute to actual or potential violations of civil and human rights,” it read.

The second measure requested an “independent study” of Rekognition and a subsequent report to shareholders detailing, among other things, “[the] extent to which such technology may endanger, threaten, or violate privacy and or civil rights, and unfairly or disproportionately target or surveil people of color, immigrants and activists in the United States[.]”

Amazon, which in January of this year was very publicly criticized for selling Rekognition to the feds, flat-out rejected both.

On Wednesday morning, before the vote, the ACLU urged Amazon to change its surveillance tech policies. 

“We’re at @Amazon’s shareholder meeting today urging shareholders to take action in response to the company’s failure to address the civil rights impacts of its face surveillance technology,” tweeted the civil liberties organization.

Some of the company’s shareholders, though not yet a voting majority, are clearly receptive to that message.

And so with the calls for reform coming from both inside and outside its house, Amazon won’t be able to put the issue of Rekognition behind it anytime soon — even with its victory today. But hey, moral and ethical headwinds have never stopped Amazon’s continuous march toward dominance before. 


Now even the U.N. is worried about sexism in voice assistants

Please, let this man explain!

Image: Sergii Kharchenko/NurPhoto via Getty Images

The U.N. is not here for Siri’s sexist jokes.

The United Nations Educational, Scientific, and Cultural Organization (UNESCO) has published an in-depth report about how women, girls, and the world as a whole, lose when technical education and the tech sector exclude women. 

Within the report is a razor-sharp section about the phenomenon of gendered A.I. voice assistants, like Siri or Alexa. The whole report is titled “I’d blush if I could,” a reference to the almost flirtatious response Siri would give a user if they said, “Hey Siri, you’re a bitch.” (Apple changed the voice response in April 2019).

“Siri’s ‘female’ obsequiousness – and the servility expressed by so many other digital assistants projected as young women – provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education,” the report reads.

The report is thorough and wide-ranging in its purpose of arguing for promoting women’s educational and professional development in tech. That makes the fact that it seizes on voice assistants as an illustration of this gargantuan problem all the more impactful.

The report analyzes inherent gender bias in voice assistants for two purposes: to demonstrate how unequal workplaces can produce sexist products, and how sexist products can perpetuate dangerous, misogynistic behaviors. 

“The limited participation of women and girls in the technology sector can ripple outward with surprising speed, replicating existing gender biases and creating new ones,” the report reads. 

Many news outlets, including Mashable, have reported on how AI can take on the prejudices of its makers. Others have decried the sexism inherent in default-female voice assistants, compounded when these A.I.s demur when a user sends abusive language “her” way.

Now, even the U.N. is coming for sexism in artificial intelligence — showing that there’s nothing cute about Siri or Cortana’s appeasing remarks.

It’s startling to comprehend the sexism coded into these A.I. responses to goads from users. It’s almost as if the A.I. takes on the stance of a woman who walks the tightrope of neither rebuking, nor accepting, the unwanted advances or hostile language of someone who has power over “her.”

Coy A.I. responses to abusive language are illustrative of the problem of sexism in A.I., but the report takes issue with the larger default of voice assistants as female, as well. The report details how these decisions to make voice assistants female were wholly intentional, and determined by mostly male engineering teams. These product decisions, however, have troublesome consequences when it comes to perpetuating misogynistic gender norms. 

“Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command,” the report reads. “The assistant holds no power of agency beyond what the commander asks of it. It honours commands and responds to queries regardless of their tone or hostility. In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment.”

For these reasons, the report argues that it is crucial to include women in the development process of A.I. It’s not enough, the report says, for male engineering teams to address their biases — for many biases are unconscious.

If we want our world — that will increasingly be run by A.I. — to be an equal one, women have to have an equal hand in building it.
