All posts in “Microsoft”

Microsoft confirms plans for a new flagship store in Regent Street opposite Apple

Shopping may be turning into an increasingly virtual experience, with people buying goods online and through apps, but there is no denying the power of a physical in-store experience — a lesson that Microsoft is taking to heart. Today the company announced that it would be opening a new flagship store in London, on Regent Street near Oxford Circus — just a stone’s (or an iPhone’s) throw from the Apple flagship store that saw a huge revamp a year ago.

The area around Oxford Circus, which is at the intersection of Oxford Street and Regent Street, is one of the most high-profile shopping precincts in the world, so having a presence there underscores Microsoft’s strategy to double down on retail.

“We couldn’t be happier to be opening a flagship store in the heart of central London at Oxford Circus, where two of the world’s most iconic shopping streets meet,” said Cindy Rose, head of Microsoft in the U.K. “We know our customers and fans, whether they are from London, the broader U.K. or just visiting, will love our bold plans for the space. This will be so much more than just a great place to experience all that is possible with Microsoft, but a real hub for the community where we’ll be bringing to life our passion for helping people explore their creativity through an ambitious program of workshops and training along with moments that work to unite the community.”

The announcement comes after a day of speculation about the new store, but also after what has seemed like years of stops and starts for Microsoft and its retail plans in London.

Back in 2012, it was reported that Microsoft planned to open a retail store in the city in March 2013, after it emerged that the company had registered a separate company in the UK to handle a retail business. But it seems that the March 2013 launch never happened.

Then in 2015, yet more reports emerged, this time noting that the company might have abandoned its plans for a store after all, after the dissolution of that 2012 entity.

But the story didn’t end there. Later in 2015, yet more rumors emerged of a retail plan for Microsoft in London. But once more nothing materialised.

And in a way, the starts and stops are not too surprising. The company’s mobile efforts, which had been seen as a major part of its consumer business, were floundering, and while Apple has been nailing retail for years, others that tried to replicate the success had failed. Notably, Samsung retreated from its own physical store efforts in London, which had been located in a mall, not in the Regent Street area.

Now, it may be third time lucky for the Xbox and Windows maker. Notably, today’s announcement seems to be the first time that the company has officially confirmed any plans directly.

There is no timescale noted in Microsoft’s announcement. What is more obvious is that this is as much about creating a showcase for Microsoft products — in a symbolic location right near Apple’s store — as it is about creating a space where people can buy those products.

“The United Kingdom is home to some of our most passionate fans,” David Porter, head of Microsoft Stores, writes in his blog post announcing the news. “We already enjoy connecting through our partners and in our digital stores, and look forward to bringing a physical store to the region as another great choice for customers to experience the best technology from Microsoft.”

There are currently 75 Microsoft Stores globally, with two flagships, in New York (pictured above, also very close to Apple’s ‘cube’ store) and Sydney.

While it’s not clear how much Microsoft makes from its retail stores today, the gold standard shows that physical retail can clearly be a lucrative business: Apple is currently the world’s most lucrative retailer, according to research from CoStar, which said that the Mac and iPhone company made $5,546 per square foot over the last year. By comparison, the average retailer made a mere $325 per square foot.

Big fat Stratolaunch plane tests all six of its engines

Remember the “Roc”? The awe-inspiring, gigantic carrier aircraft from Stratolaunch, the company owned by Microsoft cofounder Paul Allen?

Back in June, it was ready to make its first rollout for a series of tests, starting with fuel testing. 

Well, this week the six-engine mothership completed its first round of engine testing, as Allen noted in a tweet.

As explained by Stratolaunch in a blog, the engine testing was divided into three phases:

First as a ‘dry motor,’ where we used an auxiliary power unit to charge the engine. Next, as a ‘wet motor,’ where we introduced fuel. Finally, each engine was started one at a time and allowed to idle. In these initial tests, each of the six engines operated as expected.

With its 385-foot wingspan — longer than a football field — and 500,000-pound weight, the “Roc” is a big fat aircraft that will eventually carry up to three rockets and launch them into space — which sounds awesome. 

Besides the fueling tests, the company trialled the flight control, electrical, pneumatic and fire detection systems.

“Over the next few months, we will continue to test the aircraft’s engines at higher power levels and varying configurations, culminating to the start of taxi tests,” Jean Floyd, CEO at Stratolaunch, said.

Tech giants told to remove extremist content much faster

Tech giants are once again being urged to do more to tackle the spread of online extremism on their platforms. Leaders of the UK, France and Italy are taking time out at a UN summit today to meet with Google, Facebook and Microsoft.

This follows an agreement in May for G7 nations to take joint action on online extremism.

The possibility of fining social media firms which fail to meet collective targets for illegal content takedowns has also been floated by the heads of state. Earlier this year the German government proposed a regime of fines for social media firms that fail to meet local takedown targets for illegal content.

The Guardian reports today that the UK government wants the time it takes to remove online extremist content to be greatly sped up — from an average of 36 hours down to just two.

That’s a considerably narrower timeframe than the 24 hour window for performing such takedowns agreed within a voluntary European Commission code of conduct which the four major social media platforms signed up to in 2016.

Now the group of European leaders, led by the UK Prime Minister Theresa May, apparently want to go even further by radically squeezing the window of time before content must be taken down — and they apparently want to see evidence of progress from the tech giants in a month’s time, when their interior ministers meet at the G7.

According to UK Home Office analysis, ISIS shared 27,000 links to extremist content in the first five months of 2017 and, once shared, the material remained available online for an average of 36 hours. That, says May, is not good enough.

Ultimately the government wants companies to develop technology to spot extremist material early and prevent it being shared in the first place — something UK Home Secretary Amber Rudd called for earlier this year.

In June, the tech industry banded together to offer a joint front on this issue, under the banner of the Global Internet Forum to Counter Terrorism (GIFCT) — through which the companies said they would collaborate on engineering solutions, sharing content classification techniques and effective reporting methods for users.

The initiative also includes sharing counterspeech practices as another string for them to publicly pluck to respond to pressure to do more to eject terrorist propaganda from their platforms.

In response to the latest calls from European leaders to enhance online extremism identification and takedown systems, a GIFCT spokesperson provided the following responsibility-distributing statement:

Combatting terrorism requires responses from government, civil society and the private sector, often working collaboratively. The Global Internet Forum to Counter Terrorism was founded to help do just this and we’ve made strides in the past year through initiatives like the Shared Industry Hash Database.  We’ll continue our efforts in the years to come, focusing on new technologies, in-depth research, and best practices. Together, we are committed to doing everything in our power to ensure that our platforms are not used to distribute terrorist content.

Monika Bickert, Facebook’s director of global policy management, is also speaking at today’s meeting with European leaders — and she’s slated to talk up the company’s investments in AI technology, while also emphasizing that the problem cannot be fixed by tech alone.

“Already, AI has begun to help us identify terrorist imagery at the time of upload so we can stop the upload, understand text-based signals for terrorist support, remove terrorist clusters and related content, and detect new accounts created by repeat offenders,” Bickert was expected to say today.

“AI has tremendous potential in all these areas — but there still remain those instances where human oversight is necessary. AI can spot a terrorist’s insignia or flag, but has a hard time interpreting a poster’s intent. That’s why we have thousands of reviewers, who are native speakers in dozens of languages, reviewing content — including content that might be related to terrorism — to make sure we get it right.”

In May, following various media reports about moderation failures on a range of issues (not just online extremism), Facebook announced it would be expanding the number of human reviewers it employs — adding 3,000 to the existing 4,500 people it has working in this capacity. Although it’s not clear over what time period those additional hires were to be brought in.

But the vast size of Facebook’s platform — which passed two billion users in June — means even a team of 7,500 people, aided by the best AI tools that money can build, surely has a forlorn hope of keeping on top of the sheer volume of user generated content being distributed daily on its platform.

And even if Facebook is prioritizing takedowns of extremist content (vs moderating other types of potentially problematic content), it’s still facing a staggeringly massive haystack of content to sift through, with only a tiny team of overworked (yet, says Bickert, essential) human reviewers attached to this task, at a time when political thumbscrews are being turned on tech giants to get much better at nixing online extremism — and fast.

If Facebook isn’t able to deliver the hoped-for speed improvements in a month’s time, it could raise awkward political questions about why it’s not able to improve its standards, and perhaps invite greater political scrutiny of the small size of its human moderation team vs the vast size of the task it has to do.

Yesterday, ahead of meeting the European leaders, Twitter released its latest Transparency Report covering government requests for content takedowns, in which it claimed some big wins in using its own in-house technology to automatically identify pro-terrorism accounts — including specifying that it had also been able to suspend the majority of these accounts (~75%) before they were able to tweet.

The company, which has only around 328M monthly active users (and inevitably a far smaller volume of content to review vs Facebook), revealed it had closed nearly 300,000 pro-terror accounts in the past six months, and said government reports of terrorism accounts had dropped 80 per cent since its prior report.

Twitter argues that terrorists have shifted much of their propaganda efforts elsewhere — pointing to messaging platform Telegram as the new tool of choice for ISIS extremists. This is a view backed up by Charlie Winter, senior research fellow at the International Center for the Study of Radicalization and Political Violence (ICSR).

Winter tells TechCrunch: “Now, there’s no two ways about it — Telegram is first and foremost the centre of gravity online for the Islamic State, and other Salafi jihadist groups. Places like Twitter, YouTube and Facebook are all way more inhospitable than they’ve ever been to online extremism.

“Yes there are still pockets of extremists using these platforms but they are, in the grand scheme of things, and certainly compared to 2014/2015, vanishingly small.”

Discussing how Telegram is responding to extremist propaganda, he says: “I don’t think they’re doing nothing. But I think they could do more… There’s a whole set of channels which are very easily identifiable as the keynotes of Islamic State propaganda dissemination, that are really quite resilient on Telegram. And I think that it wouldn’t be hard to identify them — and it wouldn’t be hard to remove them.

“But were Telegram to do that the Islamic State would simply find another platform to use instead. So it’s only ever going to be a temporary measure. It’s only ever going to be reactive. And I think maybe we need to think a little bit more outside the box than just taking the channels down.”

“I don’t think it’s a complete waste of time [for the government to still be pressurizing tech giants over extremism],” Winter adds. “I think that it’s really important to have these big ISPs playing a really proactive role. But I do feel like policy or at least rhetoric is stuck in 2014/2015 when platforms like Twitter were playing a much more important role for groups like the Islamic State.”

Indeed, Twitter’s latest Transparency Report shows that the vast majority of recent government reports pertaining to its content involve complaints about “abusive behavior”.  Which suggests that, as Twitter shrinks its terrorism problem, another long-standing issue — dealing with abuse on its platform — is rapidly zooming into view as the next political hot potato for it to grapple with.

Meanwhile, Telegram is an altogether smaller player than the social giants most frequently called out by politicians over online extremism — though not a tiddler by any means, announcing it had passed 100M monthly users in February 2016.

But not having a large and fixed corporate presence in any country makes the nomadic team behind the platform — led by Russian exile Pavel Durov, its co-founder — an altogether harder target for politicians to wring concessions from. Telegram is simply not going to turn up to a meeting with political leaders.

That said, the company has shown itself responsive to public criticism about extremist use of its platform. In the wake of the 2015 Paris terror attacks it announced it had closed a swathe of public channels that had been used to broadcast ISIS-related content.

It has apparently continued to purge thousands of ISIS channels since then — claiming it nixed more than 8,800 this August alone, for example. Nonetheless, this level of effort does not appear enough to persuade ISIS of the need to switch to another app platform with lower ‘suspension friction’ to continue spreading its propaganda. So it looks like Telegram needs to step up its efforts if it wants to ditch the dubious honor of being known as the go-to platform for ISIS et al.

“Telegram is important to the Islamic State for a great many different reasons — and to other Salafi jihadist groups too, like Al-Qaeda or Harakat Ahrar ash-Sham al-Islamiyya in Syria,” says Winter. “It uses it first and foremost… for disseminating propaganda — so whether that’s videos, photo reports, newspaper, magazine and all that. It also uses it on a more communal basis, for encouraging interaction between supporters.

“And there’s a whole other layer of it that I don’t think anyone sees really which I’m talking about in a hypothetical sense because I think it would be very difficult to penetrate where the groups will be using it for more operational things. But again, without being in an intelligence service, I don’t think it’s possible to penetrate that part of Telegram.

“And there’s also evidence to suggest that the Islamic State actually migrates onto even more heavily encrypted platforms for the really secure stuff.”

Responding to the expert view that Telegram has become the “platform of choice for the Islamic State”, Durov tells TechCrunch: “We are taking down thousands of terrorism-related channels monthly and are constantly raising the efficiency of this process. We are also open to ideas on how to improve it further, if… the ICSR has specific suggestions.”

As Winter hints, there’s also terrorist chatter that concerns governments taking place out of the public view — on encrypted communication channels. And this is another area where the UK government especially has, in recent years, ramped up political pressure on tech giants (for now European lawmakers appear generally more hesitant to push for a decrypt law; while the U.S. has seen attempts to legislate, nothing has yet come to pass on that front).

End-to-end encryption still under pressure

A Sky News report yesterday, citing UK government sources, claimed that Facebook-owned WhatsApp had been asked by British officials this summer to come up with technical solutions to allow them to access the content of messages on its end-to-end encrypted platform to further government agencies’ counterterrorism investigations — effectively asking the firm to build a backdoor into its crypto.

This is something the UK Home Secretary, Amber Rudd, has explicitly said is the government’s intention. Speaking in June she said it wanted big Internet firms to work with it to limit their use of e2e encryption. And one of those big Internet firms was presumably WhatsApp.

WhatsApp apparently rejected the backdoor demand put to it by the government this summer, according to Sky’s report.

We reached out to the messaging giant to confirm or deny Sky’s report but a WhatsApp spokesman did not provide a direct response or any statement. Instead he pointed us to existing information on the company’s website — including an FAQ in which it states: “WhatsApp has no ability to see the content of messages or listen to calls on WhatsApp. That’s because the encryption and decryption of messages sent on WhatsApp occurs entirely on your device.”

He also flagged up a note on its website for law enforcement which details the information it can provide and the circumstances in which it would do so: “A valid subpoena issued in connection with an official criminal investigation is required to compel the disclosure of basic subscriber records (defined in 18 U.S.C. Section 2703(c)(2)), which may include (if available): name, service start date, last seen date, IP address, and email address.”

Facebook CSO Alex Stamos also previously told us the company would refuse to comply if the UK government handed it a so-called Technical Capability Notice (TCN) asking for decrypted data — on the grounds that its use of e2e encryption means it does not hold encryption keys and thus cannot provide decrypted data — though the wider question is really how the UK government might then respond to such a corporate refusal to comply with UK law.

Properly implemented e2e encryption ensures that the operators of a messaging platform cannot access the contents of the missives moving around the system. Although e2e encryption can still leak metadata — so it’s possible for intelligence on who is talking to whom and when (for example) to be passed by companies like WhatsApp to government agencies.
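The content/metadata split described above can be made concrete with a toy sketch. The snippet below is a minimal illustration only — it uses a one-time-pad XOR as a stand-in cipher (real e2e messengers rely on vetted protocols such as the Signal protocol, with proper key agreement and authenticated encryption), and the envelope fields and names are invented for the example:

```python
import secrets
import time

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy one-time-pad XOR. XOR is its own inverse, so this both
    # encrypts and decrypts. Do not use this in real systems.
    return bytes(d ^ k for d, k in zip(data, key))

# The key lives only on the two endpoints; the relay server never sees it.
key = secrets.token_bytes(32)
msg = b"meet at noon"
ciphertext = xor_cipher(key, msg)

# Everything the relay server holds: opaque ciphertext plus metadata.
envelope = {
    "from": "alice",
    "to": "bob",
    "sent_at": time.time(),
    "payload": ciphertext,
}

# The server can answer "who talked to whom, and when" (metadata)...
print(envelope["from"], "->", envelope["to"])

# ...but only a key holder can recover the content.
assert xor_cipher(key, envelope["payload"]) == msg
```

This is the crux of the policy fight: an operator built this way can hand over the envelope metadata under warrant, but has nothing legible to hand over for the payload itself.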

Facebook has confirmed it provides WhatsApp metadata to government agencies when served a valid warrant (as well as sharing metadata between WhatsApp and its other business units for its own commercial and ad-targeting purposes).

Talking up the counter-terror potential of sharing metadata appears to be the company’s current strategy for trying to steer the UK government away from demands it backdoor WhatsApp’s encryption — with Facebook’s Sheryl Sandberg arguing in July that metadata can help inform governments about terrorist activity.

In the UK, successive governments have been ramping up political pressure on the use of e2e encryption for years — with politicians proudly declaring themselves uncomfortable with rising use of the tech. Meanwhile, domestic surveillance legislation passed at the end of last year has been widely interpreted as giving security agencies powers to place requirements on companies not to use e2e encryption and/or to require comms services providers to build in backdoors so they can provide access to decrypted data when handed a state warrant. So, on the surface, there’s a legal threat to the continued viability of e2e encryption in the UK.

However, how the government could seek to enforce decryption on powerful tech giants — which are mostly headquartered overseas, have millions of engaged local users and sell e2e encryption as a core part of their proposition — is unclear. Even with the legal power to demand it, officials would still be asking for legible data from owners of systems designed not to enable third parties to read that data.

One crypto expert we contacted for comment on the conundrum, who cannot be identified because they were not authorized to speak to the press by their employer, neatly sums up the problem for politicians squaring up to tech giants using e2e encryption: “They could close you down but do they want to? If you aren’t keeping records, you can’t turn them over.”

It’s really not clear how long the political compass will keep swinging around and pointing at tech firms to accuse them of building systems that are impeding governments’ counterterrorism efforts — whether that’s related to the spread of extremist propaganda online, or to a narrower consideration like providing warranted access to encrypted messages.

As noted above, the UK government legislated last year to enshrine expansive and intrusive investigatory powers in a new framework, called the Investigatory Powers Act — which includes the ability to collect digital information in bulk and for spy agencies to maintain vast databases of personal information on citizens who are not (yet) suspected of any wrongdoing in order that they can sift these records when they choose. (Powers that are incidentally being challenged under European human rights law.)

And with such powers on its statute books you’d hope there would be more pressure for UK politicians to take responsibility for the state’s own intelligence failures — rather than seeking to scapegoat technologies such as encryption. But the crypto wars are apparently, sad to say, a neverending story.

On extremist propaganda, the co-ordinated political push by European leaders to get tech platforms to take more responsibility for user generated content which they’re freely distributing, liberally monetizing and algorithmically amplifying does at least have more substance to it. Even if, ultimately, it’s likely to be just as futile a strategy for fixing the underlying problem.

Because even if you could wave a magic wand and make all online extremist propaganda vanish you wouldn’t have fixed the core problem of why terrorist ideologies exist. Nor removed the pull that those extremist ideas can pose for certain individuals. It’s just attacking the symptom of a problem, rather than interrogating the root causes.

The ICSR’s Winter is generally downbeat on how the current political strategy for tackling online extremism is focusing so much attention on restricting access to content.

“[UK PM] Theresa May is always talking about removing the safe spaces and shutting down the parts of the Internet where terrorists exchange instructions and propaganda and that sort of stuff, and I just feel that’s a Sisyphean task,” he tells TechCrunch. “Maybe you do get it to work on any one platform, they’re just going to go onto a different one and you’ll have exactly the same sort of problem all over again.

“I think they are publicly making too much of a thing out of restricting access to content. And I think the role that is being described to the public that propaganda takes is very, very different to the one that it actually has. It’s much more nuanced, and much more complex than simply something which is used to “radicalize and recruit people”. It’s much much more than that.

“And we’re clearly not going to get to that kind of debate in a mainstream media discourse because no one has the time to hear about all the nuances and complexities of propaganda but I do think that the government puts too much emphasis on the online space — in a manner that is often devoid of nuance and I don’t think that is necessarily the most constructive way to go about this.”

Microsoft sends invites to mysterious ‘mixed reality’ event

Microsoft is ready to reveal its next big move in mixed reality.

The company invited a handful of journalists to an intimate and rather mysterious San Francisco event on Oct. 3. On the agenda: a reveal of Microsoft’s full mixed-reality strategy and, perhaps, the next big steps.

The announcement comes as something of a surprise, since experts and pundits have spent weeks speculating that Microsoft’s next big event and product unveiling would happen on Oct. 31 when Microsoft Surface chief Panos Panay delivers a keynote at the Future Decoded event in London.

However, sources tell us that anyone expecting new Surface hardware at that event will be disappointed. Instead, Panay will focus on Microsoft’s aggressive Creativity push and how that’s shaping Microsoft’s current and future strategy.

That doesn’t mean, however, that this new Oct. 3 event is going to satisfy the demand for fresh Surface devices, either. And even with the “Mixed Reality” tease, there won’t be a new HoloLens mixed reality headset for developers or consumers.

Will there be any new hardware for us to sink our teeth into from Microsoft or its partners? No one is saying, but it’s fair to say the possibility exists.

As for the invite, it’s spare and offers few clues about what we’ll see on October 3, beyond, “This event is an opportunity to hear where Microsoft is headed next—and to experience Windows Mixed Reality for yourself.”

The ring leader for this experience is none other than Alex Kipman. Kipman, a technical fellow at Microsoft, invented HoloLens.

What we do know is that those who attend the Oct. 3 event will get their deepest immersion yet in Microsoft’s brand of mixed reality, which is actually a spectrum that runs from augmented reality experiences to full-immersive virtual reality.

Some of this will help lay the groundwork for the Windows 10 Fall Creators Update release on Oct. 17. That update integrates Microsoft’s mixed reality platform and supports third-party VR headsets (which start shipping that same day), letting them work with the operating system and with third-party experiences written to take advantage of Windows 10’s new AR and VR capabilities.

Microsoft is going to have a busy fall. In addition to the Windows 10 update release and this mixed-reality deep dive and potential product reveal, Microsoft is preparing to launch Xbox One X on Nov. 7. Xbox is a Windows 10 device as well, which means it will likely end up with more mixed-reality skills in the not-too-distant future.

Bing launches Google-like fact-checking label

Bing is taking action against fake news, in a similar fashion to what Google has already done. 

Microsoft’s search engine is adding a “Fact Check” label to search results “to help users find fact checking information on news, and with major stories and webpages within the Bing search results,” it said in a blog post.

The label could be used on both news articles and web pages from fact-checking organisations, such as Snopes or PolitiFact, to quickly identify whether the claim is true or false. 

See this example provided by Bing on a hoax about Florida Governor Rick Scott’s critical condition after Hurricane Irma: 

Image: Bing

Bing may apply the label on various subjects, such as news, health, science, and politics. 

In April, Google launched a similar feature on Search and News — a “Fact Check” tag which appears in Google search results and tells users whether a certain claim has been fact-checked by PolitiFact or Snopes.