All posts in “Social”

Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual named Rosey Blair spent the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors — publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features — even as an entire privacy-invading narrative was being spun around two people who had no idea it was happening.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft

And yet our youthful surveillance society started with a far loftier idea associated with it: Citizen journalism.

Once we’re all armed with powerful smartphones and ubiquitously fast Internet, there will be no limits to the genuinely important reportage that will flow, we were told.

There will be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation: a tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed on to the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living; the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspiration lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire — thanks to the largesse of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coiffed tresses, nor can sand get into the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy-piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And they rarely if ever ask for consent.

What about the people who don’t want their lives to be appropriated as digital window dressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience — creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae — and who has suffered harassment and yet more unwelcome attention as a direct result — gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.

Yes, open office plans are the worst

If you’re endlessly distracted by your co-workers in the gaping open office space you all share, you’re not alone. Compared to traditional office spaces, face-to-face interaction in open office spaces is down 70 percent, with resulting slips in productivity, according to Harvard researchers in a new study published in Philosophical Transactions of the Royal Society B this month.

In the study, researchers followed two anonymous Fortune 500 companies during their transitions from traditional office spaces to open plan environments, and used a sensor called a “sociometric badge” (think company ID on a lanyard) to record detailed information about the kind of interactions employees had in both spaces. The study collected information in two stages: first for several weeks before the renovation, then for several weeks after.

While the concept behind open office spaces is to drive informal interaction and collaboration among employees, the study found that for both groups of employees monitored (52 at one company and 100 at the other), face-to-face interactions dropped, the number of emails sent increased by between 20 and 50 percent, and company executives reported a qualitative drop in productivity.
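The reported shifts amount to simple percent changes over the badge-derived interaction counts. A minimal sketch, using made-up before/after figures (not the study’s data) chosen only to match the reported direction and magnitude:

```python
def percent_change(before: float, after: float) -> float:
    """Signed percent change from a before-measurement to an after-measurement."""
    return (after - before) * 100 / before

# Hypothetical per-employee weekly interaction counts, for illustration only.
f2f_before, f2f_after = 100, 30        # face-to-face interactions
email_before, email_after = 200, 280   # emails sent

print(percent_change(f2f_before, f2f_after))      # -70.0 (the reported ~70% drop)
print(percent_change(email_before, email_after))  # 40.0 (within the 20-50% rise)
```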

“[Organizations] transform their office architectures into open spaces with the intention of creating more [face-to-face] interaction and thus a more vibrant work environment,” the study’s authors, Ethan Bernstein and Stephen Turban, wrote. “[But] what they often get—as captured by a steady stream of news articles professing the death of the open office—is an open expanse of proximal employees choosing to isolate themselves as best they can (e.g. by wearing large headphones) while appearing to be as busy as possible (since everyone can see them).”

While this study is far from the first to point fingers at open office space designs, the researchers claim this is the first study of its kind to collect quantitative data on this shift in working environment instead of relying primarily on employee surveys.

From their results, the researchers provide three cautionary tales:

  1. Open office spaces don’t actually promote interaction. Instead, they cause employees to seek privacy wherever they can find it.
  2. These open spaces might spell bad news for collective company intelligence or, in other words, an overstimulating office space creates a decrease in organizational productivity.
  3. Not all channels of interaction will be affected equally by a change to an open layout. While the number of emails sent in the study did increase, the study found that the richness of this interaction was not equal to that lost in face-to-face interactions.

Seems like it might be time to (first, find a quiet room) and go back to the drawing board with the open office design.

Facebook would make a martyr by banning Infowars

Alex Jones’ Infowars is a fake-news peddler. But Facebook deleting its Page could ignite a fire that consumes the network. Still, some critics are asking why it hasn’t done so already.

This week Facebook held an event with journalists to discuss how it combats fake news. The company’s recently appointed head of News Feed John Hegeman explained that, “I guess just for being false, that doesn’t violate the community standards. I think part of the fundamental thing here is that we created Facebook to be a place where different people can have a voice.”

In response, CNN’s Oliver Darcy tweeted: “I asked them why InfoWars is still allowed on the platform. I didn’t get a good answer.” BuzzFeed’s Charlie Warzel meanwhile wrote that allowing the Infowars Page to exist shows that “Facebook simply isn’t willing to make the hard choices necessary to tackle fake news.”

Facebook’s own Twitter account tried to rebuke Darcy by tweeting, “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news. We believe banning these Pages would be contrary to the basic principles of free speech.” But harm can be minimized without full-on censorship.

There is no doubt that Facebook hides behind political neutrality. It fears driving away conservative users for both business and stated mission reasons. That strategy is exploited by those like Jones who know that no matter how extreme and damaging their actions, they’ll benefit from equivocation that implies ‘both sides are guilty,’ with no regard for degree.

Instead of being banned from Facebook, Infowars and sites like it that constantly and purposely share dangerous hoaxes and conspiracy theories should be heavily down-ranked in the News Feed.

Effectively, they should be quarantined, so that when they or their followers share their links, no one else sees them.

“We don’t have a policy that stipulates that everything posted on Facebook must be true — you can imagine how hard that would be to enforce,” a Facebook spokesperson told TechCrunch. “But there’s a very real tension here. We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community, and we believe that down-ranking inauthentic content strikes that balance. In other words, we allow people to post it as a form of expression, but we’re not going to show it at the top of News Feed.”

Facebook already reduces the future views of posts by roughly 80 percent when they’re established as false by its third-party fact checkers like Politifact and the Associated Press. For repeat offenders, I think that reduction in visibility should be closer to 100 percent of News Feed views. What Facebook does do to those whose posts are frequently labeled as false by its checkers is “remove their monetization and advertising privileges to cut off financial incentives, and dramatically reduce the distribution of all of their Page-level or domain-level content on Facebook.”
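Mechanically, that kind of demotion can be modeled as a multiplier applied to a post’s feed-ranking score. The sketch below is a simplified illustration, not Facebook’s actual ranking code, and the penalty factors are assumptions based on the figures above:

```python
def demoted_score(base_score: float, flagged_false: bool, repeat_offender: bool) -> float:
    """Apply a visibility penalty to a hypothetical feed-ranking score.

    Assumed factors for illustration: an ~80% reduction for a post
    fact-checked as false, and a near-total (here 99%) reduction for
    pages repeatedly flagged by fact checkers.
    """
    if repeat_offender:
        return base_score * 0.01
    if flagged_false:
        return base_score * 0.20
    return base_score

print(demoted_score(1000.0, flagged_false=True, repeat_offender=False))  # 200.0
print(demoted_score(1000.0, flagged_false=True, repeat_offender=True))   # 10.0
```

Under this kind of scheme a repeat offender’s posts still exist on the platform, but effectively never surface in anyone else’s feed — the quarantine outcome the piece argues for.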

The company wouldn’t comment directly about whether Infowars has already been hit with that penalty, noting “We can’t disclose whether specific Pages or domains are receiving such a demotion (it becomes a privacy issue).” For any story fact checked as false, it shows related articles from legitimate publications to provide other perspectives on the topic, and notifies people who have shared it or are about to.

But that doesn’t solve the problem of the initial surge of traffic. Unfortunately, Facebook’s limited array of fact-checking partners are strapped with so much work that they can only get to so many BS stories quickly. That’s a strong argument for dedicating more funding to organizations like Snopes, preferably from even-keeled nonprofits, though the risks of governments or Facebook itself chipping in might be worth it.

Given that fact-checking will likely never scale to be instantly responsive to all fake news in all languages, Facebook needs a more drastic option to curtail the spread of this democracy-harming content on its platform. That might mean a full loss of News Feed posting privileges for a certain period of time. That might mean that links re-shared by the supporters or agents of these pages get zero distribution in the feed.

But it shouldn’t mean their posts or Pages are deleted, or that their links can’t be opened unless they clearly violate Facebook’s core content policies.

Why downranking and quarantine? Because banning would only stoke conspiratorial curiosity about these inaccurate outlets. Trolls will use the bans as a badge of honor, saying, “Facebook deleted us because it knows what we say is true.”

They’ll claim they’ve been unfairly removed from the proxy for public discourse that exists because of the size of Facebook’s private platform.

What we’ll have on our hands is “but her emails!” 2.0

People who swallowed the propaganda of “her emails”, much of which was pushed by Alex Jones himself, assumed that Hillary Clinton’s deleted emails must have contained evidence of some unspeakable wrongdoing — something so bad it outweighed anything done by her opponent, even when the accusations against him had evidence and witnesses aplenty.

If Facebook deleted the Pages of Infowars and their ilk, it would be used as a rallying cry that Jones’ claims were actually clairvoyance. That he must have had even worse truths to tell about his enemies and so he had to be cut down. It would turn him into a martyr.

Those who benefit from Infowars’ bluster would use Facebook’s removal of its Page as evidence that it’s massively biased against conservatives. They’d push their political allies to vindictively regulate Facebook beyond what’s actually necessary. They’d call for people to delete their Facebook accounts and decamp to some other network that’s much more of a filter bubble than what some consider Facebook to already be. That would further divide the country and the world.

When someone has a terrible, contagious disease, we don’t execute them. We quarantine them. That’s what should happen here. The exception should be for posts that cause physical harm offline. That will require tough judgment calls, but knowingly inciting mob violence, for example, should not be tolerated. Some of Infowars’ posts, such as those about Pizzagate that led to a shooting, might qualify for deletion by that standard.

Facebook is already trying to grapple with this after rumors and fake news spread through forwarded WhatsApp messages have led to crowds lynching people in India and attacks in Myanmar. Peer-to-peer chat lacks the same centralized actors to ban, though WhatsApp is now at least marking messages as forwarded, and it will need to do more. But for less threatening yet still blatantly false news, quarantining may be sufficient. This also leaves room for counterspeech, where disagreeing commenters can refute posts or share their own rebuttals.

Few people regularly visit the Facebook Pages they follow. They wait for the content to come to them through the News Feed posts of the Page, and their friends. Eliminating that virality vector would severely limit this fake news’ ability to spread without requiring the posts or Pages to be deleted, or the links to be rendered unopenable.

If Facebook wants to uphold a base level of free speech, it may be prudent to let the liars have their voice. However, Facebook is under no obligation to amplify that speech, and the fakers have no entitlement for their speech to be amplified.

Image Credit: Getty – Tom Williams/CQ Roll Call, Flickr Sean P. Anderson CC

Researchers find that filters don’t prevent porn

In a paper entitled Internet Filtering and Adolescent Exposure to Online Sexual Material, Oxford Internet Institute researchers Victoria Nash and Andrew Przybylski found that Internet filters rarely work to keep adolescents away from online porn.

“It’s important to consider the efficacy of Internet filtering,” said Dr. Nash. “Internet filtering tools are expensive to develop and maintain, and can easily ‘underblock’ due to the constant development of new ways of sharing content. Additionally, there are concerns about human rights violations – filtering can lead to ‘overblocking’, where young people are not able to access legitimate health and relationship information.”

This research follows the controversial news that the UK government was exploring a country-wide porn filter, a product that will most likely fail. The UK would join countries around the world that filter the public Internet for religious or political reasons.

The bottom line? Filters are expensive and they don’t work.

Given these substantial costs and limitations, it is noteworthy that there is little consistent evidence that filtering is effective at shielding young people from online sexual material. A pair of studies reporting on data collected in 2005, before the rise of smartphones and tablets, provides tentative evidence that Internet filtering might reduce the relative risk of young people encountering sexual material. A more recent study, analyzing data collected a decade after these papers, provided strong evidence that caregivers’ use of Internet filtering technologies did not reduce children’s exposure to a range of aversive online experiences including, but not limited to, encountering sexual content that made them feel uncomfortable. Given studies on this topic are few in number and the findings are decidedly mixed, the evidence base supporting the widespread use of Internet filtering is currently weak.

The researchers “found that Internet filtering tools are ineffective and in most cases were an insignificant factor in whether young people had seen explicit sexual content.”

The study’s most interesting finding was that between 17 and 77 households “would need to use Internet filtering tools in order to prevent a single young person from accessing sexual content” and even then a filter “showed no statistically or practically significant protective effects.”
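That “17 to 77 households” range reads like a number-needed-to-treat statistic: the reciprocal of the absolute risk reduction that filtering delivers. A hedged sketch with invented exposure rates (not the study’s data) shows how such figures arise:

```python
def households_needed(risk_unfiltered: float, risk_filtered: float) -> float:
    """Number-needed-to-treat: reciprocal of the absolute risk reduction."""
    return 1 / (risk_unfiltered - risk_filtered)

# Invented rates for illustration: if filtering cut a 50% exposure risk to 44%,
# ~17 filtered households would correspond to one prevented exposure; a cut
# to only 48.7% would push that figure to ~77.
print(round(households_needed(0.50, 0.44)))   # 17
print(round(households_needed(0.50, 0.487)))  # 77
```

The smaller the real protective effect, the more households must run a filter to spare a single young person — which is why a wide range like 17-77 signals a weak effect.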

The study looked at 9,352 male and 9,357 female subjects from the EU and the UK and found that almost 50 percent of the subjects had some sort of Internet filter at home. Regardless of the filters installed, subjects still saw approximately the same amount of porn.

“Many caregivers and policy makers consider Internet filters a useful technology for keeping young people safe online. Although this position might make intuitive sense, there is little empirical evidence that Internet filters provide an effective means to limit children’s and adolescents’ exposure to online sexual material. There are nontrivial economic, informational, and human rights costs associated with filtering that need to be balanced against any observed benefits,” wrote the researchers. “Given this, it is critical to know [whether] possible benefits can be balanced against their costs. Our studies were conducted to test this proposition, and our findings indicated that filtering does not play a practically significant protective role.”

Given the popularity – and lucrative nature – of filtering software, this news should encourage parents and caregivers to look more closely at how and why they are filtering their home Internet. Ultimately, they might find, supervision is more important than software.

Goodwall gets $10.8M to expand its ‘LinkedIn for students’

Goodwall, a US-focused student and graduate professional network which aims to connect young people with college and employment opportunities, has closed a $10.8 million Series A funding raise.

The round was led by Randstad Innovation Fund, a strategic corporate VC fund that focuses on recruitment, and Swiss private equity firm Manixer. Additional investors include Francis Clivaz, Zurich Cantonal Bank and Verve Capital Partners.

The 2014-founded startup says it will use the new funding to grow the professional network, which has a core demographic of 14-24 year-olds and more than one million members at this stage.

“Our main initiative with this round of funding is hiring new talent in New York to support our expansion,” says Taha Bawa, co-founder and CEO. “The funding will be used to grow our product team to provide better features for our two demographics: high school and college students. We are growing our sales team as well, to handle the demand that enterprises have shown in our talent.”

“The United States is our current focus and will continue to be the focus throughout 2018. We will be growing with our students and serving them in college,” he adds.

“We intend to widen the appeal to the college/post-grad segment by focusing on driving value in terms of being found easily (via a mobile-first experience) by the companies that are interesting to them, whether they be startups or larger companies, for internships or first jobs. Beyond this, as with high school students, we will provide current college students the ability to connect and support each other.”

Goodwall’s business model is based on generating revenue from colleges and enterprises looking to recruit students on the platform. For its target young people, the pull is an online platform where they can connect with fellow students and try to get ahead by showcasing their skills and experience, networking, and learning about education and employment opportunities.

Goodwall says it matches its college student and graduate users to employers for job and internship opportunities, while its high school students get connected to colleges and scholarships.

The startup is competing with traditional college and larger job boards, but Bawa argues that its matching process offers an advantage to employers because it’s screening candidates so they get more relevant applications, rather than scores of irrelevant ones which they then have to sift through themselves.

The platform generally offers employers a way to source, connect and engage with a pool of motivated students and graduates — with employers able to pay Goodwall to get their brand in front of the types of students or recruits they’re looking for.

“The typical Goodwall user is an English speaking, aspirational go-getter that is either college-bound or in college,” says Bawa. “Goodwall does not aim to only serve the 1% in terms of grades and achievements, though we have many students in this category, from Robotics Fairs winners to Olympic Champions. Rather we strive to serve all ambitious, hardworking students and bring their uniqueness to light via our comprehensive profiles.

“In high school these go-getters may not always be the best students academically, or at the college level, they may not necessarily be enrolled at top ranking institutions. Ultimately, these are the type of students we are looking to work with and the type of talent our partner universities and companies are looking to recruit.”

At the high school level, Goodwall is also competing with scholarship and college matching websites, but Bawa argues it also offers kids additional value — given the platform’s focus on building a community around achievements, connections and mutual support.

The network is also of course competing with LinkedIn — certainly at the older end of its age range. But because Goodwall offers tools for high school students it’s hoping to get in early and build a relationship that lasts right through college and users’ early career path, by acting as “the first resume they build”.

“We grow with them,” is how Bawa puts it. 

While the startup is taking VC funding now to focus on further building its network in the US, he confirms it would be open to an exit to a larger professional or student focused network in the future, saying: “We’d like to continue growing with our members.”

Commenting on the Series A in a statement, Paul Jacquin, managing partner at Randstad Innovation Fund, added: “We’re excited to support the Goodwall team in building a new segment with college and graduate demographics after their success in creating a unique and positive community to gain support, receive guidance and opportunities. The level of engagement on Goodwall has been impressive and unique in its community aspect. We are thrilled to bring the platform to its next chapter of growth.”