March 21, 2018 By Zak Mehan
What’s new in Fake News News? Plenty. For starters, YouTube was met with some stiff criticism last week when it unveiled a new plan to provide some clarity to viewers of controversial videos and conspiracy theories: link to the Wikipedia page. Wikipedia has long been considered a dubious source (ever tried to use it as a citation in an academic paper?), so this immediately turned some heads. And if that turned heads, the second revelation gave some serious whiplash: the folks at Wikipedia weren’t even told about its new role as a fake news fire extinguisher. Awkward.
But Google (YouTube’s owner) points to one aspect of YouTube as exemplary: the Breaking News section. Breaking News is a separate section on YouTube that collates video coverage from established news sources on breaking issues. It isn’t alone in the trend of pulling news out of the muddy current of the general content feed.
Facebook now allows users to choose to “See First” so that content from a specific page appears first in their News Feed, separate from general stories. And as we mentioned last week, news is also getting its own home on Facebook’s Watch video platform. Not to be left out, Twitter is also moving more aggressively into breaking news, testing out an extension to its Happening Now sports platform that consolidates updates from verifiable sources. (Is it like Moments but only content from media organizations shows up? We’re not sure.)
What should be clear is that how these tremendous traffic drivers choose to deal with the fake news problem can have, and is having, an impact on the publications themselves. Google is trying to mitigate this with major investments in journalism, and Facebook is exploring subscription options for newspapers on its site, but the change is far from over.
One month after Kylie Jenner wiped $1.3 billion off Snapchat’s market value with a single tweet, the company has suffered another major setback after an Instagram story by Rihanna sent the app’s stock price tumbling yet again. This time, however, the allegations against the app were much more serious in nature.
Last week, Snapchat ran an ad asking users if they’d rather “slap Rihanna” or “punch Chris Brown.” The backlash against the ad, which makes light of domestic violence by referencing Chris Brown’s 2009 assault of then-girlfriend Rihanna, was swift. The company quickly removed the ad on Monday and issued an apology, but Rihanna’s response caused stocks to fall by 4% and led to another apology.
The second apology provided key insight into how, exactly, an advertisement like this was ever approved. And surprisingly, an algorithm was not to blame this time around. Snapchat has had a reputation for promoting high-quality ads, which they’ve worked to maintain by depending largely on human reviewers to vet ads. Notably, with increased scrutiny surrounding the role social media advertising played in the 2016 election, companies like Facebook have also been inching closer to a human-based review system.
But if this incident teaches us anything, it’s that there is no perfect way of reviewing content – while bots can be confused, humans are just as capable of displaying poor judgment. It seems for now that the best path forward, whether algorithm- or human-based, will need to involve multiple layers of review to ensure content is vetted appropriately.
We won’t dig in too deep here because the story is still (juicily) unfolding, but we’d be neglecting our duty if we didn’t at least mention the scandal currently rocking Westminster, the Hill and Silicon Valley.
Over the weekend a whistleblower came forward alleging that Cambridge Analytica, a political consulting firm tied to prominent donors to President Donald Trump’s 2016 campaign, had obtained private data from 50 million Facebook users without their consent.
The firm has touted psychographic profiling in the past, which was met with skepticism over how effectively it could actually target users based on this data. The recent revelations have put that question back into the spotlight, as journalists scramble to assess the impact of Cambridge Analytica’s data use.
From our perspective, there is a novel feature to this story in the U.S.: bipartisanship. Democrats and some Republicans are now calling Facebook to account over how it protects user data. Is the day of regulatory reckoning drawing closer?
Facebook and Google are eating your TV FastCompany
Long-Divided Federal Election Commission Unites On Digital Ad Transparency NPR
On eve of trial on Time Warner deal, AT&T, U.S. government lay out cases Reuters
Cheddar, the ‘CNBC for Millennials,’ Raises $22 Million for International Expansion WSJ