Narrative: Fake News is Real
Each narrative page (like this) has a page describing and evaluating the narrative, followed by all the posts on the site tagged with that narrative. Scroll down beyond the introduction to see the posts.
Narrative: Fake News is Real (Jan 24, 2017)
The title of this narrative is somewhat tongue in cheek, but if there’s anything we learned from the 2016 US presidential election, it’s that fake news is a real phenomenon, and it can have real and far-reaching effects. It could be seen as primarily a media phenomenon rather than a tech one, but of course there are multiple tech angles, from Facebook and Twitter as conduits for fake news to online advertising as the business model that’s supported much of it.
Fake news isn’t, of course, strictly new – whether you go back to the Yellow Journalism era, the US Revolutionary War, or all the way back to the 1400s, it’s been around in one form or another for a very long time. The difference today is that fake news is paired with and enabled by a growing set of filter bubbles, which make it possible for people to live in an alternative reality devoid of facts – a reality previously available only to those whose news diet consisted entirely of supermarket tabloids.
Naturally, no one at Facebook or Twitter wants to enable this trend, and indeed both Facebook and Snapchat have recently taken steps to remedy it somewhat. But in their pursuit of maximum engagement, these services have sometimes allowed clickbait and fake news to thrive unchecked, because users respond to such content positively, which drives engagement and therefore ad dollars.
The big question at this point is what more tech companies can do to prevent the spread of fake news – Facebook’s strategy is very sensible, but is already running into opposition from some alt-right groups who see it as further bias against conservative news media. Outside of social networks, of course, most tech companies have relatively little sway one way or the other here. In most cases, these companies will simply have to watch others – notably the news media – do their best to counter this alarming but at the same time old and familiar problem.
See also the Facebook’s Power narrative, which is in some ways closely related to this one.
Facebook is testing adding a new button (an “i” in a circle) on articles in the News Feed on the platform, which will bring up additional context relating to the article, including a brief summary description of the publication from Wikipedia (if the publication merits an entry), related articles, and where the piece is being read and shared. All of this is intended to serve as a set of subtle signals about the reputability of the publication and the content of the article, without explicitly rating it in the way the much more robust (but therefore also less frequently applied) fact-checking initiative Facebook announced earlier in the year does. The main problem I see with this approach is that the button itself doesn’t highlight any particular articles – the reader has to proactively decide to find out whether there might be interesting information hidden behind it, something many readers won’t be inclined to do, especially if they’re the credulous type most likely to fall for fake news in the first place. As such, this is an interesting additional set of tools, but not one that’s likely to make a meaningful difference in combating fake news on Facebook.
Facebook, Google, and Twitter Struggle to Contain Fake News in the Wake of Las Vegas Shooting (Oct 3, 2017)
I had this in my list of items to cover yesterday, but it was a busy day for other news and I’d already covered a couple of Facebook stories, so I decided to hold it over to today given that it was likely to continue to be newsworthy. This BuzzFeed piece does a good job rounding up some of the issues with Facebook, Google, and Twitter properties in the wake of the awful shooting in Las Vegas on Sunday night. Each of these platforms struggled in some way to filter fake news and uninformed speculation from accurate, reliable news reporting in the wake of the shooting. Each eventually responded and fixed things, but not before many people saw (and reported on) some of the early misleading results. And it does feel as though some of the issues were easily avoidable: limiting which sites are considered legitimate sources of news ahead of time, or at the very least requiring new sites claiming to break news to pass some sort of human review before being cited. Normally, I’d say this would blow over quickly and wouldn’t matter that much, but in the current political context around Facebook, Google, and so on, it’ll probably take on broader meaning.
Bing Adds Fact Check Summaries to News Search Results (Sep 18, 2017)
I saw this story first thing this morning and originally eliminated it as a candidate for inclusion on the site because it felt so marginal – it sometimes seems as though Google is the overwhelming leader in search and Bing such an also-ran that it doesn’t merit covering. But the reality is that Bing has somewhere over 20% market share in search in the US through a combination of apathy from users of Microsoft operating systems or browsers and active preference, so it’s not as marginal as it might seem, for all that Google gets massively more attention in this space. At any rate, the news is that Bing is adding a little fact-checking feature to its news search results, but in a somewhat unsatisfactory way. Rather than flagging potentially false news itself, it will instead highlight the conclusions of fact-checking articles from sites like Snopes when they happen to appear in search results. That’s a pretty tame and potentially not very helpful way to flag fake news, and I’d hope that Microsoft eventually goes a little further and puts links to fact-checking sites directly in the preview for dodgy news articles. Google’s version of the feature goes a little further by putting the fact-check article at the top of the listings for at least some searches, and that seems like the right way to go here.
via The Verge
Twitter is Reportedly Testing a Fake News Button (Jun 29, 2017)
The Washington Post reports that Twitter is testing a button that would allow users to flag tweets or links in tweets which appear to be false or misleading, although it says there’s no guarantee that the feature will ever launch, and Twitter itself says it’s not going to launch anything along these lines soon. On the face of it, this is a logical step for Twitter, which has been one of the biggest vehicles for the rapid spread of fake news over the last year or two, even though its much smaller scale means that Facebook still arguably has a bigger impact, especially given its tendency to reinforce people’s existing biases. But on the other hand, given how the phrase “fake news” has lost all meaning and come to take on distinct partisan overtones, there’s enormous potential for misuse and controversy, and if Twitter does launch something along these lines, it’s going to need either a massive team of its own or several big partners with significant resources to handle the refereeing that will be needed. That alone may prevent Twitter from ever launching the feature, needed though it may be. In addition, given that Twitter has arguably bent its own rules on acceptable content a little for public figures such as President Trump (and candidate Trump before him), there are some big questions about whether tweets from the President and others would be subject to the same rules as those from other users.
★ Google Makes Tweaks to Search to Combat Fake News (Apr 25, 2017)
This is the second fake news-combatting announcement this week, after Facebook’s announcement about teaching users how to spot fake news yesterday. This is one of the broadest and most direct steps Google has taken in this area: it will specifically flag particular news articles or other sites, adding a link to a fact-checking site such as Snopes or PolitiFact along with a brief summary of who is making a claim and whether those sites consider it to be true. This is somewhat similar to Facebook’s effort to flag fake news, but the big difference is that it will be done algorithmically through special markup those sites will use, which will be picked up by Google’s crawlers. That should mean that at least in some cases Google will flag something as false long before Facebook will, and I’d hope that Facebook would move to do something similar over time too.
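For what it’s worth, the “special markup” here is, as far as I can tell, the schema.org ClaimReview vocabulary, which fact-checking sites embed in their articles (typically as a JSON-LD block in the page’s HTML) so that crawlers can read the claim, the claimant, and the verdict without human intervention. A minimal sketch of what such a block might look like – all URLs and names below are hypothetical placeholders, not taken from any real fact check:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example-factchecker.org/checks/claim-123",
  "datePublished": "2017-04-25",
  "author": {
    "@type": "Organization",
    "name": "Example Fact Checker"
  },
  "claimReviewed": "Politician X said Y happened in March.",
  "itemReviewed": {
    "@type": "Claim",
    "author": {
      "@type": "Person",
      "name": "Politician X"
    }
  },
  "reviewRating": {
    "@type": "Rating",
    "alternateName": "False"
  }
}
```

Because the verdict lives in structured fields rather than prose, Google can surface “Claimed by: Politician X – Rating: False” in search results automatically, which is what makes this approach scale in a way Facebook’s partner-driven review process doesn’t.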