Narrative: Fake News is Real
Each narrative page (like this) has a page describing and evaluating the narrative, followed by all the posts on the site tagged with that narrative. Scroll down beyond the introduction to see the posts.
Facebook seems to be taking its responsibility to help police fake news ever more seriously, and today announced another step in that effort: showing users a popup card at the top of their feed which offers to teach them how to spot fake news. I’d love to think this could make a meaningful difference in people’s ability to discern truth from error, but realistically the kind of people who most need this training will be least likely to click on it, in part at least because Facebook’s previous efforts in this area have been seen as partisan rather than neutral by those most likely to read, believe, and share fake news. But it’s good to see Facebook trying, and it may at least give some more moderate users pause before they share fake news on the site.
Human curation feels like an interesting way to solve a problem with an algorithm, and it’s striking that Google pays 10,000 people to check its search results for quality in the first place. As I’ve said previously, the specific problem with “snippets” in search is better solved by eliminating them for obscure or poorly covered topics, but the issue with false results is certainly broader than just snippets. It sounds like this approach is helping, but it doesn’t feel very scalable.
via USA Today
This is the second of two fake news stories this morning (the first concerned Facebook), and this one doesn’t look so good for Google. Google has long pulled excerpts out of search results in order to provide what it deems to be the answers to questions, as a way to get people to what they’re looking for faster. For undisputed facts, like what time the Super Bowl starts or how old a movie star is, that’s very useful and unobjectionable. But the problem is that Google has algorithms designed to find these answers for almost any question people might ask, and in an era of fake news, some questions and their ostensible answers are driven entirely by conspiracy theories and false reporting, which means that the right answer may not even appear anywhere online. So its snippets serve up answers exclusively from fake news and conspiracy sites as if these were incontrovertible, lending them an air of considerable authority and causing many users to take those answers as gospel. The simple solution here is for Google to back way off from this snippets approach and limit it to questions that are both frequently asked and also answered by a wide range of sites, including reputable ones. I don’t know whether Google will take that approach, but it’s going to be very hard for it to solve the problem in any other way.
via The Outline
Facebook has started to flag fake news stories – Recode (Mar 6, 2017)
This was part of Facebook’s plan for dealing with fake news, announced back in December, so there’s no huge surprise here. But Recode picks up on several points worth noting, most importantly that because Facebook is relying on third party fact checkers, vetting fake news stories can often take quite some time, even when they come from a publication known to publish only false news stories. That’s problematic because by the time the “disputed” label is attached, many people will have seen and believed the story, and attaching it a week after the story first surfaces will likely have little impact, especially on a high profile and popular one. It really feels like Facebook needs a separate label for entire fake news publications, applied automatically to all links from those publications – that would be straightforward and far more useful, and could still be done in cooperation with fact checking organizations. But if Snopes and Politifact are going to be really useful, they have to move much faster on this stuff. Here’s hoping Facebook becomes less hesitant and pushes its partners to act more quickly, so that this tool can become really useful.
Though Facebook bears the brunt of criticism among the tech industry’s largest players for its role in spreading and/or failing to stem the spread of fake news, it’s worth noting that others play their roles too. Though Google search has been mentioned quite a bit, YouTube hasn’t been mentioned nearly as much, and yet this article argues there’s tons of fake news video content on YouTube, which goes essentially un-policed by the site. YouTube itself responds that it only curates legitimate news sources for its official channels, but of course many of the creators of this fake news content are making money off that content through YouTube’s ad model. Since Google shut down its ad platform on third party sites which focused on fake news, it’s arguable that it should apply the same policy here too, something it so far doesn’t seem willing to do.
This is a great idea, and I hope we’ll see a lot more of this kind of innovation around news – we need it. One of the things I’m most struck by almost daily is the different universes that I’m a part of on Twitter and Facebook – during the day, I’m surrounded by mostly very liberal perspectives among the coastal tech people I follow on Twitter, and in the evenings and at weekends I spend more time on Facebook, where the people I’m connected to tend to be more conservative. But I suspect many of us inhabit mostly one or the other of these worlds, or tend to shut out those perspectives which differ from our own on social media, which tends to reinforce our perceptions and prejudices. Not everyone will go for this kind of experiment – some may choose to continue to see a narrower view of the world – but we could all benefit from putting ourselves in others’ shoes and seeing the news through lenses other than our own.
Mark Zuckerberg Pens a Personal and Facebook Manifesto (Feb 16, 2017)
Mark Zuckerberg has posted a combination personal and Facebook manifesto to the site, and has also been speaking to a variety of reporters about it over the last day or so. The manifesto is long and covers a ton of ground, some of it about the state of the world but much of it at least indirectly and often quite directly about Facebook and its role in such a world. In some ways, this builds on comments Zuckerberg made at the F8 developer conference last year, and it mostly stays at a similar high level, talking about grand ideas and issues at the 30,000 foot level rather than naming particular politicians or being more specific. To the extent that Zuckerberg is talking about how to use Facebook as a force for good in the world, this is admirable at least to a point. He clearly now both recognizes and is willing to admit to a greater extent than previously the role Facebook has played in some of the negative trends (and I believe this piece contains his first proactive use of the phrase “fake news”), and wants to help fix them, though much of his commentary on what’s going wrong spreads the blame more broadly. I’m also a little concerned that, although many of the problems Facebook creates stem from the service’s massive and increasing power over our lives, the solutions he proposes mostly seem to be about increasing Facebook’s power rather than finding ways to limit it. To some extent, that’s natural given who he is, but it suggests an ongoing unwillingness to recognize the increasing mediation of our world by big forces like Facebook and Google and the negative impact that can have. Still, it’s good to see more open communication on issues like this from a major tech leader – I’d love to see more of this kind of thing (as I wrote last summer in this piece).
Google and Facebook to help French newsrooms combat ‘fake news’ ahead of presidential election – VentureBeat (Feb 6, 2017)
If only these companies had made such a concerted effort to combat fake news in the US a year ago rather than only really springing into action in the very late stages of last year’s presidential campaign (and in Facebook’s case, mostly after it was over). It appears both companies are taking their duty to put accuracy above ad revenue a bit more seriously in France than they did in the US, a sign of increased realism about the power that each company has in shaping the news people see.
The topic of fake news and the related topic of filter bubbles are ones BuzzFeed has been particularly strong on in recent months (abuse on Twitter is another). This analysis is fascinating, and shows how even the experience of watching video on Facebook can be colored by the outlets a user chooses to follow. This isn’t quite the same as Facebook’s algorithms showing users different things – in this experiment, the user consciously chose to watch either a Fox News or Fusion live video stream. But it’s a great illustration of how users on Facebook can have completely different experiences even when engaging with the same underlying content.
This is one of two bits of news from Facebook today (the other concerns metrics), this one about dealing with fake news (though that’s a term Facebook continues to eschew in favor of talking about genuineness and authentic communication). Facebook is tweaking its algorithms again to provide better feeds, with fewer sensationalist or inaccurate news reports, for example. It looks like this is mostly about ordering within the feed rather than whether something appears there at all, which is a nice way of avoiding perceptions of outright censorship, though of course the lower something appears in the feed, the less likely people are to see it. It’s good to see that Facebook continues to tweak its strategy for dealing with fake news, and as with previous moves around news it’ll be very interesting to see how this one is perceived by users and publications.