Washington Post Culls Ad Tech Vendors Over Site Slowing (Apr 19, 2017)
There’s some good reporting here about publishers starting to pull their content back from Facebook’s Instant Articles. When it first launched, I think publishers were at the very least keen to experiment with it, and in many cases felt they had little choice but to participate out of fear that non-IA content would be deprioritized by Facebook’s News Feed algorithms. That publishers (including the New York Times) are starting to pull back is a sign both that the format is underperforming badly and that content owners have confidence that they can buck Facebook’s first-party platform without negative consequences. That’s a good counterpoint to all the stories about Facebook’s power and how little choice content owners have about publishing to Facebook natively. It remains to be seen whether these publishers will see the same monetization and traffic now as they did before IA debuted – if that’s the comparison organizations are making, they may be disappointed. But all this also explains why Facebook has been working so much harder lately to cater to news publishers in particular, with its Journalism Project, new calls to action and subscription (though not paid subscription) options, and listening tours. It’s clearly worried that it’s losing the battle here and needs to do more.
This is the second fake news-combatting announcement this week, after Facebook’s announcement yesterday about teaching users how to spot fake news. It is one of the broadest and most direct steps Google has taken in this area: particular news articles and other pages will be flagged with an additional link to a fact-checking site such as Snopes or PolitiFact, along with a brief summary of who is making a claim and whether those sites consider it to be true. This is somewhat similar to Facebook’s effort to flag fake news, but the big difference is that it will be done algorithmically, through special markup those fact-checking sites will use, which will be picked up by Google’s crawlers. That should mean that at least in some cases Google will flag something as false long before Facebook will, and I’d hope that Facebook would move to do something similar over time too.
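The special markup in question is schema.org’s ClaimReview vocabulary, which fact-checking sites embed in their pages so crawlers can read verdicts programmatically. A minimal sketch is below – the URL, names, and rating values are all hypothetical placeholders, not taken from any real fact check:

```html
<!-- Illustrative ClaimReview markup; all values here are hypothetical placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://factchecker.example.com/reviews/some-claim",
  "claimReviewed": "The claim being fact-checked, stated as a sentence",
  "author": {
    "@type": "Organization",
    "name": "Example Fact Checker"
  },
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": {
      "@type": "Organization",
      "name": "Outlet that made the claim"
    }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
</script>
```

Because the verdict lives in structured data rather than free text, a crawler can surface “who claimed it, who checked it, and what they concluded” next to a search result without any human review on Google’s side.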
Facebook seems to be taking its responsibility to help police fake news ever more seriously, and today announced another step in that effort: showing users a popup card at the top of their feed which offers to teach them how to spot fake news. I’d love to think this could make a meaningful difference in people’s ability to discern truth from error, but realistically the kind of people who most need this training will be least likely to click on it, in part at least because Facebook’s previous efforts in this area have been seen as partisan rather than neutral by those most likely to read, believe, and share fake news. But it’s good to see Facebook trying, and it may at least give some more moderate users pause before they share fake news on the site.
Facebook is on a big listening tour for local media — and publishers are actually happy – Mashable (Mar 6, 2017)
When Facebook announced its Journalism Project a few weeks ago (and hired Campbell Brown to take a leadership role within it), it said all the right words about wanting to partner with news organizations and help them be successful. But the problem with platforms like Facebook and Google is those promising words have often rung hollow as they’ve subsequently pursued initiatives and products which ended up threatening rather than helping the media industry, and news sites in particular. It’s heartening, then, to see that Facebook seems to be engaging in a fairly genuine way with news organizations, and actually listening to them and their concerns. This article also suggests that these organizations are responding positively to some of the new ad options Facebook is introducing (though of course it remains to be seen how Facebook users respond to things like a higher ad load in Instant Articles and mid-roll video ads). It’s early days still, but there are at least some signs that Facebook means what it says about partnering in healthier ways with content partners.
This is the second of two fake news stories this morning (the first concerned Facebook), and this one doesn’t look so good for Google. Google has long pulled excerpts into its search results in order to provide what it deems to be the answers to questions, as a way to get people to what they’re looking for faster. For undisputed facts, like what time the Super Bowl starts or how old a movie star is, that’s very useful and unobjectionable. But the problem is that Google has algorithms designed to find these answers for almost any question people might ask, and in an era of fake news, some questions and their ostensible answers are driven entirely by conspiracy theories and false reporting, which means that the right answer may not even appear anywhere online. So its snippets serve up answers exclusively from fake news and conspiracy sites as if these were incontrovertible, lending them an air of considerable authority and causing many users to take those answers as gospel. The simple solution here is for Google to back way off from this snippets approach and limit it to questions that are both frequently asked and also answered by a wide range of sites, including reputable ones. I don’t know whether Google will take that approach, but it’s going to be very hard for it to solve the problem in any other way.
via The Outline
Facebook has started to flag fake news stories – Recode (Mar 6, 2017)
This was part of Facebook’s plan for dealing with fake news, announced back in December, so there’s no huge surprise here. But Recode picks up on several points worth noting, most importantly that because Facebook is relying on third party fact checkers, vetting fake news stories can often take quite some time, even when they come from a publication known to publish only false news stories. That’s problematic because by the time the “disputed” label is attached, many people will have seen and believed the story, and attaching the label a week after it first surfaces will likely have little impact, especially on a high profile and popular story. It really feels like Facebook needs a separate label for entire fake news publications which is applied automatically to their links – that would be straightforward and far more useful, and could still be done in cooperation with fact checking organizations. But if Snopes and PolitiFact are going to be really useful, they have to move much faster on this stuff. Here’s hoping Facebook becomes less hesitant and pushes its partners to act more quickly, so that this tool can become really useful.
Though Facebook bears the brunt of criticism among the tech industry’s largest players for its role in spreading and/or failing to stem the spread of fake news, it’s worth noting that others play their roles too. Though Google search has been mentioned quite a bit, YouTube hasn’t been mentioned nearly as much, and yet this article argues there’s tons of fake news video content on YouTube, which goes essentially un-policed by the site. YouTube itself responds that it only curates legitimate news sources for its official channels, but of course many of the creators of this fake news content are making money off that content through YouTube’s ad model. Since Google shut down its ad platform on third party sites which focused on fake news, it’s arguable that it should apply the same policy here too, something it so far doesn’t seem willing to do.
This is a great idea, and I hope we’ll see a lot more of this kind of innovation around news – we need it. One of the things I’m most struck by almost daily is the different universes that I’m a part of on Twitter and Facebook – during the day, I’m surrounded by mostly very liberal perspectives among the coastal tech people I follow on Twitter, and in the evenings and at weekends I spend more time on Facebook, where the people I’m connected to tend to be more conservative. But I suspect many of us inhabit mostly one or the other of these worlds, or tend to shut out those perspectives which are different from our own on social media, which tends to reinforce our perceptions and prejudices. Not everyone will go for this kind of experiment – some may choose to continue to see a narrower view of the world – but we could all benefit from putting ourselves in others’ shoes and seeing the news through other lenses than our own.
Google makes it easier to see and share publishers’ real URLs from AMP pages – Search Engine Land (Feb 6, 2017)
One of the biggest frustrations publishers have had with Google’s AMP format is that it takes over the URL of the site where the content originates. Given that many URLs are shared in shortened form in Twitter clients and similar venues, this often means all the viewer sees is a google.com domain, which can be confusing. This tweak to the AMP settings doesn’t solve the fundamental problem that AMP pages use Google URLs, but offers a workaround of sorts allowing users to share the canonical original URL for the publication instead. That’s a start, but the domain issue and other reasons not to like AMP and other similar formats like Facebook Instant Articles remain.
Google and Facebook to help French newsrooms combat ‘fake news’ ahead of presidential election – VentureBeat (Feb 6, 2017)
If only these companies had made such a concerted effort to combat fake news in the US a year ago rather than only really springing into action in the very late stages of last year’s presidential campaign (and in Facebook’s case, mostly after it was over). It appears both companies are taking their duty to put accuracy above ad revenue a bit more seriously in France than they did in the US, a sign of increased realism about the power that each company has in shaping the news people see.
Fake news and the related topic of filter bubbles are subjects BuzzFeed has been particularly strong on in recent months (abuse on Twitter is another). This analysis is fascinating, and shows how even the experience of watching video on Facebook can be colored by the outlets a user chooses to follow. This isn’t quite the same as Facebook’s algorithms showing users different things – in this experiment, the user consciously chose to watch either a Fox News or Fusion live video stream. But it’s a great illustration of how users on Facebook can have completely different experiences even when engaging with the same underlying content.
This is one of two bits of news from Facebook today (the other concerns metrics), this one about dealing with fake news (though that’s a term Facebook continues to eschew in favor of talking about genuineness and authentic communication). Facebook is tweaking its algorithms again to provide better feeds with fewer sensationalist or inaccurate news reports, for example. It looks like this is mostly about ordering within the feed rather than whether something appears there at all, however, which is a nice way of avoiding perceptions of outright censorship, though of course the lower something appears in the feed, the less likely people are to see it. It’s good to see that Facebook continues to tweak its strategy for dealing with fake news, and as with previous moves around news it’ll be very interesting to see how it’s perceived by users and publications.
Continuing Our Updates to Trending – Facebook (Jan 25, 2017)
It’s a big day for Facebook news – I’ve already covered the new Facebook Stories feature and ads in Messenger, both of which are being tested. This is the only one that’s been publicly announced by Facebook, however, and it concerns Trending Topics, which appear on the desktop site. The changes are subtle but important – each topic will now come with a headline and a base URL such as foxnews.com, topics will be identified based on broad engagement by multiple publications and not just one, and the same topics will be shown to everyone in the same region rather than personalized. Though Facebook doesn’t explicitly say so (perhaps because it fears a backlash, perhaps because it would be a further acknowledgement of a thorny issue), all of these can be seen as partial solutions to the fake news issue. Citing specific headlines and publications allows users to see the source and make a judgment about whether it’s a reliable one, prioritizing broad engagement will surface those stories that are widely covered rather than being promoted by a single biased source, and showing the same topics to all users could be seen as an attempt to break through the filter bubble. These all seem like smart changes, assuming Facebook can deliver better on these promises than some of its abortive previous changes to Trending Topics.
This is the latest in a string of occasions when Facebook has blocked specific content or an entire account on the basis of a supposed violation of its terms, only to reverse itself. But in this case, it’s a bit different – RT is a highly controversial Russian state-funded news outlet at a time when Russian interference in the US electoral process is a hot topic. The account’s privileges were quickly reinstated in this case, but there appears to have been no legitimate reason to withdraw them in the first place, raising questions about who at Facebook made the decision to suspend the account and why. At a time when Facebook is trying to be more responsible about policing fake news and also working more closely with news organizations, this kind of thing won’t inspire a lot of confidence either among news organizations or among those inclined to believe Facebook’s fake news clampdown has a partisan bent.
Though the narrative about Facebook copying Snapchat is generally fairly accurate, and Facebook has largely used Instagram as its vehicle for this cloning recently, this doesn’t feel like part of that narrative. Yes, there’s a carousel of sorts within Snapchat’s Discover section, but that’s really where the similarity ends. This is news- and importantly article-centric, while most of the Discover content is lifestyle-centric and highly visual (video and photos). And this is about bundling content from a single news publisher, potentially around a topic, which also feels quite distinct. This is also all directly related to Facebook’s other announcement today about news, which explicitly referenced this testing.
This news (FB’s own blog post here) should obviously be taken together with the hiring of Campbell Brown as head of news partnerships at Facebook, announced last week. It’s easy to see this as being about the whole fake news story, and there’s an element of that, but this goes much further than that. What’s interesting is the number of value judgments in Facebook’s own post about this – it isn’t neutral here when it comes to fostering news sites, and local news in particular. That’s clearly in its interests, but it goes further than that too. It’s also very sensibly looking at business models beyond display ads for monetizing news content on Facebook, something the industry needs as Facebook becomes the place where many of their readers consume their content.
This is yet another step in Facebook’s evolving vision of its identity. Campbell Brown isn’t going to be producing news for Facebook, but rather working with news organizations that use Facebook, but it’s a recognition that news is a huge content category on the service, and that many people get their news through Facebook. It will be very interesting to see how this role pans out in detail, and whether it feels Facebook is really helping news organizations, especially when set against recent moves to combat fake news.