Topic: Fake news

Each post below is tagged with
  • Company/Division names
  • Topics
  • Narratives
as appropriate.
    Facebook Cites Fake News Flagging Progress, Sandberg Discusses Russian Ads (Oct 12, 2017)

    Facebook’s COO Sheryl Sandberg was interviewed today by Axios’s Mike Allen on the subject of Russian election interference and other topics, while Facebook also issued some data about the effectiveness of its program to flag fake news on the platform. At the same time, the Washington Post reports that Facebook has removed from its site and tools a set of data which allowed for analysis of Russian-related postings.

    The Sandberg interview shed little additional light on the topic; the only real news was that Facebook is sharing both the ads bought by Russian-backed entities and additional details associated with them with Congress, which in turn may share them publicly. However, she was also asked whether Facebook was a media company, a characterization she pushed back on, leading to several articles from actual media companies arguing that she’s wrong. There continues to be something of an ulterior motive for these stories given the tense relationship between Facebook and the media, but I continue to believe that these characterizations are wrong. To my mind, Facebook is much more like a cable TV company than a TV programmer, putting together a package of content for users but mostly not doing that programming itself, while selling ads that appear within the programming. I don’t think most people would argue that cable TV companies are media companies or that they’re responsible for the specific content of the programming, though they are responsible for establishing general policies and rules about what will run on their platforms.

    The data Facebook shared on its fake news flagging effort suggests that a fake news label, applied after fact checking by third parties, effectively reduces sharing and views, but the problem has always been that it takes several days for a label to be applied, by which time most of the views have already happened. Facebook shared the data with its fact-checking partners as a way to incentivize them to do better (something they’ve been asking for), but without massive new resources from Facebook or elsewhere, it’s not clear how those organizations will be able to work faster or cover more ground. That, in turn, will continue to limit the effectiveness of the program.

    Lastly, Facebook says the data it has pulled from its site with regard to Russian accounts should never have been available in the first place, and its disappearance therefore reflects the squashing of a bug rather than a decision to pull otherwise public information. Whether you believe that or not likely depends on your view of Facebook’s overall level of transparency in relation to the Russia story, which has clearly been limited. It appears Facebook at a corporate level is still desperate to control the flow of information about Russian influence on the platform, which likely isn’t helping its PR effort here – better to be as transparent as possible so that all possible bad news can come out quickly rather than continuing to trickle out.

    via Recode (Sandberg), BuzzFeed (fake news), Washington Post (data removal)

    Facebook Tests Providing Additional Context for Articles in News Feed (Oct 5, 2017)

    Facebook is testing a new button (an “i” in a circle) on articles in the News Feed on the platform, which will bring up additional context relating to the article, including a brief summary description of the publication from Wikipedia (if the publication merits an entry), related articles, and where the piece is being read and shared. All of this is intended to serve as a set of subtle signals about the reputability of the publication and the content of the article, without explicitly rating it the way the much more robust (but therefore also less frequently applied) fact-checking initiative Facebook announced earlier in the year does. The main problem I see with this approach is that the button itself doesn’t highlight any particular articles – the reader has to proactively decide to find out whether there might be interesting information hidden behind it, something many readers won’t be inclined to do, especially if they’re the credulous type most likely to fall for fake news in the first place. As such, this is an interesting additional set of tools, but not one that’s likely to make a meaningful difference in combating fake news on Facebook.

    via Facebook

    Facebook, Google, and Twitter Struggle to Contain Fake News in the Wake of Las Vegas Shooting (Oct 3, 2017)

    I had this in my list of items to cover yesterday, but it was a busy day for other news and I’d already covered a couple of Facebook stories, so I decided to hold it over to today given that it was likely to remain newsworthy. This BuzzFeed piece does a good job rounding up some of the issues with Facebook, Google, and Twitter properties in the wake of the awful shooting in Las Vegas on Sunday night. Each of these platforms struggled in some way to filter fake news and uninformed speculation from accurate, reliable news reporting in the wake of the shooting. Each eventually responded to the issue and fixed things, but not before many people saw (and reported on) some of the early misleading results. And it does feel as though some of the issues were easily avoidable, whether by limiting ahead of time which sites might be considered legitimate sources of news, or at the very least by requiring new sites claiming to break news to pass some sort of human review before being cited. Normally, I’d say this would blow over quickly and wouldn’t matter that much, but in the current political context around Facebook, Google, and so on, it’ll probably take on broader meaning.

    via BuzzFeed

    Bing Adds Fact Check Summaries to News Search Results (Sep 18, 2017)

    I saw this story first thing this morning and originally eliminated it as a candidate for inclusion on the site because it felt so marginal – it sometimes seems as though Google is the overwhelming leader in search and Bing such an also-ran that it doesn’t merit covering. But the reality is that Bing has somewhere over 20% market share in US search, through a combination of apathy from users of Microsoft operating systems and browsers and active preference, so it’s not as marginal as it might seem, for all that Google gets massively more attention in this space. At any rate, the news is that Bing is adding a small fact-checking feature to its news search results, but in a somewhat unsatisfactory way. Rather than flagging potentially false news itself, it will instead highlight the conclusions of fact-checking articles from sites like Snopes when they happen to appear in search results. That’s a pretty tame and potentially not very helpful way to flag fake news, and I’d hope that Microsoft eventually goes a little further and puts links to fact-checking sites directly in the preview for dodgy news articles. Google’s version of the feature goes a little further by putting the fact-check article at the top of the listings for at least some searches, and that seems like the right way to go here.

    via The Verge

    Facebook Fact-Checking Partners Say Lack of Data Sharing Impedes Work (Sep 7, 2017)

    Facebook Says It Has Uncovered Evidence of Russian Ad-Buying Operations (Sep 6, 2017)

    Facebook Disables Advertising for Sites That Repeatedly Share Fake News Links (Aug 28, 2017)

    Twitter is Reportedly Testing a Fake News Button (Jun 29, 2017)

    The Washington Post reports that Twitter is testing a button that would allow users to flag tweets or links in tweets which appear to be false or misleading, although it says there’s no guarantee that the feature will ever launch, and Twitter itself says it’s not going to launch anything along these lines soon. On the face of it, this is a logical step for Twitter, which has been one of the biggest vehicles for the rapid spread of fake news over the last year or two, even though its much smaller scale means that Facebook still arguably has a bigger impact, especially given Facebook’s tendency to reinforce people’s existing biases. But on the other hand, given how the phrase “fake news” has lost all meaning and taken on distinct partisan overtones, there’s enormous potential for misuse and controversy, and if Twitter does launch something along these lines, it’s going to need either a massive team of its own or several big partners with significant resources to handle the refereeing that will be needed. That alone may prevent Twitter from ever launching the feature, needed though it may be. In addition, given that Twitter has arguably bent its own rules on acceptable content a little for public figures such as President Trump (and candidate Trump before him), there are some big questions about whether tweets from the President and others would be subject to the same rules as those from other users.

    via The Washington Post

    News Corp Launches Tool to Help Brands Avoid Advertising Against Undesirable Content (May 2, 2017)

    I commented a little while ago on a piece about how Chase had a member of staff manually go through the sites where its ads were appearing and pare that number back, in order to ensure that its ads were appearing on high-quality sites against reputable content. That effort paid off for Chase, which saw essentially the same end results from its advertising on 5,000 sites as it had on 400,000, but it was a heck of a lot of work. So a service that offers to help brands make similar decisions without having to do all the work themselves is bound to be appealing, and that’s what News Corp is offering through its Storyful product. News Corp itself isn’t a major destination for online advertising, but the Murdoch empire in total certainly is, thanks to 21st Century Fox and its significant TV ad revenue (21CF alone had around 75% as much ad revenue in 2016 as Facebook did in the US). But this Storyful product is an expansion of News Corp’s ad offerings into a new area which isn’t directly tied to its online ad platform (or those of 21CF). So although the article sets this up as part of a broader war between Murdoch and Google, this is pretty peripheral stuff. But it’s clearly aimed at taking advantage of some of Google’s recent troubles in this area.

    via Bloomberg

    ★ Google Makes Tweaks to Search to Combat Fake News (Apr 25, 2017)

    Google expands fact-checking effort to all searches worldwide – Search Engine Land (Apr 7, 2017)

    This is the second fake-news-combatting announcement this week, after Facebook’s announcement yesterday about teaching users how to spot fake news. This is one of the broadest and most direct steps Google has taken in this area: it will specifically flag particular news articles or other sites with an additional link to a fact-checking site such as Snopes or PolitiFact, along with a brief summary of who is making a claim and whether those sites consider it to be true. This is somewhat similar to Facebook’s effort to flag fake news, but the big difference is that it will be done algorithmically through special markup those sites will use, which will be picked up by Google’s crawlers. That should mean that at least in some cases Google will flag something as false long before Facebook does, and I’d hope that Facebook moves to do something similar over time too.
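    For the curious, the markup in question is schema.org’s ClaimReview structured data, which fact-checking sites embed in their pages (typically as JSON-LD) for crawlers to pick up. As a minimal sketch only – the claim, names, URLs, and rating below are all invented for illustration – a payload of this kind could be produced like so:

        import json

        # Hypothetical schema.org ClaimReview payload of the kind fact-checking
        # sites embed in their article pages so search crawlers can read the verdict.
        claim_review = {
            "@context": "https://schema.org",
            "@type": "ClaimReview",
            "url": "https://factchecker.example/checks/some-claim",  # invented URL
            "claimReviewed": "An example claim circulating online",
            "author": {"@type": "Organization", "name": "Example Fact Checker"},
            "itemReviewed": {
                "@type": "Claim",
                "author": {"@type": "Organization", "name": "Example Publisher"},
            },
            "reviewRating": {
                "@type": "Rating",
                "ratingValue": 1,          # verdict on the scale below
                "bestRating": 5,
                "worstRating": 1,
                "alternateName": "False",  # human-readable verdict shown to searchers
            },
        }

        # Printed as JSON; on a real page this would sit inside a
        # <script type="application/ld+json"> tag.
        print(json.dumps(claim_review, indent=2))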

    via Search Engine Land (Google’s blog post here)

    Facebook Wants To Teach You How To Spot Fake News On Facebook – BuzzFeed (Apr 6, 2017)

    Facebook seems to be taking its responsibility to help police fake news ever more seriously, and today announced another step in that effort: showing users a popup card at the top of their feed which offers to teach them how to spot fake news. I’d love to think this could make a meaningful difference in people’s ability to discern truth from error, but realistically the kind of people who most need this training will be least likely to click on it, in part at least because Facebook’s previous efforts in this area have been seen as partisan rather than neutral by those most likely to read, believe, and share fake news. But it’s good to see Facebook trying, and it may at least give some more moderate users pause before they share fake news on the site.

    via BuzzFeed

    Google starts flagging offensive content in search results – USA Today (Mar 16, 2017)

    Human curation feels like an interesting way to solve a problem created by an algorithm, and it’s striking that Google pays 10,000 people to check its search results for quality in the first place. As I’ve said previously, the specific problem with “snippets” in search is better solved by eliminating them for obscure or poorly covered topics, but the issue of false results is certainly broader than just snippets. It sounds like this approach is helping, but it doesn’t feel very scalable.

    via USA Today

    Google’s featured snippets are worse than fake news – The Outline (Mar 6, 2017)

    This is the second of two fake news stories this morning (the first concerned Facebook), and this one doesn’t look so good for Google. Google has long pulled excerpts out of search results in order to provide what it deems to be the answers to questions, as a way to get people to what they’re looking for faster. For undisputed facts, like what time the Super Bowl starts or how old a movie star is, that’s very useful and unobjectionable. But the problem is that Google’s algorithms are designed to find these answers for almost any question people might ask, and in an era of fake news, some questions and their ostensible answers are driven entirely by conspiracy theories and false reporting, which means the right answer may not even appear anywhere online. So its snippets serve up answers sourced exclusively from fake news and conspiracy sites as if they were incontrovertible, lending them an air of considerable authority and causing many users to take those answers as gospel. The simple solution here is for Google to back way off from this snippets approach and limit it to questions that are both frequently asked and answered by a wide range of sites, including reputable ones. I don’t know whether Google will take that approach, but it’s going to be very hard for it to solve the problem in any other way.
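    To make that suggested fix a little more concrete, here’s a rough sketch of the kind of gating logic I have in mind – every name and threshold below is invented for illustration, and this is in no way a description of how Google actually decides when to show a snippet:

        # Illustrative policy: only show a featured snippet when the question is
        # genuinely common and the candidate answer is corroborated across many
        # sites, including at least a couple of known reputable ones.
        def should_show_snippet(
            monthly_query_volume: int,
            answering_domains: set,
            reputable_domains: set,
            min_volume: int = 10_000,
            min_sources: int = 5,
            min_reputable: int = 2,
        ) -> bool:
            frequently_asked = monthly_query_volume >= min_volume
            widely_answered = len(answering_domains) >= min_sources
            reputably_answered = len(answering_domains & reputable_domains) >= min_reputable
            return frequently_asked and widely_answered and reputably_answered

        # A conspiracy-driven question answered only by fringe sites fails the
        # corroboration tests, so no snippet would be shown for it.
        print(should_show_snippet(
            monthly_query_volume=50_000,
            answering_domains={"fringe-site.example", "conspiracy-blog.example"},
            reputable_domains={"reuters.com", "apnews.com", "bbc.com"},
        ))  # -> False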

    via The Outline

    Facebook has started to flag fake news stories – Recode (Mar 6, 2017)

    This was part of Facebook’s plan for dealing with fake news, announced back in December, so there’s no huge surprise here. But Recode picks up on several points worth noting, most importantly that because Facebook is relying on third-party fact checkers, vetting fake news stories can often take quite some time, even when they come from a publication known to publish only false stories. That’s problematic because by the time the “disputed” label is attached, many people will have seen and believed the story; attaching it a week after the story first surfaces will likely have little impact, especially for a high-profile and popular story. It really feels like Facebook needs a separate label for entire fake news publications, applied automatically to their links – that would be straightforward and far more useful, and could still be done in cooperation with fact-checking organizations. But if Snopes and PolitiFact are going to be genuinely helpful here, they have to move much faster on this stuff. Here’s hoping Facebook becomes less hesitant and pushes its partners to act more quickly, so that this tool can become really useful.

    via Recode

    How YouTube Serves As The Content Engine Of The Internet’s Dark Side – BuzzFeed (Feb 27, 2017)

    Though Facebook bears the brunt of criticism among the tech industry’s largest players for its role in spreading, or failing to stem the spread of, fake news, it’s worth noting that others play their roles too. Google search has been mentioned quite a bit, but YouTube hasn’t been mentioned nearly as much, and yet this article argues there’s tons of fake news video content on YouTube which goes essentially un-policed by the site. YouTube itself responds that it only curates legitimate news sources for its official channels, but of course many of the creators of this fake news content are making money from it through YouTube’s ad model. Since Google has cut off third-party sites focused on fake news from its ad platform, it’s arguable that it should apply the same policy here too, something it so far doesn’t seem willing to do.

    via BuzzFeed

    Mark Zuckerberg Pens a Personal and Facebook Manifesto (Feb 16, 2017)

    Mark Zuckerberg has posted a combined personal and Facebook manifesto to the site, and has also been speaking to a variety of reporters about it over the last day or so. The manifesto is long and covers a ton of ground, some of it about the state of the world but much of it at least indirectly – and often quite directly – about Facebook and its role in such a world. In some ways this builds on comments Zuckerberg made at the F8 developer conference last year, and it mostly stays at a similar high level, talking about grand ideas and issues at the 30,000-foot level rather than naming particular politicians or getting more specific. To the extent that Zuckerberg is talking about how to use Facebook as a force for good in the world, this is admirable, at least up to a point. He now clearly both recognizes and is willing to admit, to a greater extent than previously, the role Facebook has played in some of the negative trends (and I believe this piece contains his first proactive use of the phrase “fake news”), and he wants to help fix them, though much of his commentary on what’s going wrong spreads the blame more broadly. I’m also a little concerned that, although many of the problems Facebook creates stem from the service’s massive and increasing power over our lives, the solutions he proposes mostly seem to be about increasing Facebook’s power rather than finding ways to limit it. To some extent that’s natural given who he is, but it suggests an ongoing unwillingness to recognize the increasing mediation of our world by big forces like Facebook and Google and the negative impact that can have. Still, it’s good to see more open communication on issues like this from a major tech leader – I’d love to see more of this kind of thing (as I wrote last summer in this piece).

    via Facebook

    Google and Facebook to help French newsrooms combat ‘fake news’ ahead of presidential election – VentureBeat (Feb 6, 2017)

    If only these companies had made such a concerted effort to combat fake news in the US a year ago rather than only really springing into action in the very late stages of last year’s presidential campaign (and in Facebook’s case, mostly after it was over). It appears both companies are taking their duty to put accuracy above ad revenue a bit more seriously in France than they did in the US, a sign of increased realism about the power that each company has in shaping the news people see.

    via VentureBeat

    This Is What Facebook’s Filter Bubble Actually Looks Like – BuzzFeed (Feb 3, 2017)

    Fake news and the related topic of filter bubbles have been subjects BuzzFeed has been particularly strong on in recent months (abuse on Twitter is another). This analysis is fascinating, and shows how even the experience of watching video on Facebook can be colored by the outlets a user chooses to follow. This isn’t quite the same as Facebook’s algorithms showing users different things – in this experiment, the user consciously chose to watch either a Fox News or a Fusion live video stream. But it’s a great illustration of how users on Facebook can have completely different experiences even when engaging with the same underlying content.

    via BuzzFeed

    New Signals to Show You More Authentic and Timely Stories – Facebook (Jan 31, 2017)

    This is one of two bits of news from Facebook today (the other concerns metrics), this one about dealing with fake news (though that’s a term Facebook continues to eschew in favor of talking about genuineness and authentic communication). Facebook is tweaking its algorithms again to produce better feeds, with fewer sensationalist or inaccurate news reports, for example. It looks like this is mostly about ordering within the feed rather than about whether something appears there at all, which is a nice way of avoiding perceptions of outright censorship, though of course the lower something appears in the feed, the less likely people are to see it. It’s good to see that Facebook continues to tweak its strategy for dealing with fake news, and as with previous moves around news, it’ll be very interesting to see how the change is perceived by users and publications.

    via Facebook