Topic: Fake news

Each post below is tagged with Company/Division names, Topics, and Narratives, as appropriate.
    Bing Adds Fact Check Summaries to News Search Results (Sep 18, 2017)

    Facebook Fact-Checking Partners Say Lack of Data Sharing Impedes Work (Sep 7, 2017)

    Facebook Says It Has Uncovered Evidence of Russian Ad-Buying Operations (Sep 6, 2017)

    Facebook Disables Advertising for Sites That Repeatedly Share Fake News Links (Aug 28, 2017)

    Twitter is Reportedly Testing a Fake News Button (Jun 29, 2017)

    The Washington Post reports that Twitter is testing a button that would allow users to flag tweets or links in tweets which appear to be false or misleading, although it says there’s no guarantee that the feature will ever launch, and Twitter itself says it’s not going to launch anything along these lines soon. On the face of it, this is a logical step for Twitter, which has been one of the biggest vehicles for the rapid spread of fake news over the last year or two, even though its much smaller scale means that Facebook still arguably has a bigger impact, especially given its tendency to reinforce people’s existing biases. On the other hand, given how the phrase “fake news” has lost all meaning and taken on distinct partisan overtones, there’s enormous potential for misuse and controversy, and if Twitter does launch such a feature, it’s going to need either a massive team of its own or several big partners with significant resources to handle the refereeing involved. That alone may prevent Twitter from ever launching the feature, needed though it may be. In addition, given that Twitter has arguably bent its own rules on acceptable content a little for public figures such as President Trump (and candidate Trump before him), there are some big questions about whether tweets from the President and others would be subject to the same rules as those from other users.

    via The Washington Post

    News Corp Launches Tool to Help Brands Avoid Advertising Against Undesirable Content (May 2, 2017)

    I commented on a piece a little while ago about how Chase had a member of staff manually go through the sites where its ads were appearing and pare that number back, in order to ensure its ads ran on high quality sites against reputable content. That effort paid off for Chase, which saw essentially the same end results from its advertising on 5,000 sites as it had on 400,000, but it’s a heck of a lot of work to go through. So a service that offers to help brands work through similar decisions without having to do all the work is bound to be appealing, and that’s what News Corp is offering through its Storyful product. News Corp itself isn’t a major destination for online advertising, but the Murdoch empire in total certainly is, thanks to 21st Century Fox and its significant TV ad revenue (21CF alone had around 75% as much ad revenue in 2016 as Facebook did in the US). But this Storyful product is an expansion of News Corp’s ad offerings into a new area which isn’t directly tied to its online ad platform (or those of 21CF). So although the article sets this up as part of a broader war between Murdoch and Google, this is pretty peripheral stuff, though it’s clearly aimed at taking advantage of some of Google’s recent troubles in this area.

    via Bloomberg

    ★ Google Makes Tweaks to Search to Combat Fake News (Apr 25, 2017)

    Google expands fact-checking effort to all searches worldwide – Search Engine Land (Apr 7, 2017)

    This is the second fake-news-combatting announcement this week, after Facebook’s announcement yesterday about teaching users how to spot fake news. This is one of the broadest and most direct steps Google has taken in this area: it will flag particular news articles or other sites with an additional link to a fact checking site such as Snopes or PolitiFact, along with a brief summary of who is making a claim and whether those sites consider it to be true. This is somewhat similar to Facebook’s effort to flag fake news, but the big difference is that it will be done algorithmically through special markup those sites will use, which will be picked up by Google’s crawlers. That should mean that at least in some cases Google will flag something as false long before Facebook does, and I’d hope that Facebook would move to do something similar over time too.
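
    For the curious, the markup in question is the schema.org ClaimReview vocabulary, which fact checking sites embed in their pages as JSON-LD. Below is a rough sketch, in Python, of the kind of record a fact checker might emit for Google’s crawlers to pick up; the claim, URLs, and organization names are invented for illustration.

```python
import json

# A minimal ClaimReview record as a fact-checking site might embed it.
# The claim, URLs, and names below are invented for illustration; the
# field names follow the schema.org ClaimReview vocabulary.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/12345",
    "claimReviewed": "Example claim circulating in news articles",
    "itemReviewed": {
        "@type": "CreativeWork",
        "author": {"@type": "Organization", "name": "Example Publisher"},
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "alternateName": "False",  # the verdict shown next to the result
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
    },
}

# Embedded in the page as a JSON-LD script tag, where a crawler can read it.
print(f'<script type="application/ld+json">{json.dumps(claim_review, indent=2)}</script>')
```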

    via Search Engine Land (Google’s blog post here)

    Facebook Wants To Teach You How To Spot Fake News On Facebook – BuzzFeed (Apr 6, 2017)

    Facebook seems to be taking its responsibility to help police fake news ever more seriously, and today announced another step in that effort: showing users a popup card at the top of their feed which offers to teach them how to spot fake news. I’d love to think this could make a meaningful difference in people’s ability to discern truth from error, but realistically the kind of people who most need this training will be least likely to click on it, in part at least because Facebook’s previous efforts in this area have been seen as partisan rather than neutral by those most likely to read, believe, and share fake news. But it’s good to see Facebook trying, and it may at least give some more moderate users pause before they share fake news on the site.

    via BuzzFeed

    Google starts flagging offensive content in search results – USA Today (Mar 16, 2017)

    Human curation feels like an interesting way to solve an algorithm’s problem, and it’s striking that Google pays 10,000 people to check its search results for quality in the first place. As I’ve said previously, the specific problem with “snippets” in search is better solved by eliminating them for obscure or poorly covered topics, but the issue with false results is certainly broader than just snippets. It sounds like this approach is helping, but it doesn’t feel very scalable.

    via USA Today

    Google’s featured snippets are worse than fake news – The Outline (Mar 6, 2017)

    This is the second of two fake news stories this morning (the first concerned Facebook), and this one doesn’t look so good for Google. Google has long pulled excerpts out of search results in order to provide what it deems to be the answers to questions, as a way to get people to what they’re looking for faster. For undisputed facts, like what time the Super Bowl starts or how old a movie star is, that’s very useful and unobjectionable. But the problem is that Google has algorithms designed to find these answers for almost any question people might ask, and in an era of fake news, some questions and their ostensible answers are driven entirely by conspiracy theories and false reporting, which means that the right answer doesn’t even appear anywhere online. So its snippets serve up answers drawn exclusively from fake news and conspiracy sites as if they were incontrovertible, lending them an air of considerable authority and causing many users to take those answers as gospel. The simple solution here is for Google to back way off from this snippets approach and limit it to questions that are both frequently asked and also answered by a wide range of sites, including reputable ones. I don’t know whether Google will take that approach, but it’s going to be very hard for it to solve the problem in any other way.
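
    To make that suggested fix concrete, here’s an illustrative sketch of the kind of eligibility gate I have in mind; the thresholds, query counts, and reputable-domain list are all invented for illustration, not anything Google has described.

```python
# Illustrative sketch: only synthesize a snippet for questions that are asked
# often AND answered by a broad set of sources, at least one of them reputable.
# Thresholds and the domain allowlist are invented for illustration.
REPUTABLE_DOMAINS = {"apnews.com", "reuters.com", "bbc.com"}  # hypothetical allowlist

def snippet_eligible(monthly_queries: int, answering_domains: set[str],
                     min_queries: int = 10_000, min_domains: int = 5) -> bool:
    """Return True only if the question is popular, widely answered,
    and at least one answer comes from a reputable source."""
    if monthly_queries < min_queries:
        return False  # obscure question: don't synthesize an answer
    if len(answering_domains) < min_domains:
        return False  # too few sources: likely a conspiracy-driven query
    return bool(answering_domains & REPUTABLE_DOMAINS)

# A conspiracy-driven query answered only by fringe sites gets no snippet:
print(snippet_eligible(50_000, {"fringe-site.example", "hoax.example"}))  # False
print(snippet_eligible(50_000, {"apnews.com", "bbc.com", "a.example",
                                "b.example", "c.example"}))               # True
```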

    via The Outline

    Facebook has started to flag fake news stories – Recode (Mar 6, 2017)

    This was part of Facebook’s plan for dealing with fake news, announced back in December, so there’s no huge surprise here. But Recode picks up on several points worth noting, most importantly that because Facebook is relying on third party fact checkers, vetting fake news stories can often take quite some time, even when they come from a publication known to publish only false stories. That’s problematic because by the time the “disputed” label is attached, many people will have seen and believed the story, and attaching it a week after it first surfaces will likely have little impact, especially on a high profile and popular story. It really feels like Facebook needs a separate label for entire fake news publications, applied automatically to their links – that would be straightforward and far more useful, and could still be done in cooperation with fact checking organizations. But if this tool is going to be really useful, Snopes and PolitiFact have to move much faster on this stuff; here’s hoping Facebook becomes less hesitant and pushes its partners to act more quickly.
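
    Part of the appeal of a publication-level label is how mechanically simple it would be. Here’s a minimal sketch, assuming a hypothetical list of flagged domains maintained with fact checking partners; the domains, label text, and function name are all invented.

```python
from urllib.parse import urlparse

# Hypothetical domain list, maintained with fact-checking partners rather
# than adjudicated story by story.
KNOWN_FAKE_NEWS_DOMAINS = {"totally-real-news.example", "hoax-daily.example"}

def disputed_label(link_url: str) -> str | None:
    """Return a 'disputed source' label for links from flagged publications,
    applied instantly at share time instead of days later per story."""
    domain = urlparse(link_url).netloc.lower().removeprefix("www.")
    if domain in KNOWN_FAKE_NEWS_DOMAINS:
        return "Disputed source: this publisher repeatedly shares false stories"
    return None  # unknown domains still go through per-story fact checking

print(disputed_label("https://www.totally-real-news.example/shocking-story"))
```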

    via Recode

    How YouTube Serves As The Content Engine Of The Internet’s Dark Side – BuzzFeed (Feb 27, 2017)

    Though Facebook bears the brunt of criticism among the tech industry’s largest players for its role in spreading, or failing to stem the spread of, fake news, it’s worth noting that others play their roles too. Google search has been mentioned quite a bit, but YouTube hasn’t been mentioned nearly as much, and yet this article argues there’s tons of fake news video content on YouTube which goes essentially un-policed by the site. YouTube itself responds that it only curates legitimate news sources for its official channels, but of course many of the creators of this fake news content are making money off it through YouTube’s ad model. Since Google has already cut off third party fake news sites from its ad platform, it’s arguable that it should apply the same policy here too, something it so far doesn’t seem willing to do.

    via BuzzFeed

    Mark Zuckerberg Pens a Personal and Facebook Manifesto (Feb 16, 2017)

    Mark Zuckerberg has posted a combination personal and Facebook manifesto to the site, and has also been speaking to a variety of reporters about it over the last day or so. The manifesto is long and covers a ton of ground, some of it about the state of the world but much of it at least indirectly and often quite directly about Facebook and its role in such a world. In some ways, this builds on comments Zuckerberg made at the F8 developer conference last year, and it mostly stays at a similar high level, talking about grand ideas and issues at the 30,000 foot level rather than naming particular politicians or being more specific. To the extent that Zuckerberg is talking about how to use Facebook as a force for good in the world, this is admirable at least to a point. He clearly now both recognizes and is willing to admit to a greater extent than previously the role Facebook has played in some of the negative trends (and I believe this piece contains his first proactive use of the phrase “fake news”), and wants to help fix them, though much of his commentary on what’s going wrong spreads the blame more broadly. I’m also a little concerned that, although many of the problems Facebook creates stem from the service’s massive and increasing power over our lives, the solutions he proposes mostly seem to be about increasing Facebook’s power rather than finding ways to limit it. To some extent, that’s natural given who he is, but it suggests an ongoing unwillingness to recognize the increasing mediation of our world by big forces like Facebook and Google and the negative impact that can have. Still, it’s good to see more open communication on issues like this from a major tech leader – I’d love to see more of this kind of thing (as I wrote last summer in this piece).

    via Facebook

    Google and Facebook to help French newsrooms combat ‘fake news’ ahead of presidential election – VentureBeat (Feb 6, 2017)

    If only these companies had made such a concerted effort to combat fake news in the US a year ago rather than only really springing into action in the very late stages of last year’s presidential campaign (and in Facebook’s case, mostly after it was over). It appears both companies are taking their duty to put accuracy above ad revenue a bit more seriously in France than they did in the US, a sign of increased realism about the power that each company has in shaping the news people see.

    via VentureBeat

    This Is What Facebook’s Filter Bubble Actually Looks Like – BuzzFeed (Feb 3, 2017)

    Fake news and the related topic of filter bubbles have been subjects BuzzFeed has been particularly strong on in recent months (abuse on Twitter is another). This analysis is fascinating, and shows how even the experience of watching video on Facebook can be colored by the outlets a user chooses to follow. This isn’t quite the same as Facebook’s algorithms showing users different things – in this experiment, the user consciously chose to watch either a Fox News or Fusion live video stream. But it’s a great illustration of how users on Facebook can have completely different experiences even when engaging with the same underlying content.

    via BuzzFeed

    New Signals to Show You More Authentic and Timely Stories – Facebook (Jan 31, 2017)

    This is one of two bits of news from Facebook today (the other concerns metrics), this one about dealing with fake news (though that’s a term Facebook continues to eschew in favor of talking about genuineness and authentic communication). Facebook is tweaking its algorithms again to provide better feeds with fewer sensationalist or inaccurate news reports, for example. It looks like this is mostly about ordering within the feed rather than whether something appears there at all, however, which is a nice way of avoiding perceptions of outright censorship, though of course the lower something appears in the feed, the less likely people are to see it. It’s good to see that Facebook continues to tweak its strategy for dealing with fake news, and as with previous moves around news it’ll be very interesting to see how it’s perceived by users and publications.
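
    The distinction between reordering and outright removal is worth making concrete. The sketch below, with invented signal names and weights rather than anything Facebook has disclosed, shows how a downranking penalty leaves every story in the feed while pushing the dubious ones toward the bottom.

```python
# Illustrative only: downranking keeps every story in the feed but pushes
# low-authenticity items toward the bottom, versus filtering, which would
# drop them entirely. Signal names and weights are invented.
def ranked_feed(stories):
    """Order stories by base relevance, penalized by inauthenticity signals."""
    def score(story):
        penalty = 0.5 * story["sensationalism"] + 0.5 * story["spam_reports"]
        return story["relevance"] * (1.0 - penalty)
    return sorted(stories, key=score, reverse=True)  # nothing is removed

feed = [
    {"id": "a", "relevance": 0.9, "sensationalism": 0.8, "spam_reports": 0.7},
    {"id": "b", "relevance": 0.6, "sensationalism": 0.1, "spam_reports": 0.0},
]
print([s["id"] for s in ranked_feed(feed)])  # ['b', 'a']: 'a' sinks, but stays
```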

    via Facebook

    Continuing Our Updates to Trending – Facebook (Jan 25, 2017)

    It’s a big day for Facebook news – I’ve already covered the new Facebook Stories feature and ads in Messenger, both of which are being tested. This is the only one of the three that’s been publicly announced by Facebook, however, and it concerns Trending Topics, which appear on the desktop site. The changes are subtle but important: each topic will now come with a headline and a base URL such as foxnews.com, topics will be identified based on broad engagement by multiple publications and not just one, and the same topics will be shown to everyone in the same region rather than personalized. Facebook doesn’t explicitly say so (perhaps because it fears a backlash, perhaps because it would be a further acknowledgement of a thorny issue), but all of these changes can be seen as partial solutions to the fake news issue. Citing specific headlines and publications allows users to see the source and judge whether it’s a reliable one, prioritizing broad engagement will surface stories that are widely covered rather than promoted by a single biased source, and showing the same topics to all users could be seen as an attempt to break through the filter bubble. These all seem like smart changes, assuming Facebook can deliver better on these promises than on some of its abortive previous changes to Trending Topics.
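
    The broad-engagement criterion is the most mechanically interesting of the three changes. Here’s an illustrative sketch of how topic selection might work under it; the data shapes and threshold are invented for illustration.

```python
from collections import defaultdict

# Invented data shape: each article is (topic, publisher_domain, engagement).
# A topic trends only if several distinct publishers cover it, so a single
# outlet can no longer push a story into Trending on its own.
def trending_topics(articles, min_publishers: int = 3):
    publishers = defaultdict(set)
    engagement = defaultdict(int)
    for topic, domain, score in articles:
        publishers[topic].add(domain)
        engagement[topic] += score
    eligible = [t for t in publishers if len(publishers[t]) >= min_publishers]
    return sorted(eligible, key=lambda t: engagement[t], reverse=True)

articles = [
    ("election", "foxnews.com", 500), ("election", "cnn.com", 400),
    ("election", "nytimes.com", 300), ("hoax story", "fringe.example", 2000),
]
print(trending_topics(articles))  # ['election']: the single-source story is out
```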

    via Facebook (more on Techmeme)

    Google’s 2016 Bad Ads Report: 1.7 billion ads removed, including fake news ads – Search Engine Land (Jan 25, 2017)

    The quality of online advertising continues to be one of the big challenges for any company making a business out of selling ads. Between scams, predatory practices, and more recently fake news (and fake news sites), there are lots of ways online advertising can be abused, and Google reports each year on how it’s clamped down on some of this behavior (the report itself is here). Fake news doesn’t actually get a direct mention in the report – the closest link is sites pretending to be news sites for clicks and then attempting to sell something such as weight loss products. But we do know that Google also shut down advertising on some fake news sites that were using its ad products in 2016. Cutting scammers and predators off from Google’s ad money goes a long way toward breaking their business model, so we need to see more of this kind of thing.

    via Search Engine Land

    How Facebook actually isolates us – CNN (Jan 23, 2017)

    This isn’t a new idea – it’s been around at least since Eli Pariser’s The Filter Bubble was published in 2011. But this study dives a little deeper and provides a scientific foundation for the claims made. However, it also demonstrates how much of the filtering and bubble behavior on sites like Facebook really taps into deeper human tendencies like confirmation bias, which content shared through the mechanism of a social network massively enables. Though the article doesn’t mention Facebook beyond the headline, the study itself focused on Facebook, so these findings are specifically about that network, though the patterns would largely apply to others too. Because so many of these features are grounded in fundamental human behaviors, they’re very tough to change: although Facebook may share some blame for enabling rather than challenging those tendencies, they won’t shift unless Facebook makes a very deliberate attempt to break up the filter bubbles and actively challenge users with information that contradicts their existing views, which seems very unlikely.

    via CNN