Facebook’s COO Sheryl Sandberg was interviewed today by Axios’s Mike Allen on the subject of Russian election interference and other topics, while Facebook also issued some data about the effectiveness of its program to flag fake news on the platform. At the same time, the Washington Post reports that Facebook has removed a set of data from its site and tools which allowed for analysis of Russian-related postings.
The Sandberg interview shed little additional light on the topic, with the only real news being that Facebook is sharing both the ads bought by Russian-backed entities and additional details associated with them with Congress, which in turn may share them publicly. However, she was also asked whether Facebook was a media company, a characterization she pushed back on, leading to several articles from actual media companies arguing that she’s wrong. There continues to be something of an ulterior motive for these stories given the tense relationship between Facebook and the media, but I continue to believe that these characterizations are wrong. To my mind, Facebook is much more like a cable TV company than a TV programmer: it puts together a package of content for users without producing most of that programming itself, while selling ads that appear within the programming. I don’t think most would argue that cable TV companies are media companies, or that they’re responsible for the specific content of the programming, though they are responsible for establishing general policies and rules about what will run on their platforms.
The data Facebook shared on its fake news flagging effort suggests that a fake news label, applied after fact checking by third parties, effectively reduces sharing and views, but the problem has always been that it takes several days for the label to be applied, by which time most of the views have already happened. Facebook shared the data with its fact checking partners as a way to incentivize them to do better (something they’ve been asking for), but without massive new resources from Facebook or elsewhere, it’s not clear how those organizations will be able to work faster or cover more ground. That, in turn, will continue to limit the effectiveness of the program.
Lastly, Facebook says the data it has pulled from its site with regard to Russian accounts should never have been available in the first place, and its disappearance therefore reflects the squashing of a bug rather than a decision to pull otherwise public information. Whether you believe that or not likely depends on your view of Facebook’s overall level of transparency in relation to the Russia story, which has clearly been limited. It appears Facebook at a corporate level is still desperate to control the flow of information about Russian influence on the platform, which likely isn’t helping its PR effort here – better to be as transparent as possible so that all possible bad news can come out quickly rather than continuing to trickle out.
Google has apparently now, like Facebook and Twitter, found at least some spending by actors tied to the Russian government on its platforms, including YouTube and Gmail, and the Washington Post says the amounts spent were in the tens of thousands of dollars. However, the New York Times reports that the amount definitely spent by entities connected to the Kremlin was much smaller, at around $4,700, while another $53,000 was spent by Russian entities which have not yet been proven to have a connection to the government. Unlike the money spent on Facebook, of course, ads on Google’s platforms have far less potential to drive viral activity, meaning that the direct reach of the ads was likely most of their total reach, and that amount of money wouldn’t have bought much of it. Google doesn’t seem to have commented on the record about any of this yet, but my guess is that the Times story was pushed by Google PR to provide context on the Post’s. But this does draw Google further into the mire that’s already engulfing Facebook and, to a lesser extent, Twitter, something of which we saw further evidence over the weekend.
I’m actually tying three stories together here, only two of them referenced in the headline. The first is news that Facebook is tightening the review process for ads that seek to target by politics, religion, ethnicity, or social issues, requiring human approval before these ads can be shown to users. Secondly, Facebook’s Chief Security Officer, Alex Stamos, went on something of a Twitter rant on Saturday in which he complained about what he described as overly simplistic coverage of complex issues by the media. And thirdly, CBS had an interview on Sunday with the Trump campaign’s digital director, who claims that it worked in very direct and sophisticated ways with Facebook to micro-target its ads, including having Trump-sympathetic members of the Facebook staff working directly with the campaign in its offices.
The ad review change is a sensible one in response to recent revelations about how these tools were used in the past, but is likely to catch lots of entirely innocent activity too – e.g. someone targeting members of a particular religion with products or services relevant to them – and will likely slow down the approval process for those ads. It will also slow down the approval process for political ads during campaigns, when the volume of ads tends to rise dramatically, and the review team will need to be augmented significantly. That delay could prove costly as campaigns become more nimble in responding to news in real time and want to target ads immediately. We won’t know the impact of that until next year, as mid-term campaigns ramp up.
The Stamos rant garners some sympathy from me, because I agree that some of what’s been in the press has assumed that Facebook should have been aware of these attempts to game its systems at a time when the US government and security agencies hadn’t yet addressed the issues at all in public. But the rant is also indicative of what appears to be a split between the security and engineering teams at Facebook, which clearly want to speak out more, and the PR and broader senior management team, which seem to want to say as little as possible – several reporters I follow on Twitter responded to the thread with frustration over the fact that Facebook hasn’t made people available to talk about the details here.
Lastly, the CBS story doesn’t seem to have been picked up widely and may be partly exaggeration on the part of the source, but there’s no doubt that the Trump campaign used the tools Facebook offers extremely effectively during the campaign, and that they played an important role in the outcome. What’s important here is that the campaign’s uses were all legitimate, in contrast to the use of Facebook by Russian actors claiming to represent US interests, but the effects and even the techniques used were in many ways similar. Even as Facebook clamps down on one type of influence, the broad patterns will remain similar, and as long as foreign actors can find US-based channels willing to act as fronts, it’s going to be extremely difficult to shut down this type of activity entirely.
Facebook still hasn’t shared all of the details of the ads bought by Russian agents on Facebook over the last few years with Congress, and hasn’t really shared any of the details with the general public. However, some of the details have emerged regardless, and one researcher has used that information to do some analysis of the reach of some of the posts on the accounts controlled by entities tied to the Kremlin. What he found is that the organic reach of those posts was enormous, much larger than the reach of the ads alone as reported by Facebook, suggesting that Facebook is using the narrowest possible definition of reach in its reporting and thereby downplaying the impact.
Until Facebook releases the full details of the Russian operations, we can’t know the true reach for sure, and this analysis is merely indicative of the organic reach achieved by half a dozen of the biggest accounts we do know about. But it’s clear that the operation was both sophisticated and very effective in reaching large numbers of people, leveraging many of the same techniques used by legitimate news organizations and others on Facebook. Given that these techniques are all available to anyone who uses Facebook, the only way they could have been stopped is if there had been clear evidence much earlier in the process that the accounts behind them were “inauthentic” (to use Facebook’s terminology). And given that neither Facebook nor the US government was actively investigating that possibility during the election, that was never likely to happen. It’s also not clear how Facebook would go about policing this kind of thing going forward.
Facebook has made yet another announcement in what’s rapidly becoming the saga of Russian ad buying on the platform and the ongoing fallout from it. This time around, it says it’s going to share the details of the 3,000 suspicious ads placed on the platform with the US Congress, and it’s also going to hire a thousand additional people for its ad review team to ensure that inappropriate ads don’t get through. The rest of the announcement focuses mostly on fleshing out promises made over the last couple of weeks, though in some cases there’s still relatively little transparency on what’s actually going to change and when. Over the weekend, Mark Zuckerberg also personally apologized for any role Facebook may have had in sowing divisions in the world and promised to work to make things better in future, as part of a post relating to the Jewish holiday of Yom Kippur. It’s clear that he’s taken all of this far more seriously, and increasingly personally, over recent months, though many still want him and Facebook to do far more to increase transparency over how Facebook has been used for ill and how it will change as a result.
In Twitter’s statement on Russian meddling in last year’s elections, it mentioned that Facebook had shared with it data on the accounts it had previously reported, and it now appears Facebook has shared similar data with Google as well, as it investigates its own role in all of this. The three companies have been the main focus – so far – of US congressional investigations into the use of online advertising and platforms to influence the outcome of last year’s elections, so it’s natural that the companies would share whatever data they have with each other. Twitter, though, was reprimanded (rightly or wrongly) by at least two members of Congress this week over seemingly relying too heavily on Facebook’s prior work rather than performing its own extensive search of past activity, and it seems Google is doing rather more of its own digging, though there’s no word so far on what it’s found. Both Google and Facebook have been widely criticized over their roles in allowing problematic activities to take place on their platforms, but I continue to argue that the cost of policing such activity at such a level as to eliminate it 100% would be disproportionately expensive in time and money.
Following Facebook’s public statement about Russian interference in the US elections last week, Twitter has now made a similar statement addressing both that specific issue and broader issues around political meddling, the use of bots on Twitter, and spam and other misuses of its platform. It appears Twitter found on its own platform some of the same Russian-linked accounts which bought ads on Facebook in 2016, though they didn’t buy ads on Twitter, while government-linked news outlet Russia Today bought over $200,000 worth of ads in 2016, including some that related directly to the elections. Bots continue to be a big problem on Twitter, though one the company claims it’s getting better at managing. Twitter’s head of public policy spoke to the Senate committee investigating Russian influence this morning, and Twitter has promised to disclose more about these activities going forward, as well as supporting efforts to increase regulation and transparency around election advertising, something Facebook has also said it supports. In the grand scheme of things, the activity discovered and reported by both platforms remains very small from an ad spending perspective, but the fact that it happened at all, in the context of an increasingly clear pattern of election manipulation by the Russian government and its surrogates, is obviously concerning.
Update: Recode reports that at least one Senator, Democrat Mark Warner, says that Twitter’s presentation before his committee today was inadequate, lacking in detail, and overly derivative of Facebook’s investigation rather than based on its own work (the latter goes against the sense you get from reading Twitter’s own post on this, for what it’s worth).
The Russian communications regulator has told Facebook that it needs to begin storing data on Russian users in the country or face a ban, something which happened to LinkedIn last year. The relevant law was passed back in 2015, but it seems the Russian government has given specific tech companies some time to comply because of the investment necessary to make it happen, though it’s now setting a deadline of next year for compliance. This is the kind of thing that could quickly get expensive for Facebook if more countries jump on board – there have certainly been rumblings about data storage requirements in a number of European countries already. My guess is that Facebook will choose to comply given that it likely has many users in the country and won’t want to lose them, but it will also worry about setting a precedent. Facebook’s ad targeting tool suggests a potential reach of 160 million for the country, whereas the official population is just 144 million, so it’s hard to know exactly how many users Facebook has there, but it’s likely in the tens of millions at least.
There were at least three separate articles today highlighting the way in which Facebook is increasingly embroiled in a messy set of political stories. The Washington Post reported that President Obama was instrumental late last year in convincing CEO Mark Zuckerberg to take the social network’s role in the election more seriously, and later reported that the ads which have been in the news for the last few weeks were sophisticated attempts to sow division over issues like the Black Lives Matter movement. BuzzFeed, meanwhile, reported that Steve Bannon at one point tried to plant a mole at Facebook, in an attempt to gain insight into its hiring process. Try as it might to extricate itself from this political quagmire, Facebook keeps getting sucked deeper in. Clearly no-one at Facebook was involved in the Bannon effort, but it highlights the tensions between the political faction currently running the US government and Silicon Valley, while the other stories suggest Facebook was used unwittingly as a tool by foreign operatives looking to influence the election. That could be either exonerating or damning, depending on how you look at it – on the one hand, it suggests Zuckerberg’s original blasé attitude towards political influence on Facebook was genuine, but on the other it suggests no-one at Facebook took the issue seriously enough while the campaign was still ongoing to discover things that have only come to light more recently. I hope that as part of the changes announced last week, Facebook is now attempting to ferret out this type of activity more methodically, but as with so many things Facebook-related, it’s impossible to know for sure because of the general opaqueness of the way Facebook operates.
Mark Zuckerberg’s first big action on returning from paternity leave today was to make a statement via his company’s live platform about the ongoing issue of Russian ad buying to influence last year’s US presidential election and related issues, the text of which has now been posted to Zuckerberg’s Facebook page. The key news from the statement is that Facebook will make the ads in question available to the US Congress, something that it had previously not done out of concern for violating privacy laws. But Zuckerberg also addressed the broader issue of Facebook’s use as a tool to meddle in elections. To my mind, he was refreshingly honest in conceding that Facebook was never going to be able to eliminate this behavior, and would focus instead on the more realistic goal of making it harder. He promised to continue investigating what happened during the election last year and share as much as possible about the findings. He announced a change to how political ads are displayed on Facebook, making it clear which entities are showing ads to which users at any given point in time, something it had previously resisted doing, ostensibly again out of privacy concerns.
There are several other elements to today’s statement which are worth reading in full, but the key takeaway is that Facebook is taking the issues seriously and responding to them in a variety of ways. One of the most notable lines in the statement, though, is this: “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to. Freedom means you don’t have to ask permission first, and that by default you can say what you want.” That’s always been Facebook’s default position, and I think it’s the right one – the minute it gets into policing which content is and isn’t acceptable ahead of time, it’s in an increasingly powerful and dangerous role, and it has a sometimes poor track record of making those calls. (A current example is its banning of the Rohingya insurgent group in Myanmar, which is at the very least a highly political decision in light of the ongoing actions of the Burmese government.) My feeling is that election meddling and many other issues facing Facebook – including the recent problems with ad targeting – are 99.9% problems: in other words, if Facebook can stop 99.9% (or some other very large percentage) of that activity from happening, that should be good enough, because trying to solve 100% of it is likely to involve far more work and cost, both in financial and freedom of speech terms, than it’s worth.
Google Forced to Unbundle Services from Android and Open to Search Competitors in Russia (Apr 17, 2017)
The EU is currently taking action against Google over what it sees as anticompetitive practices including bundling of its own services and blocking competing ones from being pre-installed in Android. As such, this Russian case takes on more importance than it might otherwise have, because it presents one possible outcome of the EU case, which is forcing Google to unbundle its own services from Android and allow competing search engines like Yandex to be pre-installed. That’s certainly a possibility in the EU case too, and would mirror the action taken years ago against Microsoft over browsers in Windows. If that were to happen, I’m skeptical many people (or OEMs) would choose alternative search engines on an Android phone, but it would potentially threaten Google’s Android business model, which is entirely about the apps and services it runs on the device (and the advertising they enable). For what it’s worth, as I wrote in this piece at the time the EU action was announced, I still think it’s misguided.
US Charges Russian FSB Officers and Their Criminal Conspirators for Hacking Yahoo and Millions of Email Accounts (Mar 15, 2017)
The stories that broke immediately before this press conference and announcement from the US DoJ suggested only that Russian nationals were involved, but the formal announcement makes clear that these were Russian agents and not just citizen hackers. That’s a good reminder that state-sponsored attacks are among the biggest things all online service companies have to worry about in our day and age, whether the state behind the hacking is Russia, China, North Korea, or some other country. Yes, ordinary hackers will still try and occasionally succeed in breaching these systems, but state sponsorship can put massively more resources behind a hack like this and often has more success. That, in turn, raises the bar for companies vulnerable to this kind of hacking in terms of their security defenses, but should also make users think about what information they’re entrusting to these systems.
Russia Requires Apple and Google to Remove LinkedIn From Local App Stores – The New York Times (Jan 6, 2017)
This comes hot on the heels of the Chinese New York Times app story earlier in the week, and there’s a danger of this becoming a trend. Apple and Google both tend to comply with local laws when it comes to this kind of thing, and that’s certainly a reasonable defense. But if oppressive regimes start to use the major app stores as a way to block content they don’t like, Apple and Google are going to find themselves on the receiving end of attacks from lots of civil liberties groups.