Company / division: Facebook
Facebook has announced that it’s now offering an option to order food through its app, as a result of integration with a range of delivery partners and individual restaurant chains. The “Order Food” option is so far buried deep in the “Explore” tab where few people are likely to find it, though I’m guessing users who make use of it frequently may see it move into their core navigation over time. The integration is basic, mostly launching an in-app browser aimed at the website for either a delivery service or the restaurant chain, with the user having to fill in all relevant information such as credit card, address, and other contact information if they don’t have an existing account. As such, this integration feels like it adds little value over and above the minimal utility of shopping for food within the Facebook app rather than a separate one. It’s a good reminder that, for all Facebook’s reach and power, its new feature launches are often pretty lacking and unlikely to gain much traction, at least in their original form.
Facebook’s COO Sheryl Sandberg was interviewed today by Axios’s Mike Allen on the subject of Russian election interference and other topics, while Facebook also issued some data about the effectiveness of its program to flag fake news on the platform. At the same time, the Washington Post reports that Facebook has removed a set of data from its site and tools which allowed for analysis of Russian-related postings.
The Sandberg interview shed little additional light on the topic, with the only real news being that Facebook is sharing both the ads bought by Russian-backed entities and additional details associated with them with Congress, which in turn may share them publicly. However, she was also asked whether Facebook was a media company, a characterization she pushed back on, leading to several articles from actual media companies arguing that she’s wrong. There continues to be something of an ulterior motive for these stories given the tense relationship between Facebook and the media, but I continue to believe that these characterizations are wrong. To my mind, Facebook is much more like a cable TV company than a TV programmer, putting together a package of content for users but mostly not doing that programming itself, while selling ads that appear within the programming. I don’t think most people would argue that cable TV companies are media companies or that they’re responsible for the specific content of their programming, though they are responsible for establishing general policies and rules about what will run on their platforms.
The data Facebook shared on its fake news flagging effort suggests that a fake news label applied after fact checking from third parties effectively reduces sharing and views once it’s applied, but the problem has always been that it takes several days for this to happen, which means most of the views have already happened by the time it takes effect. It shared the data with its fact checking partners as a way to incentivize them to do better (something they’ve been asking for) but without massive new resources from Facebook or elsewhere, it’s not clear how those organizations will be able to work faster or cover more ground. That, in turn, will continue to limit the effectiveness of the program.
Lastly, Facebook says the data it has pulled from its site with regard to Russian accounts should never have been available in the first place, and its disappearance therefore reflects the squashing of a bug rather than a decision to pull otherwise public information. Whether you believe that or not likely depends on your view of Facebook’s overall level of transparency in relation to the Russia story, which has clearly been limited. It appears Facebook at a corporate level is still desperate to control the flow of information about Russian influence on the platform, which likely isn’t helping its PR effort here – better to be as transparent as possible so that all possible bad news can come out quickly rather than continuing to trickle out.
Facebook’s Oculus today held its fourth developer event, Oculus Connect, and the biggest announcements revolved around standalone headsets. First, Oculus will launch the Oculus Go, a mobile-grade standalone VR unit, at $199 early next year; second, the company has made significant progress on its Santa Cruz project, which will result in a standalone PC-grade headset at a later date. The Oculus Go is a pretty compelling new entrant in the market, a competitor of sorts to Samsung’s Oculus-based Gear VR but without requiring a compatible smartphone, and with some feature benefits too. It’s more expensive than Gear VR and Google’s Daydream View, but still fairly reasonably priced. Santa Cruz will offer inside-out tracking and six degrees of freedom, meaning that it will allow a full range of motion and room and object detection without requiring external sensors to be installed in a room as the HTC Vive does. There’s no detail on pricing or exact availability for that product, but it sounds like it’ll be at least late next year before it’s out. With both products, Oculus reduces its dependence on partners – Samsung in mobile, and PC makers for the Rift – over the long term, which is likely to push those partners further into the arms of other VR platforms: Daydream in the case of Samsung, and Microsoft’s Windows-based Mixed Reality VR platform in the case of PC OEMs.
On that latter point, though, another big announcement Oculus made today was making permanent the temporary $399 summer price point for the Oculus Rift bundle including controllers, something that’s seemed increasingly inevitable as Oculus extended the price promotion. As I pointed out in this piece I wrote for Techpinions a while back, that price point and similar pricing moves from HTC and Sony are making the opportunity Microsoft originally targeted for its VR partners disappear. It’s going to be very tough to sell a basic PC VR headset against the Oculus Rift bundle at the same price point.
The other announcements made largely relate to different bundles and new software. Oculus is updating its platform for the Rift, introducing some new experiences including a virtual desktop environment along the same lines as Microsoft’s recent announcements – something I’m still not convinced most people want from VR – as well as more social and entertainment experiences. It’s also creating a business bundle for Oculus designed for companies that want to deploy Rift and Rift-based experiences, which will come with a premium tier of support over and above a set of hardware.
The big new goal Facebook and Oculus announced at today’s event is getting 1 billion people into VR, something that’s miles away from today’s numbers, which are likely closer to one hundredth of that figure. Certainly, bringing price points down is part of getting there, as is creating experiences beyond hardcore gaming, but it really doesn’t feel like there’s much there yet. That may be OK, because Facebook doesn’t seem to have put a timeline on the goal, which therefore remains more aspirational than concrete.
I’m actually tying three stories together here, only two of them referenced in the headline. The first is news that Facebook is tightening the review process for ads that seek to target by politics, religion, ethnicity, or social issues, requiring human approval before these ads can be shown to users. Secondly, Facebook’s Chief Security Officer, Alex Stamos, went on something of a Twitter rant on Saturday in which he complained about what he described as overly simplistic coverage of complex issues by the media. And thirdly, CBS aired an interview on Sunday with the Trump campaign’s digital director, who claims that the campaign worked in very direct and sophisticated ways with Facebook to micro-target its ads, including having Trump-sympathetic members of Facebook’s staff working directly with the campaign in its offices.
The ad review change is a sensible one in response to recent revelations about how these tools were used in the past, but it is likely to catch plenty of entirely innocent activity too – e.g. someone targeting members of a particular religion with products or services relevant to them – and will likely slow down the approval process for those ads. It will also slow down the approval process for political ads during campaigns, when the volume of ads tends to rise dramatically, meaning the review team will need to be augmented significantly. That delay could prove costly as campaigns become more nimble in responding to news in real time and want to target ads immediately. We won’t know the impact of that until next year, as mid-term campaigns ramp up.
The Stamos rant garners some sympathy from me, because I agree that some of what’s been in the press has assumed that Facebook should have been aware of these attempts to game its systems at a time when the US government and security agencies hadn’t yet addressed the issues at all in public. But the rant is also indicative of what appears to be a split between the security and engineering teams at Facebook, which clearly want to speak out more, and the PR and broader senior management team, which seem to want to say as little as possible – several reporters I follow on Twitter responded to the thread with frustration over the fact that Facebook hasn’t made people available to talk about the details here.
Lastly, the CBS story doesn’t seem to have been picked up widely and may be partly exaggeration on the part of the source, but there’s no doubt that the Trump campaign did use the tools Facebook offers extremely effectively during the campaign, and that it played an important role in the outcome. What’s important here is that its uses were all legitimate, in contrast to the use of Facebook by Russian actors claiming to represent US interests, but the effects and even techniques used were in many ways similar. Even as Facebook clamps down on one type of influence, the broad patterns will remain similar, and as long as foreign actors can find US-based channels willing to act as fronts, it’s going to be extremely difficult to shut down this type of activity entirely.
Facebook still hasn’t shared all of the details of the ads bought by Russian agents on Facebook over the last few years with Congress, and hasn’t really shared any of the details with the general public. However, some of the details have emerged regardless, and one researcher has used that information to analyze the reach of some of the posts on the accounts controlled by entities tied to the Kremlin. What he found is that the organic reach of those posts was enormous, much larger than the numbers reached by the ads alone as reported by Facebook, suggesting that Facebook is using the narrowest possible definition of reach in its reporting and thereby downplaying the impact.
Until Facebook releases the full details of the Russian operations, we can’t know the true reach for sure, and this analysis is merely indicative of the organic reach achieved by half a dozen of the biggest accounts we do know about. But it’s clear that the operation was both sophisticated and very effective in reaching large numbers of people, leveraging many of the same techniques used by legitimate news organizations and others on Facebook. Given that these techniques are available to anyone who uses Facebook, the only way they could have been stopped is if there had been clear evidence much earlier in the process that the accounts behind them were “inauthentic” (to use Facebook’s terminology). And given that neither Facebook nor the US government was actively investigating that possibility during the election, that was never likely to happen. It’s also not clear how Facebook would go about policing this kind of thing going forward.
Facebook is testing a new button (an “i” in a circle) on articles in the News Feed, which will bring up additional context relating to the article, including a brief summary description of the publication from Wikipedia (if its profile merits an entry), related articles, and where the piece is being read and shared. All of this is intended to serve as a set of subtle signals about the reputability of the publication and the content of the article, without explicitly rating it in the way the much more robust (but therefore also less frequently applied) fact checking initiative Facebook announced earlier in the year does. The main problem I see with this approach is that the button itself doesn’t highlight any particular articles – the reader has to proactively decide to find out whether there might be interesting information hidden behind it, something many readers won’t be inclined to do, especially if they’re the credulous type most likely to fall for fake news in the first place. As such, this is an interesting additional set of tools, but not one that’s likely to make a meaningful difference in combating fake news on Facebook.
Instagram Offers Cross-Posting of Stories to Facebook (Oct 5, 2017)
Facebook seems determined to keep trying to make the Stories format a success in its core app, even as all the evidence suggests that hardly anyone is using it there. The latest push is a feature which enables users of Instagram’s Stories feature to cross-post a Story created in that app to the Facebook equivalent as well. That will certainly provide a low-friction way to get people to create content for the Facebook Stories feature, and will therefore likely lead to at least a small increase in usage. But the big difference between Instagram and Facebook is often the size and nature of the audience. Yes, some people have big followings in both places, and for them cross-posting will be natural and even useful, but for many others the appeal of Instagram is the smaller, more intimate audience they publish to there, in contrast to the mishmash of people known well and not so well that clutters many people’s Facebook networks. As such, the appeal and usage of the feature is likely to be somewhat limited, for all the same reasons that Facebook’s Stories feature in general hasn’t taken off.
Third party social media metrics company Delmondo says that across a selection of Facebook Watch videos it measured, average watch time was 23 seconds. That’s a little higher than the 17 second average for videos in the News Feed, but not much. It’s notable, too, that 20 seconds is the minimum amount of time a video must run before a mid-roll ad can be shown to the user, and I wouldn’t be surprised if those mid-roll ads are a big reason why average watch times are around that level. I continue to believe that Facebook’s mid-roll focus is going to harm viewing and ultimately ad revenue for its video platform, and it badly needs to re-think that approach. It’s still early for Watch in particular, and it’s clearly more of a destination for video rather than something users stumble across accidentally as with the News Feed, but it needs to grow well beyond 23 seconds if it’s going to be worthwhile either for Facebook or its content creators longer term.
Facebook, Google, and Twitter Struggle to Contain Fake News in the Wake of Las Vegas Shooting (Oct 3, 2017)
I had this in my list of items to cover yesterday, but it was a busy day for other news and I’d already covered a couple of Facebook stories, so I decided to hold it over to today given that it was likely to continue to be newsworthy. This BuzzFeed piece does a good job rounding up some of the issues with Facebook, Google, and Twitter properties in the wake of the awful shooting in Las Vegas on Sunday night. Each of these platforms struggled in some way to filter fake news and uninformed speculation from accurate, reliable news reporting in the wake of the shooting. Each eventually responded to the issue and fixed things, but not before many people saw (and reported on) some of the early misleading results. And it does feel as though some of the issues were easily avoidable, whether by limiting which sites might be considered legitimate sources of news ahead of time, or at the very least by requiring new sites claiming to break news to pass some sort of human review before being cited. Normally, I’d say this would blow over quickly and wouldn’t matter that much, but in the current political context around Facebook, Google, and so on, it’ll probably take on broader meaning.
Digiday has done some digging on the CPMs (payout rates for advertising) on Facebook’s video platform, and has found that on the whole they’re pretty low. One publisher suggested an average CPM of 15 cents, which is indeed low by industry standards, and that theme, if not the exact amount, was confirmed by others Digiday spoke to. One big challenge, though, in measuring CPMs or other industry-standard ad metrics is that many publishers publish videos which don’t use the mid-roll ad format alongside those that do, so the denominator may be skewing the results a little. But what seems clearer is that mid-roll ads perform far less well than the pre-roll ads YouTube has used very successfully, something I predicted in this piece a few weeks ago on Facebook’s big video pivot. I suspect Facebook will struggle to compensate creators adequately as long as mid-roll remains essentially the only way to monetize videos, especially when it likely drives a high abandonment rate.
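For context, CPM is the payout per thousand monetized impressions, so a quick back-of-the-envelope calculation shows just how little revenue a 15-cent rate generates. The sketch below uses the 15-cent figure reported above and, for comparison, an assumed $5 CPM as a round illustrative number for pre-roll video rates (not a figure from the Digiday piece):

```python
def video_ad_revenue(monetized_views: int, cpm_dollars: float) -> float:
    """Revenue from video ads: CPM is the payout per 1,000 monetized views."""
    return monetized_views / 1000 * cpm_dollars

# One million monetized views at the reported $0.15 CPM...
facebook_payout = video_ad_revenue(1_000_000, 0.15)
# ...versus the same views at an assumed $5 CPM for comparison.
typical_payout = video_ad_revenue(1_000_000, 5.00)

print(f"${facebook_payout:,.2f} vs ${typical_payout:,.2f}")  # $150.00 vs $5,000.00
```

At rates like that, even a video with a million monetized views earns a publisher only a few hundred dollars, which is the heart of the compensation problem described above.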
Facebook has made yet another announcement in what’s rapidly becoming the saga of Russian ad-buying on the platform and the ongoing fallout from it. This time around, it says it’s going to share the details of the 3000 suspicious ads placed on the platform with the US Congress, and it’s also going to hire a thousand additional people for its ad review team to ensure that inappropriate ads don’t get through. The rest of the announcement focuses mostly on fleshing out promises made over the last couple of weeks, though there’s still relatively little transparency on what’s actually going to change and/or when in some cases. Over the weekend, Mark Zuckerberg also personally apologized for any role Facebook may have had in sowing divisions in the world and promised to work to make things better in future, as part of a post relating to the Yom Kippur Jewish holiday. It’s clear that he’s taken all of this far more seriously and increasingly personally as well over recent months, though many still want him and Facebook to do far more to increase transparency over how Facebook has been used for ill and how it will change as a result.
Even though the shareholder lawsuit over plans to create a new class of shares at Facebook has been settled, some of the materials in the suit have just been unsealed and reveal some interesting tidbits. Business Insider has latched onto this particular one: Mark Zuckerberg wanted to have Facebook employees work on projects for his Chan Zuckerberg Initiative foundation, but got pushback from other board members, notably Marc Andreessen. On the one hand, this feels like just the kind of thing some people worried about when Zuckerberg created the foundation while still running Facebook, risking a blurring of lines between the two. It feels clearly inappropriate for him to try to swing corporate resources behind his personal projects, and it’s good that he was shot down. But I’m also reminded of the many things that Google invested in during its pre-Alphabet days which were effectively personal passion projects for the two founders. Some of those things had at least tenuous connections to Google’s core business, but others felt much more disconnected, and yet the fact that there was no real shareholder oversight of most of them meant that they happened anyway (and the founders could at least argue they were potentially commercial projects, even if some of them were years from generating revenue). The fact that Zuckerberg created a separate foundation to pursue his projects makes the separation that should exist much clearer, and thereby throws these potential conflicts of interest into sharper relief.
via Business Insider
In Twitter’s statement on Russian meddling in last year’s elections, it mentioned that Facebook had shared with it data on the accounts it had previously reported, and it now appears Facebook has shared similar data with Google as well, as Google investigates its own role in all of this. The three companies have been the main focus – so far – of US congressional investigations into the use of online advertising and platforms to influence the outcome of last year’s elections, so it’s natural that the companies would share whatever data they have with each other. Twitter, though, was reprimanded (rightly or wrongly) by at least two members of Congress this week for seemingly relying too heavily on Facebook’s prior work rather than performing its own extensive search of past activity, and it seems Google is doing rather more of its own digging, though there’s no word so far on what it’s found. Both Google and Facebook have been widely criticized over their roles in allowing problematic activities to take place on their platforms, but I continue to argue that policing such activity thoroughly enough to eliminate it 100% would be disproportionately expensive in time and money.
The Street reports that Facebook’s soon-to-be-launched subscription offering for news publishers won’t have some key newspapers on board at launch, notably the New York Times, The Wall Street Journal, and The Financial Times. It will, though, apparently have the Tronc and Hearst newspaper groups, the Economist, and the Washington Post as launch partners. The former group, notably the Times and the Journal (and its parent company News Corp), have been among the most skeptical about all of Facebook’s news initiatives, and among the most distinctive brands in news, so it’s not a huge surprise that they won’t be on board, but it’s still a bit of a blow. I’d argue, though, that Facebook doesn’t need broad support from newspapers for this program in the same way an aggregation app like Apple News does, simply because articles from those publishers will still be shared on Facebook, in some cases posted there directly, and in some cases will carry Facebook ads – they just won’t be monetized through subscriptions. Since Facebook won’t be taking a cut anyway, that doesn’t actually matter all that much.
Facebook has provided an update on its efforts to prevent interference in the recent federal elections in Germany, and says that in the month before the election it removed tens of thousands of fake accounts in Germany (it had previously said something similar about the recent French elections). But Facebook’s post also tries to make clear a distinction which has often been lost in the context of Facebook’s influence on elections: that its algorithms and the way it treats news and other posts have not swayed elections, while its official and transparent tools for politicians and political parties absolutely have had real effects. That’s a theme in the post, in which Facebook talks about politicians using the platform to reach voters and a variety of tools the company set up to help voters understand the issues and so on, much as it has promoted voting and information about candidates and their policy positions in the US. I frequently see Twitter posts which suggest that Facebook claims not to influence elections at all, but that’s misleading – its point is that it has transparent and official tools for politicians and parties to legitimately influence voters, while it’s doing its best to crack down on illegitimate uses of its platforms for those purposes. However, this post reiterates a point from Mark Zuckerberg’s brief speech last week on the topic – that Facebook will never be perfect at this, and its goal is to make it harder to mislead through the platform.
Update/Related: Mark Zuckerberg later today posted about President Trump’s critical tweet this morning with more of the same message and something of an apology for his initial reaction to criticisms of Facebook’s role in the election.
The Russian communications regulator has told Facebook that it needs to begin storing data on Russian users in the country or face a ban, something which happened to LinkedIn last year. The relevant law was passed back in 2015, but it seems the Russian government has given specific tech companies some time to comply because of the investment necessary to make it happen, and it’s now setting a deadline of next year for compliance. This is the kind of thing that could quickly get expensive for Facebook if more countries jump on board – there have certainly been rumblings about data localization in a number of European countries already. My guess is that Facebook will choose to comply, given that it likely has many users in the country and won’t want to lose them, but it will also worry about setting a precedent. Facebook’s ad targeting tool suggests a potential reach of 160 million for the country, whereas the official population is just 144 million, so it’s hard to know exactly how many users Facebook has there, but it’s likely in the tens of millions at least.
Facebook has signed up the biggest private company in the world – Walmart – as a customer for its Workplace enterprise product, something of a coup as it competes against other products which have been in the market for far longer. Of course, many of Walmart’s employees work in stores and not at desks and so likely will never need accounts on the corporate social network, so though its overall size is dazzling, the size of the deployment is likely to be rather smaller. But it appears that Facebook has been quietly making good progress signing up other businesses of different sizes including some big names like Telenor (Norway’s incumbent telecoms operator) and Starbucks. All of this is indicative of Facebook’s enormous power to leverage its dominance in consumer social networking into other fields, whether messaging, photo sharing, or even crossing over into the enterprise, based on its familiar brand, interfaces, and tools. That’s never been a guarantee of success in any market for Facebook, as its repeated failures to compete organically with Snapchat demonstrate, but it has given it a leg up in many areas over companies starting from scratch and continues to make it a formidable competitor for any company going up against it directly.
Facebook Signs Deal with NFL for Highlight Videos (Sep 26, 2017)
Given that the live TV rights for major US sports are pretty much all sewn up for years to come, the major online platforms have been relegated to pursuing other rights, including second-tier sports (and e-sports), sports rights outside the US, and meta content including highlights and sports-centric talk shows. The latest example of that comes from Facebook, which has paid the NFL for the right to show highlights to its users immediately after games end, as well as doing a deal for NFL-created shows for its new Watch tab for video. The highlights deal kicks in immediately, and the overall contract is for two years. This feels like one of the more promising deals Facebook has signed – I’m really not convinced anyone wants to watch long-form sports (like pretty much all US sports, with their massive ad loads) through a social network, but highlights seem much better suited to both mobile and social contexts, because they’re very shareable and digestible in small chunks. I already regularly see highlights from various sports in my Facebook feed, but they’re almost all videos from within articles hosted off Facebook – this deal brings the content onto the platform and therefore enables monetization through advertising. As I said yesterday in the context of YouTube’s enhancements, Facebook’s video ad tools are still very rudimentary in comparison, but at least it now has ways to show ads in videos. The challenge with highlights is going to be that they’re so short and so widely available that I wonder whether anyone will want to stick around beyond the mid-roll ad break.
China Effectively Blocks WhatsApp (Sep 25, 2017)
It appears that the Chinese government has effectively blocked WhatsApp entirely in the country through some fairly sophisticated and subtle means, making it more or less unusable for the population – an escalation over earlier partial censorship of certain content types. This is obviously a big blow to Facebook. We don’t know how many monthly active users WhatsApp has in China, and it’s clearly not as dominant there as in certain other markets given the popularity of local messaging services, but China is still an important market for WhatsApp given its popularity in Asia in general, and WhatsApp has also been about the only Facebook property that has operated in China even as the rest of the company has been shut out. The context here is the broader crackdown by the government on western tech companies, especially those that foster open communication, at a time when the government is clamping down on dissent.
via New York Times
There were at least three separate articles today highlighting the way in which Facebook is increasingly embroiled in a messy set of political stories. The Washington Post reported that President Obama was instrumental late last year in convincing CEO Mark Zuckerberg to take the social network’s role in the election more seriously, and later reported that the ads which have been in the news for the last few weeks were sophisticated attempts to sow division over issues like the Black Lives Matter movement. BuzzFeed, meanwhile, reported that Steve Bannon at one point tried to plant a mole at Facebook, in an attempt to gain insight into its hiring process. Try as it might to extricate itself from this political quagmire, it seems there is little Facebook can do at the moment to escape it, as it keeps getting sucked deeper in. Clearly no-one at Facebook was involved in the Bannon effort, but it highlights the tensions between the political faction currently running the US government and Silicon Valley, while the other stories suggest Facebook was used unwittingly as a tool by foreign operatives looking to influence the election. That could be either exonerating or damning, depending on how you look at it – on the one hand, it suggests Zuckerberg’s original blasé attitude towards political influence on Facebook was genuine, but on the other it suggests no-one at Facebook took it seriously enough while the campaign was still ongoing to discover things that have only come to light more recently. I hope that as part of the changes announced last week, Facebook is now attempting to ferret out this type of activity more methodically, but as with so many things Facebook-related, it’s impossible to know for sure because of the general opaqueness of the way Facebook operates.