Facebook has announced that it’s making changes to the type of content that can be monetized on its site, introducing serious limitations on the kinds of content ads will run against. On the one hand, this is clearly an echo of changes YouTube made earlier this year in response to the boycott and broader backlash against ads showing up next to undesirable content, and therefore a sop to advertisers. But on the other hand, it means content creators who may in some cases have built businesses out of creating content in the now unmonetizable categories will understandably be upset. Some of the bans on monetization are entirely common sense in nature, while others are likely to be more controversial, notably a ban on monetizing content about highly controversial issues, seemingly including news coverage of those issues. That’s one that Facebook is definitely going to want to clarify to avoid charges of censorship.
Apple Removes VPN Apps from Chinese Version of App Store (Jul 31, 2017)
Apple has removed VPN apps from the Chinese version of its App Store for iOS devices, in compliance with the Chinese government’s edict that VPNs have to be licensed to be able to operate. This is yet another example of the difficult line foreign tech companies have to walk in China, complying for the most part with local regulations, even those designed to enable censorship, while preserving freedom of speech in other markets around the world. This is a gray area that Apple hasn’t had to deal with as much as content-centric companies like Facebook and Google, both of which eventually exited China (one forced out, the other choosing to leave rather than submit to censorship requirements), but that’s been starting to change. In the past year and a half, we’ve seen some of Apple’s content offerings like iBooks, individual apps like the New York Times, and now categories such as VPNs blocked, while the government has also forced cloud service providers to work through local companies for data centers. As I’ve said before, so far Apple can simply say it’s complying with local laws and regulations as it does elsewhere, and that will provide some cover, although it hasn’t insulated the company entirely from criticism over this latest move. This move in particular further reduces the ability of users with Chinese App Store accounts to get access to otherwise blocked news and information, though a recent crackdown on VPN use makes that challenging anyway. But so far the Chinese government hasn’t forced Apple to break any of its own cardinal rules, including protecting user privacy and security. If and when the Chinese government ever does cross that line, that will be the real test for Apple and could end up being very bad for its business in China. So far, thankfully, it hasn’t come to that.
Also worth noting in this context: Russia has just passed legislation that bans the use of VPNs in the country, and although it’s a far less important market for Apple than China, the company will have to deal with some of the same issues there once the law kicks in this November.
Twitter is Reportedly Testing a Fake News Button (Jun 29, 2017)
The Washington Post reports that Twitter is testing a button that would allow users to flag tweets, or links in tweets, that appear to be false or misleading, although it says there’s no guarantee that the feature will ever launch, and Twitter itself says it’s not going to launch anything along these lines soon. On the face of it, this is a logical step for Twitter, which has been one of the biggest vehicles for the rapid spread of fake news over the last year or two, even though its much smaller scale means that Facebook still arguably has a bigger impact, especially given its tendency to reinforce people’s existing biases. But on the other hand, given how the phrase “fake news” has lost all meaning and come to take on distinct partisan overtones, there’s enormous potential for misuse and controversy, and if Twitter does launch something along these lines, it’s going to need either a massive team of its own or several big partners with significant resources to handle the refereeing that will be needed. That alone may prevent Twitter from ever launching the feature, needed though it may be. In addition, given that Twitter has arguably bent its own rules on acceptable content a little for public figures such as President Trump (and candidate Trump before him), there are some big questions about whether tweets from the President and others would be subject to the same rules as those from other users.
Instagram Uses AI to Filter Spam and Abusive Comments (Jun 29, 2017)
Instagram is announcing today that it’s now using artificial intelligence to filter spam and abusive comments in the app. Wired has a feature (also linked below) which dives deeper into the background here and makes clear that what Instagram is doing builds on Facebook’s DeepText AI technology, and that Instagram has been working on it for some time. The spam filter works in nine languages, while the comment moderation technology only works in English for now, but both should clean up the Instagram experience. Importantly, though both spam and harassment are issues on Instagram, neither is as bad there because so many people have private accounts – I haven’t seen an official statement from Instagram on this, but some research and testing suggests that between 30 and 50% of accounts are likely private. Those accounts, in turn, are far less likely to receive either spam or abusive comments, since they’ve explicitly chosen to allow those who might comment to follow them. However, for the rest, and especially for celebrities, brands, and so on, these are likely far bigger issues, so cleaning them up in a way that doesn’t require the same massive investment in manual human moderation as Facebook’s core product is a good thing all around.
Facebook Solicits User Feedback on How to Tackle Issues Like Censorship and Terrorism (Jun 15, 2017)
A while back, Facebook said it would be soliciting user feedback on its policies for moderation and censorship around thorny issues like terrorism and freedom of speech, and it’s now putting a program in place to begin doing this in earnest. It has listed some of those thorny questions on its website and also launched its first debate, on terrorism, separately. On paper, getting user feedback on these issues seems a great way to absolve itself from the role of arbiter or gatekeeper of what’s allowed on Facebook – it’s also said in the past that it wants to be sensitive to local cultural norms around these things rather than having a single global policy, which seems sensible. But the most likely outcome is a range of views expressed and real division around some of these issues, which means Facebook will still have to come down on one side or the other, and will now do so explicitly going against the stated views of many of its users. This is definitely a double-edged sword. In addition, as we’ve seen from the recent FCC comment process around net neutrality, such large-scale public feedback projects are easily hijacked by groups, so Facebook will have to work hard to sift the wheat from the chaff here. On balance, I think this is a positive step, but one that will be really tough for Facebook to execute well.
Back in December, four big US Internet companies signed a voluntary code of conduct with the EU under which they agreed to improve and accelerate the removal of hate speech from their platforms. Now, the EU is reporting good progress on those goals, with twice as high a percentage of offending content removed, and Facebook and Twitter removing substantially more content within the first 24 hours, while YouTube slipped a little in this regard for reasons that aren’t clear. As Facebook has discovered, policing content is an expensive and labor-intensive task at the best of times, but having external standards set like this raises the stakes even further. The big risk in the EU and specific European countries is that this moves from voluntary codes of conduct to actual laws with significant consequences for non-compliance, so the big US companies are wise to do what they can to play nicely to try to ward off such outcomes.
Facebook Moderation Guidelines Leak (May 22, 2017)
Facebook Takes 3 Hours to Remove Video of Murder (Apr 17, 2017)
A Facebook user apparently committed a murder on Sunday and claimed to be in the process of committing several more while streaming on Facebook Live video, but Facebook failed to take the video down for three hours afterwards. This certainly isn’t the first time something gruesome has been live streamed on Facebook, and the company has dealt with past situations both poorly and inconsistently. On the one hand, it’s clearly against its policies to broadcast something as disturbing as this, so taking the videos down should be simple from a policy perspective. But in some cases, it’s been accused of taking down videos which – despite their content – were enormously newsworthy, and therefore engaging in censorship. In this case, it seems baffling that Facebook didn’t take the video down much sooner, but it raises much bigger issues about how to police live video, which by definition has often done its damage before anyone at Facebook is even aware of it. Given YouTube’s recent struggles with monitoring non-live video for inappropriate content, one can only imagine the challenges involved in monitoring video in real time. Certainly, Facebook needs better tools for flagging such content and faster response times when videos are flagged, at the very least.
Update: Facebook has now responded, and says it’s going to do exactly what I said in that last line: that is, improve its flagging tools and shorten response times. It also posted a complete timeline. Worth a read.
A new front has just opened up in the war between the Trump administration and the tech industry: Twitter is suing the government after it attempted to compel Twitter to reveal the identity of the people behind the @Alt_USCIS Twitter account. That account is allegedly maintained by employees of the US Citizenship and Immigration Service and has been highly critical of the Trump administration and its policies on immigration. In and of itself, that seems like no legal justification at all for unmasking the account’s owners, and that’s why Twitter is pushing back on free speech grounds. But the legal hook here may be that the account is using the name of the agency in its Twitter handle, and as such might just possibly be in contravention of trademark or copyright law, or anti-impersonation regulations. Regardless of the reasoning, this sets up yet another fight between the tech industry and the administration, though in fairness Twitter had resisted some earlier attempts by the Obama administration to get at the people behind accounts as well. It’s also an important test of one of the key tenets of Twitter’s value proposition as a free speech platform.
Though the NetEase tie-up is the main “new news” here, the broader story is that there are still important barriers to Google getting back into China (just as there are for Facebook), the thorniest of which is whether Google sacrifices its stance on censorship in order to re-enter the market. That was the primary reason it left back in 2010, and yet the Chinese government’s approach hasn’t really changed in the interim. Unlike Facebook, which is prevented by the government from operating in China at all, Google chose to leave China of its own volition, and the main barrier to re-entry would be deciding to go back in despite the moral quandaries inherent in such a choice. This is where Apple’s history in China is interesting – as first and foremost a hardware company, it has been able to run the core part of its business just as it does elsewhere, with any censorship applying to narrow slices of its overall business, such as individual apps in the App Store or the iBooks store as a whole. For Google and Facebook, however, access to information is their central value proposition, and so sacrificing the completeness of that offering to censorship is a much bigger concession.
via The Information
This is a great in-depth take on Facebook’s efforts to get back into China following the 2009 moves that saw it effectively blocked from operating in the country. The phrase I saw repeated most frequently in the article? Some version of “[Facebook executive] declined to be interviewed,” which is indicative of just how carefully Facebook is treading in China – it would clearly like to get back in and compete for those billion-plus potential users along with the local social networks, and has even suggested that it’s willing to put up with a certain amount of censorship, but doesn’t yet seem to feel like the time is right. There would certainly be a big backlash against any censorship-based re-entry, especially if it felt like Facebook was willingly complicit rather than doing the bare minimum to comply, just as Google and Yahoo faced criticism over their activities in China in the past. This is definitely a double-edged sword for Facebook, though it’s not even clear at this point that it would be allowed back in even if it decided to give it a try. The whole piece is worth a read – lots of interesting detail here, much of which is also applicable to other big US tech companies that would like to be more active in China (or already are).
Snapchat Discover Takes a Hard Line on Misleading and Explicit Images – The New York Times (Jan 23, 2017)
There’s a certain irony in the fact that Snapchat is now trying to remove some of the lewder images from its Discover tab, when its early reputation (somewhat undeservedly) was that of an app that existed specifically so that users could send each other such images of themselves. But this is the sort of thing we see as apps and services that have been allowed to run relatively unfettered begin to ramp up efforts to court advertisers in preparation for an IPO, which is exactly what Snap is doing. Cleaning up the Discover tab should provide some more comfort to advertisers about the context in which their ads will be seen, though there’s nothing in these guidelines about racy images that are relevant to the Stories behind them, which I’d say many of the images in the Discover tab are. The other side of this effort could be increased user controls around the content they see on the Discover tab, since some users would prefer not to see those images or the Stories behind them at all – balancing the needs of publishers, advertisers, and users is always the hardest balancing act for any ad-backed business.