Following the Rose McGowan account suspension I mentioned in an item yesterday, a number of prominent women on Twitter have organized a boycott of the platform which is taking place today (Friday). I’m linking below to an item from USA Today which covered the boycott as it was being organized, but the challenge today is knowing how effective the boycott has been, because by definition it’s about silence rather than speaking out. Other women, meanwhile, have chosen to speak out about the issues today instead, which makes for a more immediately visible form of protest (Update: this New York Times piece summarizes the different views being expressed on this question). One would hope that these protests, whatever their form, would prompt Twitter to engage more seriously with the issues being debated, but its lack of past progress on this front makes me skeptical that that will happen.
via USA Today
Twitter has published its latest Transparency Report, which covers the first half of 2017 and mostly relates to requests from government entities for intervention in content posted on Twitter. Amongst the other data in there, though, Twitter has also reported specific numbers on the accounts removed from the service for promotion of terrorism, a total that has reached over 900,000 accounts since August 2015. Importantly, the vast majority of those accounts were taken down not because of any report by a government agency but because Twitter’s own in-house tools flagged the accounts, often before they even began tweeting. That represents good progress over the last couple of years in this particular area, but Twitter remains poor at policing abuse in general on its site, as several reports from BuzzFeed and other news outlets have shown. In relation to that issue, it’s notable that “abusive behavior” is the category of government-reported content with by far the lowest action rate from Twitter of all those it reports, at just 12% of reports acted on, versus 40% for copyright issues, 63% for trademark infringement, and 92% for promotion of terrorism. That may in part be because government representatives often have thin skins and those opposing them may be considered by Twitter to be in need of special free speech protections, but I wouldn’t be surprised if that 12% were representative of the proportion of overall abuse reports that get acted on by Twitter.
Twitter has a blog post up and apparently also spoke to reporters about its efforts to curb abuse and harassment on the platform. The company released data about the improvements it’s made over the past year and the positive effects it says these are having, such as acting on ten times as many abusive accounts, removing twice the number of repeat offenders, and so on. But there’s nothing in the new data or the blog post about why so many legitimate reports still get dismissed as false positives, as BuzzFeed reported earlier in the week. And there’s no real transparency about how the decisions are made, by whom, or what exactly the guidelines are. Twitter clearly is making progress here – the numbers show that – but the fact that BuzzFeed had no trouble quickly finding cases where it’s still falling short suggests it’s far from done. And though Twitter is clearly taking the problem more seriously today than it was even six months ago, before this current effort began, it’s still too often defensive and closed rather than transparent and honest in talking about why abuse and harassment are still such issues. At root, it feels like Twitter is still erring too much on the side of maximum freedom of speech rather than on the side of protecting users from abuse. Meanwhile, much behavior by Twitter users is utterly unacceptable and yet likely goes unreported simply because it’s not directed at a specific individual.
Twitter is Still Mishandling Abuse and Harassment Reports (Jul 18, 2017)
This article dropped on Friday evening as I was logging off for the week, so I’m only getting to it now. But it was something of a bombshell, detailing not just the scale of harassment, assault, and other misbehavior by men against women in venture capital, but also naming specific names, including some who hadn’t been accused previously. There really seems to have been a tipping point in the last few weeks on this topic, where far more women are now willing to speak out about their bad experiences and name their abusers and harassers. That, in turn, has suddenly exposed many men within venture capital and their past bad actions. This was a much needed change, and although the venture capital world and companies like Uber remain just small pockets in which the real state of things is finally being revealed, I can easily see this movement spreading and penetrating much of the rest of the tech industry. Justice Brandeis’ famous quote about sunlight (publicity) being the best disinfectant seems apt here: the more of these cases come to light, the more some of the perpetrators (like Justin Caldbeck and Dave McClure) will be moved out of roles or dumped by their employers altogether. None of this represents an overnight change, but it does feel like things are finally moving in the right direction, and those who have been protected by a combination of fear on the part of would-be accusers and collusion on the part of colleagues are finally being exposed to some real consequences. There’s clearly a long way still to go, but breaking the wall of silence feels like a big step forward. Increasing diversity still feels like one of the most obvious ways to prevent this issue in future – at many companies, the overwhelming gender dominance of men is clearly a big part of the cultural problem, even though women seem to have protected some of those accused as well, either covering up bad behavior or dealing with it too quietly (as in the case of 500 Startups).
Update: on Monday, per Axios, Dave McClure was asked to resign completely from 500 Startups, and did so, a step which arguably should have been taken rather sooner.
This is probably about as much as Facebook can be expected to do on an issue such as this – there’s no easy definition for revenge porn as such, and therefore no way to train a computer to look for it, so the only way Facebook can police it is to match images being shared with ones it’s been told about in the past. That’s obviously far from solving the issue, but it’s a start and should help with cases where the same images are being shared over and over.
via The Verge
Just when Facebook seems to be making progress with news organizations, it does something like this: reporting the BBC to the police for “sharing” child pornography in an effort to push Facebook to take the content down. The BBC’s reporting here is just vague enough that it’s possible that the images that weren’t taken down despite being reported really don’t contravene Facebook’s policies, but this certainly isn’t a good look for Facebook, which should be doing everything it can to stamp out child pornography and images of child abuse on the site, rather than obstructing investigations into it. And it certainly shouldn’t be doing ridiculous things like reporting journalists to the police under such circumstances.
via BBC News
I’ve been very critical of Twitter over its poor response to abuse and harassment on the platform, so I don’t think they should get a free pass now just because they’ve finally decided to do something about it. However, kudos to them for finally acting on these issues after years of bizarre prevarication on this point – they’re now moving quickly, as promised (here are two other steps taken in the last few weeks). These latest changes are actually some of the best they’ve announced during this period, because they proactively remove content from your feed based on algorithms. This has always seemed like it would have to be a big part of the answer – human curation was never going to be able to deal with the volumes involved here. Another positive change is more feedback on abuse reports users submit, which has been largely missing from the app itself so far. There’s still a risk of false positives, and Twitter definitely needs mechanisms for appeal and reinstatement where those occur, but it does finally feel like Twitter is making meaningful progress here.
Unlike last week’s changes, which were mostly about the user interface seen by non-abusive users, this change is directed specifically at limiting the reach of abusive users, which feels like a more important and urgent priority. The limits are only temporary – no-one is getting kicked off the platform for this abusive behavior, merely having their reach limited for 12 hours or so in the cases so far. I wonder if – by analogy to an iPhone lock screen – the lockout period will be longer after each offense until eventually the user is banned; that’s something Twitter doesn’t seem to have commented on publicly yet. But it’s also not clear that there’s an appeal mechanism, which is a bit worrying because Facebook, Twitter, and others have sometimes blocked innocuous users either by mistake or through mis-application (or over-zealous application) of policy. I’m all for Twitter cracking down on abuse – it should be a key priority – but it needs to happen in a way that’s transparent and appealable. So there’s definitely progress here, but we still need more.
This is one of those times when the word “finally” seems the apt response. Twitter has denied and stalled its way around the abuse issue, and never seems to have taken it nearly seriously enough, but the promise last week that it was finally ready to start moving faster seems to be bearing at least some fruit. And as I said last week, it’s presumably not a coincidence that Twitter’s Q4 results are out on Thursday – I’m sure the company would like to defuse the abuse issue a little and focus on other things on its earnings call. The changes announced today are positive, but I see at least two flaws: firstly, there’s no real transparency over the rules used to designate tweets or replies as either unsafe or “less relevant”. I understand the desire not to spell out exactly what filters are used to avoid malefactors gaming the system, but this is likely to trigger lots of complaining when an opaque algorithm gets things wrong. Secondly, and in a bigger picture sense, this is all still about presentation and not about actually policing the platform for true abuse – so many reports of abuse and harassment have gone entirely unheeded by Twitter, and none of this will address that fundamental issue.
One of the most baffling aspects of Twitter’s inertia over the past couple of years has been its refusal to take the issue of abuse and harassment on the platform seriously. This late-night tweet storm by the company’s VP of Engineering suddenly revealed that Twitter had decided to take the issue much more seriously and move much more quickly to resolve it (“days and hours not weeks and months”). When essentially everyone outside Twitter HQ has recognized for years that this was an issue needing swift resolution, why did it take Twitter so long, and what has changed now? One answer is that Twitter’s earnings are coming up next week, and making a strong statement about this now helps neutralize awkward questions about it then, even if Twitter hasn’t announced anything more concrete. Another is that Twitter is responding to the thorny questions about President Trump’s usage of Twitter and calls for him to be booted off the service by dealing with abuse more broadly. And perhaps some people at Twitter who have wanted to move faster on this issue but been blocked by Jack Dorsey or others finally managed to break through whatever barriers existed. Regardless, it’s good news assuming some meaningful change does come out of all this, but it still says nothing good about Twitter’s internal culture that it took this long to get to this point.