Uber Penalized by FTC for Misrepresenting Privacy Practices (Aug 15, 2017)
Based on observations of the new method in the wild, Marketing Land says Facebook appears to be testing showing people ads on Facebook based on the physical retail stores they have recently visited, leveraging location data from the Facebook app. If people already think that being retargeted on Facebook based on shopping on other sites is creepy, this is going to blow their minds, especially because many people may not realize that Facebook is even able to track their location when they’re not actively using the app. That background location tracking powers some services in the app, and in the iOS privacy settings Facebook can be set to use location only while the app is in use, but there doesn’t seem to be a similar option on Android, where all I can see is a single on-off location toggle per app at the OS level. None of this should surprise us, however: the name of the game in advertising is targeting, and the more data available the better as far as these companies are concerned. As long as there’s some disclosure somewhere of what’s being gathered and why, and consumers have an opt-out option, they’ll feel they’re covered. But between Snapchat’s recent moves in the opposite direction and this testing by Facebook, it feels like we may be about to wade into our first real set of privacy concerns around major social networks in several years, after companies pulled back significantly following something of a backlash. Users have been like the proverbial frogs in boiling water since then, with the erosion of privacy so subtle and incremental that no single step has been big enough to warrant objection, but I suspect that may be about to change.
via Marketing Land
iRobot CEO Backtracks on Roomba Data Sale Comments (Jul 28, 2017)
Roomba Owner iRobot Talks About Selling Home Mapping Data (Jul 24, 2017)
One of Amazon’s big missteps with its launch of calling and messaging features through its Alexa assistant was the assumption that its users would be happy to receive calls and messages from anyone who had their number, without the ability to block or screen those contacts first. It has now issued a partial fix, which allows users to block others from calling or messaging them, but it still doesn’t appear to have moved to a double-opt-in model, under which a user would have to accept someone’s request to connect before communication could occur. That means the feature still opens users up to calls and messages from exes and others in ways many won’t be comfortable with. Double opt-in is how this should have worked from the beginning, and it’s the model Amazon should be adopting now.
Google Expands In-Store Sales Attribution to More Ad Types (May 23, 2017)
Microsoft Hires Head of Privacy and Data Security from FTC (Apr 28, 2017)
This is a great move from Microsoft, which has been at the forefront of recent legal cases over data privacy and security, as it reinforces its commitment to these issues at a time when threats to both security and privacy are increasing. Putting a high-profile individual explicitly in charge of this area is a great symbolic move, but done right it should also ensure that these issues are examined in every aspect of Microsoft’s business. So far, Apple has arguably been the strongest champion for privacy as a guiding force among the major tech companies, but this move could see Microsoft become a more prominent advocate too. Worth noting: Brill won’t start at Microsoft until the summer.
Google Develops Federated Machine Learning Method Which Keeps Personal Data on Devices (Apr 6, 2017)
This is an interesting new development from Google, which says it has created a new method for machine learning that combines cloud and local elements, keeping personal data on devices while feeding the lessons learned from training back to the cloud, so that many devices operating independently can collectively improve the model they’re all working on. This would be better for user privacy as well as efficiency and speed, which would be great for users, and importantly Google is already testing this approach on a commercial product, its Gboard Android keyboard. It’s unusual to see Google focusing on a device-level approach to machine learning, as it has typically majored on cloud-based approaches, while it’s been Apple which has been more focused on device-based techniques. Interestingly, some have suggested that Apple’s approach limits its effectiveness in AI and machine learning, whereas this new technique from Google suggests a sort of best of both worlds is possible. That’s not to say Apple will adopt the same approach, and indeed it has favored differential privacy as a solution to using data from individual devices without attributing it to specific users. But this is a counterpoint both to the usual narrative about Google sacrificing privacy to data gathering and AI capabilities and to the narrative about device-based AI approaches being inherently inferior.
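To make the idea concrete, here is a minimal sketch of the federated-averaging pattern the piece describes: each device trains on its own data locally, and only the resulting model weights (never the raw data) are sent back to a server, which averages them into an updated global model. This is an illustrative toy using a simple linear model, not Google’s actual implementation; all function names and parameters here are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally via gradient descent on one device's data.
    The raw data (X, y) never leaves this function -- only weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets):
    """One federated round: each device trains locally, and the server
    averages the returned weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in device_datasets:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two "devices" hold private samples from the same underlying model y = 2x
rng = np.random.default_rng(0)
devices = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.01, size=50)
    devices.append((X, y))

w = np.zeros(1)
for _ in range(20):  # twenty federated rounds
    w = federated_average(w, devices)
print(w)  # ends close to the true coefficient, 2.0
```

The key property is visible in the structure: the server only ever sees weight vectors, so the individual keystrokes or browsing data that trained them stay on the device.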
FCC and FTC Heads Outline Policy on Internet Privacy (Apr 5, 2017)
In an op-ed in the Post this morning, the chair of the FCC and the acting chair of the FTC write up their views on the internet privacy debate that’s been raging in online tech publications over the last few weeks. As I’ve said previously (and discussed in depth in last week’s News Roundup podcast), the reaction on this topic has been overblown and understanding of it poor, though the major players on the other side haven’t really helped themselves. The major ISPs only began communicating on the topic after the congressional vote was over, and only now are the FCC and FTC chairs communicating clearly about the issue. But the reality is that this issue of internet privacy can only really be resolved by new regulation from the FTC, which will end up once again having responsibility for online privacy as it did until 2015.
EFF withdraws Verizon spyware claims – CNET (Mar 31, 2017)
This is an example of the hysteria we’re all being subjected to around the recent overturning of privacy rules regarding ISPs by the US Congress, and the dangerous places it can lead. The EFF, a consumer rights group particularly concerned with privacy, first wrote and then essentially entirely withdrew a post hyperventilating about a new app Verizon is testing on one obscure smartphone, once it gave Verizon a chance to respond and it provided an entirely reasonable response. In and of itself, this story isn’t that important, but it is symptomatic of a lot of the overblown rhetoric we’ve seen in the past week about carriers selling browser histories. The reality is that, because the new rules never actually went into effect, this week’s congressional action changed absolutely nothing from the status quo. And carriers no more have any intention of literally selling anyone’s browser history than Google or anyone else does – what they may do is use your browsing history to target advertising or their own products, just as Google, Facebook, and many other entities already do. Reasonable people can disagree on whether that’s a good thing or not, but it’s a fact of life for all of us already if we use these services. To pretend that what’s happened this week is the beginning of what EFF calls the privacy apocalypse is a total disservice to everyone involved, a form of crying wolf which is likely to make it much harder to get real attention onto real issues in the future.
After the London terror attack, a top U.K. official says Facebook needs to open up WhatsApp – Recode (Mar 27, 2017)
This is a worrying (though not altogether unexpected) resurfacing of the arguments from early 2016, when the FBI was trying to get into an iPhone owned by one of the San Bernardino shooters. In this case, UK Home Secretary Amber Rudd (whose role has no direct counterpart in the US, but is responsible for domestic law enforcement and counter-terrorism among many other things) has called for WhatsApp to “open up” and specifically referred to encryption. That’s because WhatsApp was allegedly one of the apps used by the terrorist behind last week’s attack in London, though there’s no evidence yet that he used it to plan the attack or coordinate with others. The bigger issue, as with last year’s Apple-FBI fight, is of course that once the government can get in, there’s no guarantee others won’t use the same methods, whether through hacks like the one that hit Cellebrite a few weeks ago, or exposures of government tools like the WikiLeaks release of CIA hacking documents. Encryption is a fact of life at this point, and essential for secure communication and protection of privacy for millions of law-abiding users, and no government back door can solve the law enforcement problem without also compromising that essential function. And the Rudd quote in the closing paragraph of this story suggests she doesn’t actually understand the FBI-Apple situation at all, which is not surprising from a government official but worrisome nonetheless.
Google introduced its own location-sharing feature last week, but Facebook’s is far more limited – it works within the context of a Messenger interaction, and only for an hour at a time, which feels a good bit less prone to accidental over-sharing. It also feels more useful in the messaging context, where you’d likely be messaging someone about meeting up, than in a Maps app, which might mean dipping out of a conversation to check the location (even if that might be useful when meeting at a new spot). As I mentioned last week, it’s interesting to see location sharing making a comeback when both Google and Facebook had previously backed away from this kind of thing over privacy concerns – that suggests a certain confidence over privacy issues that wasn’t there a few years ago, although both companies still seem to be approaching this more narrowly than in the past.