Topic: Privacy

Each post below is tagged with Company/Division names, Topics, and Narratives, as appropriate.
    Gmail Will Stop Scanning Emails to Target Ads Due to Enterprise Confusion (Jun 23, 2017)

    Amazon Provides Partial Fix for Alexa Voice Calling Privacy Issue (Jun 13, 2017)

    One of Amazon’s big missteps with its launch of calling and messaging features through its Alexa assistant was the assumption that its users would be happy to receive calls and messages from anyone who had their number, without the ability to block or screen those contacts first. It has now issued a partial fix, which allows users to block others from calling or messaging them, but it still doesn’t appear to have moved to a double-opt-in model, under which a user would have to accept someone’s request to connect before any communication could occur. That means it still opens users up to calls and messages from exes and others in a way many won’t be comfortable with. Double opt-in is how this should have worked from the beginning, and it’s the model Amazon should be adopting now.
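
    The design difference described here is easy to see in code. Below is a minimal, purely illustrative Python sketch (hypothetical class and method names, not Amazon’s actual API) contrasting the current block-list model with the double-opt-in model argued for above.

        # Hypothetical sketch of two contact models; not Amazon's implementation.
        class Contacts:
            def __init__(self, double_opt_in: bool):
                self.double_opt_in = double_opt_in
                self.blocked = set()    # the partial fix: users can block specific people
                self.accepted = set()   # connections the user has explicitly approved
                self.pending = set()    # requests awaiting approval (double opt-in only)

            def request_connection(self, caller):
                # Under double opt-in, a new contact waits here until the user accepts.
                if self.double_opt_in and caller not in self.accepted:
                    self.pending.add(caller)

            def accept(self, caller):
                self.pending.discard(caller)
                self.accepted.add(caller)

            def block(self, caller):
                self.blocked.add(caller)

            def can_call(self, caller) -> bool:
                if caller in self.blocked:
                    return False                    # blocking works in both models
                if self.double_opt_in:
                    return caller in self.accepted  # nothing gets through until accepted
                return True                         # current default: anyone with your number

        # Current model: an unwanted caller gets through until the user notices and blocks them.
        today = Contacts(double_opt_in=False)
        print(today.can_call("ex"))        # True

        # Double opt-in: the same caller sits as a pending request instead.
        stricter = Contacts(double_opt_in=True)
        stricter.request_connection("ex")
        print(stricter.can_call("ex"))     # False until the user calls accept("ex")

    The difference is small in code but large in practice: the first model puts the burden on the user after an unwanted contact has already reached them, while the second puts it on the caller up front.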

    via GeekWire

    Google Expands In-Store Sales Attribution to More Ad Types (May 23, 2017)

    ★ EU Fines Facebook 110m Euros Over Misleading WhatsApp Deal Info (May 18, 2017)

    Microsoft Hires Head of Privacy and Data Security from FTC (Apr 28, 2017)

    This is a great move from Microsoft, which has been at the forefront of recent legal cases over data privacy and security, as it reinforces its commitment to these issues at a time when threats to both are increasing. Putting a high-profile individual explicitly in charge of this area is a great symbolic move, but done right it should also ensure that these issues are examined in every aspect of Microsoft’s business. So far, Apple has arguably been the strongest champion for privacy as a guiding force among the major tech companies, but this move could see Microsoft become a more prominent advocate too. Worth noting: the new hire, Julie Brill, won’t start at Microsoft until the summer.

    via Axios

    Bose Denies Wiretapping or Personally Identifying Users Through Tracking App (Apr 22, 2017)

    Google Develops Federated Machine Learning Method Which Keeps Personal Data on Devices (Apr 6, 2017)

    This is an interesting new development from Google, which says it has created a new method for machine learning that combines cloud and local elements: personal data stays on the device, while only the model improvements learned from local training are fed back to the cloud, so that many devices operating independently can collectively improve the model they’re all working on. This would be better for user privacy as well as for efficiency and speed, which would be great for users, and importantly Google is already testing the approach on a commercial product, its Gboard Android keyboard. It’s unusual to see Google focusing on a device-level approach to machine learning, as it has typically majored on cloud-based approaches, whereas Apple has been the one more focused on device-based techniques. Interestingly, some have suggested that Apple’s approach limits its effectiveness in AI and machine learning, whereas this new technique from Google suggests a sort of best of both worlds is possible. That’s not to say Apple will adopt the same approach, and indeed it has favored differential privacy as a solution for using data from individual devices without attributing it to specific users. But this is a counterpoint both to the usual narrative about Google sacrificing privacy to data gathering and AI capabilities and to the narrative about device-based AI approaches being inherently inferior.
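
    To make the federated approach concrete, here is a minimal Python sketch of the general idea under simplifying assumptions (a toy linear model, plain gradient steps, and hypothetical function names; this illustrates the concept rather than Google’s actual system). Each simulated device trains on its own data locally and returns only a weight update; the server averages those updates, and the raw data never leaves the device.

        import numpy as np

        def local_update(global_weights, X, y, lr=0.05, epochs=5):
            """On-device training: returns only the weight delta, never the raw data."""
            w = global_weights.copy()
            for _ in range(epochs):
                grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
                w -= lr * grad
            return w - global_weights

        def federated_round(global_weights, devices):
            """Server side: average the updates contributed by participating devices."""
            updates = [local_update(global_weights, X, y) for X, y in devices]
            return global_weights + np.mean(updates, axis=0)

        # Three simulated devices, each holding private data the server never sees.
        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])
        devices = []
        for _ in range(3):
            X = rng.normal(size=(20, 2))
            y = X @ true_w + rng.normal(scale=0.1, size=20)
            devices.append((X, y))

        w = np.zeros(2)
        for _ in range(100):
            w = federated_round(w, devices)
        print(w)   # approaches true_w even though no device ever shared its data

    The privacy property falls out of the structure: the server only ever sees aggregated weight deltas rather than anyone’s typing or browsing data, which is what lets the shared model keep improving while personal data stays on the device.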

    via Google

    FCC and FTC Heads Outline Policy on Internet Privacy (Apr 5, 2017)

    In an op-ed in the Post this morning, the chair of the FCC and the acting chair of the FTC lay out their views on the internet privacy debate that has been raging in online tech publications over the last few weeks. As I’ve said previously (and discussed in depth in last week’s News Roundup podcast), the reaction on this topic has been overblown and the understanding of it poor, though the major players on the other side haven’t really helped themselves: the major ISPs only began communicating on the topic after the congressional vote was over, and only now are the FCC and FTC chairs communicating clearly about the issue. But the reality is that this issue of internet privacy can only really be resolved by new regulation from the FTC, which will end up once again having responsibility for online privacy as it did until 2015.

    via FCC and FTC Chairs’ Editorial in The Washington Post

    EFF withdraws Verizon spyware claims – CNET (Mar 31, 2017)

    This is an example of the hysteria we’re all being subjected to around the US Congress’s recent overturning of privacy rules for ISPs, and of the dangerous places it can lead. The EFF, a consumer rights group particularly concerned with privacy, first wrote and then essentially entirely withdrew a post hyperventilating about a new app Verizon is testing on one obscure smartphone, once it gave Verizon a chance to respond and the company provided an entirely reasonable explanation. In and of itself, this story isn’t that important, but it is symptomatic of a lot of the overblown rhetoric we’ve seen in the past week about carriers selling browser histories. The reality is that, because the new rules never actually went into effect, this week’s congressional action changed absolutely nothing from the status quo. And carriers no more have any intention of literally selling anyone’s browser history than Google or anyone else does – what they may do is use your browsing history to target advertising or their own products, just as Google, Facebook, and many other entities already do. Reasonable people can disagree on whether that’s a good thing, but it’s a fact of life for all of us already if we use these services. To pretend that what’s happened this week is the beginning of what the EFF calls the privacy apocalypse is a total disservice to everyone involved, a form of crying wolf that is likely to make it much harder to get real attention paid to real issues in the future.

    via CNET (EFF’s withdrawn post here)

    After the London terror attack, a top U.K. official says Facebook needs to open up WhatsApp – Recode (Mar 27, 2017)

    This is a worrying (though not altogether unexpected) resurfacing of the arguments from early 2016, when the FBI was trying to get into an iPhone owned by one of the San Bernardino shooters. In this case, UK Home Secretary Amber Rudd (whose role has no direct counterpart in the US, but covers domestic law enforcement and counter-terrorism among many other things) has called for WhatsApp to “open up” and specifically referred to encryption. That’s because WhatsApp was allegedly one of the apps used by the terrorist behind last week’s attack in London, though there’s no evidence yet that he used it to plan the attack or coordinate with others. The bigger issue, as with last year’s Apple-FBI fight, is of course that once the government can get in, there’s no guarantee others won’t use the same methods, whether because of hacks like the one that hit Cellebrite a few weeks ago or exposures of government tools like the recent WikiLeaks publication of CIA hacking tools. Encryption is a fact of life at this point, and essential for secure communication and the protection of privacy for millions of law-abiding users, and no government back door can solve the law enforcement problem without also compromising that essential function. And the Rudd quote in the closing paragraph of this story suggests she doesn’t actually understand the FBI-Apple situation at all, which is not surprising from a government official but worrisome nonetheless.

    via Recode

    Facebook Messenger update helps you keep tabs on your friend’s location – Mashable (Mar 27, 2017)

    Google introduced its own location-sharing feature last week, but Facebook’s is far more limited – it works within the context of a Messenger interaction, and only for an hour at a time, which feels a good bit less prone to accidental over-sharing. It also feels more useful in the messaging context, where you’re already likely to be messaging someone about meeting up, than in a Maps app, which might mean dipping out of a conversation to check the location (even if it might be useful when meeting at a new spot). As I mentioned last week, it’s interesting to see location sharing making a comeback when both Google and Facebook had previously backed away from this kind of thing over privacy concerns – that suggests a confidence about privacy issues that wasn’t there a few years ago, although both companies still seem to be approaching this more narrowly than in the past.

    via Mashable

    Google Maps will let you share your location with friends and family for a specific period of time – TechCrunch (Mar 22, 2017)

    Location sharing is one of those really thorny privacy issues, and Google has gone back and forth on it over time precisely for this reason. In this case, it’s now opening the feature back up, though now in the Google Maps mobile app, and with some sensible limits, such as time- and person-based sharing. I can see a lot of utility in sharing my location with someone temporarily if we’re planning to meet up or if I’m on my way home and want to share an ETA. On the other hand, sharing that information with friends or family members means sharing it with Google too, and presumably also means your Google Maps app has to be running and tracking your location in the background, which has battery implications. For some people, those will be non-issues, but for others they make it less palatable to use these features. And of course the more openly you share your location (and the more companies track it) the more ways there are for hackers (and law enforcement) to access it too.

    via TechCrunch

    ISPs say your Web browsing and app usage history isn’t “sensitive” – Ars Technica (Mar 20, 2017)

    CTIA, the industry association that represents the largest US wireless carriers, is arguing before the FCC that its members shouldn’t be subjected to new rules on sharing the data they collect on their users. The carriers have argued that Google and other online service providers aren’t subject to the same rules (those companies are regulated primarily by the FTC rather than the FCC), and so for consistency’s sake the carriers should be treated the same way. This is really about a technical definition of the word “sensitive” – clearly the kind of data being talked about here is indeed enormously sensitive, but the real question is how disclosure of that data is regulated. This matters because, for example, AT&T, as a fiber broadband carrier in certain parts of the country, has offered a service discount to customers who consent to tracking of their web browsing history, something it argues Google does all the time without explicitly asking for users’ permission. What the carriers are arguing here is that they should be allowed to continue doing this kind of thing without having to ask users to opt in first. The carriers look likely to win given the current hands-off policy stance of the FCC, which means further erosion of user privacy, but the proper approach would be for the FTC and FCC to work together to craft a consistent set of rules that applies to all players with access to similar data, rather than each regulating in a vacuum.

    via Ars Technica

    Want to use Google, kid? Now there’s an app for that – Mashable (Mar 15, 2017)

    If you’re a parent of kids under 13, you’ve likely encountered the COPPA law, even if you might not know it by that name, because your kids will have found it impossible to sign up for an online service or account without either lying about their age or going through a very involved process. As a result, I suspect many kids either do lie about their age (perhaps with their parents’ support) or piggyback off a parent’s account, neither of which is ideal. Google now has a service that lets kids legitimately sign up for their own account even if they’re under 13, as part of a family account tightly controlled and supervised by parents. That feels like a great solution, and it looks like these accounts can effectively graduate when the kids reach an appropriate age. I wish more companies would think about how to help parents help their kids use technology, and this feels like a good step. Of course, this does mean that Google is now capturing information about your kids for a future profile, even if that data collection is limited in unspecified ways.

    via Mashable

    Apple Joins Group of Companies Supporting Google in Foreign Email Privacy Case – Mac Rumors (Mar 14, 2017)

    Given the way other big tech companies had weighed in on the related Microsoft case over the past few years, it was a little odd that more hadn’t sprung to Google’s defense in this one, but it’s good to see that they are now doing so. These cases have far-reaching consequences not just for user privacy but for the ability of US companies to do business in overseas markets, and those companies need to defend themselves vigorously. The final outcome of both cases is therefore worth watching closely.

    via Mac Rumors

    Google’s Allo app can reveal to your friends what you’ve searched – Recode (Mar 14, 2017)

    Now that I’ve finally got around to writing this up, it appears Google has patched the specific issue highlighted in this piece, but it’s still worth talking about for a couple of reasons. For one thing, anytime you bring a virtual assistant into an existing conversation between two or more human beings, there’s a tension between the bot knowing as much as possible about each participant and using that to be helpful on the one hand, and avoiding exposing personal information about the participants on the other. Google appears to have screwed that up here in a way that could have been damaging or embarrassing for users. Secondly, this kind of thing can only happen when you collect and keep enormous amounts of data on your users in the first place – a company that neither collects nor retains such data in a profile could never expose it. It’s clear that Google didn’t intentionally do so here, but it was able to do so because of its business model. Competitors such as Apple might argue that not collecting such data, or keeping it secured on a device rather than in the cloud, would make it impossible for a cloud service to share it with others. We’re going to have to work through lots more of these scenarios in the years to come, and the competition between companies that strictly preserve privacy and those that use personal data to improve services will be a critical facet of that evolution.

    via Recode

    Apple hires Jonathan Zdziarski, an active forensics consultant & security researcher in the iOS community – 9to5Mac (Mar 14, 2017)

    Zdziarski was in the news a lot a year ago, when Apple was fighting the FBI over the iPhone used by the San Bernardino shooter, because he was frequently quoted and cited as an expert who backed Apple’s stance. As such, it’s not altogether surprising that he should end up at Apple – he’s been both one of its staunchest supporters around some security and privacy issues and someone who has discovered vulnerabilities in its code. On the one hand, that makes him a useful person to have inside the company – this hire feels a lot like Apple’s hire of Anand Shimpi, another prominent outside expert who was brought inside – but Apple will lose the benefit of having a vocal independent advocate on these issues. It’s also interesting to note Zdziarski’s comments about his hiring and why he’s joining Apple – he cites its privacy stance, which is of course closely tied to security concerns, as a strong motivating factor.

    via 9to5Mac

    Zuckerberg manifesto removes reference to Facebook monitoring ‘private channels’ – Business Insider (Feb 17, 2017)

    Kudos to Mashable, which first noticed that one paragraph in a 6,000-word manifesto had been changed from the original to the final version (I covered the manifesto itself yesterday). And kudos, too, to Business Insider for following up with Facebook to find out why it was removed. The official explanation is that the paragraph talked too specifically about a capability Facebook hasn’t finalized yet, but it’s at least as likely that Facebook worried it would cause major privacy concerns. The paragraph in question talked about using AI to detect terrorists in private channels, which rather flies in the face of Facebook’s commitment to encryption and protecting privacy. As with much else in the letter, I think it was likely intended to be mostly aspirational rather than specific, but the original paragraph was rather tone deaf about how such an idea would be received even in such high-level terms.

    via Business Insider

    Vizio to Pay Fines Over Unlawful Tracking and Selling of User Data (Feb 7, 2017)

    It turns out Vizio has been collecting extremely granular data on users of its smart TVs, and then matching its IP-level data with offline data about individuals and households (essentially everything short of actual names). And it has done all this without making users properly aware of what it was doing. The data covered everything consumers watched on the TVs, whether the content came through Vizio’s own smart TV apps or merely through one of its inputs from another box or antenna. Something I’d forgotten was that Vizio filed an S-1 in preparation to go public back in 2015 – it never actually went public because Chinese player LeEco decided to acquire it (a deal due to close shortly). Aside from talking about how many TVs the company sells, the S-1 makes a big deal of the “up to 100 billion viewing data points daily” it collects from 8 million TVs, and touts its InScape data services, which package up this data for advertisers, although it says this data is “anonymized”, which feels like an alternative fact at this point. The risk factors in the filing even mention possible regulatory threats to such data gathering, so it’s probably fair to say that Vizio shared more information with its potential investors about the data it was collecting than it did with end users. To settle the case, Vizio has to pay a total of $3.7m in fines to the FTC and the state of New Jersey (whose AG brought the suit with the FTC), discontinue the practice, and disclose it to consumers. I can’t wait to see how it manages that last point – imagine turning on your Vizio TV one day and seeing a message pop up about the fact that it’s been tracking your every pixel for the last several years. Assuming that’s done right, it could be the most damaging part of this for Vizio, which made over $3 billion in revenue in its most recently reported financial year. Meanwhile, it’s yet another headache for LeEco to manage.

    via Federal Trade Commission

    Court Rules Google Has to Hand Over Data in Contradiction to Recent Microsoft Ruling – The Register (Feb 4, 2017)

    The recent ruling in the ongoing case involving Microsoft and customer data stored outside the US had at least temporarily provided some reassurance that the big tech companies’ stance on this issue would be upheld in court. However, a court in a different part of the US has now ruled the other way, though its rationale for ruling differently is that Google manages its data and data centers differently from Microsoft. This is a blow to the big tech companies that have fought to keep their overseas data centers (and the data held there on non-US customers) off limits to US law enforcement, but the Microsoft case was likely to go to the Supreme Court anyway. Hopefully, that court will rule in a way that provides clarity not just in the Microsoft case but more broadly on this question.

    via The Register