Narrative: Declining Privacy & Security
Each narrative page (like this one) opens with a description and evaluation of the narrative, followed by all the posts on the site tagged with that narrative. Scroll down past the introduction to see the posts.
Written: January 24, 2017
In the online era, privacy and security, which were previously safeguarded in physical ways (shutters and blinds, locks and keys), can no longer be protected that way. Instead, much of what we’d like to keep private or secure doesn’t even exist physically, only digitally, and access to it can take place without anyone ever entering our homes or approaching our person. In addition, the business models behind many of the products and services we use daily rely on us giving away a portion of our privacy.
Is it inevitable that both our privacy and security will be eroded in the online era? Yes, in a number of different ways, not least that much information which was previously held discretely in separate locations or databases can now be far more easily aggregated for a much more complete picture of who we are. But also because no digital system is foolproof, and highly motivated actors will always be trying to breach security and obtain information that either has inherent value or can be leveraged to deliver that value elsewhere.
None of this means we have to simply resign ourselves to our fates – we still have decisions to make about which services we will and won’t use, based on both their attitudes towards and their effectiveness in protecting our privacy and security. We can decide which information we willingly yield up, opt out of tracking and targeting, and vote with our feet and wallets when companies and their services let us down. We can favor products and services which don’t track us, or which do such tracking at a local rather than global level while protecting that local data effectively.
At the same time, many of us – especially in the younger generations – are much less concerned about privacy than others, and accept as a fact of life that some measure of privacy must be yielded up as a trade for free or cheap content or communication services. Some make those tradeoffs consciously, and some make them blissfully unaware that that’s what they’re doing, but we all make decisions about those tradeoffs one way or another.
A reviewer at Android Police reports that he discovered the Google Home Mini unit he was testing was recording nearly everything he said while in its vicinity, because the device erroneously thought he was holding down the button which acts as an alternative to its wake word. Google has now pushed a software patch which disables that button entirely for the time being, to ensure that doesn’t happen to others. Given that many people already feel uncomfortable with the idea of an always-listening device in their home, the idea that it could be recording and transmitting to Google’s servers everything that’s being said because of a bug will not instill confidence. This is something of a nightmare scenario for these devices, and the fact that Google turned off a feature of the device to fix it indicates just how seriously it’s taking the issue. Reviews of the Mini have dribbled out here and there and have mostly been positive, while this is the first mention I’ve seen of this issue, but it’s certainly not a great start for the Mini.
via Android Police
This issue has been covered in various places over the past couple of weeks, but this is the first bit of real criticism I’ve seen of Apple’s approach here, and I thought it was worth diving into briefly. In iOS 11, the Control Center users reach by sliding up from the bottom of the screen on most iPhones has what appear to be on/off toggles for Bluetooth and WiFi, but in reality these toggles don’t actually turn those radios all the way off. Rather, they leave both radios in a more limited mode in which they still operate in certain ways and in fact will reactivate each morning at 5am. This is a change Apple hasn’t communicated proactively to users in any way, and represents a fairly big shift from how things have worked in the past.
The EFF piece linked below suggests this presents security risks given past Bluetooth vulnerabilities, though it doesn’t actually cite any specific vulnerabilities Apple might be exposing users to in iOS, which, like most mobile operating systems, handles Bluetooth pairing requests pretty carefully. Apple’s reasoning for the change is sound – leaving these radios in this in-between state enables key Apple functions like Handoff of activity between devices, the Instant Hotspot feature, and others – but the implementation feels un-Apple-like, in that it’s unintuitive and overrides user preferences in a couple of different ways. Apple could have made similar changes in a more transparent and user-friendly way, and avoided some of the criticism it’s now getting.
The Yahoo breach reported before its acquisition by Verizon closed, and which had been said to affect 1 billion accounts, is now reported to have affected all 3 billion accounts Yahoo had. That could be a bit of a misleading number, given that there’s no way Yahoo had 3 billion separate customers – many of these accounts were likely dormant and duplicates of other accounts, so the actual number of people affected is likely far smaller, and the number who will have had sensitive information shared even smaller. But it’s still a staggering number. However, I’d bet that with the ongoing chatter about the Equifax hack (including the former CEO’s testimony in Congress this week), as well as the broad political story around tech companies and Russian election meddling, this will blow over really quickly and the additional fallout for Verizon and/or the Yahoo brand will be minimal. That may be sad, but no less true for that.
Bloomberg’s Apple and Google reporters have teamed up for a story about Google building new tools to help secure the accounts of high-profile users and others with greater exposure to attempted hacking. This is apparently a response to some previously reported hacks of prominent users’ Gmail accounts, and will combine new ways to secure logins with restrictions on third-party app integrations and other features designed to close potential entry points for hackers. The feature has a name – Advanced Protection Program – and will be marketed to executives and politicians among others, suggesting that it will be a fee-based service, likely an add-on to corporate deployments of Google’s G Suite. All of this feels very topical in the midst of the reporting about Russian meddling in last year’s US elections, and although that story is currently focused mostly on ad buying and influence through social networks rather than hacking, it’s all obviously connected, with widespread allegations that the Russians were feeding documents from various hacks to Wikileaks, for example.
As new versions of Apple’s operating systems and new iPhone hardware roll out, Apple has updated its website’s privacy section to reflect some of the recent changes, and especially to address questions users may have about the Face ID feature on the upcoming iPhone X. The site opens with big-picture statements about Apple’s commitment to privacy, beginning with the assertion that “At Apple, we believe privacy is a fundamental human right,” and moves on to more detailed descriptions of Apple’s approach. In a nutshell, the policy described there is that Apple isn’t interested in your personal data, enables you to determine with whom to share it, and provides tools for you to protect your information and devices. Apple also addresses its use of differential privacy, which has been in the news lately for a couple of different reasons: a recent study asserted that it’s a weaker privacy protection than Apple claims, and there have been changes to Safari data gathering in macOS High Sierra.
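Differential privacy is easier to reason about with a concrete sketch. The classic local technique, randomized response, lets a service estimate an aggregate rate without ever learning any individual’s true answer. The snippet below is a minimal Python illustration of that idea, not Apple’s actual implementation (Apple’s system applies noise to hashed, sketched data on-device before anything is uploaded); the function names and the 0.75 truth probability are illustrative choices.

```python
import random

def randomized_response(true_bits, p_truth, rng):
    """Each respondent reports their true bit with probability p_truth;
    otherwise they report a fair coin flip, giving plausible deniability."""
    reports = []
    for b in true_bits:
        if rng.random() < p_truth:
            reports.append(b)
        else:
            reports.append(1 if rng.random() < 0.5 else 0)
    return reports

def estimate_rate(reports, p_truth):
    """Unbiased estimate of the true proportion of 1-bits, correcting
    for the random noise the respondents injected."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# A population where 30% of users truly have the sensitive attribute:
rng = random.Random(0)
bits = [1] * 30000 + [0] * 70000
reports = randomized_response(bits, 0.75, rng)
print(round(estimate_rate(reports, 0.75), 3))  # close to 0.3
```

No single report reveals anything definite about its sender, yet over many users the corrected estimate converges on the true rate – which is exactly the trade a company like Apple is claiming to make: useful aggregate statistics without individually meaningful data.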
For Apple, the key is that it has no reason to infringe on its users’ privacy, because its business model is best served by protecting that privacy rather than gathering data on its users. That’s a meaningful differentiator for at least some Apple customers, and reinforcing these values will be important to them, but for many other customers Apple, Google, Microsoft, and other companies’ privacy policies are not a matter of significant moment. That could of course change in time as these companies have potential access to more and more personal data including health data, but for now the surveys I’ve seen suggest that trust levels are broadly similar between big companies and most people don’t avoid companies like Google because of their business models and approach to data gathering.
Though the headline on the Recode piece linked below says Apple is facing questions from the US Senate on its new Face ID feature, the reality is that the questions are coming from a single Senator: former comedian Al Franken, who has always taken an interest in tech issues and tends to use them to raise his public profile. A number of the questions he’s posing have already been addressed by Apple (including in its public announcement of the feature), while others suggest Franken thinks Apple is Google or some other company which regularly uses customer data to target advertising. All of which suggests he either hasn’t taken the time to understand the feature properly, or is simply grandstanding, which frankly feels more likely. Apple’s stance on privacy and security is abundantly clear at this point, as demonstrated by its approach to the Touch ID feature (which Franken previously investigated in a similar way). None of that will stop people freaking out about the feature, and coincidentally or not the Economist’s cover story this week is about the dangers of companies collecting facial data. But Apple stores Face ID data on the device, in ways that make it inaccessible to anyone but the user and unusable for purposes other than those intended by Apple and approved by the user.