Topic: Machine learning

Each post below is tagged with company/division names, topics, and narratives, as appropriate.
    DeepMind, Still Running Separate from Google in the UK, Lost $162m in 2016 (Oct 6, 2017)

    Quartz reports that Alphabet’s DeepMind subsidiary, which is still registered as a separate private company in the UK and therefore has to report its own financials, lost $162 million in 2016 on revenues of just $40 million, all of which came from Google. It’s a quirk of accounting that DeepMind still reports as a separate company, but it gives some insight into the cost of running such a business, which is focused on cutting-edge AI work, much of which is not ready for direct monetization in revenue-generating products. Given that Alphabet as a whole spent over $15 billion on research and development in the past year, this is a tiny fraction of the total, and an operation the company can easily afford to keep funding. Much of the loss, incidentally, stems from the $137 million the company spent on staff and related costs, of which I would guess a big chunk is stock-based compensation, which runs at $2 billion per quarter for Alphabet as a whole and $100-150 million per quarter in the Other Bets segment. And of course there are big chunks of Google itself working on AI as it relates to specific products too, so this is far from the scale of Alphabet’s overall investment in AI, which is increasingly filtered into everything Google does.

    via Quartz

    Facebook Opens AI Lab in Montreal (Sep 15, 2017)

    This content requires a subscription to Tech Narratives. Subscribe now by clicking on this link, or read more about subscriptions here.

    Uber Details its Machine Learning Platform to Burnish AI Credentials (Sep 6, 2017)


    Apple Machine Learning Researchers Publish Three Papers in In-House Journal (Aug 23, 2017)


    Google Adds 30 Language Varieties to Voice Dictation, Cloud Speech API (Aug 14, 2017)


    Apple Launches Machine Learning Journal (Jul 19, 2017)


    Google Upgrades Feed, its Google Now Replacement (Jul 19, 2017)


    Instagram Uses AI to Filter Spam and Abusive Comments (Jun 29, 2017)

    Instagram is announcing today that it’s now using artificial intelligence to filter spam and abusive comments in the app. Wired has a feature (also linked below) which dives deeper into the background here and makes clear that what Instagram is doing builds on Facebook’s DeepText AI technology, and that Instagram has been working on it for some time. The spam filter works in nine languages, while the comment moderation technology only works in English for now, but both should clean up the Instagram experience. Importantly, though both spam and harassment are issues on Instagram, neither is as bad there as elsewhere because so many people have private accounts. I haven’t seen an official statement from Instagram on this, but some research and testing suggests that likely between 30 and 50% of accounts are private. Those accounts, in turn, are far less likely to receive either spam or abusive comments, since their owners have explicitly chosen who is allowed to follow and comment. However, for the rest, and especially for celebrities, brands, and so on, these are likely far bigger issues, so cleaning them up in a way that doesn’t require the same massive investment in manual human moderation as Facebook’s core product is a good thing all around.

    via Instagram Blog (see also Wired feature)

    Facebook Publishes Research on Bots that Have Been Trained to Negotiate (Jun 14, 2017)


    New York Times Adopts Alphabet’s AI-Powered Content Moderation (Jun 13, 2017)

    Just a quick one here: I wrote about Alphabet company Jigsaw’s machine learning-based approach to online content moderation a while back. At the time, I said it was nice to see AI and machine learning being applied to humdrum everyday problems that actually needed solving, but back then this was merely a concept that Jigsaw was making available. So it’s great validation for the technology that the New York Times is actually adopting it in a modified, customized form it has developed with Jigsaw. That should improve comment moderation on the Times website while also giving the underlying technology a boost, presumably making other news organizations more likely to try it.

    via Poynter

    ★ Apple Acquires Dark Data Analysis Company Lattice Data, Reportedly for $200m (May 15, 2017)

    It emerged over the weekend that Apple has acquired Lattice Data, a company which specializes in analyzing unstructured data like text and images to create structured data (i.e. SQL database tables) which can then be analyzed by other computer programs or human beings. TechCrunch has a single source which puts the price paid at $200 million, and Apple has issued its usual generic statement confirming the acquisition but offering no further details. It’s worth briefly comparing the acquisition to Google’s purchase of DeepMind in 2014: that buy was said to cost $500 million and was for 75 employees including several high-profile AI experts, though it was unclear to outside observers exactly what it was working on, while this one reportedly brought 20 engineers to Apple and has several existing public applications and projects to point to. Lattice is the commercialized version of Stanford’s DeepDive project, which has already been used for a number of applications involving large existing but unstructured data sets. Lattice has a technique called Distant Supervision which it claims obviates the need for human training and instead relies on existing databases to establish links between items that can be used as a model for determining additional links in new data sets. It’s not clear to me whether the leader of the DeepDive team at Stanford, Christopher Ré, is joining Apple, but he was a MacArthur Genius Grant winner in 2015 and this video from MacArthur is a great summary of the work DeepDive does (there’s also a 30-minute talk by Ré on the DeepDive tech). Seeing Apple make an acquisition of this scale in AI is an indication that, despite not making lots of noise about its AI ambitions publicly, it really is serious about the field and wants to do better at parsing the data at its disposal to create new features and capabilities in its products. It’s entirely possible that we’ll never know exactly how this technology gets used at Apple, but it’s also possible that a year from now at WWDC we hear about some of the techniques Lattice has brought to Apple and applied to some of its products. Interestingly, the code for DeepDive and related projects is open source and available on GitHub, so I’m guessing Apple is acquiring the ability to make further advances in this area as much as the technology in its current form.
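    The distant supervision idea itself is simple enough to sketch. The example below is purely illustrative and not drawn from Lattice’s or DeepDive’s actual code: a small set of known (person, company) founder pairs stands in for the existing database, and any sentence mentioning a known pair is automatically labeled as a positive training example, with no human annotation involved.

```python
# Known facts from an existing database act as the "supervision".
known_founders = {("Steve Jobs", "Apple"), ("Bill Gates", "Microsoft")}

# Candidate sentences, each with the entity pair it mentions.
sentences = [
    ("Steve Jobs", "Apple", "Steve Jobs founded Apple in 1976."),
    ("Bill Gates", "Microsoft", "Bill Gates started Microsoft with Paul Allen."),
    ("Tim Cook", "Apple", "Tim Cook joined Apple in 1998."),
]

# Distant supervision: a sentence mentioning a known pair is auto-labeled
# positive; everything else is treated as a negative example.
labeled = [
    (text, (person, company) in known_founders)
    for person, company, text in sentences
]
```

    In practice the auto-generated labels are noisy, since a sentence can mention a known pair without actually expressing the relationship, which is why DeepDive-style systems treat these labels probabilistically rather than as ground truth.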

    via TechCrunch

    Samsung Gets Permission To Test Self-Driving Tech in a Hyundai in Korea (May 2, 2017)


    ★ Amazon Announces Echo Look, Which Adds a Camera and Fashion Advice to Echo for $20 (Apr 26, 2017)

    Amazon has announced a new device in its Echo family called the Echo Look, which takes a different form factor, adding a still and video camera to the features of the standard device for $20 more. For now, the focus is fashion advice: the camera can take full-length photos or videos of the user, acting like a full-length mirror at a basic level but also offering fashion advice through machine learning tools trained by fashion experts. I say for now, because once you have a camera in an Echo device it could be used for many other things too – indeed, when reports and pictures of this device first surfaced people assumed it was a security camera, and there’s really no reason why it couldn’t be. And several of these devices together could be very useful for motion sensing and other tasks as part of a smart home system over time too. But Amazon’s also smart to start specializing the Echo a little, with a particular focus on women, as I would guess a majority of sales of Echo devices to date have gone to men. I’d bet we’ll see other more specialized devices in time, but also other uses for this camera as it gets software updates. And this also starts to get at a real business model for Echo, which so far hasn’t done much to boost e-commerce sales but could now drive clothing revenue through sales of both third party apparel and Amazon’s own growing line. And what Amazon learns from the Look and its associated app can be fed back into the core clothes shopping experience too, improving recommendations in the process. But of course all this comes with downsides: not only do you have a device in your home that’s always listening, but you now have a device with a camera, which could feasibly be hacked remotely to take pictures or video of you. And Amazon will store the images it captures indefinitely, creating a further possible source of problems down the line.

    via The Verge

    Google Develops Federated Machine Learning Method Which Keeps Personal Data on Devices (Apr 6, 2017)

    This is an interesting new development from Google, which says it has created a new method for machine learning which combines cloud and local elements in a way which keeps personal data on devices but feeds back the things it learns from training to the cloud, such that many devices operating independently can collectively improve the model they’re all working on. This would be better for user privacy as well as efficiency and speed, and importantly Google is already testing this approach on a commercial product, its Gboard Android keyboard. It’s unusual to see Google focusing on a device-level approach to machine learning, as it has typically majored on cloud-based approaches, whereas Apple has been more focused on device-based techniques. Interestingly, some have suggested that Apple’s approach limits its effectiveness in AI and machine learning, whereas this new technique from Google suggests a sort of best of both worlds is possible. That’s not to say Apple will adopt the same approach, and indeed it has favored differential privacy as a solution to using data from individual devices without attributing it to specific users. But this is a counterpoint both to the usual narrative about Google sacrificing privacy to data gathering and AI capabilities and to the narrative about device-based AI approaches being inherently inferior.
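    The core mechanic is easy to sketch in miniature. The toy below, a federated-averaging round for a simple linear model, is my own illustration rather than Google’s actual algorithm or API: each client trains locally on data that never leaves it and shares only a weight delta, which a server averages into the shared model.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on-device; the raw data (X, y) never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w - weights                      # share only the weight delta

def federated_round(weights, clients):
    """Server step: average the clients' deltas into the shared model."""
    deltas = [local_update(weights, X, y) for X, y in clients]
    return weights + np.mean(deltas, axis=0)

# Simulate four devices, each holding its own private slice of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
# w converges toward true_w even though no client's data was ever pooled
```

    Google’s real system adds the pieces this sketch omits, such as compressing the updates and securely aggregating them so the server never sees any individual device’s contribution.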

    via Google

    Google Shares Performance Characteristics for its Machine Learning Chip (Apr 5, 2017)

    It’s time to roll out that old Alan Kay maxim again: “people who are really serious about software should make their own hardware”. Google started working on its own machine learning chip, which it calls a Tensor Processing Unit or TPU, a few years back, and has now shared some performance characteristics, suggesting that it’s more efficient and faster than the CPUs and GPUs on the market today for machine learning tasks. While Nvidia and others have done very well out of selling GPU lines originally designed for computer graphics to companies doing machine learning work, Google is doing impressive work here too, and open sourcing the software framework it uses for machine learning. As I’ve said before, it’s extremely hard to definitively answer the question of who’s ahead in AI and machine learning, but Google consistently churns out evidence that it’s moving fast and doing very interesting things in the space.

    via Google Cloud Platform Blog

    Microsoft Launches Sprinkles, a Silly Camera App Powered by Machine Learning (Apr 4, 2017)

    As I mentioned recently in the context of Microsoft’s Indian AI chatbot, the company appears to be in an experimental mood as regards AI, trying lots of things in lots of separate spaces, without pushing all that hard in any particular direction. There’s nothing wrong with experimentation, but there is a worry that Microsoft both spreads itself a little thin and risks diluting its brand, which has become more focused of late around productivity. There’s an argument to be made that this Sprinkles app fits its other, newer focus on creativity, but it’s probably a bit of a stretch given the minimal ties into any of its other offerings. On the consumer side, Microsoft’s biggest challenge continues to be not just producing compelling offerings but finding ways to monetize them.

    via TechCrunch

    Facebook Shows Users More Content Which Doesn’t Come From Their Friends (Apr 3, 2017)

    Almost exactly two months ago, I wrote in my Techpinions column that Facebook’s next big opportunity was finally stepping beyond the idea of showing users only content shared by their friends, and using AI and machine learning to show them other content similar to what they’d previously engaged with. Doing this, I said, would dramatically expand the amount of interesting content that could be shown to users, thereby keeping them on the service for longer, and giving Facebook more time and places to show ads. And as I wrote almost exactly a year ago, this is just another consequence of Facebook becoming less of a social network and more of a content hub. Today, we’re seeing Facebook not only roll out a video tab (and a video app for TVs) with suggested videos, but also now testing a dedicated tab for recommended content of all kinds in its apps. This is yet another extension of Facebook’s increasing absorption of activity from across users’ lives into its various apps in an attempt to capture more of users’ time and advertisers’ dollars, and I suspect it’ll work pretty well if it’s managed right. Of course, Facebook has demonstrated several times lately that it’s somewhat lost its touch in that department, so it will need to proceed carefully in pushing forward in this area to avoid alienating users.

    via TechCrunch

    Apple GPU Supplier Imagination Tech Says Apple Plans to Build its Own GPU in 1-2 Years (Apr 3, 2017)

    This already feels likely to be one of the biggest news items of the week (incidentally, you can now use the Like button below to vote for this post if you agree – the posts that get the most votes are more likely to be included in my News Roundup Podcast at the end of the week). There have been ongoing reports that Apple would like to build more of its own in-house technology, and GPUs have seemed a likely candidate given that Apple was said for a while to be mulling an acquisition of the company, and has been bringing Imagination Tech employees on board since the deal didn’t go ahead. The GPU obviously has a number of existing applications, but GPU technology has increasingly been used for AI and machine learning, so that’s an obvious future direction, along with Apple’s reported investment in AR. Apple’s ownership of its A-series chips (and increasingly other chips like its M and W series) is a key source of competitive advantage, and the deeper it gets into other chip categories, the more it’s likely to extend that advantage in these areas. This is, of course, also a unique example of Apple making a direct statement about a future strategy (albeit via a third party): as Apple is IMG’s largest customer, it had to disclose the guidance from Apple because it’s so material to its future prospects – the company’s share price has dropped 62% as of when I’m writing this.

    via Imagination Technologies

    Google Announces Progress in Using Deep Learning to Detect Cancer (Mar 3, 2017)

    Yet another story about using either AI or deep learning (or both) to solve a real-world problem, from Google. This time, it’s an application miles away from any of Google’s current businesses (though perhaps a little relevant to some of the Other Bets), but the point is that Google is finding a very broad set of applications for its capabilities here, which can of course be applied back to lots of things which are relevant to the core Google business (as well as providing tangible human benefits if adopted by other organizations).

    via Google

    Google Cousin Develops Technology to Flag Toxic Online Comments (Feb 23, 2017)

    I love the term “Google cousin” to describe the non-Google companies under the Alphabet umbrella (though confusingly Jigsaw’s website makes it seem as if it’s actually part of Google despite no longer being called Google Ideas). The bigger point here is that this is a clever use of machine learning to solve a real problem, which I’m always a big fan of. Online comments can be horrible and very time consuming to moderate, and this API can be used by publishers to filter out the most “toxic” of those comments. Having said that, the sample comments Jigsaw shows to demonstrate the tool highlight just how inane most online comments are regardless of whether they’re actually toxic, calling into question for me at least whether they’re worth having at all. But this Perspective tool seems to be part of a broader push around technologies for increasing “safety” in various scenarios – that’s definitely the message you get at the Jigsaw website.

    via The New York Times