Topic: AI

Each post below is tagged with
  • Company/Division names
  • Topics
  • Narratives
as appropriate.
    ★ Apple Starts Manufacturing Its Siri Speaker (May 31, 2017)

    This content requires a subscription to Tech Narratives.

    Samsung’s Bixby Further Delayed in US to End of June (May 31, 2017)

    This content requires a subscription to Tech Narratives.

    ARM Updates Main Processor Lines, Includes AI Optimizations (May 29, 2017)

    This content requires a subscription to Tech Narratives.

    Google Launches an AI Investment Program Separate from GV and CapitalG (May 26, 2017)

    This content requires a subscription to Tech Narratives.

    ★ Apple is Developing a Dedicated AI Chip (May 26, 2017)

    This content requires a subscription to Tech Narratives.

    ★ Google Makes Assistant and Home Announcements at I/O (May 17, 2017)

    This content requires a subscription to Tech Narratives.

    Google To Bring Assistant to iPhone, Let Users Create Photo Books (May 16, 2017)

    This content requires a subscription to Tech Narratives.

    ★ Apple Acquires Dark Data Analysis Company Lattice Data, Reportedly for $200m (May 15, 2017)

    It emerged over the weekend that Apple has acquired Lattice Data, a company which specializes in analyzing unstructured data like text and images to create structured data (i.e. SQL database tables) which can then be analyzed by other computer programs or human beings. TechCrunch has a single source which puts the price paid at $200 million, and Apple has issued its usual generic statement confirming the acquisition but offering no further details. It’s worth briefly comparing this to Google’s acquisition of DeepMind in 2014: that buy was said to cost $500 million and brought in 75 employees including several high-profile AI experts, though it was unclear to outside observers exactly what DeepMind was working on, while this one reportedly brought 20 engineers to Apple and has several existing public applications and projects to point to. Lattice is the commercialized version of Stanford’s DeepDive project, which has already been used for a number of applications involving large existing but unstructured data sets. Lattice has a technique called Distant Supervision which it claims obviates the need for human training: instead, it relies on existing databases to establish links between items, which can then be used as a model for finding additional links in new data sets. It’s not clear to me whether the leader of the DeepDive team at Stanford, Christopher Ré, is joining Apple, but he was a MacArthur Genius Grant winner in 2015 and this video from MacArthur is a great summary of the work DeepDive does (there’s also a 30-minute talk by Ré on the DeepDive tech). Seeing Apple make an acquisition of this scale in AI is an indication that, despite not making much noise about its AI ambitions publicly, it really is serious about the field and wants to do better at parsing the data at its disposal to create new features and capabilities in its products.
    It’s entirely possible that we’ll never know exactly how this technology gets used at Apple, but it’s also possible that a year from now at WWDC we’ll hear about some of the techniques Lattice has brought to Apple and applied to some of its products. Interestingly, the code for DeepDive and related projects is open source and available on GitHub, so I’m guessing Apple is acquiring the ability to make further advances in this area as much as the technology in its current form.
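The distant-supervision idea is simple enough to sketch. Here's a minimal toy version in Python — the seed database, entity names, and sentences are invented for illustration, and DeepDive's real pipeline is vastly more sophisticated — showing how a known database of facts can auto-label text as training data without human annotation:

```python
# Toy sketch of distant supervision: use a small database of known facts
# to automatically label sentences, producing (noisy) training examples
# for a relation-extraction model without any human labeling.

KNOWN_FACTS = {("Apple", "Lattice Data"): "acquired"}  # invented seed database

sentences = [
    "Apple confirmed it has purchased Lattice Data for $200 million.",
    "Lattice Data grew out of Stanford's DeepDive project.",
]

def label(sentences, facts):
    """Tag any sentence that mentions both entities of a known fact."""
    examples = []
    for s in sentences:
        for (e1, e2), relation in facts.items():
            if e1 in s and e2 in s:
                examples.append((s, e1, e2, relation))
    return examples

training_data = label(sentences, KNOWN_FACTS)
# Only the first sentence mentions both entities, so only it becomes a
# positive example of the "acquired" relation; a model trained on such
# examples can then propose new entity links in unseen text.
```

The labels are noisy (a sentence can mention both entities without expressing the relation), which is why systems built on this idea treat them statistically rather than as ground truth.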

    via TechCrunch

    Microsoft and Harman Kardon Detail Cortana-Powered Invoke Speaker (May 8, 2017)

    Back in December, Microsoft announced its equivalent of Amazon’s Alexa platform for third parties in the form of its Cortana Skills Kit and Cortana Devices SDK. A week later, Harman Kardon announced it was working on a speaker that would feature Cortana, and said it would launch in 2017. Five months later, the two companies have provided a name (Invoke), pictures, and some capabilities for the device, but there’s still no specific launch date (beyond “Fall 2017”) or pricing. On paper, the Invoke looks a lot like Echo in both its design and its capabilities (it even has an Echo-like 7-mic array), and the main difference is that it will do Skype voice calls, which is something that’s been rumored for both Echo and Google Home but isn’t yet supported by either. One advantage Harman would have over Amazon or Google in this space is that it’s a speaker maker, so it may well have better audio quality in its version than those companies have in theirs, something that’s been a shortcoming in this category so far. And of course, it’s interesting given Samsung’s ownership of Harman Kardon that this speaker is running neither of the assistants Samsung itself supports – its own new Bixby assistant or the Google Assistant – though this partnership obviously began before the Samsung acquisition closed. Pricing is an interesting question: whereas Google and Amazon both have broader ecosystems which benefit from such a device and therefore justify subsidizing or selling it at cost, Harman obviously needs to make money on it, so it may end up being priced higher (as Apple’s version likely will be too). Lastly, we might see other ecosystem devices using Cortana announced at Microsoft’s Build developer conference this week.

    via Microsoft

    Samsung Gets Permission To Test Self-Driving Tech in a Hyundai in Korea (May 2, 2017)

    This content requires a subscription to Tech Narratives.

    ★ Apple Siri Speaker Could Debut at WWDC in June (May 1, 2017)

    KGI, which as I’ve noted before has a decent track record on future Apple products, says there’s a 50/50 chance that Apple’s entry in the connected home speaker market could debut at WWDC next month. There’s scant detail in the report other than that Apple’s speaker will have better audio hardware than the Echo, which has been criticized as being sub-par as a speaker despite its effectiveness as a voice-activated assistant device. I would certainly expect such a device to combine Siri, AirPlay, HomeKit device control, and possibly some kind of WiFi connectivity, but it’s very unlikely Apple could do all that well and still make its usual margin at the $130-180 price point that the full Echo and Home devices sell for. It’s more likely this would be sold in the range of the larger Sonos speakers (which Apple has been selling in its stores for the last little while), which would mean $300-500. That puts it in a different category from what’s out there today, which wouldn’t be unusual for Apple but would put it well out of impulse buy territory for most people and limit sales quite a bit. One big question is whether Siri is yet good enough for such a speaker, and what upgrades Apple might have in store for Siri at WWDC this year to help it get there. As I’ve suggested in the past, Siri’s shortcomings are at least in part hardware-based: more often than not, the problem is wrongly interpreting what’s said because of the tiny mics being used for voice recognition, and a big device should help a great deal with that. But Siri can also be frustrating even when it does understand what you say, and its more conversational elements are still pretty limited, which could be a big shortcoming on a device without an alternative input mechanism. I’m sure Apple will have some other special sauce in mind so this isn’t just another Echo or Home but something a bit different. 
But there’s a good chance this ends up being yet another new product category for Apple which sells a few million a year and which critics therefore contend is a flop, while it quietly generates a decent amount of revenue and profit for Apple (see also the Apple Watch and AirPods).

    via 9to5Mac

    Alexa Gets Speech Synthesis Tools for Developers to Help it Sound More Human (May 1, 2017)

    Amazon is giving developers of Skills (apps) for Alexa new speech tools which should help them create interactions where the assistant sounds more human through the use of pauses, different intonation, and so forth. Amazon already uses these for Alexa’s first party capabilities, but third party developers haven’t had much control over how Alexa intones the responses in their Skills. This should be a useful additional developer tool for adding a bit more personality and value, but I wonder how many developers will bother – new platform tools like this are always a great test of how engaged developers are and how committed they are to creating the best possible experience rather than just testing something out. I’ve argued from the beginning that the absolute number of Skills available for Alexa (now at 12,000) is far less meaningful than the quality of those apps, and many of them are very basic or sub-par, likely from developers trying something out as a hobby without any meaningful commitment to sustaining or improving their apps. On the other hand, the smaller number of really serious apps for Alexa should benefit from these new tools.
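The control developers get here is expressed through SSML markup embedded in a Skill's spoken responses. A minimal sketch in Python — the phrasing is invented, and this just assembles the response JSON by hand rather than using Amazon's Skills Kit SDK — showing a pause and a whispered aside, the kind of intonation control these tools expose:

```python
import json

# Sketch of an Alexa Skill response using SSML to shape delivery:
# a timed pause via <break> and a whispered phrase via <amazon:effect>.
ssml = (
    "<speak>"
    "Your order has shipped. "
    '<break time="500ms"/>'
    '<amazon:effect name="whispered">Finally.</amazon:effect>'
    "</speak>"
)

# Standard Alexa Skill response envelope carrying SSML output speech.
response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        "shouldEndSession": True,
    },
}

print(json.dumps(response, indent=2))
```

Without markup like this, Alexa reads third-party responses in its flat default intonation, which is a big part of why so many Skills feel robotic next to the first-party features.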

    via TechCrunch

    ★ Amazon Announces Echo Look, Which Adds a Camera and Fashion Advice to Echo for $20 (Apr 26, 2017)

    Amazon has announced a new device in its Echo family called the Echo Look, which assumes a different form factor, adding a still and video camera to the features of the standard device for $20 more. For now, the focus is fashion advice: the camera can take full-length photos or videos of the user, acting like a full-length mirror at a basic level but also offering fashion advice through machine learning tools trained by fashion experts. I say for now, because once you have a camera in an Echo device it could be used for many other things too – indeed, when reports and pictures of this device first surfaced people assumed it was a security camera, and there’s really no reason why it couldn’t be. And several of these devices together could be very useful for motion sensing and other tasks as part of a smart home system over time too. But Amazon’s also smart to start specializing the Echo a little, with a particular focus on women, as I would guess a majority of sales of Echo devices to date have gone to men. I’d bet we’ll see other more specialized devices in time, but also other uses for this camera as it gets software updates. And this also starts to get at a real business model for Echo, which so far hasn’t done much to boost e-commerce sales but could now drive clothing revenue through sales of both third party apparel and Amazon’s own growing line. And what Amazon learns from the Look and its associated app can be fed back into the core Amazon.com clothes shopping experience too, improving recommendations in the process. But of course all this comes with downsides: not only do you have a device in your home that’s always listening, but you now have a device with a camera, which could feasibly be hacked remotely to take pictures or video of you. And Amazon will store the images it captures indefinitely, creating a further possible source of problems down the line.

    via The Verge

    ★ Google Home Now Recognizes Multiple Users by Voice (Apr 20, 2017)

    This has been a long time coming – in fact, in just a few weeks it’ll be a year since Google debuted Home at its I/O developer conference and implied that it would have multi-user support, though of course it was missing when the device actually launched in the fall. And that’s been a big limitation of a device that’s supposed to get to know you as an individual. So the fact that Google Home now recognizes distinct users by voice is a big deal, and an important differentiator over Amazon Echo. I’ve just tried it with my unit, and although it set up accounts for me and my daughter without problems, the app conked out when I tried to add my wife, so the results are mixed (I suspect it may be because my wife’s account is a Google Apps account). It does recognize the two voices we set up and will now serve us up different responses, which is great. One big limitation, though, is that each user has to have a Google account and has to download the Google Home app onto their phone, which means it won’t recognize little kids who don’t have Google accounts. And given that it’s using voice recognition rather than, say, different trigger phrases, I can’t set up separate personal and work accounts. But for those who can use it, the Home will now be a much more useful device, serving up calendar information, music preferences, and so on, on an individualized basis rather than treating everyone in a home as the same person.

    via Google

    ★ Amazon Scales Alexa Back-End by Opening Lex Voice and Text Service to All Developers (Apr 19, 2017)

    So much of the focus of coverage of voice assistants and interfaces is on the dedicated consumer products which use them, and that’s natural: these are the most visible and measurable signs of a company’s success or failure in this space. And yet the scale of those dedicated voice products is still very small relative to smartphones, which carry their own voice assistants. And scale is vital if these products are to improve, because they require lots and lots of training to get better, and so the more users there are training them, the better they become. As such, I suspect the next phase of competition in this space is going to be about developer voice platforms at least as much as it is about first-party hardware and software, and we’re starting to see signs of this from the big companies in the space, including Google and Amazon. Today, Amazon announced that Lex, which is a back-end service that combines many of the technologies behind Alexa, is opening up to all developers. But critically, this isn’t just a voice platform – it supports text and voice processing, which means that many developers might use it in chat bots or other similar environments that have nothing to do with voice but still help train Amazon’s natural language processing tools. Google is doing similar things with its own voice processing technology, but it’s doubtful whether Apple will ever open its voice tools up in the same way. That’s not a huge deal, because it has massive scale in voice on smartphones alone, but it may make a bigger difference over time as these other platforms benefit not only from growing first party scale but increasing third party adoption and use too.

    via Amazon

    Baidu Changes Strategy for Autonomous Driving, Creating Open Platform (Apr 19, 2017)

    This content requires a subscription to Tech Narratives.

    Samsung says Bixby voice assistant won’t ship with Galaxy S8 – Axios (Apr 11, 2017)

    This actually isn’t news, at least if you paid attention a couple of weeks ago when Business Insider UK reported (and I noted) that Korean would be the launch language for Bixby, and that American English would follow in May, with British English later in the year. However, it appears that Samsung provided a somewhat different steer to US press, telling them that the assistant would be available at launch on April 21st. News of the later US launch is now filtering out through US reps too, however, and will be received as bad news by those who pre-ordered the phone (apparently in large numbers) ahead of reviews and the release of this news. Given that Bixby is at least on paper one of the headline features, at least some of those early buyers will be disappointed, though the screen is another big selling point and that should perform as advertised with the caveats I mentioned in my first comment on the S8 and in the podcast episode I did on the Samsung announcements. Releasing Bixby late is better than releasing a buggy version not ready for launch, but the delay had better not be too long, nor the version it does release too unpolished. Both are risks at this point.

    via Axios

    Google Home is rolling out support for multiple users – The Next Web (Apr 10, 2017)

    This shouldn’t be a surprise to anybody – Google actually showed off what appeared to be multi-user support in its demo of Google Home at I/O last year, but then it turned out the finished product didn’t support it when released in the fall. A little while back, rumors began surfacing that it would add the feature soon (and Amazon too), but that hasn’t materialized yet. The screenshot shared here suggests it’s imminent at this point. This is important, because assistants have to be personal if they’re to be really useful, and most people live in homes with other people, whether family members or roommates, and so things like calendars, email, and to-do lists aren’t much use unless individual users can be recognized and served different results. That’s not easy to do, especially because these speakers tend to process voices before recognition takes place, which actually makes it harder to recognize the speaker, but the companies were bound to figure it out eventually. If Google does end up launching this before Amazon, this will be yet another performance advantage, even though its distribution disadvantage remains enormous.

    via The Next Web

    Google Home Vs. Amazon’s Alexa: 54 Questions, 1 Clear Winner – Forbes (Apr 7, 2017)

    This is a fun little comparison done by a user in the UK of the ability of the two major home smart speaker units to answer 54 questions. Google Home wins in the end, with 32.5 answered correctly, to 19.5 for Echo/Alexa. The questions were a mix of simple and challenging, and the user was in the UK and asked quite a few UK-specific questions, taking advantage of the fact that both devices recently launched there. But it’s a great illustration of both how Google has the existing skillset to do really well in this category, and also the fact that all these assistants have some way still to go to answer all the questions users might reasonably expect them to deal with.

    via Forbes

    Google Develops Federated Machine Learning Method Which Keeps Personal Data on Devices (Apr 6, 2017)

    This is an interesting new development from Google, which says it has created a new method for machine learning which combines cloud and local elements in a way which keeps personal data on devices but feeds back the things it learns from training to the cloud, such that many devices operating independently can collectively improve the techniques they’re all working on. This would be better for user privacy as well as efficiency and speed, which would be great for users, and importantly Google is already testing this approach on a commercial product, its Gboard Android keyboard. It’s unusual to see Google focusing on a device-level approach to machine learning, as it’s typically majored on cloud-based approaches, whereas it’s been Apple which has been more focused on device-based techniques. Interestingly, some have suggested that Apple’s approach limits its effectiveness in AI and machine learning, whereas this new technique from Google suggests a sort of best of both worlds is possible. That’s not to say Apple will adopt the same approach, and indeed it has favored differential privacy as a solution to using data from individual devices without attributing it to specific users. But this is both a counterpoint to the usual narrative about Google sacrificing privacy to data gathering and AI capabilities and to the narrative about device-based AI approaches being inherently inferior.
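The core loop Google describes – devices train locally on their own data, send only model updates to the server, and the server averages those updates into a new shared model – can be sketched in a few lines. This is a toy illustration with invented numbers and a one-parameter model, not Google's implementation:

```python
# Toy sketch of federated learning: each device nudges a shared parameter
# toward its own private data, and only the resulting deltas -- never the
# raw samples -- are sent back to the server and averaged.

# Each device's private data (these samples never leave the device).
device_data = [
    [1.0, 1.2, 0.8],   # device A's local samples
    [2.0, 2.2],        # device B
    [1.5],             # device C
]

global_model = 0.0  # shared parameter, broadcast to all devices each round
lr = 0.5            # local learning rate

for round_num in range(20):
    updates = []
    for data in device_data:
        local = global_model
        # A pass of local gradient steps toward this device's own data.
        for x in data:
            local += lr * (x - local)
        updates.append(local - global_model)  # only the delta is shared
    # The server averages the updates from all devices into the new model.
    global_model += sum(updates) / len(updates)

# global_model converges toward a consensus over all devices' data,
# without any raw samples ever being uploaded.
```

The real systems add compression and secure aggregation on top of this so the server can't even inspect an individual device's update, but the structural point – learning travels to the cloud, data doesn't – is the same one Google is making here.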

    via Google