Alexa Gets Speech Synthesis Tools for Developers to Help it Sound More Human (May 1, 2017)
Amazon is giving developers of Skills (apps) for Alexa new speech synthesis tools which should help them create interactions where the assistant sounds more human, through the use of pauses, different intonation, and so forth. Amazon already uses these for Alexa's first-party capabilities, but third-party developers haven't had much control over how Alexa intones the responses in their Skills. This should be a useful additional developer tool for adding a bit more personality and value, but I wonder how many developers will bother – new platform tools like this are always a great test of how engaged developers are, and of how committed they are to creating the best possible experience rather than just testing something out. I've argued from the beginning that the absolute number of Skills available for Alexa (now at 12,000) is far less meaningful than their quality: many are very basic or sub-par, likely from developers trying something out as a hobby without any meaningful commitment to sustaining or improving their apps. On the other hand, the smaller number of really serious Skills should benefit from these new tools.
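For context, the tools in question are SSML (Speech Synthesis Markup Language) tags that a Skill can embed in its spoken response. A minimal sketch of what that looks like in practice, assuming the standard Alexa Skills Kit JSON response shape (the `build_ssml_response` helper here is illustrative, not part of any Amazon SDK):

```python
# Illustrative sketch: wrapping a Skill's reply in SSML so Alexa inserts a
# pause and shifts intonation. The build_ssml_response helper is hypothetical;
# the envelope follows the standard Alexa Skills Kit response format, where
# outputSpeech can be "PlainText" or "SSML".

def build_ssml_response(ssml_body: str) -> dict:
    """Wrap SSML markup in an Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",  # instead of "PlainText"
                "ssml": f"<speak>{ssml_body}</speak>",
            },
            "shouldEndSession": True,
        },
    }

# A half-second pause via <break>, then a lower, slower intonation
# via <prosody> -- the kind of "more human" delivery the new tools enable.
reply = build_ssml_response(
    'Here is your result. <break time="500ms"/> '
    '<prosody pitch="low" rate="slow">Thanks for asking.</prosody>'
)
print(reply["response"]["outputSpeech"]["ssml"])
```

Tags such as `<break>` and `<prosody>` come from the W3C SSML standard, of which Alexa supports a subset; the point is simply that the markup travels inside the Skill's ordinary response rather than requiring any new infrastructure.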