How to use APL on Alexa Multimodal Devices with Python?
I have created an Alexa skill in Python that works fine for my expected intents.
Now I want to add functionality so that when my skill is opened or triggered on a device with a display (e.g., an Echo Show), it shows either the whole conversation, the final part, or whichever part of the chat I want to deliver on the screen.
I have gone through the official Alexa skill documentation, which says something like "you have to use APL", but it doesn't clearly explain how to do this in Python from scratch. Can you walk me through the process of building this feature from scratch?
A step-by-step guide using the Alexa developer console for Python would be beneficial.
Solution 1:
You can design the screen directly within the developer console, using the APL authoring tool.
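If you prefer to keep the document in code instead of (or in addition to) the console, an APL document is just JSON. Below is a minimal sketch of a document that renders a single block of text; the datasource name `chatData` and the binding `${payload.chatData.text}` are illustrative choices of mine, not required names. You can equally export the JSON of a document built in the authoring tool and paste it into your skill.

```python
# Minimal APL document, expressed as a Python dict so it can live
# alongside the skill code. The datasource name "chatData" and the
# binding "${payload.chatData.text}" are arbitrary example names.
APL_DOCUMENT = {
    "type": "APL",
    "version": "1.8",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Container",
                "width": "100vw",
                "height": "100vh",
                "items": [
                    {
                        "type": "Text",
                        "text": "${payload.chatData.text}",
                        "fontSize": "40dp",
                        "textAlign": "center"
                    }
                ]
            }
        ]
    }
}
```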
Then, when you respond to a request from a user interaction, include the Alexa.Presentation.APL.RenderDocument directive to attach this APL document.
(Do this only if the device supports APL: check context.System.device.supportedInterfaces in the request envelope for the Alexa.Presentation.APL entry.)
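Here is a minimal sketch of how that looks with the ASK SDK for Python, assuming the APL_DOCUMENT dict above; the handler class, token name, and speech text are my own choices. It uses get_supported_interfaces from ask_sdk_core.utils for the support check, which is the SDK's equivalent of inspecting context.System.device.supportedInterfaces yourself.

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_request_type, get_supported_interfaces
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apl import (
    RenderDocumentDirective,
)


class LaunchRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speech = "Welcome! Here is your conversation."
        builder = handler_input.response_builder.speak(speech)

        # Attach the APL directive only on devices that support APL;
        # on headless devices (e.g., Echo Dot) the response stays voice-only.
        if get_supported_interfaces(handler_input).alexa_presentation_apl is not None:
            builder.add_directive(
                RenderDocumentDirective(
                    token="chatToken",  # arbitrary token for referencing this document later
                    document=APL_DOCUMENT,  # the dict sketched above
                    datasources={
                        "chatData": {
                            "text": "This is the part of the chat shown on screen."
                        }
                    },
                )
            )
        return builder.response
```

If you saved the document in the developer console's authoring tool instead, you can reference it by link rather than inlining the JSON, e.g. `document={"type": "Link", "src": "doc://alexa/apl/documents/<your-document-name>"}`, where the document name is whatever you saved it as in the console.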
I recommend becoming familiar with the Alexa Skills Kit SDK for Python via the official documentation (it includes examples and sample skills).
You should also follow the Zero to Hero video tutorial series on developing Alexa skills. It usually answers 99% of your current and future questions.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | callmemath |