More than two years ago I wrote about a library I built for integrating Salesforce and Amazon Echo using its REST APIs and Apex: this is the original post.
I supported the library for a while, hoping that the Ohana would take ownership of it, but unfortunately this didn't happen.
To my great surprise, I then met the next guest blogger, Harm Korten, who was developing his own version of the AlexaForce library.
I'm more than happy to give him space to present his amazing library, and I hope the time is now ripe to bring it to a wider audience!
Harm Korten is a Force.com fan from The Netherlands. His professional career in IT started in 2001 as a developer, but his interest in computers started well before that. He was introduced to Salesforce in 2005, working at one of the first Dutch Salesforce.com partners, Vivens. He has been a Salesforce fan and advocate ever since.
Over the years, he has worked on countless Salesforce projects at dozens of Salesforce end-user customers. Currently he is active at Appsolutely, a Dutch Salesforce partner founded in 2017.
Find him on LinkedIn or follow his Salesforce (and other) adventures on his blog at harmkorten.nl.
Introduction
In the first week of 2018, I ran into some of Enrico Murru’s work. Google offered his AlexaForce Git repo (https://github.com/enreeco/alexa-force) as a suggestion to one of my many questions about integrating Amazon Alexa (https://developer.amazon.com/alexa) with Salesforce. It turned out Enrico had been working on the same thing, using the same technology stack, as I was at that moment.
An Alexa Skill SDK in Apex, only two years earlier!
Until then, besides Enrico’s proof-of-concept version of such an SDK, the only available technology stacks that allowed integration between Salesforce and Alexa were the Node.js and Java SDKs. These could be hosted on Heroku and use the Salesforce APIs to integrate.
Like Enrico, I wanted to build an on-platform (Force.com) Alexa Skill SDK. This common interest put us in contact. One of the results is this guest blog, not surprisingly, about AlexaForce. Not Enrico’s AlexaForce, but Harm’s AlexaForce. We apparently both came up with this very special name for the SDK (surprise, surprise) ;)
AlexaForce
The basic idea behind this Force.com SDK for Alexa is to remove the need to work with Salesforce data through the Salesforce API. The Java or Node.js approach would have Amazon send requests to Heroku, and from there require API communication with Salesforce.
With the AlexaForce SDK, Amazon sends the Alexa requests straight to Salesforce, allowing a developer to have full access to the Salesforce data using SOQL, SOSL and Apex. The resulting architecture is depicted in the image below.
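To make the idea concrete, here is a minimal sketch of how an Apex REST endpoint, exposed on a public Force.com Site, could receive Alexa's JSON requests directly. The class name and URL mapping are illustrative assumptions, not the actual AlexaForce implementation; refer to the repository below for the real entry point.

// Illustrative sketch only -- not the actual AlexaForce endpoint.
@RestResource(urlMapping='/alexa/*')
global with sharing class AlexaSkillEndpoint {
    @HttpPost
    global static void handleRequest() {
        RestRequest req = RestContext.request;
        // Alexa posts the skill request as a JSON body
        Map<String, Object> alexaRequest = (Map<String, Object>)
            JSON.deserializeUntyped(req.requestBody.toString());
        // ...validate the request signature, route by request type
        // (LaunchRequest, IntentRequest, ...), and write an
        // Alexa-formatted JSON response to RestContext.response...
    }
}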
For more information about AlexaForce and how to use it, please visit https://github.com/HKOLWD/AlexaForce. You will find code samples and detailed instructions there. For this article, I will elaborate on a specific Alexa Skill design approach, which is still in beta at Amazon: Dialogs.
Dialogs
Generally speaking, the most important part of an Alexa Skill is its Interaction Model. The Interaction Model is defined in the Amazon Developer Portal when creating a new skill. Among other things, the model determines how comprehensive your skill will be, as well as its user-friendliness.
An Alexa Skill model generally consists of Intents and Slots. The Intent captures what the user is trying to achieve; the Slots contain details about the specifics of the user’s intention. For example, the Intent could be ordering a pizza, and the Slots could be the name and size of the pizza, the delivery location and the desired delivery time.
One could build a model that just defines Intents, Slots, Slot Types and some sample utterances. This type of model would put a lot of the handling of the conversation between Alexa and the user in your (Apex) code. Prompting for information, checking and validating user input, etc. would all be up to your code.
Here’s where Dialogs come in handy. With a Dialog (which is still in beta at the time of this writing) you put some of the conversation handling inside the Interaction Model. In other words, besides defining Intents, Slots and Utterances, you also define Alexa’s responses to the user. For example, the phrase Alexa would use to ask for a specific piece of information or how to confirm information given by the user.
From an AlexaForce perspective, you could simply tell Alexa to handle the next response using this Dialog definition inside the Interaction Model. This is done by having AlexaForce send a Dialog.Delegate directive to Alexa.
Example
Imagine an Alexa Skill that takes support requests from the user and creates a Case in Salesforce based on the user’s request, a ServiceRequest (Intent) in this example.
Two important data points (Slots) need to be provided by the user (a sketch of the corresponding Dialog model follows the list):
- The topic of the request. Represented by ServiceTopic in this example.
- The description of the issue. Represented by IssueDescription in this example.
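Abbreviated, the Dialog part of such an Interaction Model could look roughly like the JSON below. The slot type, prompt ids and prompt wording are made-up examples; the complete model used by this example is in the sample repository linked at the end of the article.

{
  "dialog": {
    "intents": [ {
      "name": "ServiceRequest",
      "confirmationRequired": true,
      "slots": [ {
        "name": "ServiceTopic",
        "type": "ServiceTopicType",
        "elicitationRequired": true,
        "prompts": { "elicitation": "Elicit.ServiceTopic" }
      }, {
        "name": "IssueDescription",
        "type": "AMAZON.SearchQuery",
        "elicitationRequired": true,
        "prompts": { "elicitation": "Elicit.IssueDescription" }
      } ]
    } ]
  },
  "prompts": [ {
    "id": "Elicit.ServiceTopic",
    "variations": [ { "type": "PlainText", "value": "What is your request about?" } ]
  }, {
    "id": "Elicit.IssueDescription",
    "variations": [ { "type": "PlainText", "value": "Please describe the issue." } ]
  } ]
}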
A Dialog allows you to have Alexa collect the data points and have them confirmed autonomously. The Apex code keeps delegating conversation handling to Alexa until all required Slots have been filled.
A Dialog has three states: STARTED, IN_PROGRESS and COMPLETED. When COMPLETED, you can be sure that Alexa has fully fulfilled the Intent as defined in your model, including all its required Slots. Below is a code sample that implements this, returning true on Dialog completion.
// As long as the Dialog is not COMPLETED, keep delegating the
// conversation to Alexa with a Dialog.Delegate directive.
if(req.dialogState == 'STARTED') {
    alexaforce.Model.AlexaDirective dir = new alexaforce.Model.AlexaDirective();
    dir.type = 'Dialog.Delegate';
    dirManager.setDirective(dir);
    return false;
} else if(req.dialogState != 'COMPLETED') {
    alexaforce.Model.AlexaDirective dir = new alexaforce.Model.AlexaDirective();
    dir.type = 'Dialog.Delegate';
    dirManager.setDirective(dir);
    return false;
} else {
    // All required Slots have been filled and confirmed
    return true;
}
The Apex code takes over again when Alexa sends the dialog state ‘COMPLETED’. Once this happens, both the ServiceTopic and IssueDescription will be available to your Apex code (and confirmed by Alexa) to create the Case.
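For illustration, the COMPLETED branch could then create the Case along these lines. The way the slot values are read below is a hypothetical stand-in for the actual AlexaForce request model; the full, working version is in the sample repository.

// Hypothetical sketch -- slot access via req.intent is an assumption,
// not the actual AlexaForce request model.
String topic = req.intent.slots.get('ServiceTopic').value;
String description = req.intent.slots.get('IssueDescription').value;

// Create the Case from the collected (and confirmed) data points
insert new Case(
    Subject = topic,
    Description = description,
    Origin = 'Alexa'
);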
This example would be even more powerful if you set up account linking. This would allow users to first log in to Salesforce (e.g. a Community), providing the developer with information about the Salesforce User while creating the Case.
All of the code for this example, including the model and the full Apex code, can be found here: https://github.com/HKOLWD/AlexaForce/tree/master/samples/Dialog.Delegate.