
Engine Works

Under the hood of Alteryx: tips, tricks and how-tos.
BenMoss
ACE Emeritus

 

You will be required to launch and use components of Amazon Web Services (AWS) as part of this blog. We cannot be held accountable for any costs associated with this; however, unless you are extremely reckless, you should come out with a bill of $0.00, as Lambda (the service used) has a generous free-tier usage rate.

 

Nowadays, most of us (especially those of us in the tech industry who get excited by the idea of automating as much as possible) have invested in some sort of smart home technology. Personally, I have a couple of Alexas dotted around, which largely act as wireless speakers more than anything else.

 

Last week I decided to see if I could think of any work-related use cases for my Alexa devices. I then thought about the Alteryx Subscription API; more specifically, could I use Alexa to communicate with this API in order to trigger a workflow on our Alteryx Server, rather than going through the tedious process of moving my mouse a couple of centimetres and hitting the run button? (At this point I wonder whether moving my hand uses more energy than using my voice. Anyway.)

 

Why would I want to do this? Good question. One simple answer is sheer laziness; another is because we can. But you may have genuinely valid reasons within your business for this use case, which is perfect; hopefully I can show you how simple it is in this blog. From my perspective the why is simple: it's an opportunity to learn.

 

Things You'll Need

Fortunately, Amazon has built a method that allows you to easily share the 'recipe' of an Alexa skill, and in this blog I'm going to show you how to take this recipe and use it against your Alteryx Server environment.

 

To start with you will need…

 

  • An Amazon Alexa account
  • An Amazon Web Services (AWS) account
  • The Alexa app installed on your smartphone
  • An Alexa device (helpful but not required, as you can speak to Alexa through the app)
  • The .yaml and .json files stored in this git repository
  • Alteryx Server (the skill makes use of the Alteryx Subscription API in order to trigger workflows)

 

Components of an Alexa Skill

When you wish to create an Alexa skill there are two key components. The first is the skill interaction model, which is where you define the words and phrases that will be used to invoke and interact with the skill you have created.

 

The second component is the skill application logic, which acts on the voice requests made by your users. In our case, the skill application logic is the piece of the puzzle that takes the user's chosen data source and triggers an update; we can see this as the 'back-end' of our skill. The skill application logic is typically coded in either Node.js or Python (we will use Python), and can be hosted in the cloud either by Alexa (assuming you don't need any additional packages), by AWS (which we will use in our case), or on another server.
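To make that concrete, here's a minimal sketch (an illustration, not the exact code we'll deploy later) of what a Python skill back-end looks like on Lambda: Alexa POSTs a JSON event to your function, and the handler branches on the request type and replies with a speech response envelope.

def build_response(text, end_session=False):
    # Standard Alexa response envelope with plain-text speech output.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # The user said the invocation phrase with no specific request.
        return build_response("Welcome. Ask me to run a workflow.")
    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        # A real skill would dispatch to a handler per intent here.
        return build_response(f"You asked for {intent_name}.", end_session=True)
    # SessionEndedRequest and anything else: end quietly.
    return build_response("Goodbye.", end_session=True)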

 

[Image: BenMoss_0-1614615801469.png — diagram of the two components of an Alexa skill]

 

We would advise watching this tutorial, which outlines all of this in a lot more detail than we have time for here!

 

Skill Interaction Model

We're going to start by creating our skill interaction model, so at this point we need to log into the Alexa Developer Console, which has been set up to allow us to more easily create functions (known as skills) like the example in this blog.

 

You can sign in with your Alexa account (if you already have a device you'll likely have one of these; if you don't, you can register very simply).

 

[Image: BenMoss_1-1614615940022.png — Alexa Developer Console sign-in page]

 

Once registered and signed in, we can do the obvious and start developing our skill by hitting the 'Create Skill' button. You will then be prompted to choose your skill name; in this case I've used 'Speak to Alteryx'. This is the name that your skill will be given in the Alexa Skills Store (should you choose to publish your skill publicly) and in your app.

 

You will also be prompted as to whether you want to start with a pre-baked model or develop a custom model (we'll choose the latter), and where you want to host your skill's back-end resources (i.e. where to host the processing); we're going to choose to provision our own, which we will do using AWS Lambda.

 

[Image: BenMoss_2-1614616008714.png — choosing the skill name, model, and hosting options]

 

When we move on to the next stage we'll be asked if we want to start from scratch (we do) or work with an existing skill (we don't; however, this option is useful for newbies who want to understand how different types of skills can be set up).

 

Your skill will now be created! At this point, if you navigate to the Alexa app on your smartphone and sign in with the same user account as you've used to develop your skill, you'll notice that it appears in the 'More > Skills and Games > Your Skills' section.

 

 

[Image: BenMoss_3-1614616052171.png — the skill listed under 'Your Skills' in the Alexa app]

 

Once your skill has been created, you will land on a page in the Alexa Developer Console with a whole host of resources designed to help you get started. If you're using the console for the first time, I would definitely recommend taking some time to go through them.

 

At this point we have two options: one would be to generate our skill interaction model from scratch, specifying the different phrases we want our users to use. Alternatively, we can 'cheat.'

 

It is possible to load a skill interaction model from a JSON file, and in this case, I’ve already built the skill interaction model to save you some effort, and we’re going to simply load this JSON file into the Alexa Developer Console.

 

Customizing the Model

The skill interaction model JSON file has a number of components, representing the different elements of an Alexa skill.

{
    "interactionModel": {
        "languageModel": {
            "invocationName": "alteryx commands",
            ...

One of the opening lines, the 'invocationName' key, is given the value 'alteryx commands' in our model. This is the phrase our users will say to invoke our skill (e.g. "Alexa, open alteryx commands"). If you want to edit this part of the JSON file, feel free!

 

The next key part of the JSON file is our 'intents.' I personally like to think of these as the different types of requests that you want users to be able to make. In our case we want one request which allows our user to specify that they want to run a workflow.

{
    "name": "runworkflowintent",
    "slots": [
        {
            "name": "workflowname",
            "type": "workflow",
            "samples": [
                "{workflowname}"
            ]
        }
    ],
    "samples": [
        "transform my data",
        "run my workflow",
        "run workflow"
    ]
}

The 'samples' key in the JSON shown above is where we specify the different phrases our user can say in order to trigger this 'intent.' Again, you can customize or add phrases here if you wish.

 

I've also added a second 'intent' that allows users to find out which workflows are available for them to run.

{
    "name": "listworkflowsintent",
    "slots": [],
    "samples": [
        "what workflows are available to run",
        "get workflows",
        "list apps",
        "list workflows"
    ]
}

In the intents section you may notice that a number of intents are already given. These built-in intents save you from having to build out the handling of things like cancellation and help requests yourself.

 

The next part is something we MUST edit in order to tailor the skill to the Alteryx Server workflows we are looking to trigger. In the 'intent' JSON shown above we reference what is referred to as a 'slot,' which is essentially a dynamic variable. In the 'types' key shown below, we define the appropriate values for the slot: both the word we want the user to be able to say, and what this translates to (in this case the ID of the workflow, which can be passed into our API call).

 

The JSON in our example includes three placeholder workflows, but you can add more very easily (assuming you can follow the pattern). For each workflow you want to include, you should specify the 'value', which represents what you want the user to say (probably the workflow name), and then the ID, which can be taken from the URL for that workflow on your server.

"types": [
                {
                    "name": "workflow",
                    "values": [
                        {
                            "id": "datasourceid1",
                            "name": {
                                "value": "datasourcename1"
                            }
                        },
                        {
                            "id": "datasourceid2",
                            "name": {
                                "value": "datasourcename2"
                            }
                        },
                        {
                            "id": "datasourceid3",
                            "name": {
                                "value": "datasourcename3"
                            }
                        }
                    ]
                }
            ]

As an example, if I wish to allow my users to trigger only the 'Corporate Customers and Profit' application on our Alteryx Server, my JSON would look as follows…

 

[Image: BenMoss_4-1614616271111.png — the 'Corporate Customers and Profit' application in the Gallery]

"types": [
                {
                    "name": "workflow",
                    "values": [
                        {
                            "id": "3213sdasd2312323",
                            "name": {
                                "value": "Customers and Profit"
                            }
                        }
                    ]
                }
            ]

I'd then overwrite the placeholder 'types' key in the JSON file. It's worth noting that the 'value' represents what the user should say to trigger this workflow to run; it does not have to be the same as the name given in the Gallery, and can be shortened or simplified for our use case. The important element here is the ID.
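If you'd rather script this edit than make it by hand, a small Python sketch like the one below will inject your own name/ID pairs into the model (the file name here is an assumption; use whatever you called the downloaded JSON file):

import json

# id -> what the user should say (hypothetical example values)
workflows = {"3213sdasd2312323": "Customers and Profit"}

with open("skill_interaction_model.json") as f:
    model = json.load(f)

# 'types' sits under interactionModel.languageModel in the model schema.
model["interactionModel"]["languageModel"]["types"] = [{
    "name": "workflow",
    "values": [
        {"id": wid, "name": {"value": name}} for wid, name in workflows.items()
    ],
}]

with open("skill_interaction_model.json", "w") as f:
    json.dump(model, f, indent=4)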

 

Now, I do appreciate that not everyone will be comfortable with manually editing this JSON file, which is why I have created an Alteryx application that allows you to edit the different parts outlined in this blog. The application can be found here; it allows users to control the phrases used to invoke the skill, as well as provide the name and ID information for the different workflows they want to communicate with.

 

[Image: BenMoss_5-1614616327593.png — the Alteryx application for editing the interaction model JSON]

 

Once you have your edited JSON file, you can load it into the developer console by going to the 'Interaction Model' option and selecting 'JSON Editor'. You can then drag and drop your JSON file, after which you must save and build your model using the buttons available at the top of the JSON editor.

 

[Image: BenMoss_6-1614616394869.png — the JSON Editor in the Alexa Developer Console]

 

We've now successfully created our skill interaction model and can move on to the skill application logic, but not before we go to the 'Endpoint' panel to grab and take note of our skill ID, which is required for the skill application logic step.

 

[Image: BenMoss_7-1614616422644.png — the skill ID shown in the Endpoint panel]

 

Skill Application Logic

As with our skill interaction model, we could take two approaches to creating our skill application logic and deploying it into the cloud: either a long-winded process where we create our script and manually configure each of the required AWS services, or using the .yaml file (an AWS CloudFormation template) that has been set up by The Information Lab to do this for you. We are going to do the latter.

 

So let's grab the .yaml file (downloadable from The Information Lab's GitHub page) and log into your AWS account.

 

Once you have logged into AWS, we must first select the region in which we want to deploy our skill application logic (which is a Lambda function). At this point, our deployment file only supports the US West (Oregon), US East (N. Virginia) and EU (Ireland) regions (though please shout if you would like us to support another region and we should be able to do so fairly easily).

 

The reason we chose to support these regions to start with is stated in the Alexa documentation: using these regions reduces the latency between Alexa and our skill service.

 

For the sake of this exercise, I’ll be deploying our implementation logic into the EU (Ireland) region.

 

Once you've chosen and changed your region, we're going to navigate to the CloudFormation service. CloudFormation is a service which allows users to easily create an application's infrastructure spanning many different AWS services. As part of the process you must have access to an IAM role (which can be your user account) with the following capabilities:

 

  • Create Lambda functions and layers
  • Create an IAM role

 

Meanwhile, your own user account must have the ability to:

 

  • Create S3 buckets
  • Create CloudFormation stacks

 

On the CloudFormation home page we're going to choose 'Create Stack', on the subsequent page confirm that our 'template is ready', and then upload the .yaml file as our template before hitting next.

 

[Image: BenMoss_8-1614616739614.png — the CloudFormation 'Create Stack' page with the template uploaded]

 

We'll then be prompted to give a stack name; I chose 'alteryx-alexa', but this is entirely up to you. On the same page we will give the Alexa skill ID that we took note of earlier, before again hitting next.

 

[Image: BenMoss_9-1614616765611.png — specifying the stack name and skill ID parameter]

 

The next two pages allow us to apply further configuration details to our stack. The default values are fine; however, on the first tab it's possible to control which set of permissions will be used during the stack's creation process (the default is to use your user credentials). On the second tab we must accept the warning that the CloudFormation template contains a create-role action. We can now hit 'Create stack'.

 

[Image: BenMoss_10-1614616819908.png — the stack configuration and 'Create stack' confirmation]
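If you prefer to script this step, the same stack can be created with boto3. This is a minimal sketch, assuming the template is saved locally; the parameter key 'SkillId' is an assumption, so check the Parameters section of the .yaml file for the actual name.

import boto3

cf = boto3.client("cloudformation", region_name="eu-west-1")  # EU (Ireland)

with open("alteryx-alexa.yaml") as f:
    template_body = f.read()

cf.create_stack(
    StackName="alteryx-alexa",
    TemplateBody=template_body,
    Parameters=[
        # 'SkillId' is an assumed parameter name; verify it in the template.
        {"ParameterKey": "SkillId", "ParameterValue": "amzn1.ask.skill.your-skill-id"},
    ],
    # Equivalent of accepting the create-role warning in the console.
    Capabilities=["CAPABILITY_IAM"],
)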

 

All of the appropriate resources will now be generated for our skill ‘back-end’ to function correctly.

 

[Image: BenMoss_11-1614616838464.png — CloudFormation stack creation events ending in 'CREATE_COMPLETE']

 

Your alteryx-alexa stack should complete successfully (as shown by the final 'CREATE_COMPLETE' message). If at any point the stack fails to create one of the resources, it will automatically roll back and remove any resources created prior to that point.

 

Provided your stack has completed successfully, we can now go to the AWS Lambda service to view the function containing the Python script that controls and acts on the communications made against our Alexa skill.

 

[Image: BenMoss_12-1614616866927.png — the 'alteryx-alexa-commands' function in the AWS Lambda console]

 

If we navigate through to our 'alteryx-alexa-commands' function, we can see the underlying code, which has been broken down into three .py scripts in order to make it more consumable:

 

  • intents.py - contains the logic for handling the different intents; for example, if the 'Run Workflow' intent is called, this script passes the relevant information into a function which calls the Alteryx Subscription API with your credentials.
  • trigger_workflow.py - contains the process of authenticating your credentials and performing the POST request against the appropriate API endpoint.
  • lambda_function.py - the top-level script which directs the different calls made by your users to the correct function in the intents.py file (a minimal sketch of this routing is shown below).
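As a rough illustration of that routing (the handler function names in intents.py are assumptions for the sketch, not the repo's actual names):

import intents  # the intents.py module described above


def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        name = request["intent"]["name"]
        if name == "runworkflowintent":
            # Pass the intent through so the slot (workflow ID) can be read.
            return intents.run_workflow(request["intent"])
        if name == "listworkflowsintent":
            return intents.list_workflows()
    # Launch, help, cancel, etc. fall back to a generic response.
    return intents.help_response()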

 

[Image: BenMoss_13-1614617085486.png — the three .py scripts in the Lambda code editor]

 

Customizing the Logic

As with the JSON file, we must make some small changes in order to make this work with our environment. Firstly, we must specify our API credentials, which can be found in your profile within the Gallery. These credentials are how your server authenticates you as a valid user in that environment, and they control which workflows you have permission to run.

 

The line that must be edited is line 107 in the ‘intents.py’ file:

 

speech_output = execute_workflow('APIKEY','APISECRET','GALLERYADDRESS',workflowid,'')

 

You should put your key and secret in the placeholder positions, alongside your Gallery address. The workflowid argument is a variable declared earlier in the script, and is determined by the voice commands you give.
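For context, here is a hedged sketch of what a function like execute_workflow in trigger_workflow.py does; this is not the repo's exact code. It signs the request with your key and secret (the Gallery API uses OAuth 1.0a) and POSTs a job for the given workflow ID. The endpoint path follows the v1 Subscription API convention, so verify it against your server's API documentation.

import requests
from requests_oauthlib import OAuth1


def execute_workflow(api_key, api_secret, gallery_address, workflow_id, questions=""):
    # Sign the request with the key/secret from your Gallery profile.
    auth = OAuth1(api_key, client_secret=api_secret)
    # Queue a job for the workflow (v1 Subscription API convention).
    url = f"{gallery_address}/api/v1/workflows/{workflow_id}/jobs/"
    resp = requests.post(url, json={"questions": questions or []}, auth=auth)
    resp.raise_for_status()
    job = resp.json()
    # Return the sentence Alexa should speak back.
    return f"Your workflow has been queued with job ID {job.get('id', 'unknown')}."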

 

The next element we need to edit is line 88, again in our ‘intents.py’ file:

 

speech_output = "workflowname1, workflowname2 and workflowname3"

 

This line is what Alexa will read out when the user requests a list of the available workflows; you should adjust these values to match the workflow names you gave when you created your skill interaction model.

 

Once you have made these changes, we must then hit 'Deploy', which is basically like hitting save.

 

[Image: BenMoss_15-1614618166392.png — the Deploy button in the Lambda editor]

 

The final step of our process is to grab the ARN of our deployed function and paste it into the Alexa Developer Console, so that each side knows which resource it needs to communicate with.

 

[Image: BenMoss_16-1614618184998.png — the function ARN in the Lambda console]

 

We'll copy this and paste it into the 'Default Region' box on the 'Endpoint' tab, then hit 'Save Endpoints' at the top of the screen.

 

[Image: BenMoss_17-1614618204444.png — the Endpoint tab with the ARN pasted into the Default Region box]

 

 

Your skill is now ready to test. You'll first need to enable it in the Alexa app on your smartphone, which can be done by going to 'More > Skills and Games > Your Skills' and selecting your app under development; you can then hit 'Enable skill', and you'll be able to use your skill on any Alexa device connected to this account.

 

So get out your Alexa (provided it is connected to the same account as the one you developed the skill with), and test out your commands!


Once you have finished using your skill, or if you only developed this for training purposes, you should delete the AWS resources that were created, which can be done by deleting the stack we used to create them. The process for doing this is outlined in this blog.
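For reference, this clean-up can also be scripted with boto3 (a minimal sketch, assuming the stack name and region used earlier):

import boto3

# Deleting the stack removes the Lambda function, layer, and IAM role it created.
cf = boto3.client("cloudformation", region_name="eu-west-1")
cf.delete_stack(StackName="alteryx-alexa")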

 

Communication Sketch

The last part of this blog highlights the different methods of communicating with the skill that we have developed, as shown in the diagram below.

 

[Image: BenMoss_18-1614618254269.png — diagram of the communication between Alexa, AWS Lambda, and Alteryx Server]

 

Banner image by PrinceC

Comments
cgoodman3
14 - Magnetar

This is a really great blog. I'm trying to do something similar via Power Apps. Do you have the Python files, or are they created when you set up the Lambda functions?

LukeG
Alteryx Alumni (Retired)

Pretty cool stuff! Great example of how Gallery APIs allow you to integrate Alteryx workflows/processes with just about anything.