Alexa Skills can be developed using AWS Lambda functions or a REST API endpoint. A Lambda function is Amazon’s implementation of serverless functions available in AWS. Amazon recommends using Lambda functions even though they are not easy to debug: while you can log to CloudWatch, you can’t hit a breakpoint and step through the code.
This makes live debugging of Alexa requests a hard task. In this post, we will implement a custom Skill for Amazon Alexa using Node.js, npm, and AWS Lambda functions. The Skill is basically a “Hello World” example. By the end of this post, you will be able to create a custom Skill for Amazon Alexa, implement its functionality using Node.js, and run your custom Skill both from your local computer and from AWS. This post draws on material from different resources, which are listed in the Resources section.
Here are the technologies used in this project:
- Amazon Developer Account – How to get it.
- AWS Account – Sign up here for free.
- ASK CLI – Install and configure ASK CLI.
- Node.js v10.x.
- Visual Studio Code.
- npm Package Manager.
- Alexa ASK for Node.js (Version >2.7.0).
The Alexa Skills Kit Command Line Interface (ASK CLI) is a tool for you to manage your Alexa skills and related resources, such as AWS Lambda functions. With ASK CLI, you have access to the Skill Management API, which allows you to manage Alexa skills programmatically from the command line. We will use this powerful tool to create, build, deploy and manage our “Hello World” Skill. Let’s start!
To create the Alexa Skill, we will use the ASK CLI configured previously. First of all, we have to execute this command:
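The scaffolding command of the ASK CLI is:

```
ask new
```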
This command runs an interactive step-by-step creation process:
- The first thing the ASK CLI asks us is the runtime of our Skill. In our case, we will choose Node.js:
- The second step is the template that our Skill is based on. In our case, we will select the Hello World template:
- Finally, the ASK CLI asks us for the name of the Skill:
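The interactive session looks roughly like this (the prompts may differ slightly between ASK CLI versions, and the project name is illustrative):

```
$ ask new
? Please select the runtime: Node.js
? List of templates you can choose: Hello World
? Please type in your skill name: hello-world
```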
These are the main files of the project:
- .ask: Folder which contains the ASK CLI’s config file. This config file will remain empty until we execute the ask deploy command.
- .vscode/launch.json: Launch preferences to run your Skill locally for testing. This configuration launches lambda/custom/local-debugger.js, a script that runs a server on http://localhost:3001 to debug the Skill.
- hooks: A folder that contains the hook scripts. Amazon provides two hooks:
  - post_new_hook: executed after the Skill creation. In Node.js, it runs npm install in each sourceDir defined in skill.json.
  - pre_deploy_hook: executed before the Skill deployment. In Node.js, it runs npm install in each sourceDir defined in skill.json.
- lambda/custom: A folder that contains the source code for the skill’s AWS Lambda function:
  - index.js: the Lambda main entry point.
  - utilities/languageStrings.js: i18n dictionaries used by the i18next library, which allows us to run the same Skill in different languages.
  - utilities/util.js: a file with helper functions.
  - local-debugger.js: used to debug our Skill locally.
  - errors: a folder that contains all Error handlers.
  - intents: a folder that contains all Intent handlers.
  - interceptors: a folder that contains all interceptors.
- models: A folder that contains interaction models for the skill. Each interaction model is defined in a JSON file named according to the locale. For example, es-ES.json.
- skill.json: The skill manifest, one of the most important files in our project.
The ASK SDK for Node.js makes it easier for you to build highly engaging skills by allowing you to spend more time implementing features and less time writing boilerplate code.
You can find documentation, samples, and helpful links in its official GitHub repository.
The main entry point is index.js, located in the lambda/custom folder. This file contains all handlers and interceptors, and exports the Skill handler in exports.handler. The exports.handler function is executed every time AWS Lambda is invoked for this particular function. In theory, an AWS Lambda function is just a single function, which means we need to define dispatching logic so that a single function can route each request to the appropriate code; hence the handlers.
It is important to take a look at the LaunchRequestHandler as an example of an Alexa Skill handler written in Node.js:
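For reference, a LaunchRequestHandler looks roughly like this (the welcome message is illustrative; the real Hello World template checks the request type via Alexa.getRequestType from ask-sdk-core, written here with plain property access so the sketch stays self-contained):

```javascript
// Sketch of a LaunchRequestHandler in the ASK SDK v2 handler shape.
// The real template uses Alexa.getRequestType(handlerInput.requestEnvelope)
// from ask-sdk-core; plain property access is used here instead.
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    // Run this handler only for LaunchRequest
    // (the Skill was opened without a specific intent).
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speakOutput = 'Welcome, you can say Hello or Help. Which would you like to try?';
    // Build the response: speak the welcome message and reprompt
    // if the user stays silent.
    return handlerInput.responseBuilder
      .speak(speakOutput)
      .reprompt(speakOutput)
      .getResponse();
  }
};
```

Every handler follows this same canHandle/handle contract: canHandle decides whether the handler applies to the incoming request, and handle builds the response.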
In package.json, we will almost always find metadata specific to the project. This metadata helps identify the project and acts as a baseline for users and contributors to get information about it.
Here is how this file looks:
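A typical package.json for a Hello World Skill looks something like this (the name, description, and version numbers are illustrative, not the exact file generated by the template):

```json
{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "Alexa Hello World Skill - sample code",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "Apache-2.0",
  "dependencies": {
    "ask-sdk-core": "^2.7.0",
    "ask-sdk-model": "^1.19.0",
    "i18next": "^15.0.5"
  }
}
```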
The npm install command installs a package and any packages that it depends on. If the package has a package-lock or shrinkwrap file, the installation of dependencies will be driven by that. This is how we build our Alexa Skill’s dependencies.
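For reference, this is what the dependency installation looks like from the terminal, assuming the template’s default folder layout:

```
cd lambda/custom
npm install
```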
The launch.json file in the .vscode folder has the configuration for Visual Studio Code, which allows us to run our Lambda locally:
This configuration file will execute the following command:
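A launch.json along these lines would do the job (the exact file generated by the template may differ slightly; the argument names are those accepted by the template’s local-debugger.js):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Skill",
      "program": "${workspaceFolder}/lambda/custom/local-debugger.js",
      "args": [
        "--skillEntryFile", "${workspaceFolder}/lambda/custom/index.js",
        "--portNumber", "3001",
        "--lambdaHandler", "handler"
      ]
    }
  ]
}
```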
For a new, incoming Skill request, a new socket connection is established. From the data received on the socket, the request body is extracted, parsed into JSON, and passed to the Skill invoker’s lambda handler. The response from the lambda handler is parsed as an HTTP 200 message format as specified here. The response is written onto the socket connection and returned.
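Under the hood, the debugger behaves roughly like this sketch (a simplified illustration of the flow just described, not the actual local-debugger.js shipped with the template):

```javascript
// Simplified sketch of what a local Alexa debugger does: take the raw
// HTTP request from the socket, extract and parse the JSON body, pass
// it to the Lambda handler, and wrap the result in an HTTP 200 response.
// This is NOT the actual local-debugger.js from the template.

function buildHttpResponse(lambdaResult) {
  const body = JSON.stringify(lambdaResult);
  return [
    'HTTP/1.1 200 OK',
    'Content-Type: application/json;charset=UTF-8',
    `Content-Length: ${Buffer.byteLength(body)}`,
    '',
    body
  ].join('\r\n');
}

async function dispatchToLambda(rawHttpRequest, lambdaHandler) {
  // The request body starts after the blank line that ends the HTTP headers.
  const bodyStart = rawHttpRequest.indexOf('\r\n\r\n') + 4;
  const requestEnvelope = JSON.parse(rawHttpRequest.slice(bodyStart));
  // Invoke the Skill's Lambda handler with the parsed request envelope.
  const result = await lambdaHandler(requestEnvelope);
  // Serialize the handler's response back into an HTTP 200 message.
  return buildHttpResponse(result);
}
```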
After configuring our launch.json file and understanding how the local debugger works, it is time to click on the play button:
After executing it, you can send Alexa POST requests to http://localhost:3001.
Following the steps above, you can now set breakpoints wherever you want inside the JS files in order to debug your Skill:
I’m sure you already know the famous tool called Postman. REST APIs have become the new standard in providing a public and secure interface for your service. Though REST has become ubiquitous, it’s not always easy to test. Postman makes it easier to test and manage HTTP REST APIs. Postman gives us multiple features to import, test, and share APIs, which will help you and your team be more productive in the long run.
After running your application, you will have an endpoint available at http://localhost:3001. With Postman, you can emulate any Alexa Request.
For example, you can test a LaunchRequest:
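A minimal LaunchRequest body to POST to http://localhost:3001 could look like this (all IDs are dummy placeholders; a request from a real device carries more fields):

```json
{
  "version": "1.0",
  "session": {
    "new": true,
    "sessionId": "amzn1.echo-api.session.dummy",
    "application": { "applicationId": "amzn1.ask.skill.dummy" },
    "user": { "userId": "amzn1.ask.account.dummy" }
  },
  "context": {
    "System": {
      "application": { "applicationId": "amzn1.ask.skill.dummy" },
      "user": { "userId": "amzn1.ask.account.dummy" }
    }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.dummy",
    "timestamp": "2020-01-01T00:00:00Z",
    "locale": "en-US"
  }
}
```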
With the code ready to go, we need to deploy it on AWS Lambda so it can be connected to Alexa.
Before deploying the Alexa Skill, we can check that the config file in the .ask folder is empty:
Deploy the Alexa Skill with the ASK CLI:
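Deployment is a single command:

```
ask deploy
```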
As the official documentation says:
When the local Skill project has never been deployed, ASK CLI creates a new Skill in the development stage for your account and then deploys the Skill project. If applicable, ASK CLI creates one or more new AWS Lambda functions in your AWS account and uploads the Lambda function code. Specifically, ASK CLI does the following:
- Looks in your Skill project’s config file (in the .ask folder, which is in the skill project folder) for an existing Skill ID. If the config file does not contain a Skill ID, ASK CLI creates a new Skill using the skill manifest in the skill project’s skill.json file and then adds the skill ID to the skill project’s config file.
- Looks in your Skill project’s manifest (skill.json file) for the skill’s published locales. These are listed in the manifest.publishingInformation.locales object. For each locale, ASK CLI looks in the Skill project’s models folder for a corresponding model file (for example, es-ES.json), then uploads the model to your skill. ASK CLI waits for the uploaded models to build, then adds each model’s eTag to the skill project’s config file.
- Looks in your Skill project’s manifest (skill.json file) for AWS Lambda endpoints. These are listed in the manifest.apis.&lt;interface&gt;.endpoint or manifest.apis.&lt;interface&gt;.regions.&lt;region&gt;.endpoint objects (for example, manifest.apis.custom.endpoint or manifest.apis.smartHome.regions.NA.endpoint). Each endpoint object contains a sourceDir value, and optionally a URI value. ASK CLI uploads the contents of the sourceDir folder to the corresponding AWS Lambda function and names the Lambda function the same as the URI value. For more details about how ASK CLI performs uploads to Lambda, see AWS Lambda deployment details.
- Looks in your Skill project folder for in-skill products, and if it finds any, uploads them to your skill. For more information about in-skill products, see the In-Skill Purchasing Overview.
After executing the above command, we will have the config file properly filled:
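After deployment, the config file contains something along these lines (the skill ID is a placeholder, and the exact structure depends on your ASK CLI version):

```json
{
  "deploy_settings": {
    "default": {
      "skill_id": "amzn1.ask.skill.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "was_cloned": false,
      "merge": {}
    }
  }
}
```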
ngrok is a very cool, lightweight tool that creates a secure tunnel on your local machine along with a public URL you can use for browsing your local site or APIs.
When ngrok is running, it listens on the same port that your local web server is running on and proxies external requests to your local machine.
From there, it’s a simple step to get it to listen to your web server. Say you’re running your local web server on port 3001. In your terminal, you’d type in:
ngrok http 3001. This starts ngrok listening on port 3001 and creates the secure tunnel:
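The ngrok console then shows the public URLs of the tunnel, something like this (the sub-domain is a placeholder):

```
Session Status                online
Forwarding                    http://xxxxxxxx.ngrok.io -> http://localhost:3001
Forwarding                    https://xxxxxxxx.ngrok.io -> http://localhost:3001
```

Use the https URL as the endpoint of your Skill in the Alexa Developer Console.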
Select the “My development endpoint is a sub-domain…” option from the dropdown and click Save Endpoints at the top of the page.
Go to the Test tab in the Alexa Developer Console and launch your skill.
This was a basic tutorial for learning how to build Alexa Skills using Node.js. As you have seen in this example, the Alexa Skills Kit for Node.js and tools like the ASK CLI help us a lot and make it easy to create Skills. I hope this example project is useful to you. If you have any doubts or questions, do not hesitate to contact me or leave a comment below!
That’s all folks!