Alexa.Tip – Using Entity Framework in Your C# Alexa Skill Lambda

In this Alexa.Tip series, we explore some little bits of code that can make your life easier in developing Alexa Skills in many languages including C# + .NET, Node.js + TypeScript, Kotlin, etc. We look at concepts that developers might not be aware of, design patterns and how they can be applied to voice application development, best practices, and more!

In the previous Alexa.Tip, we explored Accessing Lambda Environment Variables in .NET in order to access a secure connection string and initialize a DbContext to use.

If you don’t already know how to access Lambda environment variables, PLEASE read that post first, as this post’s samples will NOT show the use of hard-coded connection strings – let’s follow some best practices! 😀

Alright, let’s get into it. If you’re developing Alexa Skills in .NET, you probably need to access data somewhere! At least if you’re building a real-world scale voice application. Also, if you’re a .NET developer, you’re probably used to accessing data through Entity Framework at this point rather than using some of the AWS SDKs and data services such as DynamoDB.

It’s super easy…

Initial Skill

First off, this example uses a super basic skill where the only custom intent is to get facts about animals. Here’s the intent setup in the UI:
[Screenshot: AnimalFactIntent configuration in the Alexa developer console]

You can also find the interactionModel.json here:
https://github.com/SuavePirate/Cross-Platform-Voice-NET/blob/master/src/End/AnimalFacts/AnimalFacts/InteractionModels/alexaInteractionModel.json

Let’s start by abstracting some handlers out so we aren’t writing everything in our Function class – this isn’t JavaScript after all:

WelcomeHandler.cs

public class WelcomeHandler : IResponseHandler
{
    public SkillResponse GetResponse(SkillRequest input)
    {
        return ResponseBuilder.Ask("Welcome to the animal facts voice app! You can ask me about all kinds of animals - try saying, tell me about dogs.");
    }
}

AnimalFactHandler.cs

public class AnimalFactHandler : IAsyncResponseHandler
{
    public async Task<SkillResponse> GetResponse(SkillRequest input)
    {
        // TODO: Do something with data to get the fact.
        throw new NotImplementedException();
    }
}
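Both handlers implement small interfaces that aren’t shown in the post. Here’s a minimal sketch of what IResponseHandler and IAsyncResponseHandler might look like (the exact definitions are an assumption on my part – they just need to match how the handlers are used below):

IResponseHandler.cs

// synchronous handlers build a response directly from the request
public interface IResponseHandler
{
    SkillResponse GetResponse(SkillRequest input);
}

IAsyncResponseHandler.cs

// async handlers return a Task so they can await data access
public interface IAsyncResponseHandler
{
    Task<SkillResponse> GetResponse(SkillRequest input);
}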

Then we can set up our Function:

Function.cs

public class Function
{
    // using these interfaces can help us use something like Moq to unit test this bad boy.
    private readonly IResponseHandler _welcomeHandler;
    private readonly IAsyncResponseHandler _animalFactHandler;

    public Function() 
    {
        _welcomeHandler = new WelcomeHandler();
        _animalFactHandler = new AnimalFactHandler();
    }  

    public async Task<SkillResponse> FunctionHandler(SkillRequest input, ILambdaContext context)
    {
        var requestType = input.GetRequestType();

        if (requestType == typeof(IntentRequest))
        {
            if ((input.Request as IntentRequest).Intent.Name == "AnimalFactIntent")
            { 
                return await _animalFactHandler.GetResponse(input);
            }
        }
        else if (requestType == typeof(Alexa.NET.Request.Type.LaunchRequest))
        {
            return _welcomeHandler.GetResponse(input);
        }    

        return ResponseBuilder.Ask("I'm not sure I know how to do that. Try asking me for an animal fact!");
    }
}

So we have a basic setup, but we need to actually implement something in the AnimalFactHandler. In this case, we want to do a SQL search for a fact about the animal that was asked about!

So let’s start wiring up Entity Framework to this skill:

Add Models and Context

Let’s create a model for our AnimalFact entity to represent our table:

AnimalFact.cs

public class AnimalFact
{
    public int Id { get; set; }
    public string AnimalName { get; set; }
    public string Fact { get; set; }
}

And now let’s create a DbContext implementation with our AnimalFact:

AnimalContext.cs

public class AnimalContext : DbContext
{
    public DbSet<AnimalFact> AnimalFacts { get; set; }
    public AnimalContext(DbContextOptions options)
        : base(options)
    {

    }

    public AnimalContext()
    {

    }
}
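The original post adds data later through a separate web API (you’ll see Swagger used in the Results section). If you’d rather seed a couple of rows directly from the context, EF Core’s HasData seeding is one option – this is my own aside, not part of the original setup, and the sample rows are placeholders:

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // seed a couple of starter rows inside AnimalContext; they are
        // picked up by the next migration you add
        modelBuilder.Entity<AnimalFact>().HasData(
            new AnimalFact { Id = 1, AnimalName = "dog", Fact = "Dogs have an incredible sense of smell." },
            new AnimalFact { Id = 2, AnimalName = "cat", Fact = "Cats spend most of the day sleeping." }
        );
    }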

NOTE: If you are NOT also using this AnimalContext in a web app that handles the migrations, you’ll need to add a .NET Core web app or console project to run them, since migrations need an executable project type. The assumption here is that you already have a project running these migrations so you can actually add data to the database. If you’re starting from scratch, simply create a new .NET Core web app or console app, reference your DbContext from a class library shared between the Lambda project and the app project, then run your migrations from the app project.
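If you go the console-app route, EF Core’s design-time tooling needs a way to construct the context. One way to do that (a sketch, assuming the connection string comes from an environment variable rather than a hard-coded value) is an IDesignTimeDbContextFactory in the project you run dotnet ef from:

AnimalContextFactory.cs

public class AnimalContextFactory : IDesignTimeDbContextFactory<AnimalContext>
{
    public AnimalContext CreateDbContext(string[] args)
    {
        // picked up automatically by `dotnet ef migrations add` and `dotnet ef database update`
        var connectionString = Environment.GetEnvironmentVariable("DatabaseConnectionString");
        var options = new DbContextOptionsBuilder<AnimalContext>()
            .UseSqlServer(connectionString)
            .Options;
        return new AnimalContext(options);
    }
}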

From here on out, we assume you have the db created from your context and some data in it.

Updating the Skill Lambda

Now that we have a DbContext, let’s add it to our Function and AnimalFactHandler in order to search for the fact!

Let’s go to the AnimalFactHandler and set up the class to take the AnimalContext in the constructor (so we can easily unit test it), then use the slot value from the request to query for the animal:

AnimalFactHandler.cs

public class AnimalFactHandler : IAsyncResponseHandler
{
    private readonly AnimalContext _context;
    public AnimalFactHandler(AnimalContext context)
    {
        _context = context;
    }

    public async Task<SkillResponse> GetResponse(SkillRequest input)
    {
        // get the animal name from the slot
        var intent = (input.Request as IntentRequest).Intent;
        var animal = intent.Slots["Animal"].Value;

        // query for the fact
        var animalFact = await _context.AnimalFacts
                            .Where(a => a.AnimalName.ToLower() == animal.ToLower())
                            .FirstOrDefaultAsync();

        // we found the animal!
        if (animalFact != null)
        {
            return ResponseBuilder.Tell(animalFact.Fact);
        }

        // couldn't find the animal - return a decent response
        return ResponseBuilder.Ask("I don't know about that animal. Try asking me about a different one!");
    }    
}
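Because the handler takes the AnimalContext through its constructor, it’s easy to exercise without a real SQL Server. Here’s a minimal test sketch using EF Core’s in-memory provider (Microsoft.EntityFrameworkCore.InMemory) and xUnit – both are my own choices, the original post doesn’t include tests:

AnimalFactHandlerTests.cs

public class AnimalFactHandlerTests
{
    [Fact]
    public async Task GetResponse_ReturnsFact_WhenAnimalExists()
    {
        // arrange: an in-memory context seeded with one fact
        var options = new DbContextOptionsBuilder<AnimalContext>()
            .UseInMemoryDatabase("AnimalFactHandlerTests")
            .Options;
        using var context = new AnimalContext(options);
        context.AnimalFacts.Add(new AnimalFact { Id = 1, AnimalName = "dog", Fact = "Dogs have an incredible sense of smell." });
        context.SaveChanges();

        var handler = new AnimalFactHandler(context);

        // a SkillRequest with the Animal slot filled in
        var input = new SkillRequest
        {
            Request = new IntentRequest
            {
                Intent = new Intent
                {
                    Name = "AnimalFactIntent",
                    Slots = new Dictionary<string, Slot>
                    {
                        ["Animal"] = new Slot { Name = "Animal", Value = "dog" }
                    }
                }
            }
        };

        // act
        var response = await handler.GetResponse(input);

        // assert: the fact came back as plain text speech
        var speech = Assert.IsType<PlainTextOutputSpeech>(response.Response.OutputSpeech);
        Assert.Equal("Dogs have an incredible sense of smell.", speech.Text);
    }
}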

Now that we have the updated handler, let’s update the entry point to initialize and pass in the AnimalContext.

Function.cs

public class Function
{
    // using these interfaces can help us use something like Moq to unit test this bad boy.
    private readonly IResponseHandler _welcomeHandler;
    private readonly IAsyncResponseHandler _animalFactHandler;
    private readonly AnimalContext _context;

    public Function() 
    {
        var connectionString = Environment.GetEnvironmentVariable("DatabaseConnectionString");
        var timeoutSetting = Environment.GetEnvironmentVariable("DatabaseCommandTimeout");
        var optionsBuilder = new DbContextOptionsBuilder<AnimalContext>()
            .UseSqlServer(connectionString, providerOptions => providerOptions.CommandTimeout(int.Parse(timeoutSetting)));
        _context = new AnimalContext(optionsBuilder.Options);
        _welcomeHandler = new WelcomeHandler();
        _animalFactHandler = new AnimalFactHandler(_context);
    }  

    public async Task<SkillResponse> FunctionHandler(SkillRequest input, ILambdaContext context)
    {
        var requestType = input.GetRequestType();

        if (requestType == typeof(IntentRequest))
        {
            if ((input.Request as IntentRequest).Intent.Name == "AnimalFactIntent")
            { 
                return await _animalFactHandler.GetResponse(input);
            }
        }
        else if (requestType == typeof(Alexa.NET.Request.Type.LaunchRequest))
        {
            return _welcomeHandler.GetResponse(input);
        }    

        return ResponseBuilder.Ask("I'm not sure I know how to do that. Try asking me for an animal fact!");
    }
}

Now we can publish this and check out our results!

Results

For example, I’ll use Swagger to create an animal fact, and then ask for it!

Boom – created:
[Screenshot: the dog fact created via Swagger]

And now let’s ask for it!

[Screenshot: Alexa responding with the dog fact]

Conclusion

Adding data-driven responses to your custom Alexa Skills is dead easy using techniques you may already be familiar with if you are a .NET web developer! Show me what you’ve built with .NET and Alexa in the comments!


If you like what you see, don’t forget to follow me on twitter @Suave_Pirate, check out my GitHub, and subscribe to my blog to learn more mobile and AI developer tips and tricks!

Interested in sponsoring developer content? Message @Suave_Pirate on twitter for details.


I’m the Director and Principal Architect over at Voicify. Learn how you can use the Voice Experience Platform to bring your brand into the world of voice on Alexa, Google Assistant, Cortana, chat bots, and more: https://voicify.com/



Alexa.Tip – Access Lambda Environment Variables in .NET

In this Alexa.Tip series, we explore some little bits of code that can make your life easier in developing Alexa Skills in many languages including C# + .NET, Node.js + TypeScript, Kotlin, etc. We look at concepts that developers might not be aware of, design patterns and how they can be applied to voice application development, best practices, and more!

In this post, we’ll look at a simple tip to help secure your Alexa Skills when using an AWS Lambda. In a future post, we’ll look at a similar concept in .NET for developers using ASP.NET Core Web API instead of Lambdas.

So, you’ve taken the step toward building proper data-driven Alexa Skills and have moved past the simple “todo” examples. Perhaps you need a database connection in your source code, need to change values depending on what environment you are running the skill in, or want to run your skill with options you can change without having to redeploy the codebase over and over again.

For all of these problems, your best bet is to use Lambda environment variables. Using them is INCREDIBLY simple, yet I still see many developers who are unaware of them and instead rely on runtime checks, or who never scale their application to the point of needing them.

In this example, we’ll look at how to create an instance of an Entity Framework DbContext and set it up for a testable, injection-based project.

Let’s look at a code snippet from an Alexa Skill running in a .NET Lambda that uses a hard-coded connection string:

Message.cs

// a simple table to represent some messages we can grab
public class Message
{
    public int Id { get; set; }
    public string Content { get; set; }
}

And here’s our DbContext we are going to grab data from:
MessageDbContext.cs

public class MessageDbContext : DbContext 
{
    public DbSet<Message> Messages { get; set; }

    public MessageDbContext(DbContextOptions options) : base(options) { }
}

Now check out a lambda I typically see, with hard-coded connection strings and settings:

BadLambda.cs

public class BadLambda
{
    // an EF DbContext with some message tables
    private readonly MessageDbContext _context;

    public BadLambda()
    {
        var optionsBuilder = new DbContextOptionsBuilder<MessageDbContext>()
            .UseSqlServer("my_connection_string", providerOptions => providerOptions.CommandTimeout(60));
        _context = new MessageDbContext(optionsBuilder.Options);
    }

    public async Task<SkillResponse> FunctionHandler(SkillRequest input)
    {
        var message = await _context.Messages.FirstOrDefaultAsync();
        return ResponseBuilder.Tell(message.Content);
    }
}

So breaking down the code… We have an EF DbContext with one table called “Messages”, which has Id and Content columns, and our Skill Lambda sets up the DbContext with a hard-coded connection string and timeout setting, then returns the first message as a simple response.

Let’s take this lambda and add environment variables so we can run multiple environments against multiple databases (including a localhost database if we want to test locally before deploying to Lambda), and so we can test updates before making changes to our production skill!

In AWS

Head over to your lambda in AWS and scroll down to “Environment Variables”:
[Screenshot: the Environment Variables section of the Lambda console]

Here you can add any key-value pairs you want. For this demo, we want to put our database connection string and our timeout setting here:

[Screenshot: DatabaseConnectionString and DatabaseCommandTimeout environment variables filled in]

Now make sure you smash that save button and head back to Visual Studio to make our final updates!

Don’t forget you can now also create another lambda for a development or staging environment separate from your production lambda – be sure to use a different database connection in your environment variables, and then you can publish to both 😀

In the Code

Now we just have to use the

Environment.GetEnvironmentVariable(string key);

method to grab the values of these environment variables we set up and we’re off to the races!

Here’s what that looks like when updated:

GoodLambda.cs

public class GoodLambda
{
    // an EF DbContext with some message tables
    private readonly MessageDbContext _context;

    public GoodLambda()
    {
        var connectionString = Environment.GetEnvironmentVariable("DatabaseConnectionString");
        var timeoutSetting = Environment.GetEnvironmentVariable("DatabaseCommandTimeout");
        var optionsBuilder = new DbContextOptionsBuilder<MessageDbContext>()
            .UseSqlServer(connectionString, providerOptions => providerOptions.CommandTimeout(int.Parse(timeoutSetting)));
        _context = new MessageDbContext(optionsBuilder.Options);
    }

    public async Task<SkillResponse> FunctionHandler(SkillRequest input)
    {
        var message = await _context.Messages.FirstOrDefaultAsync();
        return ResponseBuilder.Tell(message.Content);
    }
}

Now publish it to your lambda and you’re done!
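One small hardening step worth adding (my own suggestion, not from the original post): Environment.GetEnvironmentVariable returns null when a variable is missing, so a misconfigured lambda fails later with a less obvious error from int.Parse or the connection attempt. A tiny helper that fails fast with a clear message makes that much easier to diagnose:

EnvironmentConfig.cs

public static class EnvironmentConfig
{
    // returns the value or throws with a message naming the missing key
    public static string GetRequired(string key)
    {
        var value = Environment.GetEnvironmentVariable(key);
        if (string.IsNullOrWhiteSpace(value))
        {
            throw new InvalidOperationException($"Missing required environment variable '{key}'.");
        }
        return value;
    }
}

The GoodLambda constructor can then call EnvironmentConfig.GetRequired("DatabaseConnectionString") and EnvironmentConfig.GetRequired("DatabaseCommandTimeout") instead of the raw calls.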

Recap

  • For the love of god, stop hard coding connection strings and settings in your lambda
  • Use environment variables
  • Setup multiple environments for scalability and testability
  • Abuse Environment.GetEnvironmentVariable()
  • Build more Alexa Skills 🙂

If you like what you see, don’t forget to follow me on twitter @Suave_Pirate, check out my GitHub, and subscribe to my blog to learn more mobile and AI developer tips and tricks!

Interested in sponsoring developer content? Message @Suave_Pirate on twitter for details.


I’m the Director and Principal Architect over at Voicify. Learn how you can use the Voice Experience Platform to bring your brand into the world of voice on Alexa, Google Assistant, Cortana, chat bots, and more: https://voicify.com/


Alexa.Tip – Build Better Responses in .NET With SSML

In this Alexa.Tip series, we explore some little bits of code that can make your life easier in developing Alexa Skills in many languages including C# + .NET, Node.js + TypeScript, Kotlin, etc. We look at concepts that developers might not be aware of, design patterns and how they can be applied to voice application development, best practices, and more!

In this post, we are looking at a VERY simple, but useful tool when developing Alexa Skills – SSML or Speech Synthesis Markup Language.

If you’re new to voice application development, SSML is a common standard among all the big players – Alexa, Google, Cortana, etc. In its simplest form, it’s a set of XML elements that tell the voice assistant (such as Alexa) how to speak your response in a particular way. The alternative to using SSML is to return simple text as a string and let the assistant respond with its normal speech. However, this commonly falls short on things like:

  • Abbreviations
  • Dates
  • Emphasis
  • Pauses
  • Tone
  • Pitch

This is especially true when the assistant is reading out a particularly long set of text. It becomes too… robotic. Ironically…

So let’s look at an example of a C# and .NET response using simple text:

Function.cs

public SkillResponse SimpleTextResponse()
{
    return ResponseBuilder.Tell(new PlainTextOutputSpeech
    {
        Text = "When one door of happiness closes, another opens; but often we look so long at the closed door that we do not see the one which has been opened for us."
    });
}

With a response like this, Alexa will likely read it too quickly, which can really diminish the power behind such a great quote (by Helen Keller).

Now we can very easily update a response like this to give ourselves something more powerful:

Function.cs

public SkillResponse RichTextResponse()
{
    return ResponseBuilder.Tell(new SsmlOutputSpeech
    {
        Ssml = 
            @"<speak>
                When <emphasis level=""moderate"">one</emphasis> door of happiness closes, another opens;
                <break time=""1.5s""/>
                <s>but often we look so long at the <emphasis level=""strong"">closed</emphasis> door that we do not see the one which has been opened for us.</s>
            </speak>"
    });
}

Now with this, we can add emphasis of different levels on certain words, add breaks in the speech when we need to, and separate contextual sentences!
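The same idea covers the abbreviation and date items from the list above. Here’s a short additional example (my own, not from the original post) using the say-as tag, which Alexa supports for reading dates and spelling out characters:

public SkillResponse SayAsResponse()
{
    return ResponseBuilder.Tell(new SsmlOutputSpeech
    {
        // interpret-as="date" reads the value as a date; "spell-out" reads it letter by letter
        Ssml =
            @"<speak>
                Helen Keller was born on <say-as interpret-as=""date"" format=""mdy"">06-27-1880</say-as>,
                and her initials are read as <say-as interpret-as=""spell-out"">HK</say-as>.
            </speak>"
    });
}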

For a full list of SSML tags that Alexa supports check this out:
https://developer.amazon.com/docs/custom-skills/speech-synthesis-markup-language-ssml-reference.html

And be sure to check back for more intro to expert level tips on Alexa Skill development!
Or take a look at other posts in the Alexa.Tip series here!

Side note: I’m also working on a tool to make building SSML responses even better – especially for very large and dynamic pieces of conversations. The idea is to offer a more fluent, builder-style approach for something like this:

// returns a string
return SsmlBuilder.Speak("When")
    .Emphasis("one", Emphasis.Moderate)
    .Speak("door of hapiness closes, another opens;)
    .Break(1.5)
    .Sentence("but often we look so long at the closed door that we do not see the one which has been opened for us.")
    .Build();

Or something like:

// returns a string - uses string searching
return SsmlBuilder.Speak("When one door of happiness closes, another opens; but often we look so long at the closed door that we do not see the one which has been opened for us.")
    .Emphasis("one", Emphasis.Moderate)
    .Break(";")
    .Sentence("but often we look so long at the closed door that we do not see the one which has been opened for us.")
    .Emphasis("closed", Emphasis.Strong)
    .Build();

Or:

// returns a string - uses indexes and lengths
return SsmlBuilder.Speak("When one door of happiness closes, another opens; but often we look so long at the closed door that we do not see the one which has been opened for us.")
    .Emphasis(4, 3, Emphasis.Moderate)
    .Break(47)
    .Sentence(49, 112)
    .Emphasis(59, 6, Emphasis.Strong)
    .Build();

Let me know what you think in the comments or on twitter @Suave_Pirate – Would you rather build SSML Responses like this or is writing the XML in a big fat string working alright for you?


If you like what you see, don’t forget to follow me on twitter @Suave_Pirate, check out my GitHub, and subscribe to my blog to learn more mobile and AI developer tips and tricks!

Interested in sponsoring developer content? Message @Suave_Pirate on twitter for details.


I’m the Director and Principal Architect over at Voicify. Learn how you can use the Voice Experience Platform to bring your brand into the world of voice on Alexa, Google Assistant, Cortana, chat bots, and more: https://voicify.com/