
AI-102 Microsoft Azure AI – LUIS – Language Understanding Service

  1. Overview of LUIS

All right, so in this section of the course we’re talking about a new Azure service called the Language Understanding Service, which is abbreviated as LUIS. Now, the Language Understanding Service is a new step that goes beyond the text analytics and speech analytics services we’ve seen previously in this course, where you can take a bunch of text, either written or spoken, and have the computer understand keywords and phrases from it, maybe even summarize the text into a smaller subset. But now we need to get this into a programmable state where you’re able to actually take actions based on your users’ intentions. And within the Language Understanding Service, one of the core concepts is this concept of an intent.

And so you’re going to have an application. In the example we’re going to see in a second, we’re going to pretend that we are writing a bot for a pizza restaurant, and users are going to be able to send a message — whether through SMS, a messenger-type service, or email — telling the bot what kind of pizza they want.

And so then the bot, using the Language Understanding Service, is going to understand that they want to order a pizza — that is their intent — and then be able to pull out the properties of that pizza: the toppings that they want, the size, etc., based on understanding that they’re trying to order a pizza. Now, this is a very specialized type of artificial intelligence. As we said at the beginning of this course, there is no working general artificial intelligence. The reason why it’s specialized is because the programmer has to anticipate that the intention the person might have is to order a pizza, and then program all of the logic around that. We can then use machine learning to understand when the user is trying to order a pizza and extract those properties. So we’re going to see in the code in the next video how we set up the model, train the model, and then pass in a phrase, and the model will understand that it matches one of the intents.
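To make that concrete, here is a rough sketch of the kind of structured result we want back from the service for a pizza order. The field names and scores are purely illustrative — they approximate the shape of a LUIS prediction, not its exact JSON schema:

```python
# Illustrative only: the kind of structured result we want back when a user
# says "I want two small pepperoni pizzas". Field names and scores are
# hypothetical, not the exact LUIS response schema.
prediction = {
    "query": "I want two small pepperoni pizzas",
    "topIntent": "OrderPizza",
    "intents": {"OrderPizza": {"score": 0.97}, "None": {"score": 0.02}},
    "entities": {
        "Quantity": ["two"],
        "Size": ["small"],
        "Toppings": ["pepperoni"],
    },
}

# With the intent and entities extracted, the application can take a concrete
# action instead of trying to work with raw text.
if prediction["topIntent"] == "OrderPizza":
    print("Start a pizza order with:", prediction["entities"])
```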

Now think of this in terms of your smart home speaker or your smartphone where you can speak to it. There is only a limited set of instructions that it would understand, but you can say it in any natural language way. So your smart home speaker might be able to control the lighting in your house if you have smart lighting and there’s a specific intent to dim the lights, to turn on the lights, turn off the lights, et cetera.

So these are separate intents. Now, you might be able to tell your smart home speaker in 50 different ways to turn off the lights, and using the Language Understanding Service it will understand, when you do say it, that that is your intent. But in the back end there is only one programmable action, which is to turn off the lights. And so we’re going to see in a second how this expands the possibilities for what your application can do: users can speak to it in their own natural language, and your application may only have a set of five or ten or fifteen actions it can take, but there’s an effectively infinite number of ways users can tell the application to do them.
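As a rough sketch of that idea — the intent names and handler functions below are hypothetical, and the prediction dict is assumed to be shaped like the earlier example — the application only needs one handler per intent, no matter how many phrasings reach it:

```python
# Hypothetical handlers: one programmable back-end action per intent,
# regardless of how the user phrased the request.
def turn_lights_on(entities):
    print("Turning lights on", entities)

def turn_lights_off(entities):
    print("Turning lights off", entities)

def dim_lights(entities):
    print("Dimming lights", entities)

HANDLERS = {
    "TurnLightsOn": turn_lights_on,
    "TurnLightsOff": turn_lights_off,
    "DimLights": dim_lights,
}

def handle_utterance(prediction):
    """Route whatever the user said to the single matching action."""
    handler = HANDLERS.get(prediction["topIntent"])
    if handler is not None:
        handler(prediction.get("entities", {}))
```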

So this is what we’re getting into in this section. This is much closer to what we consider to be AI than simply understanding human speech. Being able to turn speech into a properly transcribed sentence, and even analyze the key phrases and pull out names and places, is a step along that path, but language understanding is several steps beyond that, and we’re going to see that in this section.

  2. Using the LUIS Portal – LUIS.ai

Now, the LUIS service has its own portal, much like machine learning also has a portal. The URL is luis.ai, and we can log in and sign up for the LUIS service here. But first, here are a couple of demos you might find interesting. This one is language understanding: when a user says “book me a flight to Cairo”, we can see that LUIS has a pre-built model for understanding some basic intents. It understood “book me a flight to Cairo” and it’s making a prediction about the intention of the user. The top intention came back as Book flight. So if you had a flight-booking application, you could then go on and start the process of booking a flight based on this information. Now, what’s interesting is you can see what its other predictions might have been. There’s a 98.8% chance that it’s booking a flight, which turns out to be true. There’s a None intention — I suppose if you said something nonsensical to LUIS, it would come back with “we don’t really get the user’s intention” — with about a 4% chance of that. Then there’s a location-related intention — you might say “where is Cairo?” — with about a 1% chance of that being true; the utterance did mention a location.

You can also have a reminder intent, and finally a food order, but those have infinitesimally small scores — on the order of ten to the power of negative seven. It has also extracted the location, Cairo, from that sentence. And so with this kind of return, you can then create an application that moves the user through some type of flight-booking process, and you already know where it is that they want to go based on this analysis. So we’re going to click on the login / sign-up link. Now, I’m already signed into my Azure account in another tab, so it’s hopefully going to recognize me and I won’t have to put in my information. But you’ll see here that I do need to authorize LUIS against my subscription. I do have a couple of subscriptions; I’m going to choose my course training one, and I’m going to say create an authoring resource.

And so this is basically going to create that Cognitive Services account against this subscription so that I can then use LUIS. I’m going to have to create a resource group for it, or put it into an existing resource group — I’ve got one here. You give it a name — this is going to be my LUIS resource, call it whatever you wish — and it goes into a location. And once again, maybe not every location in the world is going to have Cognitive Services available, but quite a lot of them do. You can see Switzerland and Australia and places like that on this list. You’ll see I tried to find Canada — Canada is not on the list.

So that’s an example of a region that does not have Cognitive Services available. I’ll put it in West US and say Done. So that’s going to go and create this Cognitive Services account, and I can then start to create LUIS applications against it. So there is this portal interface for creating LUIS apps, and we can certainly take a look at that in this course as well. We are going to shortly switch over to the Python script, where we can see that we pass in our credentials and then basically build an intent relating to ordering pizza in this particular example. But right now we don’t have any apps here.

So we’re going to click on New app, because we have to start from somewhere, and this is going to be my first LUIS app. Now, culture: it says in the tooltip that it’s not the interface language, but the language that your app is attempting to understand. There are some examples — you have your typical English, but you can also have Dutch, French, Brazilian Portuguese, et cetera. It is a limited list of languages, but not super limited. So this is an English app, and this is just going to be a demo app. Eventually we’re going to have to create a prediction resource when we publish — we don’t have one currently, but that’s not going to stop us for now. So we’re creating our first LUIS app using the interface. Now, there is this tutorial-type walkthrough taking you through designing and improving your app, et cetera. I mentioned before that, as an app designer, a lot of this revolves around intents. In the case of ordering a pizza, if the goal is that the user wants to order a pizza, then you’re going to have to recognize that as a pizza order, and from there it goes into how your application handles the information from that order.

Now, we’re starting off from scratch here, and maybe we want to look at some of the pre-built templates that we can develop from. If we look on the left here, we have prebuilt domains, and if I click on it, I can see that Microsoft Azure does have some templates that we can build our application off of. These include intents, entities (which we haven’t really discussed yet), and utterances. So we can look at some of these pre-built domains and say, let’s say we’re creating an email-type service; then we can add the pre-built Email domain, and it’s already going to have a lot of those predefined intents, entities, and utterances around sending emails. So I click on the Add domain button for Email, and you can see I can add more than one.

So if my app does email as well as home automation or calendar, I can have multiple domains. And if I don’t want one anymore, I can remove it. If we go over to the Intents tab now, there’s a bunch of intents that weren’t there before, and they all have to do with the Email domain. There are examples under each of these. So if we want to look at CheckMessages, we go under there; here are some example user inputs: “please check my Outlook”, “show latest emails”, “show my emails”, “show my unread emails”. And you can see that it’s understanding certain features and entities of those statements. So “show my unread emails” is clearly different from just “show my emails”, because unread is a property of those emails and you’re not going to show them the read emails.

So again, I think there were something like 40 example user utterances that come pre-loaded with this domain. If we go on to the Entities tab, we can see that it needs to understand things about your email, including the message itself, the date, the contact name, the subject line, et cetera. Now, if this looks like a lot — and it certainly is — and you don’t necessarily want to support all of these actions with your machine learning model, of course we could remove the Email domain, go back to Intents, and start to add those intents manually. So here we’re back to having no intents. You can obviously come up with your own.

You don’t have to use the pre-built ones, or there’s this “Add prebuilt domain intent” option, and the email intents are in here. So maybe you want to support the CheckMessages and Delete options for this language understanding application, but you don’t want to support all those dozens of others; you can just pick those two. They’ll come with their examples — you don’t have to pick the entire email universe within prebuilt domains. And so this is the basis for starting your own application that can understand either spoken or textual human language, so you can then write a program that acts on it.

  3. Creating a LUIS App Using the Portal

Alright, I’m going to switch back over to the prebuilt domains and add back in the entire Email domain for this example, instead of trying to manually create one or two. So if we go back to Intents, we’ve imported back all of the intentions of the application, including Delete and CheckMessages, as well as Forward, Read, Reply, Search, etc. Now, entities are, effectively, the data that your spoken or textual input could contain. So if I said “send an email to Sally”, well, Sally would be recognized as an entity, because sending an email is my intention, but Sally is the contact I want to send it to. And if we go back to Intents and into SendEmail, there are over 100 example utterances in here. We can see that you can speak the name of the email address; you can say “send an email today” and it understands the date part of that; “send an email now” and it also understands the time part of that, et cetera.

So we are now getting subjects like “swim team practice” or “window that is broken”, et cetera. You can see that now that we’ve added entities, the utterances become more intelligent: it understands what I’m trying to do, the subject line, the destination, what I want the message to be — “let’s have a meeting” — et cetera. So the entities are the data. Okay? Now, the final thing to do, if we are happy with this, is to train — but it warns that some intents have no utterances or patterns. So if we go under the intents, we see there’s this None intent with 119 utterances where we don’t know what the intention is, and this has to do with home automation. So obviously I’ve messed this up somehow by playing around with this too much. Now, I could start over and create an empty application, or I could just start deleting these empty intentions. So that’s what I’m going to do.

All right, so after cleaning out all of the empty intentions, now I should be able to click Train. You can see it says it has untrained changes. Now, I am using prebuilt domains here, and since it’s prebuilt, it shouldn’t really need to be trained, but it seems to think it does — and who am I to refuse it? So it’s now training the LUIS model. Once we pass text to this application, it’s going to return back to me what the user’s intention is, along with the data they provided, and I can then build my application off of those intents.

You can see the training has completed very quickly and the app is up to date. So I have my first LUIS app, which is a trained model around the email domain using the prebuilt domains, and from this point forward we can use this in our applications. There’s a test feature; we would need a prediction resource in order to publish this to a destination, but we’ll leave that for now. Now we can try testing this model. There is a Test button at the top that opens up this little panel and says to type a test utterance. Now, the accuracy of this really does depend on how good the samples we provided are. So let’s say “search email from Mary yesterday” — that could be something someone would type or say to their email bot. And let’s have a look, with this Inspect button, at what the computer understood. The intention, at 87%, is for searching, and that makes sense. We can see that it did recognize the entities of being from Mary and from yesterday, so there are some entities that got extracted from my statement. There’s also another entity that just recognized a time.

It doesn’t assign it to the email date, but that’s pretty good. Let’s say “send email to Bob asking about lunch”. I should probably have pre-tested this, but here’s a SendEmail intent, again at 99.9%, with contact name Bob and email subject “lunch”. Now, it doesn’t have a well-thought-out body saying “hey Bob, what do you want to do for lunch?”, but I could start up that email, have it pop up on my screen with the subject filled in as “lunch”, and I can then email Bob and say “hey, what do you want to do for lunch?”. You will also find that when you provide a nonsensical, unrelated utterance, this app is not really designed to handle anything that isn’t email. So if I said “turn off the lights” and click on it, it chose the Cancel intention, but with only 7% confidence, and it didn’t really pull out any entities, because this model wasn’t trained on home-automation utterances. And so when you do your programming, you are going to want to make sure that the intention score reaches a certain threshold. You may want to say minimum 75% intention; otherwise you’re confused and you want to ask for clarification. So we can already see that the model we’ve built understands email-related conversations with high confidence and does not understand non-email-related conversations. It does try, but it comes back with a low confidence.
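A minimal sketch of that thresholding idea, assuming a prediction result shaped like the earlier examples (the 0.75 cutoff is just the value suggested above, not a LUIS default):

```python
CONFIDENCE_THRESHOLD = 0.75  # minimum intent score before we act on it

def respond(prediction):
    top = prediction["topIntent"]
    score = prediction["intents"][top]["score"]
    if score < CONFIDENCE_THRESHOLD:
        # Low confidence (e.g. "turn off the lights" sent to an email model):
        # don't guess -- ask the user to rephrase instead.
        return "Sorry, I didn't quite get that. Could you rephrase?"
    return f"Handling intent '{top}' at {score:.0%} confidence"
```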

  4. Creating a LUIS App Using the SDK

Now, as you would expect, we can do in Python — or with any SDK language — the same thing we did in the portal. I’m in my GitHub account, in the AI-102 files repository, and there is a LUIS subfolder that contains a Python script. We can see here that we are importing a couple of SDKs: the Cognitive Services language LUIS authoring and runtime packages — the authoring client and the runtime client. Now, we’re going to use our Cognitive Services setup here, and there is a create_app function that this code contains. Let’s scroll down to the create_app function: you can see we’re setting up a definition for our app with a name, a version, and the culture, and we’re using the authoring client to call apps.add to create ourselves an app. That will run and come back with an app ID.
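As a sketch of that app-creation step, using the azure-cognitiveservices-language-luis authoring SDK — the key, endpoint, and app name below are placeholders rather than the exact values from the repository’s script:

```python
# Sketch of creating a LUIS app with the authoring SDK. The key, endpoint,
# and app name are placeholders; swap in your own resource values.
from azure.cognitiveservices.language.luis.authoring import LUISAuthoringClient
from msrest.authentication import CognitiveServicesCredentials

authoring_key = "<your-authoring-key>"
authoring_endpoint = "https://<your-authoring-resource>.cognitiveservices.azure.com/"
app_version = "0.1"

client = LUISAuthoringClient(
    authoring_endpoint, CognitiveServicesCredentials(authoring_key)
)

# apps.add takes the app definition (name, initial version, culture) and
# returns the new app's ID, which the rest of the authoring calls need.
app_id = client.apps.add({
    "name": "Pizza Ordering App",
    "initial_version_id": app_version,
    "culture": "en-us",
})
print("Created LUIS app:", app_id)
```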

Going back up: after we’ve created the app, the next thing we need to do is add our own new intention. This is a custom intention called OrderPizza, and it’s just the client’s model add-intent method that creates the intent for us. We also need those entities, right? This is the data that comes from ordering a pizza. So there’s an add-entities function here, and we can see that we’ve set up the concept of a pizza as having a quantity, a type, and a size, as well as the concept of toppings, which have a type and quantity of their own. It’s also adding the prebuilt quantity (ordinal number) entity, and it’s building a phrase list with words like “more”, “less”, “few”, et cetera. So you can sort of examine here that it’s basically building relationships between how many and toppings — how many pizzas you want, how much extra cheese, et cetera.
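Continuing the sketch, here is the intent plus a simplified version of the entity structure. The real script builds a richer hierarchy (toppings with their own type and quantity, the prebuilt number, a phrase list), so treat this as an illustration of the pattern rather than the repository’s exact code:

```python
# Add the custom OrderPizza intent to the app version created above.
intent_name = "OrderPizza"
client.model.add_intent(app_id, app_version, intent_name)

# Add a machine-learned entity with child components for the pieces of data
# we want extracted from each order (simplified from the full script).
client.model.add_entity(
    app_id,
    app_version,
    name="Pizza",
    children=[{"name": "Quantity"}, {"name": "Size"}, {"name": "Type"}],
)
```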

Okay, so now it’s added not only the entities but also the features of those entities. Going back up again, there’s also a function for examples, because any machine learning model is going to need some data. Now, obviously this is a very limited sample set — you’re going to want to build your model with a lot more samples — but you can see “I want two small seafood pizzas with extra cheese” (not into seafood pizzas myself). That basically takes the utterance and turns it into data, and we get the pizza entity. Remember it says “two small”, and so we’re expecting to identify the index of that string: based on the position and length of the string, where’s the quantity, where’s the size, where’s the type, where are the toppings — again, based on the characters.
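Here is roughly what labelling one example utterance by character position looks like. The dictionary keys approximate the SDK’s example and label objects, and the entity names match the simplified sketch above rather than the repository’s exact model:

```python
# Label one training utterance by locating each entity value's character
# indexes within the text, then add it as an example for the intent.
utterance = "I want two small seafood pizzas with extra cheese"

def label(entity_name, value):
    """Find an entity value inside the utterance by character index."""
    start = utterance.lower().find(value.lower())
    return {
        "entity_name": entity_name,
        "start_char_index": start,
        "end_char_index": start + len(value),
    }

example = {
    "text": utterance,
    "intent_name": intent_name,
    "entity_labels": [
        label("Quantity", "two"),
        label("Size", "small"),
        label("Type", "seafood"),
    ],
}

client.examples.add(app_id, app_version, example)
```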

Remember in the LUIS interface, when we were looking at entities, you could see it was actually tagging the entities within those phrases — and this is exactly what we’re doing here. Then you’re basically adding those examples to your model. Going back up to here, this is where you hit the training step, like we did with the Train button, and we wait for the training to be completed. We’re checking the status of the training with a get-status call, and it’s basically going to wait 10 seconds and then check again, and again. Now, if we want to publish this, we can publish it to a slot and then send it the real data — “I want two small pepperoni pizzas” — and it will understand the order-pizza intent and the bits of data, the entities, that are in this sentence. So we can do in Python exactly what we did in the interface, and we can see that running. For this example, I also want to show you that we do need to grab the various keys and endpoints from the Language Understanding resources.
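A condensed sketch of the train / publish / predict steps, following the pattern of Microsoft’s LUIS SDK quickstart. It assumes the client object, intent, and app_id from the earlier snippets; the prediction key and endpoint are placeholders for your prediction resource’s values:

```python
import time

from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials

# Kick off training, then poll get_status every 10 seconds until no model
# for this version is still queued or in progress.
client.train.train_version(app_id, app_version)
while True:
    statuses = client.train.get_status(app_id, app_version)
    if all(s.details.status not in ("Queued", "InProgress") for s in statuses):
        break
    time.sleep(10)

# Publish the trained version to the Production slot.
client.apps.publish(app_id, app_version, is_staging=False)

# Query the published model with the runtime client.
prediction_key = "<your-prediction-key>"
prediction_endpoint = "https://<your-prediction-resource>.cognitiveservices.azure.com/"
runtime = LUISRuntimeClient(
    prediction_endpoint, CognitiveServicesCredentials(prediction_key)
)
response = runtime.prediction.get_slot_prediction(
    app_id, "Production", {"query": "I want two small pepperoni pizzas"}
)

# The response carries the top intent and the extracted entities.
print("Top intent:", response.prediction.top_intent)
print("Entities:", response.prediction.entities)
```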

So we can go into the authoring resource, grab one of the keys, and paste it into the code under the authoring key — the same key also works for the prediction key. We’re going to get the endpoint from the authoring endpoint and paste that in there, and we also have to go back to the other Cognitive Services resource and grab the prediction endpoint, which we’ll put in there. And now we can run the code. What we’re expecting to happen is, since we’ve passed it some very basic pizza training, we’re going to get a result. And so for “two small seafood pizzas” it’s predicted the pizza intention, and it is understanding the quantity, the type, the size, et cetera. So we can see that it’s pulled some information out of our sentence here. Now, one thing we can do is go into the conversation apps section, and we can see that this model now exists, right?

So we’ve created it and published it, and we can go into it there. So even though we created it using code, it does exist within the LUIS interface, and if we want to continue to edit it within the interface, we can certainly do that. So in this example we’ve seen how you can use the Language Understanding Service to train a model to understand the intentions behind what the user is asking, and to pick out the various relevant entities, so that you can actually extract that information and use it in your application. You can do that in code as well as through the user interface.