Practice Exams:

AI-102 Microsoft Azure AI – Create a bot by using the Bot Framework SDK part 2

  1. Bot Framework Adaptive Cards

Alright, so one of the requirements of this exam talks about adaptive cards. There is a sample for adaptive cards; sample number seven here demonstrates how we can use a card to collect user input such as name and age. So let's take a look at that. Click on Python. Let's go to the bot first, as we do, and look at the adaptive cards bot. Now we can see that it is defining these cards as JSON files inside of our resources folder, so we'll have to check those out to see how the cards themselves are defined. It's using the core and schema libraries again, and there's a hello message when you first connect to the bot.

Essentially, when it receives a message, it replies with a card: we can see the reply has text, a type, which is a message type, and some type of attachment. Okay, so let's go up to the resources and see the definitions of these cards. There are five cards here; we can start with the Flight Itinerary card. Now this is going to look a lot like an ARM template, but it clearly has its own card schema. There's a Speak element, and then the body of the card contains a number of things. We're going to see this obviously when we run the bot, but you can see it's sort of like a data structure with some nested elements.
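To picture that reply in Python terms, here's a minimal sketch of a handler that sends one of those JSON card definitions back as an attachment; the file name and folder layout are assumptions based on what the sample describes, not copied from it:

```python
import json
import os

from botbuilder.core import ActivityHandler, CardFactory, MessageFactory, TurnContext


class AdaptiveCardsBot(ActivityHandler):
    """Replies to any incoming message with an adaptive card attachment."""

    async def on_message_activity(self, turn_context: TurnContext):
        # Load one of the card definitions from the resources folder (assumed filename).
        card_path = os.path.join(os.getcwd(), "resources", "FlightItineraryCard.json")
        with open(card_path, encoding="utf-8") as card_file:
            card_data = json.load(card_file)

        # Wrap the raw JSON in an attachment and send it as a message activity.
        reply = MessageFactory.attachment(CardFactory.adaptive_card(card_data))
        await turn_context.send_activity(reply)
```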

So it's actually pretty straightforward, right? The bot itself was not complicated; the resources themselves, the JSON files, contain a lot of the work. Going to the requirements, there are no additional requirements that we don't already have installed. So let's just run this bot and see how these adaptive cards are adaptive. I should be able to just run the app with Python. All right, go to the Emulator, open the bot, and type anything to see an adaptive card. So I can type anything at all. And here's a card, right? In this case it's a review of a pizza restaurant: five stars, three stars, et cetera. So obviously this is attempting to be some type of review. If I do want to see more info, it can open the YouTube link.

I won't in fact do that. Let's just see what else it does. So now it's chosen to give me what looks like an image. You could say it's a weather report, but it's an image, right? A totally different format from the previous adaptive card. Let's go back to the code and see what's going on here. Back in the bot itself, we can see that it's using this create adaptive card attachment method to create the adaptive card. The first thing it does is pick a random number up to the number of cards that we have, then it builds the path to that card, loads it into the card data, and returns the adaptive card, using a card factory to create it. So basically it's just randomly picking one of those cards.
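As a rough Python sketch of that helper, assuming the five card JSON files live under a resources folder (the exact filenames here are assumptions):

```python
import json
import os
import random

from botbuilder.core import CardFactory
from botbuilder.schema import Attachment

# Assumed card filenames; the sample keeps these JSON definitions under resources/.
CARDS = [
    "FlightItineraryCard.json",
    "ImageGalleryCard.json",
    "LargeWeatherCard.json",
    "RestaurantCard.json",
    "SolitaireCard.json",
]


def create_adaptive_card_attachment() -> Attachment:
    """Randomly pick one of the card JSON files and turn it into an attachment."""
    card_index = random.randint(0, len(CARDS) - 1)   # random number up to the card count
    card_path = os.path.join(os.getcwd(), "resources", CARDS[card_index])

    with open(card_path, encoding="utf-8") as card_file:
        card_data = json.load(card_file)              # load the card definition

    return CardFactory.adaptive_card(card_data)       # let CardFactory build the attachment
```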

If we go back up to the source here and look at the resources, we had a restaurant card. Let's see, the first one looked like a pizza restaurant, and actually I think we got this Tom's Pie restaurant as our first sample. Let me go back to the Bot Framework Emulator. Yeah, we got Tom's Pie. I didn't have sound on, so I don't know if it actually spoke this to me via the Speak element, but we can see here that it's building up a column set, a set of rows and columns that contain the name of the restaurant, some sort of text, the boldness of the text, the size of the text, another text block within the column, and then the image in its own column. So basically this is just a table in JSON, and we have an Open URL action as its own row.
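The card itself is just data. A trimmed-down sketch of that kind of layout, with made-up text and placeholder URLs rather than the sample's actual JSON, might look like this:

```python
# A trimmed-down adaptive card: a ColumnSet acting like a table row,
# with text blocks in one column, an image in another, and an OpenUrl action below.
restaurant_card = {
    "type": "AdaptiveCard",
    "version": "1.0",
    "speak": "Tom's Pie is a pizza restaurant rated 4 stars",  # read aloud on speech-enabled channels
    "body": [
        {
            "type": "ColumnSet",
            "columns": [
                {
                    "type": "Column",
                    "items": [
                        {"type": "TextBlock", "text": "Tom's Pie", "weight": "bolder", "size": "extraLarge"},
                        {"type": "TextBlock", "text": "4.0 stars", "isSubtle": True},
                    ],
                },
                {
                    "type": "Column",
                    "items": [{"type": "Image", "url": "https://example.com/pizza.png"}],
                },
            ],
        }
    ],
    "actions": [
        {"type": "Action.OpenUrl", "title": "More Info", "url": "https://example.com/toms-pie"}
    ],
}
```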

Okay, if we look at the weather, I believed this was just an image. Let's look at the large weather card. Again, there might have been some speaking involved, but it looks like an image while actually having a structure. Clicking into it, I can see that it's actually columns as well: text and then these images. And if I click it, it tries to open a Microsoft page. So I thought this was one giant image, but it's actually built dynamically. So this is an adaptive card, the adaptive part being, I guess, that you can have columns and rows, you can put content in here, and if you want to have numbers displayed, you can call an API that's going to return the value that you need, et cetera.
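In other words, because the card is just a dictionary, you can patch a live value into it before sending. A purely hypothetical sketch, where get_current_temperature stands in for whatever API you'd actually call:

```python
import copy
import json


def get_current_temperature(city: str) -> int:
    # Stand-in for a real weather API call; a real bot would fetch this value live.
    return 72


def build_weather_card(template_path: str, city: str) -> dict:
    """Load a card template from disk and patch a live value into its body."""
    with open(template_path, encoding="utf-8") as card_file:
        card = json.load(card_file)

    patched = copy.deepcopy(card)
    # Assumes the first element of the card body is a TextBlock we want to overwrite.
    patched["body"][0] = {
        "type": "TextBlock",
        "text": f"{city}: {get_current_temperature(city)} degrees",
        "size": "large",
    }
    return patched
```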

  1. Tracking Events with Application Insights

All right, so one of the requirements of the exam is to understand how you use logging and telemetry to get insights into the behavior of your bot. There is a sample for that. If we go down from the bot essentials into the advanced bots, we can see there's sample 21, called Application Insights, which demonstrates how to add telemetry to your bot using Application Insights in Azure, and then store that information in Application Insights. Now in this case, there's no Python example of that, so we're going to have to switch back to .NET Core to see how that works.

Now, if we go under Bots, I'm going to choose the dialog bot, and the extension that it's using is called Microsoft.Extensions.Logging, which is how you get stuff into Application Insights. Now, do keep in mind you're going to have to create an Application Insights resource for your web app and have that enabled. So this extension, to start logging, basically allows you to create a logger. You'll notice that this logger is kept as a read-only variable at the class level and instantiated when the bot is created. And in this particular case, it has a very simple usage: it's just taking a "this is where I am" style message and pushing that to Application Insights.
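Since there's no Python version of this sample, here is only a conceptual Python equivalent of that pattern, a minimal sketch using the opencensus Azure log exporter; the class name, the log message, and the placeholder instrumentation key are assumptions:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler  # pip install opencensus-ext-azure

# Tie the logger to an Application Insights resource via its instrumentation key.
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(AzureLogHandler(connection_string="InstrumentationKey=<your-key>"))


class DialogBot:
    """Keeps the logger as a class-level field, mirroring the .NET sample's read-only ILogger."""

    def __init__(self):
        self._logger = logger  # set once when the bot is created

    async def on_message_activity(self, turn_context):
        # The .NET sample just logs a simple "this is where I am" style message per turn.
        self._logger.info("Running dialog with Message Activity.")
```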

Now, since we are dealing with .NET, this is a little bit more complicated than the Python samples we've been looking at. We can see that this is set up as a model-view-controller type program: you've got the controller here, you've got the model here, and then the web front end under wwwroot. Let's look at the app settings. We can see that when you've got the Application Insights resource created, it's going to give you an instrumentation key, and then you have to tie your program to that Application Insights resource using the instrumentation key in appsettings.json.

Then when we start up the application, we've got the Application Insights module here, and we can basically start to set things up; we're creating the telemetry. So we're creating an Application Insights telemetry client and all of the settings that go with it, where the memory is, et cetera. In this particular example, it's just setting up most of those before it creates the bot. Now there is this error handler code, and if an error happens, you're going to see that this is where it interacts with the telemetry the most, right? On error, it's going to basically send the exception to the telemetry client, tagged as an unhandled-error type message.

We can see "The bot encountered an error or bug", with the error being built into here, "To continue to run this bot, please fix the bot source code", et cetera. So this is where you can basically push out any errors into your Application Insights so that your development team can monitor for those errors and find the source of those errors instead of digging through a log file. In the old days, we used to write to a log text file and later on to the Windows event log. Now in Azure, you want this to be in a central source; in this case, we're using Application Insights. We can sort of see that the techniques for doing this are here. We don't have Python samples, like I said, but we can sort of see with .NET the types of frameworks it's using in order to push out those errors.
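Again, the sample is .NET, but a rough Python sketch of that on-error hook, assuming a logger that already has an Application Insights handler attached, could look like this (the adapter wiring is simplified):

```python
import logging
import traceback

from botbuilder.core import BotFrameworkAdapter, BotFrameworkAdapterSettings, TurnContext

logger = logging.getLogger(__name__)  # assumed to already have an AzureLogHandler attached

SETTINGS = BotFrameworkAdapterSettings(app_id="", app_password="")
ADAPTER = BotFrameworkAdapter(SETTINGS)


async def on_error(turn_context: TurnContext, error: Exception):
    # Push the unhandled exception into Application Insights instead of a local log file.
    logger.error("Unhandled bot error: %s\n%s", error, traceback.format_exc())

    # Tell the user something went wrong, as the .NET sample does.
    await turn_context.send_activity("The bot encountered an error or bug.")
    await turn_context.send_activity("To continue to run this bot, please fix the bot source code.")


ADAPTER.on_turn_error = on_error
```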

  1. Integrating with Other Cognitive Services

Now one of the other requirements here is to be able to integrate the bot with other cognitive services, right? So now we're getting into just basic programming when it comes to calling APIs for the Language Understanding service or the Speech service, et cetera, alongside a bot. We're going to use the same sample we were just looking at to integrate with the Language Understanding service. If we go under appsettings.json for this sample, we're going to see that it requires not only the Application Insights key but also the Language Understanding service hostname and key, as well as an app ID. And so you're going to need to set up LUIS, basically.
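In Python terms, those three settings are exactly what the SDK's LUIS recognizer wants; a minimal sketch, with placeholder values standing in for your own LUIS resource:

```python
from botbuilder.ai.luis import LuisApplication, LuisRecognizer  # pip install botbuilder-ai

# Placeholder values; these come from the LUIS (Language Understanding) resource you create in Azure.
LUIS_APP_ID = "<your-luis-app-id>"
LUIS_API_KEY = "<your-luis-key>"
LUIS_API_HOST_NAME = "<your-region>.api.cognitive.microsoft.com"

# Tie the app ID, key, and hostname together into a recognizer the bot can call.
luis_application = LuisApplication(
    LUIS_APP_ID,
    LUIS_API_KEY,
    f"https://{LUIS_API_HOST_NAME}",
)
recognizer = LuisRecognizer(luis_application)
```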

Now, in this particular case, once you've set up LUIS, we're going to again get into a startup situation where you're basically going to deploy a LUIS application, which is Language Understanding. It's pretty basic: this is a flight booking application. I'll show you the LUIS app, but it basically comes down to setting it up in Azure; you're going to need Azure for the Language Understanding portion of this. If we go back to the code, we're going to see within the cognitive model here that we're setting up four intents, right? There's the BookFlight intent, the Cancel intent, the GetWeather intent, and sort of this catch-all None intent, for when there's no known intent.

And so this FlightBooking.cs class is going to set up these intents, and to figure out what the top intent is, it's going to basically look for the intent that comes back from LUIS with the highest score. So of all the intents it has scored, whichever has the highest score comes back as the top intent. So we're interacting with the LUIS service in order to determine what the user's intent is.
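Picking the top intent really is just a max over the scores LUIS returns. A small illustrative sketch over a raw result dictionary (the 0.5 threshold is an assumption, not something from the sample):

```python
def top_intent(luis_result: dict, threshold: float = 0.5) -> str:
    """Return the intent with the highest score, falling back to 'None' below a threshold."""
    intents = luis_result.get("intents", {})
    if not intents:
        return "None"

    # Each entry looks like {"BookFlight": {"score": 0.92}, "Cancel": {"score": 0.03}, ...}
    best_name, best = max(intents.items(), key=lambda item: item[1].get("score", 0.0))
    return best_name if best.get("score", 0.0) >= threshold else "None"


# Example: a fake LUIS response where booking a flight wins.
sample = {"intents": {"BookFlight": {"score": 0.92}, "Cancel": {"score": 0.03}, "GetWeather": {"score": 0.05}}}
print(top_intent(sample))  # BookFlight
```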

Now, that's going to lead down a different path. If, let's say, the user does intend to book a flight, once the LUIS service has determined that their intent is to book a flight, then we go, under Dialogs, to the specific dialog for booking a flight. Here we can see that it's setting up this waterfall again, in terms of these five steps: the destination step, the origin step, the travel date step, the confirmation step, and then the final step. Once we know they intend to book a flight, it goes through that typical set of steps, checking, for example, whether the date is ambiguous. All right? And going back up, we can see the other intents, which could be Cancel, or, remember, there was one that had to do with weather, or None, right? And the main dialog is where it's basically going to interact with LUIS in order to determine what the intent is.
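The actual sample is .NET, but a stripped-down Python analogue of that five-step booking waterfall, as a sketch rather than the sample's code, looks roughly like this:

```python
from botbuilder.core import MessageFactory
from botbuilder.dialogs import (
    ComponentDialog,
    DialogTurnResult,
    WaterfallDialog,
    WaterfallStepContext,
)
from botbuilder.dialogs.prompts import ConfirmPrompt, PromptOptions, TextPrompt


class BookingDialog(ComponentDialog):
    """Five-step waterfall: destination, origin, travel date, confirmation, final."""

    def __init__(self):
        super().__init__(BookingDialog.__name__)

        self.add_dialog(TextPrompt(TextPrompt.__name__))
        self.add_dialog(ConfirmPrompt(ConfirmPrompt.__name__))
        self.add_dialog(
            WaterfallDialog(
                WaterfallDialog.__name__,
                [
                    self.destination_step,
                    self.origin_step,
                    self.travel_date_step,
                    self.confirm_step,
                    self.final_step,
                ],
            )
        )
        self.initial_dialog_id = WaterfallDialog.__name__

    async def destination_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        return await step_context.prompt(
            TextPrompt.__name__,
            PromptOptions(prompt=MessageFactory.text("Where would you like to travel to?")),
        )

    async def origin_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        step_context.values["destination"] = step_context.result
        return await step_context.prompt(
            TextPrompt.__name__,
            PromptOptions(prompt=MessageFactory.text("Where are you traveling from?")),
        )

    async def travel_date_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        step_context.values["origin"] = step_context.result
        return await step_context.prompt(
            TextPrompt.__name__,
            PromptOptions(prompt=MessageFactory.text("When would you like to travel?")),
        )

    async def confirm_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        step_context.values["travel_date"] = step_context.result
        summary = (
            f"Book a flight from {step_context.values['origin']} "
            f"to {step_context.values['destination']} on {step_context.values['travel_date']}?"
        )
        return await step_context.prompt(
            ConfirmPrompt.__name__, PromptOptions(prompt=MessageFactory.text(summary))
        )

    async def final_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        # step_context.result is True if the user confirmed the booking.
        return await step_context.end_dialog(step_context.values if step_context.result else None)
```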

Okay? So if we look at the intro step, it's determining whether LUIS is configured correctly or not. Once it is configured, we call LUIS to ask: given whatever the user has entered, what is their intent, what is the top intent? And, as we said, they can either book a flight, get the weather (and there's no code for that), or hit some sort of catch-all saying, "I did not understand, please ask in a different way." So we can sort of see the logic for the bot right here within the main dialog, and that can fire off an additional dialog with BeginDialogAsync on the booking dialog. That makes sense, right?
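And that hand-off, where the main dialog asks LUIS for the top intent and then begins the booking dialog, might be sketched in Python like this, reusing the recognizer and BookingDialog ideas from the earlier snippets:

```python
from botbuilder.ai.luis import LuisRecognizer
from botbuilder.dialogs import ComponentDialog, DialogTurnResult, WaterfallStepContext


class MainDialog(ComponentDialog):
    """Only the act step is sketched here; the waterfall wiring mirrors the booking dialog above."""

    def __init__(self, luis_recognizer: LuisRecognizer, booking_dialog: ComponentDialog):
        super().__init__(MainDialog.__name__)
        self._luis_recognizer = luis_recognizer
        self.add_dialog(booking_dialog)

    async def act_step(self, step_context: WaterfallStepContext) -> DialogTurnResult:
        # Ask LUIS what the user's intent is for whatever they typed.
        result = await self._luis_recognizer.recognize(step_context.context)
        intent = result.get_top_scoring_intent().intent

        if intent == "BookFlight":
            # Fire off the booking dialog; the .NET sample does this with BeginDialogAsync.
            return await step_context.begin_dialog("BookingDialog")

        if intent == "GetWeather":
            await step_context.context.send_activity("TODO: the weather flow is not implemented in the sample")
        else:
            await step_context.context.send_activity(
                "Sorry, I didn't get that. Please try asking in a different way."
            )
        return await step_context.next(None)
```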