
DP-100 Microsoft Data Science – Recommendation System Part 2

  1. Restaurant Recommendation Experiment

Hello and welcome. We are now learning the Matchbox recommender from Azure ML. So far we have seen what a recommendation system is and its types, how the recommender split works, and what the Train Matchbox Recommender and Score Matchbox Recommender modules are, along with their parameters. We also saw the input and output parameters for both these modules. Today we are going to build a recommendation system using all of these. In today’s experiment we are going to build a restaurant recommendation engine. So let’s go to Azure ML Studio.

Alright, so here we are. As you know, we require three types of data sets for a recommender system, and two of them are optional. The first one is the user-item-rating triple. We have that data set provided as part of the sample data sets by Microsoft, so let me search for it and drag and drop it here. All right, and let’s now visualize the data set. Well, it has 1161 rows and three columns: the user ID, the place ID, which is nothing but the restaurant ID, and the rating given by the user for that restaurant. We also don’t have any missing values here. That’s great. So let me close this. The second type of data set that we need is the user features data set, which has also been provided by Microsoft and is called Restaurant Customer Data.
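As a quick aside, if you prefer to inspect this ratings triple in code rather than in the studio, a minimal pandas sketch would look something like the following. The file name and column names here are assumptions for illustration; the sample dataset inside the studio already has exactly this user ID / place ID / rating shape.

```python
import pandas as pd

# Hypothetical CSV export of the sample ratings dataset:
# one row per (user, restaurant, rating).
ratings = pd.read_csv("restaurant_ratings.csv")   # assumed file name

print(ratings.shape)             # expected: (1161, 3)
print(ratings.columns.tolist())  # e.g. ['userID', 'placeID', 'rating']
print(ratings.isnull().sum())    # confirm there are no missing values
```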

So let me drag and drop it again and let’s visualize it. As you can see, it has got 138 rows and 19 columns. The first column is of course the user ID, and then it has got the coordinates of the address. It also has certain features that can have an impact on an individual’s choice of restaurant, such as smoking status, his or her drinking level, dress preference, the ambience they prefer, the mode of transport they use, marital status, birth year, which can give you an indication of their age and does influence the choice of restaurant in many cases, and various other features of these users. As you can see, the columns smoker, dress preference, ambience, transport, marital status, hijos, activity and budget have missing values.

So we have some work to do there. Let me close this. Because all of them are string features and there are not too many missing values, we can replace them with the mode. So we will need a Clean Missing Data module; let’s drag and drop it on the canvas. All right, provide the right connections and let’s launch the column selector. Let’s select smoker, dress preference, ambience, transport, marital status, hijos, activity and budget and click OK. Let’s keep all the other default values, with the cleaning mode as replace with mode, and it is now ready to run, but I’m not going to run it for now; let’s first complete a few more steps. All right, we have also seen that we need a third data set, called item features, and in this case it will be nothing but the restaurant ID and its details. The one that has been provided by Microsoft as a sample dataset is called the Restaurant Feature Data Set. So let’s drag and drop it and visualize the data set.
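Before we look at that third data set, here is a rough pandas sketch of the replace-with-mode step we just configured. It is only an illustration of the idea, not the exact behaviour of the Clean Missing Data module, and the file and column names are assumptions based on the dataset described above.

```python
import pandas as pd

customers = pd.read_csv("restaurant_customers.csv")   # assumed file name

# Columns with a handful of missing string values, as listed above.
cols_with_gaps = ["smoker", "dress_preference", "ambience", "transport",
                  "marital_status", "hijos", "activity", "budget"]

for col in cols_with_gaps:
    mode_value = customers[col].mode().iloc[0]        # most frequent category
    customers[col] = customers[col].fillna(mode_value)

print(customers[cols_with_gaps].isnull().sum())       # should now be all zeros
```

With that noted, let’s get back to the restaurant feature data set.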

Well, we have here 130 rows and 21 columns, or features, of the restaurants. The first one is the place ID, or restaurant ID. We also have its location coordinates as well as the geometry, name, address, city, state, country, fax number, zip code, whether it serves alcohol or not and, if yes, then what type, the smoking area, the dress code, accessibility and so on. While there is a lot of missing data in columns such as city, country, fax and others, we are not going to clean these values right now, because the fax number is not going to be a feature that affects the rating given by a user. All right, you can pause the video, check all the columns which have missing values and evaluate for yourself whether they have any impact on the ratings given; there is a short code snippet below if you would rather do that check programmatically. For now, I’m going to close this. The next step in building the recommendation engine is to split the user-item-rating triple using a recommender split.
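Here is that missing-value check in code, again assuming a hypothetical CSV export of the restaurant feature dataset; it simply counts the missing values per column so you can judge which ones could matter for the ratings.

```python
import pandas as pd

restaurants = pd.read_csv("restaurant_features.csv")   # assumed file name

# Count missing values per column and show only the columns that have any.
missing = restaurants.isnull().sum()
print(missing[missing > 0].sort_values(ascending=False))
```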

So let’s get the Split Data module, drag and drop it here, and choose the splitting mode as recommender split. We have seen what each of these options means. The fraction of training-only users over here, let that be 0.5, or 50%. Let’s keep the fraction of test users at 0.25, or 25%. For this experiment I’m going to keep all other parameters at their default values and provide the random seed for the recommender split as 123. Now we have configured it and we are ready to train. As you know, for training we use the Train Matchbox Recommender module, so let’s drag and drop it here. All right, it requires three inputs and, as we know, they are the user-item-rating triple, the user features and the item features.
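Before we wire those inputs up, it may help to see roughly what the recommender split does, because it is not a plain row split: it works per user, holding some users out for training only and, for the remaining users, sending part of their ratings to the test set. The sketch below is a simplified illustration of that idea, not the exact algorithm of the Split Data module, and it reuses the assumed file and column names from earlier.

```python
import numpy as np
import pandas as pd

ratings = pd.read_csv("restaurant_ratings.csv")   # assumed file name
rng = np.random.default_rng(123)                  # seed used for the split

# Keep half of the users as training-only users.
users = ratings["userID"].unique()
rng.shuffle(users)
train_only_users = set(users[: int(0.5 * len(users))])

# For the remaining users, move roughly 25% of their ratings into the test set.
is_test_user = ~ratings["userID"].isin(train_only_users)
to_test = is_test_user & (rng.random(len(ratings)) < 0.25)

train_ratings = ratings[~to_test]
test_ratings = ratings[to_test]
print(len(train_ratings), len(test_ratings))
```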

So let’s provide the user-item triple from the Split Data module for training; the first node is the training data set. All right. Now we need the user features and the item, or restaurant, features. But because there are so many columns which have no impact on the ratings, let’s use the Select Columns in Dataset module to pick the user feature columns we want. So let’s search for it, drag and drop it here, provide the right connections and launch the column selector. I’m not going to select the transport, religion, color, weight and height of the user. All the other columns, such as the location, smoking status, level of drinking, ambience and so on, will definitely have some impact or other on the choice of restaurant as well as on the ratings that the user may provide. Okay, so we have selected those; let’s click OK. Let’s now do the same for the item feature data set, that is, the restaurant feature data set in our case. So I’m going to copy and paste the Select Columns in Dataset module, provide the right connections and launch the column selector here. I am going to select only the place ID, the location coordinates, the alcohol status, smoking area, dress code, price and ambience, which seem to have the major impact on the choice of restaurant. All right, and let me click OK.
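For reference, the equivalent column selection in code is just a subset of each data frame, something like the sketch below, where the exact column names are assumptions and you would adjust them to whatever the datasets actually use.

```python
import pandas as pd

customers = pd.read_csv("restaurant_customers.csv")    # assumed file names, as before
restaurants = pd.read_csv("restaurant_features.csv")

# User features that plausibly influence restaurant choice and ratings.
user_features = customers[["userID", "latitude", "longitude", "smoker", "drink_level",
                           "dress_preference", "ambience", "marital_status", "budget"]]

# Restaurant features with a clear impact: location, alcohol, smoking area,
# dress code, price and ambience (column names are illustrative).
item_features = restaurants[["placeID", "latitude", "longitude", "alcohol",
                             "smoking_area", "dress_code", "price", "ambience"]]
```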

Let’s now connect these three different data sets to the Train Matchbox Recommender. All right? I’m going to keep the default values for the parameters of this particular module, and we are ready to run it now. Awesome, it has run successfully. The next step is to score the recommender. So let’s get the Score Matchbox Recommender module: connect the trained recommender to the first node, the test data set to score to the second one, and the user features and item features go here and here. Let me just adjust a few things here. All right, we have seen these parameters earlier, so for the type of prediction we are going to choose the default method of item recommendation, and the recommended item selection will be from rated items, that is, for model evaluation.

And let’s say we want the top five recommendations per user and a minimum of two recommendations for a single user, and let’s run it. Awesome, it has run successfully. Let’s visualize the output of the Score Matchbox Recommender. As you can see, it has provided different items, which in our case are restaurants. But how do we know whether it has performed well or badly? Okay, so let me close this. Well, as you must have guessed by now, we have to use an evaluation module, but in this case it will be Evaluate Recommender. So let’s get it here.

As you can see, it requires two inputs but, unlike in the previous examples, these are not two different model outputs. The first one is the test data set and the second one is the scored data set. So let’s connect the test data set from here all the way down to the first node of the Evaluate Recommender module, and the output of the Score Matchbox Recommender to the second node. All right, let’s run it, and let’s understand the result in the next lecture. We may need to understand a few more terms for evaluating the result of the recommender, so let’s do that in the next lecture. I hope you will be ready with your results from the matchbox recommender. I’ll see you soon.

  2. Understanding the Matchbox Recommendation Results

Hello and welcome to the Azure ML course. In the previous lecture we ran the matchbox recommender and predicted restaurant recommendations for a user. For that, we used three data sets: the user-restaurant-rating triple, the customer features and the restaurant features. We did the data transformation, then selected the relevant features for training the matchbox recommender and, using the test data and the trained model, we finally did the scoring and also evaluated the model. In this lecture we are going to view, understand and analyze the matchbox recommender results.

So let’s first visualize the Evaluate Recommender output. Well, it has only one single value, called NDCG, and our value is 0.903488. Let me tell you that this is a good result. Let’s try to understand what this score is and the intuition behind it. The result for item recommendation is evaluated using the NDCG score, as we saw, and NDCG stands for normalized discounted cumulative gain. Well, it’s a very heavy term, so let’s try to understand what this gain is and how it is calculated. Let’s see that using a search example. Every time we search for an item, or an item is recommended to us, we find some items relevant to us and some not so relevant.

This relevance, or relative judgment, determines the ranking quality of the items. All right? And the basic assumption we make in such cases is that the highest-ranked item should result in the highest gain. That means it should be the most relevant item for the user; otherwise, why recommend it? Well, there are many ways of judging whether a ranked item is the most relevant or not. For example, Google uses multiple metrics to find the relevance from a user’s perspective, such as clicks on a search result, the amount of time spent by a user, how many times it has been shared, and so on.

The second assumption is that a highly relevant item is more useful than a marginally relevant item, and a marginally relevant item is more useful than a non-relevant result. Let’s try to understand this using an example. Let’s say you searched for an item and fired a search query. The recommendation engine then fetched the most relevant results or items. This relevance is as per the recommender and not the user, so it returns items 1, 2, 3, 4, 5 and 6, in that order. Let’s now calculate and score these results. So these are the items as ranked and presented by the algorithm. However, the user does not perceive the same gain or value as ranked by the algorithm. The user finds items 1 and 3 the most useful, with a graded relevance of 4 each, and then the relevance decreases: 3 for item 2, 2 for item 6, 1 for item 5 and absolutely no relevance, 0, for item 4. So the cumulative gain in this case would be the sum of the user-perceived gains, 4 + 3 + 4 + 0 + 1 + 2, which is nothing but 14.

The drawback of just calculating the cumulative gain is that it will remain the same irrespective of the order in which the items are presented. But remember, we want to present the items with the highest value or gain at the top of the page, hence we calculate something called the discounted cumulative gain, using a logarithmic scale. So what we do here is: we have the rank numbers, denoted by i, which go from one to six. The second column is the graded relevance of the result at that position, the next column is the logarithmic discount for that position, calculated using the formula written out just below, and finally we take the ratio of the graded relevance and the log value for that position, which gives us the values in the last column. The sum of all of these is nothing but the discounted cumulative gain. All right, but this is for the results presented by the recommender. Let’s find out what this score would have been if the recommender had been an ideal recommender that matches the perceived gain with the relevance of the result. In the ideal scenario our recommender should have provided the items in exactly that order of relevance, that means gains of 4, 4, 3, 2, 1 and 0 for these items.
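The formula referred to here is the standard discounted cumulative gain over the top p positions, where rel_i is the graded relevance of the item shown at position i and the denominator is the logarithmic discount for that position:

```latex
\mathrm{DCG}_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i + 1)}
```

With the gains from our example (4, 3, 4, 0, 1 and 2 in the presented order), this sum works out to roughly 8.99.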

As you can notice, the cumulative gain is still 14, but let’s see what changes when we use the same formula as before for calculating the discounted cumulative gain. Okay, and as you can see, the log values do not change, but because the item ranking has changed, the ratios give different results. The sum of this column gives us a value of 9.27 and, because it considers an ideal recommendation system, we call it the ideal discounted cumulative gain, or IDCG. So we now know what the DCG, or discounted cumulative gain, is and what the IDCG, or ideal discounted cumulative gain, is. The ratio of these two is nothing but the NDCG, or normalized discounted cumulative gain; a short code sketch that reproduces this arithmetic is included after the end of this lecture. The highest value it can take is of course 1, when our recommender’s predictions are as accurate as under ideal conditions. I hope you can now interpret the result for our restaurant recommendation system, which had an NDCG of 0.903488. That concludes this lecture as well as our section on the matchbox recommender. Thank you so much for attending the lectures on the matchbox recommender. I hope you have enjoyed learning Azure ML and that it has added value to your profile. Thank you for joining me in this course; have a great time.
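To make the arithmetic concrete, here is a small Python sketch that reproduces the worked example from this lecture: graded relevances of 4, 3, 4, 0, 1 and 2 for the order the recommender presented, the same relevances sorted in descending order for the ideal case, and the NDCG as the ratio of the two discounted sums. Note that the 0.903488 from our experiment is computed by the Evaluate Recommender module itself over all test users; this toy example only illustrates the mechanics.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance is discounted by log2 of its position."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

presented = [4, 3, 4, 0, 1, 2]           # user-perceived gains in the order shown
ideal = sorted(presented, reverse=True)  # best possible ordering: 4, 4, 3, 2, 1, 0

dcg_value = dcg(presented)     # about 8.99
idcg_value = dcg(ideal)        # about 9.27, the ideal DCG from the lecture
ndcg = dcg_value / idcg_value  # about 0.97 for this toy example

print(round(dcg_value, 2), round(idcg_value, 2), round(ndcg, 2))
```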