
Pass Microsoft Azure AI AI-102 Exam in First Attempt Guaranteed!

Get 100% Latest Exam Questions, Accurate & Verified Answers to Pass the Actual Exam!
30 Days Free Updates, Instant Download!

Verified By Experts

AI-102 Premium Bundle
$39.99

AI-102 Premium Bundle: $69.98 ($109.97)
  • Premium File: 286 Questions & Answers. Last update: Mar 15, 2024
  • Training Course: 74 Lectures
  • Study Guide: 741 Pages

Last Week Results!

  • 3500 customers passed the Microsoft AI-102 exam
  • 95.3% average score in the actual exam at the testing centre
  • 90.3% of questions came word for word from this dump
Download Free AI-102 Exam Questions
  • Size: 2.69 MB, Downloads: 141
  • Size: 982.8 KB, Downloads: 921
  • Size: 813.37 KB, Downloads: 1063
  • Size: 791.39 KB, Downloads: 1113
  • Size: 1.72 MB, Downloads: 1105

Microsoft Azure AI AI-102 Practice Test Questions and Answers, Microsoft Azure AI AI-102 Exam Dumps - PrepAway

All Microsoft Azure AI AI-102 certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the AI-102 Designing and Implementing a Microsoft Azure AI Solution practice test questions and answers, and the exam dumps, study guide, and training courses help you study and pass hassle-free!

Object Detection with Custom Vision

1. Train a Custom Vision Object Detection Model in the Portal

Alright, so in this section of the course we're still dealing with Custom Vision, but this time we're going to deal with the other half of it, which is object detection. Now remember, the key difference between object detection and classification is that with detection you are identifying areas within an image. So you may have a larger image and you must identify where in it the objects are located.

So you're going to upload not only the image but also a set of coordinates that identify where the objects are within it, okay? And then when you run against the object detection predictor, it's going to tell you where on the image it detects those objects.

So we do have code for this. If you go to the AI-102 files repository in GitHub and open the Custom Vision Object Detection .py file, you can see the code. We can also do this within the portal, where we've already created the Custom Vision resource in the last section.

But now we can go and create a new project; give this a name. This is going to be called fork detection, and we're using the same resource, the free resource. Instead of doing classification, we're doing object detection. Now again, we do get these domains that have been pretrained. So if we're looking for logos or for products, then we can detect those, or we can basically get a general model to start from.

So we'll start with the general model and say "Create a project." And once again, we're going to have to upload images. In this case, we must not only upload the images but also tag them. Now, if you go back to the GitHub repository under Custom Vision, there's an images folder with both fork and scissors images, and we're going to tag those.

So go back to the Custom Vision website and say "Add images." I'm going to navigate to the directory that contains my forks, select all 20 forks in this directory, and say "Open." Now, the difference between this and the previous example, when we were dealing with classification, is that we have to identify the individual forks on each image. So if I open the image detail, you'll see that it says to create an object, hover over, and select the region of the image. And so I use a little drag and drop here to select the fork part of the image.

And I'll call this tag fork, and I'll have to do this for all 20 images. The portal now has a pretty smart feature where it suggests a region: it detects that there's something there, and I can accept that region and tag it as a fork. I'm going to do the rest of the 20, and we'll continue. All right, so I've tagged all 20 forks and given them bounding boxes. Next, I'm going to add the scissors images.

So go back to the GitHub directory, navigate to "scissors," select all 20 images, upload them, and then go through the same process of drawing the bounding boxes on these. I'm going to let Azure suggest the regions, but I have to tag each one as scissors so that the model can learn to predict them. I'm going to do this 20 times, and we'll come back to that. I've done all 20. Of course, I could have uploaded images that contained multiple pairs of scissors and identified each one, or multiple forks, and identified each of those; one object per image is fine as well. Alright, so now we have 20 forks and 20 scissors. I'm going to do the same thing we did in the last section and use quick training to train the model on these images.

So this is going to take a few minutes to run, and we're again going to build a model that can recognise the difference between scissors and forks. Alright, that just took a few minutes to run, and we can see slightly less than perfect scores. The previous model, with the hemlock and the Japanese cherry, scored 100, 100, 100, and there is some room for error with forks and scissors.

Looking at the definitions of these metrics: precision means that when the model predicts a tag, the prediction is correct. Here it's 100% precise, so every prediction it made was right, but it didn't find every object; recall is the fraction of the actual objects it managed to find. In this case there was at least one fork that was on its side, so maybe some forks were a bit confusing to it. And the mean average precision (mAP) summarises precision across the tags and overlap thresholds, rather than being a kind of error.
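
For reference, the standard definitions behind those numbers (my summary, not from the course) are:

precision = true positives / (true positives + false positives)   (of the objects predicted, how many were correct)
recall    = true positives / (true positives + false negatives)   (of the real objects, how many were found)
mAP       = mean of the average precision, taken across the tags (and overlap thresholds)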

Right, so that explains the precision score. What we can do now is either publish this and use it through the prediction resource, or do a quick test, which is what we'll do. I'm going to go back into the project, select a test fork image for a quick test, and run it to see if it gives us this red box: 39% certain that it is a fork. So let's see, 39% probability. Maybe the darkness of the image or the shadow lowered the confidence, but the prediction is that it's a fork, which is clearly the case. If you zoom the browser out a little bit, you can see the box it comes out with.

2. Train a Custom Vision Object Detection Model in the SDK

Now we can do the same thing using the SDK. If we switch over to GitHub, under the AI-102 files repository, we can see that we're using the same Custom Vision training and prediction clients and models that we used for the classification SDK example. We're going to do all the regular setup of the clients: the training client and the prediction client.

We give it a name, and in this particular case we're going to choose the general object detection domain, create a project using the create project command, and do the same thing that we did in the portal, which is to create tags for forks and scissors. Because we don't have the UI here, we must upload the regions ourselves in this case.
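
As a rough sketch (not the exact course code; the endpoint, keys, and names below are placeholders), the client setup and project creation with the Python SDK looks something like this:

from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# Placeholder endpoint and keys, not the course's actual values
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
trainer = CustomVisionTrainingClient(endpoint, ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}))
predictor = CustomVisionPredictionClient(endpoint, ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"}))

# Pick the general object detection domain and create the project
obj_domain = next(d for d in trainer.get_domains()
                  if d.type == "ObjectDetection" and d.name == "General")
project = trainer.create_project("fork detection", domain_id=obj_domain.id)

# Create the two tags, just as we did in the portal
fork_tag = trainer.create_tag(project.id, "fork")
scissors_tag = trainer.create_tag(project.id, "scissors")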

So these are hand-coded regions describing where on each image the forks and scissors are. Then we put those regions into an array: for each file, we associate the file with its region. We then call the create images from files method for this project and upload all those images, forks and scissors, with their tags and their regions. Once that's uploaded, which takes a moment, we call train project and wait for it to complete.
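
Here's a sketch of that upload and training step, assuming a hypothetical dictionary fork_regions that maps a file name to its normalized left, top, width, and height values:

import os, time
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry, Region)

# fork_regions is a hypothetical hand-coded dict, e.g. {"fork_1": [0.14, 0.26, 0.68, 0.56], ...}
entries = []
for name, (left, top, width, height) in fork_regions.items():
    region = Region(tag_id=fork_tag.id, left=left, top=top, width=width, height=height)
    with open(os.path.join("images", "fork", name + ".jpg"), "rb") as f:
        entries.append(ImageFileCreateEntry(name=name, contents=f.read(), regions=[region]))
# ... build the same kind of entries for the scissors images with scissors_tag ...

# Upload everything in one batch, then train and poll until training completes
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))
iteration = trainer.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(1)
    iteration = trainer.get_iteration(project.id, iteration.id)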

We've seen the iteration loop in the previous video: it polls until training is completed before moving on. In this case, we've chosen to publish that last iteration, and we pass it a test image, the same fork that we just tested with, in the detect image call, and we'll see the results. So switch over to PyCharm and have a look at that in action. You know the routine: we go into PyCharm, load the code, set it up with our endpoint and key values, run it, and then it trains this model with the forks and the scissors.
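
And the publish-and-predict step, again only as a sketch with placeholder names:

# Publish the trained iteration so the prediction client can call it
publish_name = "fork-scissors-detector"  # hypothetical published model name
trainer.publish_iteration(project.id, iteration.id, publish_name, "<prediction-resource-id>")

# Run the published model against the test fork image and print the detections
with open("test_fork.jpg", "rb") as f:
    results = predictor.detect_image(project.id, publish_name, f.read())
for p in results.predictions:
    box = p.bounding_box
    print(f"{p.tag_name}: {p.probability:.1%} at left={box.left:.2f}, top={box.top:.2f}, "
          f"width={box.width:.2f}, height={box.height:.2f}")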

And it's an iterative process where we have to wait for it. Now the training is complete, and it has analysed our test image. In this case, it's a bit messy because it predicted a 75% chance of a fork in that one location, but it also predicted 1% and half-a-percent chances in other locations.

So that result is a bit noisy, but notice how small those percentages are and how small the boxes are. We can go back to the Custom Vision portal, go into the project that we just created with the SDK, and see the tagged data there. We can go into the Performance tab, see how well it did in terms of the percentages, and get the URL and key that we can use in our own code. So just like a project created in the portal, even if you use the SDK, that data is available to you.
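
That prediction URL and key from the Performance tab can also be called directly over REST, without the SDK. Roughly, assuming the v3.0 prediction route (the portal shows you the exact URL, so treat the one below as a placeholder):

import requests

prediction_url = ("https://<your-resource>.cognitiveservices.azure.com/customvision/v3.0/"
                  "prediction/<project-id>/detect/iterations/<published-name>/image")
with open("test_fork.jpg", "rb") as f:
    response = requests.post(
        prediction_url,
        headers={"Prediction-Key": "<prediction-key>", "Content-Type": "application/octet-stream"},
        data=f.read())
for p in response.json()["predictions"]:
    print(p["tagName"], p["probability"], p["boundingBox"])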

3. Custom Vision Object Detection using Visual Studio 2019 and C#

So let's talk about the Custom Vision service in .NET. Now, I'm not going to load this up in Visual Studio because it's pretty much the same demo as we just did with the forks and scissors in Python. The Custom Vision service is more akin to traditional machine learning in that there is a training phase and then a prediction phase, which is sort of a production service for using the model that you created.

So you start by training a model. You're passing it your own custom photos with particular bounding boxes. This is an object detection model. So you basically mark the image with the object you want to detect, and then you can make the prediction using the prediction endpoint. So we've got the endpoint. In this case, there are two keys.

One is a training key, and one is a prediction key, which means we have to authenticate twice: once for the training and once for the prediction. You create a brand new project, tag the images, import those images into the training project, and instruct the training to run. When you're ready, you publish the iteration that you want, and then you can run predictions against it.

You use the training API all along until you're ready to use the prediction API. Right away I see that there's potentially a bug in this sample, in that it's using the training authentication method when it should be using prediction authentication. Anyway, it's the same endpoint with a different key, so the result is the same. You create the project and create the tags; in this case, you're trying to determine which object is a fork and which is a pair of scissors.

And now you're passing in images with bounding boxes to determine which ones are scissors and which ones are forks. You point to the fork directory and upload those, then point to the scissors directory and upload those, and finally you call the train project method on the training API to tell it it's time to go and learn the difference between a fork and a pair of scissors.

While the status is "Training," you just repeatedly wait a second and check again until the status is no longer "Training"; then you know it's finished. You can see the publish step and finally the test step, which is where you call the detect image method against the prediction API. As we can see, .NET closely follows Python. It's really just the way the code is written and the particular names of the modules that differ; the purpose and style are pretty much the same.

Analyze video by using Video Indexer

1. Overview of the Video Indexer Service

So the last topic within this section is analysing video using the Video Indexer. The Video Indexer service is an artificial intelligence service provided by Microsoft Azure that allows you to upload a video and extract insights from it. You can effectively build a search tool from this, where you're looking for certain words that appear within the video, certain people, etc. Using this as a video search tool is a very interesting concept.

You can even combine it with Media Services to create news clips and trailers: it can extract the key elements from your video and produce a shorter video that contains just the key points. It also provides timestamps for people and for any topic that appears in the video, so if you just want to see what was being talked about at a given point, the Video Indexer service can handle that. Of course, accessibility is very important, so getting closed captions into your video, as well as transcription and translation, are services available for this. Unfortunately, we live in a world where things need to be monetized.

And so if you have a video and you want to provide tags to advertisers, you can automatically show ads related to your video based on its contents. This is very popular on news websites, as is filtering out inappropriate, gory, or adult material. Finally, if you understand the tags of a video, then when someone watches it you can recommend other similar videos, so this could be the basis for a recommendation engine. Now, the Video Indexer is able to detect people and brands in the video, and it also does some language detection as well. So if we go under the concepts section of the documentation, under "brands," we can see that you can provide a video and it can identify where brands appear within that video.

So if you're a company that controls your brand, you can look over a library of videos and see if your brand appears in any of them. In terms of people, once you've identified who you want to look for, you'll be able to determine whether Satya Nadella, or any other high-profile person at your company, appears.

All the videos that contain those people can then be identified. Finally, this is similar to the Speech service, which we're about to cover in the next section on text and speech translation and transcription. The Video Indexer works with the Speech service to transcribe video into closed captions and translate that into other languages, et cetera. You can also train it on very technical vocabulary so that it knows, for instance, that "kubernetes" is not a word that needs to be translated into other languages.

So I'll include this link to the Video Indexer documentation with this video so you can do some additional research. There are also some code samples. If you go into GitHub, there's a link in the documentation right here under the Video Indexer samples. The sample in this case is .NET, and we can see how it connects with the service in order to upload videos and have them analysed.

And so if you are interested in doing this, you can clone this repository, execute it, step through how it runs line by line, etc. For that reason, I'm not going to have my own repository for the Video Indexer service. If you are interested in following up on that, then I certainly recommend you go to the documentation and specifically to the GitHub repository to find out more.
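
If you just want a feel for the API without cloning that repository, here is a very rough sketch of the Video Indexer REST flow. The account ID, key, and URLs are placeholders, and the parameter names should be checked against the official reference before you rely on them:

import time
import requests

location = "trial"  # or your Azure region
account_id = "<video-indexer-account-id>"
api_key = "<api-management-subscription-key>"
base = "https://api.videoindexer.ai"

# 1. Get an account access token that allows uploads
token = requests.get(
    f"{base}/Auth/{location}/Accounts/{account_id}/AccessToken",
    params={"allowEdit": "true"},
    headers={"Ocp-Apim-Subscription-Key": api_key}).json()

# 2. Upload a video by URL and start indexing it
video = requests.post(
    f"{base}/{location}/Accounts/{account_id}/Videos",
    params={"accessToken": token, "name": "responsible-ai-video",
            "videoUrl": "<public-video-url>"}).json()

# 3. Poll until indexing finishes, then read the insights (people, keywords, transcript, ...)
while True:
    index = requests.get(
        f"{base}/{location}/Accounts/{account_id}/Videos/{video['id']}/Index",
        params={"accessToken": token}).json()
    if index.get("state") == "Processed":
        break
    time.sleep(30)

print([face["name"] for face in index["summarizedInsights"]["faces"]])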

2. Video Indexer In Action

Now, the Video Indexer has a portal; some of the cognitive services have their own portals. This one is www.videoindexer.ai. When you go there for the first time, you'll be asked to log in. You can log in with a social account, such as your Microsoft account, or with an Azure Active Directory account.

So I'm going to choose the AAD account; it's going to use the same account that I use for Azure. You can see that it offers to search for any text, person, insight, or object in your videos. Now, if we wanted to, we could pick one of the existing samples; in this particular case, there's a Microsoft keynote and things like that. Or we could upload a video from our local machine or from elsewhere on the web. So I'm going to click Upload, and we can either browse for a file or enter a file URL. Microsoft has a sample video that talks about the use of responsible AI.

And so I'm going to say "Enter file URL," paste in this responsible AI video short link, give it a name, responsible AI video, set the language to English, and leave the default settings. I do have to confirm that this video is not from or for a police department; you'll see this restriction across Microsoft's facial-recognition services. And I'm going to say Upload. We can see the upload progress; it uploads pretty quickly, and then it will index the video for all relevant text, people, insights, and objects. That took a couple of minutes to process, but we can see that the file was indexed five minutes ago, so I'm going to select it now. I could play the video, obviously, but what we're looking for are the insights. And we can see here that, on the right, it has already identified 26 people inside this video.

There is an Eric Horvitz in the video, as well as Roger Brown, Rabbi Yusuf, and other unknown people. We can also see where they appear in the video: the bar here indicates where they appear on the timeline. We can also see a transcript, so it's using the speech recognition service to turn the audio into a transcript with the right time codes.

The tabs along the top provide additional insights: not only the people who are talking, but keywords, labels, and so on. I can choose any piece of text detected in the video and see where it appears, and, as you can see here, OCR has identified text that's embedded in the video, which is pretty cool. Beyond that, we can see the video's topics, audio effects, and keywords, for example if we're interested in understanding when the words "responsible AI" are mentioned. People, text, clothing, human faces, and so on are all examples of labels. All of this is what feeds the search experience.

So what we're seeing is a lot of insights from the video, but there is also a search box here, so if I want to find something specific in the video, I'm just going to randomly pick "windmill." It did find a windmill, and there it is. So I can search for items and objects, and it'll find them in the video. I'll try another one, "bee," and there's a bee. So that's actually pretty cool, the way you can basically turn your videos into searchable content. And lastly, obviously, we went to this Video Indexer website and logged in with our own accounts.

But if we wanted to share this, say all these insights are very useful and we want to put them on our own website, we can get the HTML embed code for an iframe here, so we can basically take all of this functionality from this portal, which requires logging in, onto our own web page. We can choose an Insights widget, a Player widget, or an Editor widget. So that is an example of the Video Indexer service.

Microsoft Azure AI AI-102 practice test questions and answers, training course, and study guide are uploaded in ETE file format by real users. The AI-102 Designing and Implementing a Microsoft Azure AI Solution certification exam dumps and practice test questions and answers are there to help students study and pass.

Run ETE Files with Vumingo Exam Testing Engine

*Read comments on Microsoft Azure AI AI-102 certification dumps by other users. Post your comments about ETE files for Microsoft Azure AI AI-102 practice test questions and answers.
