Palo Alto Networks PCSAE Practice Test Questions and Answers, Palo Alto Networks PCSAE Exam Dumps - PrepAway
All Palo Alto Networks PCSAE certification exam dumps, study guides, and training courses are prepared by industry experts. PrepAway's ETE files provide the PCSAE Palo Alto Networks Certified Security Automation Engineer practice test questions and answers; the exam dumps, study guide, and training courses help you study and pass hassle-free!
Domain 4 B
1. Domain 4 B
Okay, so let's crack straight on with domain five, as the slide suggests, which is content updates and content management. This really is centred on the marketplace. Creating your own content requires you to know about Docker and how that's utilised within the XSOAR environment. Yes, everything has to do with integrations, scripts, and so on. Okay, so the first part of this is obviously going to be the marketplace, which I've shown you several times before. We've looked at it in a variety of contexts up to this point, I suppose you could say. But the marketplace is a storefront, as the slide suggests, for finding, downloading, exchanging, and contributing security orchestration content within the XSOAR platform.
This is basically where you're going to get your integrations from. This is where you're going to get your content packs and so on. And as we've seen before, it's very easy to use. The exam is going to want you to know how you go about using it. So how would you search for a specific vendor, let's say, or a specific use case? Fortunately, this is actually really intuitive. I mean, as you can see in the screenshot, we've got use cases, integration categories, publisher, free or premium, certified, supported, and so on and so forth. So it is really easy, to be honest. Okay, so the support types differ based on where those content packs have come from. Partner-supported content packs, as the name suggests, are supported by technology partners.
So take one off the top of your head: FireEye, for instance, is a technology partner, and they have content packs with integrations, playbooks, scripts, and so on that interact specifically with their products. The support and maintenance are provided by the partner, and the details of this, including how you would go about looking for support, are all found in the content packs themselves. TSANet, the industry-standard support framework, is required for technology partners. So the exam wants you to know that, yes, this is full and proper support. Developer-supported content packs, on the other hand, are content packs published by third-party developers. Support and maintenance are provided by that developer, and the contact details are included in the content pack details. Additional help and support are always available through the XSOAR discussions in the Palo Alto Networks Live Community.
If you haven't been to the Palo Alto Networks Live Community, then at some point you need to go. It is an awesome community of people. They are very quick to respond, and there is lots and lots of information and really good discussions there. And if you're taking any kind of Palo Alto certification, or if you work with any of their products, it's definitely worth a look. Okay, so the third type is the community-supported content packs, and this applies, as the name suggests, to content packs that are posted by Palo Alto Networks or third-party developers. No support or maintenance is provided for these content packs. Customers are encouraged to engage with the community in solving issues, and I have seen many, many examples of issues being solved very quickly, because the people in the community are genuinely a community, so everybody wants to work together. Okay, so, accessing the marketplace: free access to the marketplace has been shown in previous videos. However, for subscription-based content, it is slightly different. When you're looking to purchase subscription-based content from providers, you need a Customer Support Portal account registered with Palo Alto Networks. This isn't your Cortex login details; this is your Palo Alto Networks account. And then you need to apply for XSOAR Marketplace access through the support hub. And then the user has to have a role assigned in the hub, and these are the roles that can be assigned. As you can see, the privileges of an account administrator, an app administrator, an instance administrator, and a marketplace administrator vary. And with zero trust, if in doubt, work on a zero trust model. The whole idea is to only allow people to do and change what you want them to, nothing else, because that way you also limit people from changing things they don't know about.
You also limit the scope of a breach should credentials be lost. If Dave, for instance, loses his credentials and comes in and says, "You know what, I think my account's been compromised," you know for a fact that Dave only has access to a certain amount of things. Whereas if Dave has full access because he's an account administrator, which has been the default position for quite some time,
you know you've got a problem on your hands, because anyone with the credentials can do anything Dave could do. Okay, so now we're going to take a look at how content can be searched, because the exam really wants you to know that. It wants you to know how you can search the marketplace, update, and download. Now, we've done this before as well, but as a quick refresher, we'll just go quickly through the marketplace and show what can and cannot be done, and how. It's only a really quick one, because we've done it before and it is really intuitive. Okay, so the marketplace: while we've discussed it before, we need to go over it again to make sure we've got everything under control and that everyone knows where everything is and how to search through it, because it's a big part of the PCSAE exam. So this is the marketplace. We can see we've got many ways of searching. We can search by use case, for example: breach and attack simulation, breach notification, brute force, case management, all the usual suspects.
So we've got ransomware, phishing, and other things. If we go there, we can just look to see which ones are available, whether they are free or require a subscription. And it is also to be noted that once you've downloaded a pack, it's not actually regarded as being installed until you've created an instance of it. It's just been downloaded; you have the file. So we've got the SafeBreach breach and attack simulation platform, which I must admit I've never used before. The basics are at the top; this is all that is included in the pack. This is what you're getting. So you have nine indicator types, and it shows the automations that you have: join lists of dictionaries together, group lists of dictionaries by incident fields. Again, this is going into what we were talking about before, about how you can create your own incident fields, and they've created their own for this one. So you have SafeBreach remediation data, which is in the form of a grid table, vendor-specific remediation data, and incident types. These are the two incident types you're going to have: SafeBreach Insight or SafeBreach Simulation.
I can click on those two incident types and the indicator fields: the field itself and what it really is, whether checkbox, number, or short text. You can click into the commands that come with the integration to see what it does. These are the commands that you would run, again with the exclamation mark first, because you're going to run a command in the integration. And then we have the layouts included in this content pack, for the SafeBreach incident types. These are the same as we talked about before with the different layouts. So you create it as an incident, and then, depending on the type of incident, you get your mapping in, and your layout is defined there. And then we have the playbooks for the automation. As you can see, there are playbooks like Create Incidents per Insight and Associate Indicators, Rerun Insights, and so on. So if we want to install this now, we can install it, and then in time it will happen.
Okay, so if we now go to integrations and refresh the content, now we've got SafeBreach there. And as you can see, we've got the same list of commands that we had previously, and we can add an instance. We have all the details now to put in, including the option to use an engine, the API key for SafeBreach, the account ID for SafeBreach, and so on. So, basically, install now, and return to the marketplace. So that's installing a content pack and the parts that you get with it. Obviously, if you don't want to search for anything, you can reset the filters. You could search just by categories: analytics and SIEM, authentication, and so on. You can look for SOAR content that was published by Cortex XSOAR, as we discussed in the last one. So, only published by Cortex XSOAR. And then you can show just the free ones. So, if I clear that off, you have free or premium. There aren't that many premium ones at the moment, as far as I'm aware. Not relatively, anyway; I mean, I spent most of the time looking at the free ones, if I'm perfectly honest. And then you can look for content packs that include the specific types of things that you want.
So do you want layouts? If you go for layouts, then this is going to show you content packs that come with layouts. You can filter by integration, price, or tags as well. As we saw in the slides, it should be noted that you do need an account for the subscription content. So if I go to access my account up here, because I haven't registered, you can see that it needs the Palo Alto Networks Customer Support Portal credentials, which require you to register your support portal account for XSOAR. I haven't done that yet, so I'm not going to, and that's really it. So that's the marketplace gone through. It is really intuitive, to be honest. And again, while we're here, I just want to touch on this because I hated it the first time I did it: if we go to "update available," that updates one pack, and then this does the bulk update. Okay, so we'll move on through this. Now, it's quite a long section, this domain, but it is one of the last two, so we're nearly there. And it really does show the depth and use of XSOAR. So, if we go to IP Quality Score, we can see the integrations that come with IPQualityScore. And then we'll install it, which will download it; then we'll refresh the content. We go back to Settings, Integrations, and then we have IPQualityScore there, a partner contribution, and we can look into it.
We can see the content that's involved: the integration, the dependencies it has on other content packs, and the version history, and there are currently no reviews for it. If you want to, you can delete it, and that removes the pack from XSOAR. Okay, so another part is, of course, what we discussed earlier with the contributions. So, if you want to contribute a pack, you put together a pack and put it up. You select your category; these are all the classifiers that I've created. So we'd add that, followed by dependencies on other content packs. Those are the dependencies I'd need if I wanted to do external scanning mapping. You can see that these are required because this was built using them. If you wanted an automation that I've built, add that one, and then we can see that we've got the dependencies there. So, if I wanted to contribute it now, I could. If we click on that and then contribute, we can see that your contribution needs to be reviewed. "Save and submit your contribution" submits it immediately to XSOAR. You can add a description, or save and download as we did previously.
So that's pretty much the marketplace all the way through. You can browse your installed content packs; there they all are. You can browse the available content packs and download them, whether certified ones, free ones, or paid ones. It is really intuitive. As I say, that's all I'm going to do; a really quick one there, because it doesn't need to go any further than that. I've done it several times now. But you will be asked about it, and the exam will expect you to have a good understanding of how you download. And one thing to get caught out on is when it asks you whether something is installed because you've downloaded it. No: it's there, and it can be used, but creating an instance of a downloaded integration is what's regarded as installing it, because until then, it's just a file on the computer. It's not actually installed on the XSOAR server. Okay, we're now going to talk a little bit about Docker, because the exam wants you to be able to describe the relationship between Docker and the marketplace. So, for starters, I want you to understand why Docker exists.
So what are the benefits of Docker, why do you use it rather than installing everything as an application on the server, and how does that help the ecosystem? And when integrations, content packs, and things like this are disseminated around the community, how does using Docker help with that? Now, this may be a little bit like teaching people to suck eggs. I'm not a Docker expert, so I apologise, but learning some ways around Docker, or some knowledge of Docker, is useful here. However, it isn't massively in-depth, as far as I've seen from the certification, and I stand to be corrected, but essentially they just want you to understand why and how you use it. Okay, so what is Docker? It is what it says there on the slide. It's used to package all the dependencies together for an application. Everything goes into a single container, and the theory is that the container can then run anywhere exactly as it did in the development environment, because it holds all the dependencies and everything within that single image. As previously stated, Docker containers include all the dependencies required for the application to function, so nothing extra has to be installed on each separate machine. Regardless of where you install it, you no longer have to do a pip install. XSOAR primarily employs Docker to run Python scripts and integrations in a controlled environment, isolating them from the server environment in order to prevent accidental compromise of the main XSOAR server.
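To make that concrete, here is a minimal, hypothetical sketch of the kind of Dockerfile that packages an integration's Python dependencies into one image. The base image and the pinned packages are illustrative assumptions, not the actual images XSOAR ships.

```dockerfile
# Hypothetical sketch: packaging an integration's dependencies in one image.
# Base image and package versions are illustrative assumptions.
FROM python:3.10-slim

# Bake the dependencies into the image itself, so nothing needs a
# "pip install" on the host server.
RUN pip install --no-cache-dir requests==2.31.0 dateparser==1.2.0

# The platform runs the integration code inside this container,
# isolated from the server's own Python environment.
```

The point is that the image carries everything with it, so the script runs the same wherever the container runs.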
I would also like to suggest, and it's my opinion, that not only does Docker make things easier and more reliable, but it also cleans up after itself when removing applications. If you're continually adding applications to and removing applications from the underlying Linux infrastructure, it's going to be a lot more messy and need a lot more downtime, whereas Docker just comes in and runs. As long as you maintain it correctly, it is much more reliable and a much better way of doing it. Okay, so that's pretty much what you need to know about Docker. Next, the advantages of custom content over other types of content, and the advantages are fairly obvious. I mean, it's written by you for your environment, for instance. So if you have any quirks or anything that needs doing, you can add that into your code, as opposed to having to try and work around anything. The whole idea behind XSOAR is the ability to integrate entirely and seamlessly with any environment, because wherever you don't have seamless integration, you have gaps. And where you have gaps, people will get through. You can get a 50 pence piece into a round hole, but the hole has to be big enough to allow for the points, and the points allow for gaps, and then before you know it, you've got loads of holes. As we've seen, custom content can be shared with the community and contributed. I dare say you can probably even make a reasonable amount of money from it if you write something really good, so there is a way forward for that. And as I said, it allows complete flexibility within challenging environments. So if you have a certain use case that isn't dealt with by the standard content, have a look first, because there is a wide range. I believe there were over 500 integrations at the time of this video, and that number is growing all the time. So have a look first and see what there is. Most vendors are catered for. And that flexibility is one of the benefits of custom content.
Implementing your new content: once you've designed your new content, you submit it to XSOAR by clicking the contribute button and uploading it, as we saw. It's reviewed for any dependencies that are missing and needed to make it work, and those are added. Your code is also reviewed against their standards, and it can be changed.
If everything is in order, it is then published to the marketplace, where you can download and install it. All the free content is open source and is in the XSOAR GitHub repository under the MIT license. There are slightly different approaches for subscription content that you contribute; it's a bit more a case of crossing your T's and dotting your I's. Everything is just that bit more scrutinised, and you have to include everything in it. You need to make sure that it's going to work, and it's got to be perfect. So there are different dependencies. All content is reviewed, absolutely. It's the official content repository for XSOAR, after all. You can't just write something, send it off to the marketplace, and then suddenly people just start downloading it, because that would make this a very, very unsafe platform. So everything is reviewed before it's available to anyone else. Next, offline installation. As we saw, you can download your integrations and the content packs that you've contributed, and that comes down as a zip file. So if you have an entirely unconnected installation that has no internet access (no network access at all would be pointless, but no internet access), you can download the zip file and then upload that content to the marketplace on your instance. However, before doing that, you have to do these things to stop it from trying to sync with the marketplace as it comes up, because it will try to sync everything that you put on it, and this will essentially make what you're doing perform much faster. These are in Settings, About, Troubleshooting, where you add a server configuration, which we'll just look at in a minute. And that's it for the slides for this one. We'll just go through that, and then that's it for this domain. Okay.
So as we've seen there in the slides, XSOAR relies on Docker underneath in order to host integrations, run Python scripts, and so on, and to keep them entirely separate from the server itself, because a container runs completely on its own. So let's list them; you just have to find your way around it, really.
There's nothing particularly special about the Docker that's on XSOAR, so learning Docker is a good idea. I mean, learning Docker is a good idea anyway; I quite like it. But if we were to look and see the images that we have, we could list them here. We can see the Docker images currently available on my instance, such as the Python 3 images, when they were created, the container ID, how big they are, the tag, and so on. If we want to create a Docker image, we can do that from here as well. You give the new Docker image a name, you add the dependencies that you're going to need (this is straight from the demo version, so you can see it), and then the packages you require. So, if we run that now, that will disappear, and we just wait for it. There you go. And now we've got our new image: its name, its size, when it was created. And that's creating a Docker image. So you can now use that Docker image if you want. You could build any Docker image you want. OK.
1. Domain 5
Hello and welcome back to Domain Four, Part Two. This is going to go into the threat intelligence side of things, which is obviously one of the largest parts of the XSOAR platform and one of the most important, because that's what we're attempting to do: gather as much intelligence as we can and put it to use, because intelligence is nothing unless you can enforce it. For the PCSAE exam, you may be required to describe threat intelligence management capabilities.
So I mean, that's the gathering of intelligence indicators, expiration, aggregation, and then being able to share them with others. So the first thing I want to do is go through indicators: what they are, how to remove the ones we don't want, how to enrich them, and then how to expire them. It should be noted that even after indicators expire, they remain in the system, can be searched for, and so on; they just don't show up as active indicators anymore within an incident. Okay, so the first thing we can do, I think, is have a look at sorting out what we've got here. We have one here with a good reputation.
Good. So we don't really want it to come up in an incident, because there's no point; it's noise. As we said before, the whole idea behind this is to tune out noise. So we'll click into it, and we can see immediately where it is, if indeed that's where it is. And we have the geolocation; we have the identifier for it; we can enrich it, but it's already been enriched. So what I'd be tempted to do is just put a note in a comment at the bottom. Okay? And then we need to delete and exclude. First, you can see it's active; we expire the indicator, and then we delete it and add it to the exclusion list.
So: delete and exclude. Okay, so that's cool. Basically, what that means is that once you've excluded it, it won't feature in an incident. We add the exclusion reason, then hit delete and exclude, and then we've got "indicator not available; it might have been excluded or deleted", which of course we know for a fact it has been. So going back here, what we want to do is try and find one that hasn't been fully enriched. Okay, so we've got this one. So we've got this IP address, and it's active. We know it's bad because the AlienVault Reputation feed says it's bad. There's a timeline for it. So we know it was removed from the AlienVault open feed on July 27, it was cited 414 times, and we can see how its entire timeline traces back from when it was first cited to when it was removed. Okay, so we'll go ahead and enrich the indicator so that it can be run through the other enrichment integrations we have; as you can see, SpamCop, IPinfo. And if we click into these, we can see why Barracuda has given it that verdict: because it was found on the Barracuda block list. So I'd be tempted to just leave that as it is; I don't think we should extend it.
I think that it will expire on its own. And then we have the different details: the ASN, the country, and the geolocation. So now, going back to the good indicators. Take this one, for instance, which is my gateway; you can see we've got the reliability of the different sources. So in the scheme of things, if you have, say, SpamCop, which is A+ reliability, as opposed to AlienVault, which is C, fairly reliable, it will take the reputation score and verdict from the most reliable source. Where you have two sources of equal reliability that disagree on the verdict, say two A+ sources where one says good and the other says bad, it will take the worst verdict. That way, you're always erring on the side of caution. But in this particular instance, we know for a fact that this indicator is my gateway, so we don't need to look at it anymore.
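That reliability and worst-verdict logic can be sketched in plain Python. This is a simplified illustration, not XSOAR's internal implementation; the grade ordering and verdict names are assumptions for the example.

```python
# Rank reliability grades: a higher number means a more reliable source
# (the ordering here is an assumption for illustration).
RELIABILITY = {"F": 0, "E": 1, "D": 2, "C": 3, "B": 4, "A": 5, "A+": 6}
# Rank verdicts from best to worst so that ties fall to the worst case.
VERDICT_SEVERITY = {"Good": 0, "Suspicious": 1, "Bad": 2}

def merge_verdict(sources):
    """sources: list of (reliability_grade, verdict) for one indicator.

    Take the verdict from the most reliable source; if equally reliable
    sources disagree, take the worst verdict (err on the side of caution).
    """
    best = max(RELIABILITY[grade] for grade, _ in sources)
    top_verdicts = [v for grade, v in sources if RELIABILITY[grade] == best]
    return max(top_verdicts, key=lambda v: VERDICT_SEVERITY[v])

# A+ beats C, so the A+ source's verdict wins:
print(merge_verdict([("A+", "Good"), ("C", "Bad")]))   # -> Good
# Two A+ sources disagree, so we take the worst case:
print(merge_verdict([("A+", "Good"), ("A+", "Bad")]))  # -> Bad
```

The design choice is exactly the one described above: the worst thing that can happen is that you defend against something you didn't need to worry about.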
And it is an IP, which we're going to delete and exclude. So now, from here, we can see that it's gone. Okay? And then you would do that more and more. As you can see, you have a related one here; we know that's DNS, so there's no point having it there. So we just carry that out again. And that way, you start to clean up your XSOAR instance. And once you get to the point where, within an incident, you're only seeing indicators that are relevant, it makes it a lot easier to go through, add the comments, and do the threat hunting, because you're not spending time reading through stuff that isn't relevant. Okay, so I just want to quickly go over the feeds as well. So this is the AlienVault reputation feed. As you can see, it says when it was last run and how many indicators it pulled.
Now, it's 100 indicators. The AlienVault Reputation feed is a lot larger than that, but the Community Edition limits you to 100 indicators per feed and five feeds. So clicking there, on show commands, you get the integration commands. So, if we run the AlienVault indicators command, it goes off, and we have 100 indicators there. We can open the full table in a new tab, although it's limited to 50 per page. So those are the current indicators. As well as being able to pull indicators like that, we can also export these to be used in other tools, a SIEM or anything like that, or to create a block list and so on. So you can export this from here and then import it anywhere you can import a CSV or anything like that. To do that, you'd go to the full table. Then we can see that we have the indicators, which we can download as a file or export to CSV. If we just open that, we can see that we've got a comma-separated file that we could then import into another program.
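As a rough illustration of what that exported comma-separated file amounts to, here is a small stdlib-only Python sketch that serialises indicator records to CSV text. The column names are assumptions for the example, not the exact columns XSOAR exports.

```python
import csv
import io

# Hypothetical indicator records, shaped loosely like an exported feed table.
indicators = [
    {"value": "198.51.100.7", "type": "IP", "verdict": "Bad"},
    {"value": "203.0.113.9", "type": "IP", "verdict": "Good"},
]

def export_indicators_csv(records):
    """Serialise indicator dicts to CSV text for import into another tool."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["value", "type", "verdict"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = export_indicators_csv(indicators)
print(csv_text)
```

A file like this can then be imported into pretty much any SIEM or spreadsheet tool.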
So that's how we can share information across. Welcome back. This section is part two of domain four. We're going to look at indicators, threat management, and how we monitor and run integrations within air gaps or sensitive areas such as a DMZ, where we don't want a lot of traffic coming in on protocols such as SNMP and things like this. We can retain that within that environment, providing further segmentation. And of course, segmentation is literally what this is all about. So when you're enforcing the zero-trust security model, you've got to understand that zero trust is literally non-negotiable. We don't want SNMP allowed into our environment just so that we can monitor things, but we still want to be able to monitor it. So with XSOAR, what we do is deploy an engine into that area. That engine then makes an HTTPS connection, an SSL-encrypted connection, back to the XSOAR instance; all of that is encrypted, and then we can run our integrations and commands on that engine from our XSOAR instance. We can also log to it, and then those logs will appear in the XSOAR instance.
So, essentially, with D2 agents we can also run analysis and any forensic work that we need to on those specific boxes within that environment. And where you would previously have, I don't know, a vendor like SolarWinds polling the servers within that environment, or SNMP trapping out, there would be lots of communication on lots of well-known ports, and lots and lots of different issues with those. With this approach, you would then be exposing only, as a general rule, your front-end web server, with the back-end database servers usually behind that. As a result, you can say that you are severely restricting access in and out. So, let's begin with indicators. Indicators are a component of the overall threat intelligence process. We talked a little about the indicator feeds previously, and when we start talking about indicators, we start to talk about IP addresses, URL reputations, files, and so on and so forth.
So we'll just go through these slides quickly to give you an idea of ingestion, status, expiration, and exporting; you're literally on the edge of your seat, aren't you? Okay, so indicators can come from feeds, or be ingested from CSV, JSON, or text files. So whilst obviously the aim is to import these things automatically or by process, you can pull them from other instances, get email feeds, and things like this, and then they'll go into a CSV or JSON file that can be imported manually. This can be done automatically through feeds, or it can be done through an incident that's created, where you upload a file to it. Okay, there are many ways of getting indicators into XSOAR. So, obviously, we want to make sure of what we're looking at, as all of this is designed to reduce noise, because it's in noise that people can come in and start to compromise things without us seeing them. You can't separate the noise from the bad actors.
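To illustrate the manual-import side, here is a minimal Python sketch that parses a JSON feed payload into normalised, de-duplicated indicator records. The field names and values are hypothetical; real feeds each have their own schema.

```python
import json

# A hypothetical JSON feed payload; field names are illustrative assumptions.
raw_feed = '''
[
  {"indicator": "203.0.113.10", "type": "IP"},
  {"indicator": "evil.example.com", "type": "Domain"},
  {"indicator": "203.0.113.10", "type": "IP"}
]
'''

def ingest_feed(text):
    """Parse a JSON feed and normalise it into unique (value, type) records."""
    seen = set()
    records = []
    for entry in json.loads(text):
        key = (entry["indicator"], entry["type"])
        if key not in seen:  # drop duplicates on ingestion
            seen.add(key)
            records.append({"value": entry["indicator"], "type": entry["type"]})
    return records

print(ingest_feed(raw_feed))
```

The de-duplication step here is a toy version of what the platform does far more thoroughly when it merges the same indicator arriving from multiple sources.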
So we need to control whether indicators are active or expired. An expired indicator is still within the system, but it's not flagged as an active indicator within an incident or anything like that. And you need to define how the indicators will be expired. For instance, are you going to expire them over time? Or are you going to expire them by virtue of a feed that then says, "Okay, this indicator is no longer valid"? That type of thing. Then, with smart merge: as a general rule, you'll ingest feeds, and then an incident will come in or be created, and that incident will then be enriched; the data will be enriched by one or more integrations. I would venture the opinion that you're probably going to have a lot more than one integration doing the enrichment.
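Those two expiration approaches, time-based and feed-based, can be sketched like this. The 30-day interval and the policy names are assumptions for illustration, not XSOAR defaults.

```python
from datetime import datetime, timedelta

# Assumed time-to-live for the time-based policy (illustrative only).
INTERVAL = timedelta(days=30)

def is_expired(first_seen, now, still_in_feed, policy):
    """Decide whether an indicator should be flagged as expired.

    policy "time" expires after a fixed interval; policy "feed" expires
    when the source feed no longer lists the indicator. Either way the
    indicator stays in the system and remains searchable - it just stops
    being flagged as active inside incidents.
    """
    if policy == "time":
        return now - first_seen > INTERVAL
    if policy == "feed":
        return not still_in_feed
    raise ValueError("unknown policy")

now = datetime(2024, 2, 1)
print(is_expired(datetime(2023, 12, 1), now, True, "time"))   # old -> True
print(is_expired(datetime(2024, 1, 25), now, False, "feed"))  # dropped -> True
```

In practice you would pick the policy per feed, depending on how trustworthy its own removal signal is.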
So Smart Merge ensures the same indicator is scored correctly. Even if it comes from multiple sources or methods, it understands that it is the same indicator, and it makes sure that you get the best information, as opposed to a lot of really complicated, disparate information. Just a quick review before moving on. As we say, if you have a feed with a reliability of A+ and a feed with a reliability of C, and the reputations they report differ, the indicator will be represented by the reputation provided by the A+ feed. And again, as we said before, if two reputations come back from integrations or feeds that have the same reliability and they differ, you will get the worst verdict. So if one A+ source says it's good and another A+ source says it's bad, you will see bad, because that's the worst-case scenario. If we work on worst-case scenarios, then the worst thing that happens is that we defend against something we didn't need to worry about. The indicator timeline is the default section that displays the indicator's complete history.
So this is in the indicator summary layout, and with the common indicator data model, regardless of ingestion source, indicators have unified standard indicator fields, including the traffic light protocol. So is it good? Green. Not sure, or shady greyware? Amber. And obviously bad is red. And then you have feed-based jobs, which can be used to trigger playbooks when a change in a feed is detected. So, if you have, say, AlienVault's OTX feed, and that feed changes, the job will detect the change and trigger a playbook: it expires the indicators that have been expired by the feed or are no longer in the feed, or adds indicators that have been added recently. Okay, so we're going to have a quick look at the different pages. This is the indicators page, and it's the summary page that we're talking about. It covers new and old indicators along with their respective verdicts, as you can see. So you've got the active indicators there, the type, and the verdict. It also covers the good indicators; in my case, I've only ever found it covering my internal IP and broadcast IP. You'd normally want to take those out anyway, but that's where they are. And then you can do your searches at the top, which use Lucene syntax. Moving on. So then, once you click into the indicator, you can look further into the data that's held on it and carry out further enrichment: you can enrich the indicator, or expire the indicator.
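Those feed-based jobs essentially react to a diff between feed snapshots. A minimal sketch of that detection step, using made-up documentation IPs, might look like this.

```python
def diff_feed(previous, current):
    """Compare two snapshots of a feed and report what changed.

    Returns (added, removed) indicator sets - the kind of change a
    feed-based job would react to by triggering a playbook.
    """
    prev, curr = set(previous), set(current)
    return curr - prev, prev - curr

old_snapshot = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]
new_snapshot = ["203.0.113.2", "203.0.113.3", "203.0.113.4"]

added, removed = diff_feed(old_snapshot, new_snapshot)
print(sorted(added))    # newly listed -> create/activate indicators
print(sorted(removed))  # dropped from the feed -> candidates for expiration
```

The triggered playbook would then expire the removed indicators and ingest the added ones, exactly as described above.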
If you go to expire the indicator, it will ask you why you have expired it. You can tag it, and you can see where the indicator comes from in terms of geolocation, as well as its sources, and there you can see the reliability as well. Let me put that more clearly. You have the indicator's source, which in this specific case has a reliability of A — so fairly reliable — and the reputation it gave, which is Bad. You can see as well at the top here that the indicator was enriched against all the integrations I've got for enriching IP indicators. So I've got Barracuda, SpamCop, and IPinfo, and none of them returned a bad reputation. So the reputation is shown as Bad because that is the only reputation returned, and it came from a fairly reliable source — the AlienVault feed, that is, not the enrichment integrations.
Feed integrations pull indicators from threat feeds such as AlienVault. It is important to remember that if you're on the Community version, these are limited to 100 indicators and five feeds. Of course, this means the demands on Smart Merge and on indicator expiration are smaller, if you know what I mean, because there aren't as many indicators. However, even with just my five feeds, I believe the instance is still running at around 1 million indicators. And with XSOAR it becomes very easy to add feed integrations and so on, so you would probably end up with more. At that kind of level, you really need a reliable way of cutting through what is and isn't relevant, and what is and isn't active. So then, if we move on to the use of indicators: indicators can be consumed by any vendor.
So one of the real things I like about XSOAR is that it is still technically vendor-agnostic, coming as it does from Demisto under the hood — and Demisto, of course, wasn't originally a Palo Alto company. So you do have the ability to integrate this with everything that is already within your environment. It will integrate with anything that can consume an external dynamic list: Check Point consumes external dynamic lists, Fortinet does things like this. That's why the Palo Alto PAN-OS EDL service is mentioned, but you can publish external dynamic lists and have them consumed by anything. Of course, you can also export them manually, as it says there, as JSON, text, or CSV. That means you can then import them into anything that allows you to import a CSV, text, or JSON file, which is pretty much everything in today's world.
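The manual export path mentioned above is straightforward to picture. This is a minimal sketch assuming you already have the indicators as a list of dicts; the field names here are illustrative, not XSOAR's exact export schema.

```python
# Sketch of exporting a set of indicators as JSON and CSV.
# The field names ("value", "type", "verdict") are illustrative assumptions.
import csv
import io
import json

indicators = [
    {"value": "1.2.3.4", "type": "IP", "verdict": "Bad"},
    {"value": "evil.example.com", "type": "Domain", "verdict": "Suspicious"},
]

# JSON export: one blob, easy to re-import anywhere that speaks JSON.
json_blob = json.dumps(indicators, indent=2)

# CSV export: header row plus one row per indicator.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["value", "type", "verdict"])
writer.writeheader()
writer.writerows(indicators)
csv_blob = buf.getvalue()
```

Either blob can then be written to a file and fed to whatever downstream tool consumes it.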
This is where we see the "good" reputation IPs, because here you can see the broadcast address and my network address. Rather than letting these expire, I would be excluding them. So you exclude the indicator, you put in a reason why the indicator has been excluded, and then it is removed from any further incidents. Okay, so that really is the end of the indicators section. Troubleshooting is next, but I'm going to do it in a slightly different way: I'm going to install an engine, and then we'll hopefully go through some troubleshooting methodology with the engines. There isn't much in the way of troubleshooting methodology there, to be honest. It's fairly straightforward, and there isn't much that can really go wrong, because the script is created for you and then run on the server. Just as a side note, all the servers being built here are built on Ubuntu 16.04.
This is because, while I believe 20.04 is supported, I have had some absolute disasters with 20.04 that have had me sitting here scratching my head. And while I'm willing to accept that this is due to a lack of knowledge on my part, it doesn't make for a good video if you're just going to sit there for three hours watching me try to figure out why something isn't working. At the moment, version 16.04 is in use. It also works on Windows as well. 16.04 LTS is more than acceptable and can easily be maintained and hardened, so it's secure.
Okay, so we're going to move on now to installing an engine using my instance of XSOAR, and I'll just quickly talk you through the web server that we've got — which is nothing special, so don't get too excited — and we'll talk about the environment and why we might need to keep everything effectively air-gapped, but not quite. Okay, so let's get the engine done. Now, the web server that we're going to be protecting — the one that sits in the mock DMZ in my lab — is here. Don't get excited: it's just the Apache default page. Okay? In our scenario, because it's in our DMZ, we want to allow as little connectivity into or out of it as we possibly can. We don't want SNMP, syslog, NTP, or anything else that could potentially be used against us. However, we do need those services. So by virtue of the engine, we are going to provide monitoring, and we'll make it possible to run integration scripts and commands through it for troubleshooting or similar purposes — possibly even the deployment of servers and applications before long — without having any more windows open than we need, so to speak. So here is my engine box. Everything must be done as root, or else it will not work. I'm going to change into /var/tmp because, well, that's just habit — it's just where I tend to download things to. Okay? And we are going to create the engine in XSOAR. So we're going to give it a name, which is going to be DMZ Lab. Excuse me. And the installer type is Shell, which is the first option.
Now that's done, you can see that I've got the ability to save it, which I will do so that it downloads. Then we're going to WinSCP it across to the box that we've chosen for our engine. You have to make sure, of course, that your engines are up to the recommended specs, because you're going to be running integrations on them and so on. I believe they need eight cores and 16 gigabytes of memory at minimum. Okay, so let's WinSCP it across. We're now in WinSCP, and on the box we're going to go to /var/tmp and copy it across, nice and steady. Okay, disconnect that session and bring back our engine — or what's going to become our engine. And then we're there. Okay. So now we'll take a look at how simple this is. The configuration script has been created for us by XSOAR. We're going to make the script executable; otherwise, we'll be here forever. Okay? And then, simply as root, we're going to run that script. Sometimes this takes a while and other times it doesn't — it all depends on just how quickly my box decides to work. And as you can see, it hasn't actually connected.
So we're going to purge it so we can get rid of that. I just wondered if there was an old version of it somewhere — and there was, indeed. There was probably an older version of this from me practising it over and over, and that's why it didn't connect. So there you go — that's an interesting thing. And here's something else that I keep doing, which I'm going to leave in, because I think it's too easy for people to claim that they know everything about everything when they make YouTube videos — and then you sit there thinking, "Well, you know, I had this problem, I had that problem." So no, this issue is because I'm being a bit of a donut. We're going to get rid of what it's done so far. It's going to fail to execute, and that's absolutely fine, and I'm going to show you what I've done. So, nano — because I'm not someone who uses Vim; it ties me up in knots trying. Okay? So my DNS name servers are there, and for some reason I continually snapshot this server with the gateway set as a DNS server, because I have DNS proxies on every other network except this one. That's fine; just fix the DNS, restart the networking, and rerun the install.
Amazing, really, isn't it — with DNS sorted. So that will go through now: it will install Docker and pull the Docker images. You can see what it's doing. And once it's done and it's started the service — which it will do at the end by default — we should see it connected there. Then I'll show you just how easy it is from there. If you need to pull that engine, you can do so, and it will disconnect by using the purge. Then you can just run the script again and it will reconnect, I believe. Let me just check and make sure. So: once you've lost connectivity with an engine, any logs from the engine are purged 90 days after that point if connectivity is not re-established. So you've got a fair amount of time before you start to lose any information from it. And you'll see it from there anyway — it's going to do what I thought it was going to do. As I've mentioned numerous times, my ESXi box is running a little under-spec'd and is massively oversubscribed. So I do have an issue where, every now and then, if other boxes on that ESXi host are using any resources, it obviously grabs those. Hopefully I'll get enough money at some point to build a better VM server. But it's all learning, isn't it? That's what we're all here to do, and it's all progression.
So I'll just pause this and let it finish off. Okay. I was actually attempting to grab it just before it restarted, but as you can see, we now have DMZ Lab connected, and it has been connected since. Now, if I just run the purge, you'll see that almost immediately we get the disconnect message, it's disconnected, and then we can remove it. All is well with that box now, and when we reinstall it, it will come back up and reconnect — hopefully, for some reason, much faster this time around. And there we go: it's reconnected. So that's really what we're looking at with engines. As we move into domains five and six, we'll start looking at how we can use engines within integrations. I will just show you. So if we now go to an integration — that one's a good choice — we can use an engine, and we've got the lab engine there. You can see now that that's how you use the engine. You can use the engine for syslog and things like that, and it will log back to here. Previously, we would have had to run syslog back to the Cortex XSOAR server itself.
There are other concepts surrounding engines, such as load balancing and load-balancing groups, which we'll get into more as we go. But for now, that's engines — and that also showed you a couple of times when things went wrong and what to do. The scripts are very uncomplicated; they work very well. Okay, so for further troubleshooting on the XSOAR server itself, what we're going to look at is how to reindex the database. The database is stored under /var/lib/demisto, and in there you've got the data directory containing the demistoidx index. And the way that you are told to reindex it is simply to stop the service. Okay — and I'm hoping this works, because this is my production box, as I call it. Then we run rm -r against that demistoidx directory to remove the index. And I'm really hoping now — fingers firmly crossed — that when this comes back up, it works.
You can't go wrong there. Start the service again and, hopefully, all things being equal, when I come back to my server it will tell me to hang on, because it's starting up. So when you log in, you'll notice you get the "off to fight the dark forces" loading screen, because I'm restarting the service. Once that service is restarted, I will try to log in again, although I suspect it may take a while if it's reindexing. So: we had 186,000 unassigned incidents. And that's how you reindex the database, should anything start to go wrong. Troubleshooting beyond that involves a lot of resources, and I'll leave the links below this video — there are a lot of resources available for different types of troubleshooting. The truth of it is, unless you're a developer kind of person and you're very happy and comfortable with Linux in general, I would imagine you would probably follow the last line of most of these procedures, which is to contact the Cortex XSOAR support team — or, I mean, somebody like myself once I'm up and running. But the initial steps of troubleshooting, such as reindexing the database, are fairly easy to carry out. It sounds fantastic when it goes through a change record — "we're going to reindex the database on the XSOAR server" — so you look kind of cool and people think you've got your stuff under control, but ultimately it's quite an easy and straightforward thing to do.
So now we're going to move on to the next part of domain four, which is how to create incidents. We're going to create an incident from the dashboard using this button right here, New Incident, show how that's done, and cover the parts they'll want you to understand for the PCSAE exam. Okay? So basically, for the exam you have to have an understanding — and we'll go into more depth later — of how an incident is created in Cortex XSOAR. Incidents are basically just events that have been observed at a point in time for analysis. That's what it is: regardless of where it comes from, it's literally just an event. Okay? So the first step is to go to New Incident. We can set the name of the incident. We can set the due date — Tuesday, say — to serve as a reminder. Choose the owner, which is me. Let's say the severity is Low, with the type being a bad IP, and assign a role — so, who is ultimately going to be able to view it? This is what we talked about before: you can control who can see and interact with an incident based on the role that's assigned to that incident. You could run a playbook against this — for example, if we wanted to run an IP enrichment against it. The phase is Triage, because we're triaging it.
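The same incident can also be created programmatically. This is a sketch of the request body only; the field names are based on XSOAR's incident API, but treat the exact endpoint, the type name "Bad IP", and the auth details as assumptions to verify against your own server's API documentation.

```python
# Sketch of a manual-incident payload for XSOAR's REST API.
# Field names follow the incident API; "Bad IP" is a hypothetical type name.
import json

payload = {
    "name": "Test incident",       # incident name, as set in the UI
    "type": "Bad IP",              # incident type (hypothetical example)
    "severity": 1,                 # 0=Unknown, 1=Low, 2=Medium, 3=High, 4=Critical
    "owner": "admin",              # who owns the investigation
    "createInvestigation": True,   # open a war room straight away
}
body = json.dumps(payload)
# You would POST this body to https://<your-xsoar>/incident
# with your API-key header attached.
```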
Okay — occurred time: observed during the video. Okay, and if you wanted to attach something, like a file associated with it — I admit that isn't really relevant here — you can drop it up there. So you can attach whatever you want to it. So we wanted to create this new incident, and now we've got it. We can go here, and you can see we've got one low-severity incident there, which is ours — our test incident. If we go to investigate it, we can see the war room. So here we've got the indicators — except we don't have any indicators in this incident, because I've excluded that IP; it's on an exclusion list. Go to the war room: there's nothing to look at, because the indicator has been excluded. So, if we go to the exclusion list, you can see I've got 8.8.8.8 in there. And this is what we're talking about when we say we're excluding certain IPs. Within that incident we created, there wasn't actually anything to look at, because it had been excluded — because we already knew about it. And this is where the preprocessing rules, if I can find them, play a role in incident creation. Preprocessing rules are applied as the data is ingested, before it is presented. You can create a preprocessing rule to look for certain things.
And if we look for that IP, we can actually create a rule that stops it from being created as an incident. Okay. Now, for the exam, that's basically all you really need to know about creating them manually. As I mentioned, for the exam you'll need to know that incidents can be created automatically or manually, as we've just done. And then it basically wants you to understand the logic and the order of incident creation. There's not that much to say about it, really. It says the logic and the order of incident creation are a three-step process, which I guess they are. First, you configure the integration: you configure the integration with your third-party product and start fetching events (though that step could also be manual creation). Second, classification and mapping: you classify the event, and based on that classification it becomes a certain incident type, and that type has a mapping applied to it. Third, you have the preprocessing rules we discussed, which look at the event before the incident is created. And this is where you save a lot of time, because you're now not investigating every single thing that comes through. Preprocessing rules are there for you to engineer out what you don't want.
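The preprocessing step above amounts to a filter that runs before incident creation. This is an illustrative sketch of that idea only — XSOAR's rule engine is configured in the UI, not written like this — and the field name `src_ip` and the listed IPs are assumptions for the example.

```python
# Illustrative sketch of preprocessing logic: drop ingested events whose
# source IP is on an exclusion list, before any incident is created.
# ("src_ip" and the example IPs are hypothetical.)

EXCLUSION_LIST = {"8.8.8.8", "192.168.1.255"}  # known/excluded indicators

def preprocess(event):
    """Return True to create an incident, False to drop the event."""
    return event.get("src_ip") not in EXCLUSION_LIST

events = [{"src_ip": "8.8.8.8"}, {"src_ip": "10.0.0.5"}]
to_create = [e for e in events if preprocess(e)]
# Only the 10.0.0.5 event survives to become an incident.
```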