CompTIA CASP+ CAS-004 – Chapter 02 – Network and Security Components and Architecture

  1. Chapter Introduction

In this chapter, we're going to be looking at network and security components and architecture. A secure network design can't be achieved unless we understand the different components that have to be included in it, as well as the secure design concepts that need to be followed. It's true that many security features come at a cost of performance or ease of use. These are costs that most enterprises are going to be willing to incur if they understand some important security principles. So in this chapter, we're going to be looking at the building blocks of a security infrastructure.

  2. Topic A: Physical and Virtual Network Security Devices

In this first topic, we’re going to be talking about physical and virtual network security devices. We’ll discuss things like unified threat management, intrusion detection and prevention systems, switches, firewalls, et cetera.

  3. Unified Threat Management

In order to implement a secure network, there are going to be a number of different security devices. Some of them are going to be hardware based and some software based. Unified threat management, or UTM, is the first one that we want to discuss, and it's actually more of an approach that combines multiple security functions into a single device. So when you use the term unified threat management, it simply means the device contains at least some of those functions.

Not every UTM device is going to be the same, but they will often include network firewalling, intrusion prevention, gateway antivirus and possibly antispam, virtual private network connections, content filtering, possibly load balancing, DLP (data loss prevention), and usually some form of auditing and reporting. The UTM makes administering multiple systems unnecessary, which is one of its premier advantages. But a lot of security professionals feel that the UTM creates a single point of failure, and so they may prefer to have multiple devices as opposed to a single device.

  4. Analyzing UTM

So what are some of those advantages of unified threat management? Well, it certainly has a lower upfront cost because it's a single device, a lower maintenance cost, a single OS to keep patched, a single vendor, and less power consumption, again because it's a single device. Generally speaking, they're easy to install and configure, in many cases very wizard based, and they integrate fully with your network. But there are some disadvantages. Primarily it is that single point of failure. Anytime you have a single point of failure, it's difficult to implement true high availability, and it's also a single attack vector. So from both the availability perspective as well as security as a whole, that can be a problem.

It may also lack the granular configuration that's available in individual tools. We see this in other areas: when you start combining features into one piece of software, it often doesn't do as good a job as a tool designed for that sole purpose. Some of the Microsoft applications are great examples of that. And then there are performance issues: it's one device doing everything, and so it can suffer some performance problems. So, a great option, a single device that provides multiple functions, but definitely with some advantages and disadvantages.

  5. Intrusion Detection and Prevention

The next device, or set of devices, is the IDS/IPS. This is an intrusion detection system or intrusion prevention system. An IDS is going to be responsible for detecting unauthorized access to, or attacks against, a system and the network, and sometimes taking action, although if it takes action, then you've kind of crossed the gray line between IDS (detect) and IPS (prevent). It can verify, itemize, and characterize threats from both outside and inside the network. And really, the vast majority are programmed to react in some way. If it's just event notification and alerts, then that's really still the category of IDS. If it actually reaches out and is able to isolate traffic, terminate connections, things of that nature, then it is an intrusion prevention system. There are a couple of different categories here. A signature-based IDS will analyze traffic and compare it to attack patterns known as signatures, which is very similar to the way that anti-malware works. This is also referred to as a misuse detection system. So it's looking for abnormalities, it's looking for something that's not regular, not normal, and it identifies it as a type of attack.
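
As a rough illustration of that signature idea (not any particular vendor's engine), here's a minimal sketch in Python; the signature patterns and the sample payload are invented for the example.

    # Minimal sketch of signature-based detection: compare a payload against a
    # small database of known attack patterns. The signatures are invented examples.
    SIGNATURES = {
        "sql-injection": b"' OR '1'='1",
        "path-traversal": b"../../etc/passwd",
    }

    def match_signatures(payload: bytes) -> list:
        """Return the names of any signatures found in the payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    # Example: a captured request that trips the path-traversal signature.
    print(match_signatures(b"GET /download?file=../../etc/passwd HTTP/1.1"))
    # ['path-traversal']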

Pattern matching is when the IDS compares the traffic to a database of attack patterns and then carries out specific steps when it detects traffic that matches a particular pattern. Stateful matching is recording the initial operating system state, so any changes to the system state that violate a set of defined rules would result in an alert or a notification being sent. Then we have anomaly based. This type is going to analyze traffic and compare it to normal traffic.

So I kind of slipped earlier and said the word abnormality. Pattern matching and stateful matching are looking for abnormalities, but based on a defined signature. Anomaly based is saying, okay, here is the normal traffic, and when we see something abnormal, we're going to use that to determine whether it's a threat. It's also referred to as a behavioral-based or profile-based system. The problem with this type of system is that any traffic outside the expected norms is going to be reported, so you have a lot more false positives than you would with signature-based systems.

There are three main types of anomaly-based IDS. The first is statistical anomaly based. The device samples the live environment to record activities; therefore, the longer it's in operation, the more accurate the profile that's built. But developing a profile that will not generate a large number of false positives can actually be difficult and pretty time consuming. Protocol anomaly based is when the device has knowledge of the protocols that it's going to monitor; a profile of normal usage is built over time and compared to actual activity. And then traffic anomaly based is just looking for traffic pattern changes, so all future traffic patterns are going to be compared to the sample. If you change the threshold, you can reduce the number of false positives or negatives.
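
To make the statistical idea concrete, here's a minimal sketch: build a baseline from sampled traffic volumes and flag anything beyond a chosen number of standard deviations. The sample numbers and the threshold are purely illustrative, not taken from any product.

    import statistics

    # Baseline sample: requests per minute observed during normal operation.
    baseline = [102, 98, 110, 95, 105, 99, 101, 108, 97, 103]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
        """Flag traffic more than `threshold` standard deviations from the baseline mean.
        Raising the threshold reduces false positives; lowering it catches more anomalies."""
        return abs(observed - mean) > threshold * stdev

    print(is_anomalous(104))  # False - within the learned profile
    print(is_anomalous(250))  # True  - well outside the baseline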

This is actually a really good approach for detecting unknown attacks, but user activity is sometimes not static enough to implement it effectively. Then you have rule or heuristic based. This is more of an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules, the data and traffic are analyzed, and the rules are applied to the analyzed traffic. The inference engine uses intelligent software to learn, and if the characteristics of an attack are met, alerts and notifications are triggered. It's sometimes referred to as an if/then or expert system as well (a small sketch of that if/then style appears at the end of this topic).

Then you have HIDS/HIPS versus NIDS/NIPS. The primary difference is that most of the time, when we're talking about an IDS or IPS, we're talking about a network-based solution that is able to monitor network traffic. When you talk about a host-based IDS or IPS, it's monitoring traffic to a single system, so its primary responsibility is just to protect the system on which it's installed. It'll use information from the OS audit trails and system logs, but it's limited to that system, and it's limited to the completeness of those logs. There is also an application-based IDS/IPS, a more specialized one that analyzes transaction logs for a particular application, but those are usually provided as part of the application or as an add-on that you can purchase.
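
Here's that promised sketch of the if/then rule idea; a real inference engine is far more sophisticated, and the rules and event fields below are invented for illustration.

    # Each rule is an if/then pair: a condition over an event and the alert it raises.
    RULES = [
        (lambda e: e["failed_logins"] >= 5,     "possible brute-force attempt"),
        (lambda e: e["dst_port"] == 23,         "telnet in use - cleartext protocol"),
        (lambda e: e["bytes_out"] > 50_000_000, "possible data exfiltration"),
    ]

    def evaluate(event: dict) -> list:
        """Apply every rule to the event and collect the alerts that fire."""
        return [alert for condition, alert in RULES if condition(event)]

    event = {"failed_logins": 7, "dst_port": 22, "bytes_out": 1_200}
    print(evaluate(event))  # ['possible brute-force attempt']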

  6. In-Line Network Encryptor

The next device is an INE, or inline network encryptor. This is also known as a High Assurance Internet Protocol Encryptor, or HAIPE. It's a Type 1 encryption device; the Type 1 designation just indicates that it's a system certified by the NSA, so it can be used in securing US government classified documents. As you might imagine, you have to go through quite a bit to achieve that designation; namely, it has to use NSA-approved algorithms. INE devices can also support routing and layer 2 VLANs. They're built to be easily disabled and cleared of keys if there's any danger of physical compromise, and they do so using a technique called zeroization.

Essentially, these are devices that are placed in each network that might need their services, and each device communicates with another INE device through a secure tunnel. There are advantages and disadvantages. One of the advantages we already mentioned: they're easily disabled and cleared of keys. They're also able to communicate through a secure tunnel, they're certified by the NSA, and they may support routing and layer 2 VLANs. As for the disadvantages, they're fairly costly, and they also introduce a single point of failure, which, as we discussed with the UTM, is really never a good thing.

  7. Network Access Control

Network access control is another concept that we see in network security infrastructures. This is a service that goes way beyond the authentication of users; it includes the ability to examine the state of the system the user is using to connect to the network. These are often used with WiFi and remote access VPN connections. Cisco calls its implementation Network Admission Control, and Microsoft calls its version Network Access Protection. Regardless, it's all network access control, and it all has the same goal.

The goal is that when these unknown devices connect to the network, we're going to examine them in order to determine if certain elements are present. If they're present, then we've established the health of the system prior to the connection. If they're missing, then we've established that the system is unhealthy. What kinds of things might we be checking for? Well, the presence of a firewall that's enabled, antivirus software that is up to date, Windows patches that are up to date, and various other security-related items.
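
As a rough sketch of that posture-checking idea (vendor implementations differ widely; the attribute names and the quarantine action here are made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class DevicePosture:
        firewall_enabled: bool
        av_signatures_current: bool
        os_patches_current: bool

    def admit(posture: DevicePosture) -> str:
        """Grant full access only to healthy devices; quarantine the rest for remediation."""
        if posture.firewall_enabled and posture.av_signatures_current and posture.os_patches_current:
            return "allow"
        return "quarantine"  # e.g., a remediation VLAN where updates can be pushed

    print(admit(DevicePosture(True, True, True)))   # allow
    print(admit(DevicePosture(True, False, True)))  # quarantine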

It kind of depends on the network access control implementation. At the higher level, Cisco's Network Admission Control can check for certain versions of an AV program, a certain version of a firewall, particular rules that are in place, those kinds of things. Network Access Protection in Microsoft's world doesn't really support all of that. And it should be noted that while that is what Microsoft calls it, they've really discontinued it. Windows Server 2008 and Windows Server 2012 were the two primary operating systems that supported it; it was deprecated by the 2016 release and doesn't even exist in the 2019 version of Windows Server. From my experience, that was really just because nobody was using it. There are a number of advantages and disadvantages.

The advantage of network access control is simply that it can prevent the introduction of malware from these systems that are connecting, and we can make sure they're less vulnerable by pushing updates. It really helps us to support a BYOD environment, and that's what we're after: we want to allow you to bring devices, connect devices, and be more productive, but we can't let that introduce security threats into the environment. Then there are the disadvantages.

NAC can't protect information that leaves the premises, for example via email or USB devices; it can't defend against social engineering; and it can't prevent people from accessing data and using it inappropriately, assuming they have authorized access to it. It's basically all about the health of the system that you're using, not about the individual user. So it's not the end-all, be-all, but it can be a good added component in your environment.

  8. SIEM

Security information and event management, or SIEM: these are utilities that receive information from the log files of critical systems and centralize the collection and analysis of that data. SIEM technology is essentially an intersection of two closely related technologies, security information management and security event management.

The two work together in these types of devices. Now, the log sources can be application logs, antivirus logs, operating system logs, and malware detection logs. One consideration when you're working with a SIEM system is to try to limit the amount of information that you're collecting to what is really needed. You also need to make sure that you have adequate resources in order to ensure good performance. As with the others, there are advantages and disadvantages.
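
Here's a toy sketch of the centralization idea: pull events from different log sources into one normalized, time-ordered stream so they can be correlated. The log formats and field names are invented for the example.

    import datetime

    # Raw events as two different sources might emit them (formats invented).
    firewall_log = [{"ts": "2024-05-01T10:02:11", "action": "deny", "src": "203.0.113.9"}]
    av_log = [{"time": "2024-05-01 10:02:40", "detection": "Trojan.Generic", "host": "PC-14"}]

    def normalize(event: dict, source: str) -> dict:
        """Map each source's fields onto one common schema for correlation and reporting."""
        if source == "firewall":
            when = datetime.datetime.fromisoformat(event["ts"])
            return {"when": when, "source": source, "summary": f"deny from {event['src']}"}
        if source == "antivirus":
            when = datetime.datetime.strptime(event["time"], "%Y-%m-%d %H:%M:%S")
            return {"when": when, "source": source, "summary": f"{event['detection']} on {event['host']}"}
        raise ValueError(f"unknown source: {source}")

    # Merge everything into one chronological timeline, the way a SIEM dashboard would.
    timeline = sorted(
        [normalize(e, "firewall") for e in firewall_log]
        + [normalize(e, "antivirus") for e in av_log],
        key=lambda e: e["when"],
    )
    for entry in timeline:
        print(entry["when"], entry["source"], entry["summary"])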

Advantages: it's able to identify network threats in real time, which gives us quick forensic capability, and they typically have a GUI-based dashboard as well. The disadvantages, though: it's a complex and costly deployment, it can generate a lot of false positives, and it may not provide visibility when we're using cloud assets.

  9. Firewalls

One of the devices that most of us are familiar with, I would think, is the firewall, but certainly in a security class we have to mention it. Firewalls are the network devices perhaps most connected with the idea of security. A firewall can be software that's installed on a server or client, or it can be an appliance that has its own operating system.

The first would be host based. A host-based firewall is software running on a particular computer, protecting traffic to and from that system. A network-based firewall, on the other hand, is its own device with its own operating system, and it's typically at the perimeter of the network. When we discuss firewalls, we're often discussing them on the basis of their type and the basis of their architecture, and they can be physical or virtual devices. We're going to take a look at the firewall from multiple angles. First, let's talk about packet filtering. These are the firewalls that are least detrimental to throughput, because all they really do is look at the header of the packet. All they're scanning for is allowed IP addresses and port numbers. Port numbers identify the upper layer traffic, so TCP port 25 is SMTP email traffic, port 80 is web traffic, and so on.
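
A minimal sketch of the allow/deny decision a packet filter makes on header fields alone; the rule set is invented for illustration.

    import ipaddress

    # Each rule matches on header fields only: source network prefix and destination port.
    # The first matching rule wins; anything unmatched falls through to an implicit deny.
    RULES = [
        {"src_prefix": "0.0.0.0/0",      "dst_port": 25, "action": "allow"},  # SMTP
        {"src_prefix": "0.0.0.0/0",      "dst_port": 80, "action": "allow"},  # HTTP
        {"src_prefix": "192.168.0.0/16", "dst_port": 22, "action": "allow"},  # SSH from inside only
    ]

    def filter_packet(src_ip: str, dst_port: int) -> str:
        """Return the action of the first rule whose prefix and port match the packet header."""
        for rule in RULES:
            in_prefix = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src_prefix"])
            if in_prefix and dst_port == rule["dst_port"]:
                return rule["action"]
        return "deny"

    print(filter_packet("198.51.100.7", 80))  # allow - web traffic from anywhere
    print(filter_packet("198.51.100.7", 22))  # deny  - SSH only allowed from 192.168.0.0/16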

Any inspection is going to slow traffic, but this type of function involves only looking at the beginning of the packet and making a very quick allow-or-disallow decision, so it's less detrimental to throughput. The problem with packet filtering firewalls is that they can't prevent some attacks. They can't prevent IP spoofing, attacks that are specific to a particular application, attacks that depend on the fragmentation of data, or attacks that take advantage of the TCP handshake process, because these firewalls aren't overly intuitive or complex. You would have to have a more advanced inspection firewall to stop those types. Stateful firewalls are aware of the proper functioning of the three-way TCP handshake process.

Stateful means they keep track of the state of all the connections as it relates to the TCP handshake, so they can recognize when a packet trying to enter the network doesn't make sense in the context of the handshake process. For instance, a packet should never arrive at the firewall for delivery with both the SYN flag and the ACK flag set (the synchronize and the acknowledge) unless it's part of an existing handshake, in which case it should be in response to a packet that had come from the inside with the SYN flag set.

That type of packet would be disallowed automatically on a stateful firewall, and stateful firewalls have the ability to recognize other attack types that attempt to misuse that process as well.
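
As a toy sketch of the state-table idea, a SYN/ACK from outside is only accepted if we previously saw an outbound SYN for that connection. The tuples and flag handling are heavily simplified for the example; a real firewall tracks far more state.

    # Connection table keyed by (client, server, server_port).
    pending_syn = set()

    def outbound(src, dst, dport, flags):
        """Record outbound SYNs so the matching SYN/ACK can be recognized later."""
        if flags == {"SYN"}:
            pending_syn.add((src, dst, dport))

    def inbound(src, dst, sport, flags):
        """Allow a SYN/ACK only if it answers a SYN we saw leave the network."""
        if flags == {"SYN", "ACK"} and (dst, src, sport) in pending_syn:
            return "allow"
        return "drop"  # unsolicited SYN/ACKs (and anything else) need more state than this sketch tracks

    outbound("10.0.0.5", "198.51.100.7", 443, {"SYN"})
    print(inbound("198.51.100.7", "10.0.0.5", 443, {"SYN", "ACK"}))  # allow - answers our own SYN
    print(inbound("203.0.113.9", "10.0.0.5", 443, {"SYN", "ACK"}))   # drop  - unsolicited SYN/ACK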

Proxy firewalls stand between the internal and external sides of a network and make the internal-to-external connection on behalf of the endpoints using them, so a proxy firewall is a firewall that's used as a forward proxy. There's really no direct connection: client A is not connecting directly to the web server. Instead, client A connects to the firewall, and the proxy firewall acts as a relay between those two points. You can have circuit-level or application-level proxies. Circuit-level proxies operate at the session layer of the OSI model, so they make decisions based on protocol headers and session layer information. They don't do any sort of deep packet inspection and aren't able to look inside layer 7 traffic, so they're considered application independent, because they can be used for a wide range of protocols. Application-level proxies, on the other hand, do deep packet inspection up to layer 7.

So an application-level proxy understands the details of the communication process at layer 7, and it's able to detect certain types of attacks a little better. Dynamic packet filtering isn't really a type of firewall; it's a process that a firewall may or may not handle, but it's worth discussing at this point. When an internal computer attempts to establish a session with a remote computer, the process puts both the source and the destination port numbers in the packet. So if I'm trying to communicate with a web server, the destination is port 80, because HTTP uses port 80 by default, and the source computer randomly selects the source port from the numbers above 1023, which is above the well-known port numbers. Because it's impossible to predict what that random number will be, it's impossible to create a static firewall rule that will allow the return traffic back through the firewall on that random port. So dynamic packet filtering firewalls keep track of the source port and automatically add a rule to the list to allow that traffic back in.
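
A minimal sketch of that idea: remember the randomly chosen source port on the way out and allow only the matching return traffic back in. The port range is the real ephemeral range; everything else is simplified for illustration.

    import random

    # Temporary return rules added on the fly, keyed by (remote_host, local_port).
    dynamic_rules = set()

    def open_outbound(remote_host):
        """Pick an ephemeral source port above 1023 and add a temporary return rule for it."""
        local_port = random.randint(1024, 65535)
        dynamic_rules.add((remote_host, local_port))
        return local_port

    def allow_inbound(remote_host, local_port):
        """Only traffic returning to a port we opened for that host gets back through."""
        return (remote_host, local_port) in dynamic_rules

    port = open_outbound("www.example.com")        # e.g., 49731 - unpredictable in advance
    print(allow_inbound("www.example.com", port))  # True  - reply to our own request
    print(allow_inbound("203.0.113.9", 4444))      # False - no matching dynamic rule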

A kernel proxy firewall is an example of a fifth-generation firewall. It inspects a packet at every layer of the OSI model, but it doesn't introduce the same performance hit as an application-layer firewall because it does the work at the kernel level. And then we have next-generation firewalls, or NGFWs. These are a category of devices that attempt to address the traffic inspection and application awareness shortcomings of the traditional stateful firewall without hampering performance. They are typically unified threat management devices, or some variant of one, and so they can operate in a little more complex fashion.

They are going to be application aware, which means they can distinguish between specific applications instead of just allowing all traffic to come in via the typical web ports. They examine packets only once, during the deep inspection phase, which is required to detect malware and anomalies. NGFWs have a lot of different features and are generally nondisruptive.

They have very little impact on the performance of the network. They have standard firewall capabilities like NAT and stateful protocol inspection, typically integrated VPNs, IPS, signature-based application awareness, SSL offloading, and a lot of other capabilities. But they are going to require a little more involved management than standard firewalls, and they typically lead to a reliance on a single vendor.

  10. Firewall Architecture

When we talk about the type of firewall, we're talking about the internal operation of the firewall. The architecture, on the other hand, refers to the way firewalls are deployed in a particular network and how they go about forming a system of protection. So let's talk about those, starting with the bastion host. With a bastion host, you actually may or may not be talking about a firewall; there are bastion hosts that are FTP servers, DNS servers, web servers, and email servers. The term bastion host just refers to where the device is, the position of it. If it's exposed directly to the Internet or to any untrusted network while at the same time screening the rest of the network from exposure, then it's a bastion host. So a front-end email server could be a bastion host, a public DNS server could be a bastion host, and so it doesn't really matter what type of device it is.

What's key is to understand that a bastion host is completely exposed to the Internet, so in that case we need to take some extra steps to make sure it's secure. We should turn off any unnecessary services, protocols, and ports; we should use authentication systems or services separate from those of the trusted hosts within the network; we should try to get rid of as much as is practical on that system; we should make sure it's updated; and we should encrypt everything stored locally, protect administrative usernames and passwords, et cetera. Collectively, that's just called reducing the attack surface, and while that's something we may do everywhere, it's certainly something we would do on a bastion host. The next architecture is that of a dual-homed firewall.

Dual-homed just means it has two network interfaces: one points to the internal network, the other points to the untrusted network. When you think about it, every firewall is at the very least dual-homed. Sometimes in a dual-homed firewall, routing between the internal and external interfaces will be completely turned off; in many cases, the firewall configuration will then allow or deny traffic based on the firewall rules. One of the problems with relying on a single dual-homed firewall is that it can be a single point of failure.

So if it gets compromised, then the network is compromised, and if it suffers a denial-of-service attack, then no traffic is going to go through it. But they are good in that it's a simple configuration and less costly than using two firewalls; you're just going to suffer a little bit in availability and security.

Multi-homed firewalls are just firewalls with multiple interfaces, more than two. One popular type is the three-legged firewall: three interfaces, one connecting to the Internet or untrusted network, a second to the internal network, and the third to the part of the network called the DMZ, or demilitarized zone. This is just a protected network that typically contains systems that need to be accessible from the Internet: web servers, email servers, DNS servers. The multi-homed firewall then just controls the traffic between them.

So from the untrusted network, you can only go to the DMZ. From the DMZ, there would be limited connection activity to the internal network, but traffic never goes from the untrusted Internet directly to the internal network. Three-legged firewalls are going to give us some cost savings on devices, because you still only need one firewall rather than two or three, and it is possible to do network address translation. But it does increase complexity, and you're still dealing with a single point of failure.
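
As a rough illustration of the zone-to-zone policy a three-legged firewall enforces (the zone names and allowed ports are invented for the example):

    # Allowed (source zone, destination zone) pairs and the destination ports permitted
    # between them. Anything not listed is denied - including internet -> internal.
    ZONE_POLICY = {
        ("internet", "dmz"):      {80, 443, 25},  # public web and mail services
        ("dmz", "internal"):      {1433},         # e.g., a DMZ web server reaching a database
        ("internal", "internet"): {80, 443},      # outbound browsing
        ("internal", "dmz"):      {22, 443},      # administration of DMZ hosts
    }

    def permitted(src_zone, dst_zone, dst_port):
        """Check the zone pair first, then the port; there is no internet -> internal entry at all."""
        return dst_port in ZONE_POLICY.get((src_zone, dst_zone), set())

    print(permitted("internet", "dmz", 443))       # True  - untrusted traffic may reach the DMZ
    print(permitted("internet", "internal", 443))  # False - never directly to the internal network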

A screened host firewall is a little bit different. So far we've talked about firewalls that connect directly to an untrusted network on at least one of their interfaces, but a screened host firewall is placed between the final router and the internal network. When traffic comes into the router, it's forwarded to the firewall and inspected before going to the internal network. This is similar to a dual-homed firewall, but the difference is that the separation between the perimeter network and the internal network is logical rather than physical; there's only one interface. The advantage of this type of firewall is that it gives a lot more flexibility than a dual-homed firewall, because it uses rules rather than interfaces to create the separation, and it can also provide cost savings, but it is a more complex configuration. A screened subnet just takes that concept one step further.

In this case, you'd have two firewalls, and the traffic would need to be inspected at both firewalls before it can enter the internal network. It's called a screened subnet because there's a subnet between the two firewalls. Technically, it's just another kind of DMZ, but it is the highest level of security because you've got two firewalls: one firewall sits in front of the DMZ and protects the devices in the DMZ, and the other firewall protects the internal network. But it is certainly costlier and adds to complexity. So those are your different options as it relates to firewall architecture, and you can choose whichever is going to be the best approach for your situation.