CKA: Certified Kubernetes Administrator Certification Video Training Course
The complete solution to prepare for your exam with the CKA: Certified Kubernetes Administrator certification video training course. The course contains a complete set of videos that provide thorough knowledge of the key concepts, plus top-notch prep including CNCF CKA exam dumps, study guide, and practice test questions and answers.
CKA: Certified Kubernetes Administrator Certification Video Training Course Exam Curriculum
Core Concepts
1. Core Concepts Section Introduction (1:00)
2. Cluster Architecture (9:00)
3. ETCD For Beginners (3:00)
4. ETCD in Kubernetes (3:00)
5. Kube-API Server (5:00)
6. Kube Controller Manager (4:00)
7. Kube Scheduler (4:00)
8. Kubelet (2:00)
9. Kube Proxy (4:00)
10. Recap - PODs (9:00)
11. PODs with YAML (7:00)
12. Demo - PODs with YAML (10:00)
13. Practice Test Introduction (6:00)
14. Demo: Accessing Labs (4:00)
15. Practice Test - Solution (Optional) (8:00)
16. Recap - ReplicaSets (16:00)
17. Practice Test - ReplicaSets - Solution (Optional) (8:00)
18. Deployments (4:00)
19. Namespaces (8:00)
20. Solution - Namespaces (Optional) (5:00)
21. Services (14:00)
22. Services Cluster IP (4:00)
23. Services - Loadbalancer (4:00)
24. Solution - Services (Optional) (5:00)
25. Imperative vs Declarative (13:00)
26. Solution - Imperative Commands (Optional) (8:00)
27. Kubectl Apply Command (5:00)
Scheduling
1. Scheduling Section Introduction (1:00)
2. Manual Scheduling (3:00)
3. Solution - Manual Scheduling (Optional) (3:00)
4. Labels and Selectors (6:00)
5. Solution - Labels and Selectors (Optional) (4:00)
6. Taints and Tolerations (10:00)
7. Solution - Taints and Tolerations (Optional) (8:00)
8. Node Selectors (3:00)
9. Node Affinity (7:00)
10. Solution - Node Affinity (Optional) (7:00)
11. Taints and Tolerations vs Node Affinity (3:00)
12. Resource Requirements and Limits (6:00)
13. Solution - Resource Limits (Optional) (5:00)
14. DaemonSets (4:00)
15. Solution - DaemonSets (Optional) (6:00)
16. Static Pods (9:00)
17. Solution - Static Pods (Optional) (11:00)
18. Multiple Schedulers (6:00)
19. Solution - Practice Test - Multiple Schedulers (Optional) (7:00)
Logging & Monitoring
1. Logging and Monitoring Section Introduction (1:00)
2. Monitor Cluster Components (4:00)
3. Solution - Monitor Cluster Components (Optional) (3:00)
4. Managing Application Logs (3:00)
5. Solution - Logging (Optional) (2:00)
Application Lifecycle Management
1. Application Lifecycle Management - Section Introduction (1:00)
2. Rolling Updates and Rollbacks (8:00)
3. Solution - Rolling Update (Optional) (8:00)
4. Commands (7:00)
5. Commands and Arguments (3:00)
6. Solution - Commands and Arguments (Optional) (11:00)
7. Configure Environment Variables in Applications (1:00)
8. Configuring ConfigMaps in Applications (5:00)
9. Solution - Environment Variables (Optional) (9:00)
10. Configure Secrets in Applications (6:00)
11. Solution - Secrets (Optional) (10:00)
12. Multi Container PODs (2:00)
13. Solution - Multi-Container Pods (Optional) (10:00)
14. Solution - Init Containers (Optional) (8:00)
Cluster Maintenance
1. Cluster Maintenance - Section Introduction (1:00)
2. OS Upgrades (4:00)
3. Solution - OS Upgrades (Optional) (6:00)
4. Kubernetes Software Versions (3:00)
5. Cluster Upgrade Process (11:00)
6. Solution - Cluster Upgrade (13:00)
7. Backup and Restore Methods (7:00)
8. Solution - Backup and Restore (18:00)
Security
1. Security - Section Introduction (2:00)
2. Kubernetes Security Primitives (3:00)
3. Authentication (6:00)
4. TLS Introduction (1:00)
5. TLS Basics (20:00)
6. TLS in Kubernetes (8:00)
7. TLS in Kubernetes - Certificate Creation (11:00)
8. View Certificate Details (5:00)
9. Certificates API (6:00)
10. KubeConfig (9:00)
11. API Groups (6:00)
12. Role Based Access Controls (4:00)
13. Cluster Roles and Role Bindings (5:00)
14. Image Security (5:00)
15. Security Contexts (2:00)
16. Network Policy (8:00)
17. Solution - Network Policies (Optional) (12:00)
Storage
1. Storage - Section Introduction (1:00)
2. Introduction to Docker Storage (1:00)
3. Storage in Docker (13:00)
4. Volume Driver Plugins in Docker (2:00)
5. Container Storage Interface (CSI) (4:00)
6. Volumes (4:00)
7. Persistent Volumes (3:00)
8. Persistent Volume Claims (4:00)
9. Solution - Persistent Volumes and Persistent Volume Claims (19:00)
10. Storage Class (4:00)
Networking
1. Networking - Section Introduction (2:00)
2. Prerequisite - Switching Routing (12:00)
3. Prerequisite - DNS (15:00)
4. Prerequisite - Network Namespaces (15:00)
5. Prerequisite - Docker Networking (7:00)
6. Prerequisite - CNI (6:00)
7. Cluster Networking (2:00)
8. Solution - Explore Environment (Optional) (7:00)
9. Pod Networking (9:00)
10. CNI in Kubernetes (3:00)
11. CNI Weave (6:00)
12. Solution - Explore CNI Weave (Optional) (3:00)
13. Solution - Deploy Network Solution (Optional) (4:00)
14. IP Address Management - Weave (3:00)
15. Solution - Networking Weave (Optional) (5:00)
16. Service Networking (9:00)
17. Solution - Service Networking (Optional) (5:00)
18. DNS in Kubernetes (6:00)
19. CoreDNS in Kubernetes (7:00)
20. Solution - Explore DNS (Optional) (13:00)
21. Ingress (23:00)
22. Solution - Ingress Networking 1 (Optional) (11:00)
23. Solution - Ingress Networking 2 (Optional) (11:00)
Design and Install a Kubernetes Cluster
1. Design a Kubernetes Cluster (6:00)
2. Choosing Kubernetes Infrastructure (6:00)
3. Configure High Availability (8:00)
4. ETCD in HA (13:00)
Install Kubernetes the kubeadm Way
1. Introduction to Deployment with Kubeadm (2:00)
2. Deploy with Kubeadm - Provision VMs with Vagrant (3:00)
3. Demo - Deployment with Kubeadm (15:00)
4. Solution - Deploy a Kubernetes Cluster using kubeadm (Optional) (9:00)
Troubleshooting
1. Troubleshooting - Section Introduction (1:00)
2. Application Failure (3:00)
3. Solution - Application Failure (Optional) (23:00)
4. Control Plane Failure (1:00)
5. Solution - Control Plane Failure (Optional) (15:00)
6. Worker Node Failure (2:00)
7. Solution - Worker Node Failure (Optional) (11:00)
About CKA: Certified Kubernetes Administrator Certification Video Training Course
CKA: Certified Kubernetes Administrator certification video training course by prepaway along with practice test questions and answers, study guide and exam dumps provides the ultimate training package to help you pass.
Certified Kubernetes Administrator (CKA) Exam Prep
Course Overview
This Certified Kubernetes Administrator (CKA) training course is designed to prepare you thoroughly for the CKA certification exam. The course covers all the essential Kubernetes concepts, tools, and hands-on practices required to become a proficient Kubernetes administrator. You will gain practical skills to manage Kubernetes clusters, troubleshoot issues, and deploy applications effectively. The training emphasizes real-world scenarios and includes practice tests to boost your confidence before the exam.
This course is structured in five parts, each covering critical topics aligned with the CKA exam domains. You will progress from foundational knowledge to advanced cluster administration techniques, enabling you to master the exam content step-by-step.
Course Description
The course starts with Kubernetes architecture, basic cluster setup, and the essential concepts of pods, services, and controllers. Then, it moves into more advanced topics such as networking, security, storage, and troubleshooting. You will also learn how to manage role-based access control (RBAC), monitor clusters, and perform backup and restore operations.
The hands-on labs and practice tests are integrated into each module to provide an immersive learning experience. This approach ensures you not only understand the theory but also can apply your knowledge practically. The course aligns with the official Kubernetes documentation and exam objectives, making it a comprehensive guide for exam success.
Course Requirements
To get the most out of this course, you should have a basic understanding of Linux commands and networking fundamentals. Familiarity with Docker or container technologies will be beneficial but is not mandatory. Access to a computer where you can set up and run Kubernetes clusters is essential for practical exercises. We recommend using Minikube, kind, or a cloud provider for hands-on labs.
Who This Course Is For
This course is ideal for IT professionals, DevOps engineers, system administrators, and developers who want to advance their Kubernetes skills and obtain official certification. If you manage containerized applications or plan to work in cloud-native environments, this course will equip you with the necessary expertise.
Beginners with some Linux or container knowledge will find the course accessible, while experienced administrators can deepen their mastery and fill any knowledge gaps to prepare for the CKA exam.
Why Become a Certified Kubernetes Administrator?
Kubernetes has become the de facto standard for container orchestration. Organizations worldwide rely on Kubernetes to deploy, scale, and manage their applications efficiently. Earning the CKA certification validates your skills and boosts your career opportunities in cloud computing and DevOps roles. Certified administrators are recognized for their ability to ensure highly available and resilient Kubernetes clusters.
The certification also demonstrates your commitment to staying current with evolving cloud-native technologies and industry best practices. This course helps you reach that goal through structured learning and extensive practice.
What You Will Learn in This Course
You will learn how to install and configure Kubernetes clusters, manage workloads, configure networking, secure clusters, and troubleshoot issues. The course covers persistent storage options, monitoring tools, and RBAC configurations. By the end of the course, you will be able to confidently take the CKA exam and demonstrate your Kubernetes administration expertise.
Understanding Kubernetes Architecture
Kubernetes is a powerful container orchestration platform. At its core, it follows a client-server model. The control plane manages the cluster, while worker nodes run the actual applications inside containers.
The control plane consists of multiple components including the API Server, Controller Manager, Scheduler, and etcd. These components together ensure the health, scalability, and automation of the Kubernetes environment.
The worker nodes run the kubelet, which communicates with the control plane. They also include the container runtime (like containerd or Docker) and kube-proxy for managing network rules.
The Role of etcd
etcd is a distributed key-value store. It holds all the configuration data and state of the cluster. It’s vital for Kubernetes' operation because it stores everything from pod definitions to configuration changes.
Loss of etcd data can cripple the cluster, so backup strategies are essential. You will learn how to secure and maintain etcd as part of your training.
Communication Between Components
The API Server is the main communication hub. All kubectl commands go through it. Other components, like the Scheduler or Controller Manager, use the API server to read and write the cluster state.
The kubelet on each worker node also connects to the API Server to receive instructions and report back the node’s status.
Kubernetes Installation Options
You can install Kubernetes in several ways. This course focuses on kubeadm, one of the most widely used tools for bootstrapping self-managed clusters in real-world environments.
With kubeadm, you can initialize the control plane and then join worker nodes to the cluster. It’s a straightforward, modular tool that abstracts away much of the complexity.
Other installation options include Minikube, kind (Kubernetes in Docker), and managed Kubernetes services like EKS, GKE, and AKS. These are also useful for testing or learning, especially when full infrastructure setup isn't feasible.
Installing Kubernetes with kubeadm
Before using kubeadm, certain system prerequisites must be in place. These include disabling swap memory, setting up proper hostnames, and installing container runtimes.
You will configure networking between the nodes, open the required ports, and ensure that your firewall settings don’t block communication. kubeadm will then be used to initialize the control plane, and you’ll receive a command to join the worker nodes.
Once the cluster is up, you’ll use kubectl to interact with it. You’ll also configure a pod network so containers can communicate across nodes.
Choosing a Container Runtime
Kubernetes supports multiple container runtimes; the most commonly used are containerd, Docker, and CRI-O. Since dockershim (Kubernetes' built-in Docker integration) was deprecated in 1.20 and removed in 1.24, containerd has become the standard choice.
You’ll learn how to install containerd, configure it, and ensure it integrates properly with kubelet. Proper runtime configuration is critical for stability and performance.
Networking in Kubernetes
Kubernetes requires a networking layer that allows all pods to communicate with each other across nodes. Several CNI (Container Network Interface) plugins are available for this purpose.
You’ll explore plugins like Calico, Flannel, and Weave. Each has its advantages, and you’ll practice installing and configuring at least one of them to provide full network connectivity in your cluster.
The course explains key networking concepts including pod-to-pod communication, DNS resolution within the cluster, and service discovery.
Setting Up Cluster DNS
DNS is a vital service in Kubernetes. CoreDNS is the default DNS service used for service discovery. When you deploy an application, Kubernetes automatically assigns it a DNS entry.
You’ll learn how to verify DNS is working, troubleshoot name resolution issues, and use kubectl exec to test DNS queries from within pods.
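The DNS checks described above can be sketched with a throwaway debug Pod; the name `dns-test` and the image tag are illustrative choices:

```yaml
# Illustrative debug Pod for exercising in-cluster DNS resolution.
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: dns-test
    image: busybox:1.36          # small image that ships nslookup
    command: ["sleep", "3600"]   # keep the Pod alive for interactive use
  restartPolicy: Never
```

With the Pod running, `kubectl exec dns-test -- nslookup kubernetes.default` should resolve the API server's Service IP if CoreDNS is healthy.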
Configuring the kubeconfig File
kubeconfig files allow users to connect to a Kubernetes cluster. When you install Kubernetes, kubeadm generates a kubeconfig file for the admin user.
You’ll learn where this file is located, how to use it with kubectl, and how to configure access for additional users or service accounts.
Understanding kubeconfig is essential when managing multiple clusters or switching between cloud and local environments.
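The overall shape of a kubeconfig file looks like the sketch below; the cluster name, server URL, and credential paths are illustrative (kubeadm writes the admin file to /etc/kubernetes/admin.conf, and kubectl defaults to ~/.kube/config):

```yaml
# Skeleton kubeconfig: clusters, users, and contexts tying them together.
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://10.0.0.10:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/admin.crt
    client-key: /etc/kubernetes/pki/admin.key
contexts:
- name: admin@my-cluster
  context:
    cluster: my-cluster
    user: admin
current-context: admin@my-cluster
```

Switch between clusters with `kubectl config use-context <name>` and inspect the active file with `kubectl config view`.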
Verifying Cluster Health
Once Kubernetes is installed, you’ll need to verify that all components are running correctly. You’ll use commands like kubectl get nodes, kubectl get pods -n kube-system, and kubectl describe to gather cluster information.
You’ll also learn how to inspect logs from key components, check etcd health, and confirm that your networking layer is functioning properly.
Adding Worker Nodes
After setting up the control plane, you’ll add worker nodes using the token generated by kubeadm. This token is only valid for a limited time, but you’ll also learn how to regenerate it.
Once joined, the worker nodes become part of the cluster and can start accepting pods. You’ll validate node status and ensure kubelet and container runtimes are working as expected.
Post-Installation Steps
After installation, some basic housekeeping is necessary. You’ll label nodes, taint nodes for specific workloads, and set resource limits.
You’ll also install useful tools like metrics-server for resource monitoring and dashboard for visualizing the cluster.
Proper configuration after installation ensures that the cluster is secure, efficient, and ready for real-world workloads.
What are Workloads in Kubernetes
A workload in Kubernetes is any deployment that runs one or more containers via Pods. Workloads are higher-level objects that manage Pods on your behalf, ensuring your applications run as expected even when failures happen. Knowing the different workload types is essential for operating, scaling, maintaining, and recovering your Kubernetes environment. Workloads include objects like Deployments, ReplicaSets, StatefulSets, DaemonSets, Jobs, and CronJobs. Each has its own use case and properties: some are continuous, long-running services; others are ephemeral or scheduled tasks.
Pods as Fundamental Unit
A Pod is the smallest deployable unit in Kubernetes. It encapsulates one or more containers that share a network namespace and storage volumes. Pods are ephemeral; they come and go, and workload controllers manage them for scaling, self-healing, and replication. You'll often interact with Pods via kubectl run, kubectl get pods, and kubectl describe pod. Pods can also consume configuration objects like ConfigMaps or Secrets via environment variables or mounted volumes. Understanding how to manage a Pod spec is vital for many exam questions.
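A minimal Pod spec of the kind the exam asks for repeatedly might look like this (the name, labels, and image are illustrative):

```yaml
# Minimal single-container Pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
```

In practice you can generate such skeletons quickly with `kubectl run nginx-pod --image=nginx:1.25 --dry-run=client -o yaml`.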
ReplicaSet: Maintaining Desired State of Pods
ReplicaSet ensures a specified number of Pod replicas are running. If a Pod dies, the ReplicaSet brings up a new one. ReplicaSets are almost always managed through Deployments rather than created directly for production. Knowing ReplicaSet behavior helps understand how rolling updates, scaling, and rollbacks work. ReplicaSet selector matching is strict. If labels do not match, Pods may not be recognized. Updating labels or selectors incorrectly can lead to orphaned Pods.
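The strict selector matching described above is easiest to see in a spec; names and labels here are illustrative:

```yaml
# ReplicaSet keeping three replicas; the selector MUST match the template labels.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web        # must match...
  template:
    metadata:
      labels:
        app: web      # ...these labels, or the API server rejects the spec
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If you later relabel running Pods so they no longer match `app: web`, the ReplicaSet stops counting them and spins up replacements, which is how orphaned Pods arise.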
Deployments: Declarative Updates, Rollouts, Rollbacks
Deployments sit on top of ReplicaSets. A Deployment defines the desired state for Pods (such as replica count, container version, and resource requests/limits). When you change the Deployment spec, Kubernetes creates a new ReplicaSet and gradually shifts Pods from the old one to the new one; this is a rolling update. If something goes wrong, you can roll back to a previous revision. You will practice commands like kubectl rollout status, kubectl rollout history, and kubectl rollout undo, and configure parameters like maxSurge and maxUnavailable to control how aggressively Pods are replaced.
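A Deployment with an explicit rolling-update strategy can be sketched as follows; the name, image, and strategy values are illustrative:

```yaml
# Deployment replacing Pods gradually during a rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod above desired count
      maxUnavailable: 1    # at most one Pod below desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

After changing the image, watch the rollout with `kubectl rollout status deployment/web` and revert with `kubectl rollout undo deployment/web` if needed.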
StatefulSets: Stateful Applications
StatefulSets manage workloads that require stable, unique identities and stable storage, making them appropriate for databases, key-value stores, and messaging systems. Pods in a StatefulSet have stable network identities and stable storage via PersistentVolumeClaims, and are created and destroyed in order; scaling and rolling updates also proceed in sequence. You'll practice writing StatefulSet specs with volumeClaimTemplates, setting up headless Services for stable DNS, verifying PVCs are bound correctly, and ensuring the storageClass supports the required access modes.
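The headless Service plus volumeClaimTemplates combination can be sketched like this; the names, image, and storage size are illustrative, and the storageClass depends on your cluster:

```yaml
# Headless Service giving each StatefulSet Pod a stable DNS name.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None          # headless
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # must reference the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # one PVC per Pod, created in order
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each Pod then gets a stable name like db-0.db.&lt;namespace&gt;.svc.cluster.local, and its PVC survives Pod rescheduling.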
DaemonSets: One Pod per Node or Subset of Nodes
A DaemonSet ensures that one copy of a Pod is running on all (or some) nodes. As new nodes join, the DaemonSet adds Pods; when nodes leave, it removes them. DaemonSets are often used for logging agents, monitoring agents, or infrastructure services. You'll learn how to limit which nodes a DaemonSet runs on using node labels, taints and tolerations, and node affinity, and how a DaemonSet behaves under node failure or on tainted nodes.
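Restricting a DaemonSet to a subset of nodes might look like the sketch below; the `role: logging` label and the agent image are illustrative:

```yaml
# DaemonSet running one logging agent per selected node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        role: logging            # only nodes carrying this label
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule       # also run on tainted control-plane nodes
      containers:
      - name: agent
        image: fluent/fluent-bit:2.2
```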
Jobs and CronJobs: Batch and Scheduled Tasks
Jobs run Pods to completion. Once the specified number of successful completions is reached, the Job is done. Jobs are useful for one-off tasks like migrations, data processing, batch workloads. Deploying Jobs involves specifying completions, parallelism, backoffLimit. You’ll also learn how to clean up finished Jobs. CronJobs schedule Jobs to run at set intervals using cron syntax. You’ll build CronJob specs to perform tasks like backups or reports. You’ll test schedules, past run histories, concurrency policies (e.g. whether overlapping jobs are allowed).
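A CronJob pulling these fields together could be sketched as follows; the schedule, name, and command are illustrative:

```yaml
# CronJob running a nightly task with retries and no overlapping runs.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"            # every day at 02:00
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  jobTemplate:
    spec:
      backoffLimit: 3              # retry a failing Pod up to three times
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox:1.36
            command: ["sh", "-c", "echo backing up"]
```

Past runs show up as Jobs (`kubectl get jobs`), and successfulJobsHistoryLimit / failedJobsHistoryLimit control how many are kept.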
ConfigMaps and Secrets: Configuration Injection
Many workloads need configuration that changes across environments. ConfigMaps inject non-sensitive configuration via environment variables or mounted volumes; Secrets let sensitive data (passwords, tokens) be handled separately. You'll create, consume, and update both in Pods. Understand the differences, such as how Kubernetes stores Secrets (base64-encoded, not encrypted by default), how to mount them, and how to avoid accidental exposure. You will see how Deployments, StatefulSets, and other workloads use envFrom, individual env variables drawn from a ConfigMap or Secret, or mounted volumes.
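The envFrom pattern can be sketched as below; every name and value is illustrative:

```yaml
# ConfigMap and Secret consumed wholesale by a Pod via envFrom.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: production
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:                 # stringData avoids hand-encoding base64
  DB_PASSWORD: changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secret
```

Inside the container, APP_MODE and DB_PASSWORD appear as ordinary environment variables; mounting the same objects as volumes instead yields one file per key.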
Scheduling: Affinity, Taints, Tolerations, NodeSelector
Scheduling controls where Pods run. NodeSelector allows matching nodes by labels. Node affinity is more expressive allowing preferred or required placements. Taints and tolerations allow excluding Pods from nodes unless they tolerate specific taints. This is needed for controlling workloads in large clusters with special-purpose nodes. You’ll use scheduling constraints in workload specs. You’ll practice combining topology (e.g. availability zones) with affinity or anti-affinity rules so Pods spread across nodes for high availability.
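Combining these constraints in one Pod spec might look like this; the `gpu` label and `dedicated` taint key are illustrative:

```yaml
# Pod requiring nodes labelled gpu=true and tolerating a dedicated-node taint.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: task
    image: busybox:1.36
    command: ["sleep", "3600"]
```

The affinity restricts where the Pod may go; the toleration merely permits it onto tainted nodes, so the two are usually paired for special-purpose node pools.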
Pod Resource Requests, Limits, and QoS Classes
Workloads should define resource requests (the minimum guaranteed) and limits (the maximum allowed) for CPU and memory. The Kubernetes scheduler uses requests to place Pods, while limits are enforced at runtime. QoS classes (Guaranteed, Burstable, BestEffort) follow from how these are set, and they influence performance, scheduling, and eviction. You'll write workload specs specifying requests and limits and observe how Pods behave under resource pressure.
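The QoS rules are concrete in a spec: setting requests equal to limits for every resource yields the Guaranteed class (values below are illustrative):

```yaml
# Requests equal to limits for every container resource => Guaranteed QoS.
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "250m"      # equal to the request
        memory: "128Mi"  # equal to the request
```

Setting requests below limits makes the Pod Burstable, and omitting both makes it BestEffort, the first to be evicted under node pressure.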
Scaling: Manual, Replica-based, and Autoscaling
Manual scaling means editing the spec or using kubectl scale against a Deployment or StatefulSet. The Horizontal Pod Autoscaler (HPA) adds automatic scaling based on CPU, memory, or custom metrics. You will practice configuring the HPA and metrics-server and verifying scaling behavior, and you'll also explore vertical scaling concepts and limits.
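An HPA targeting average CPU utilization can be sketched as follows; the target Deployment name and thresholds are illustrative, and metrics-server must be installed for it to act:

```yaml
# HPA scaling a Deployment between 2 and 10 replicas on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% of requested CPU
```

Note that utilization is measured against the Pods' CPU requests, so the HPA only works when requests are set on the target workload.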
Rolling Update, Recreate, and Deployment Strategies
Deployment spec includes strategy: RollingUpdate vs Recreate. RollingUpdate replaces pods gradually (controlled via maxSurge & maxUnavailable). Recreate kills all existing pods before starting new ones. For stateful applications, sometimes different strategy needed. You’ll try both strategies to see behavior under load, ensure minimal downtime, understand trade-offs.
Lifecycle Hooks, Probes, Readiness, and Liveness
Though they belong more to workload reliability, you should know readiness and liveness probes: they help ensure a container only receives traffic when it is ready, and that Kubernetes restarts it when it is unhealthy. Lifecycle hooks like postStart and preStop run scripts or commands when a container starts or stops. You'll embed probes in a spec and deliberately cause failures to observe how Kubernetes restarts Pods or withholds traffic.
Introduction to Kubernetes Networking
Kubernetes networking connects Pods, Services, external clients and ensures communication is reliable, performant, and secure. You will learn how Service objects expose pods, how DNS works in the cluster, how ingress and ingress controllers route HTTP/S, and how network policies restrict traffic for security.
Why Networking is Critical for CKA
Networking is one of the major domains in the CKA exam. You must understand service types, service discovery, how to expose applications, how to secure traffic between pods, and how to debug networking issues. Many real-world failures are due to misconfigured services, mislabelled selectors, wrong port settings, lacking ingress rules, or insecure network policy gaps.
Service Abstraction: What It Does
A Service in Kubernetes provides a stable endpoint (IP, DNS name) to a dynamic set of Pods. Pods are ephemeral; their IPs change. Services solve this by using selectors to group Pods. Then clients inside the cluster or outside (via other constructs) can use a Service name or IP instead of tracking pod IPs.
Types of Services
ClusterIP is the default: it exposes the Service on a cluster-internal IP, reachable only from inside the cluster. NodePort exposes the Service on a static port on each node's IP, so external traffic can hit <NodeIP>:<NodePort> to reach it. A LoadBalancer Service creates an external load balancer, provided by a cloud provider (or an external solution), that routes traffic to the NodePorts/backends. ExternalName maps a Service to an external domain via a DNS CNAME. For environments without cloud load-balancer support, MetalLB or an equivalent can provide LoadBalancer Services.
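A NodePort Service tying these fields together might look like this; the name, labels, and port numbers are illustrative:

```yaml
# NodePort Service exposing Pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # the Service's own (ClusterIP) port
    targetPort: 8080  # the container port in the Pods
    nodePort: 30080   # must fall in the default 30000-32767 range
```

Dropping `type: NodePort` leaves a plain ClusterIP Service reachable only inside the cluster.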
Service Selectors, Endpoints, and Labels
A Service spec includes a selector block that matches labels on Pods. If Pods carry matching labels, the endpoints for the Service are populated automatically; if no Pods match, the endpoints list is empty and the Service is non-functional (no traffic). It's crucial to check that Pod labels match the Service selector and that port and targetPort line up with the container ports.
Port, targetPort, protocol, and port naming
The Service spec defines port and targetPort: port is the port the Service listens on, and targetPort is the container's port in the Pods. A protocol (TCP or UDP) may be specified. Container ports can also be named, and a Service's targetPort (or an Ingress backend port) can reference that name instead of a number. Always check that these values are consistent.
Internal vs External Traffic, ExternalTrafficPolicy
Services can route external traffic (NodePort, LoadBalancer) and Kubernetes supports policy to preserve source IP or change it. ExternalTrafficPolicy controls behavior: when set to “Local”, traffic goes only to pods on that node, and the source IP is preserved; when “Cluster”, traffic is allowed across nodes, but source IP may be changed. Understanding this is important for troubleshooting and for latency / source IP use cases.
DNS and Service Discovery (CoreDNS)
Once Services are created, Kubernetes automatically registers them in DNS (usually via CoreDNS). A Service gets a DNS name like <service-name>.<namespace>.svc.cluster.local, and Pods can communicate using DNS names; reaching a Service in a different namespace requires including the namespace (and optionally the full cluster domain) in the name. CoreDNS is configured via a ConfigMap. You should know how to view its Pods and configuration and how to modify or debug DNS issues (for example when names aren't resolving or Pods can't reach Services by name).
Types of DNS Issues and Debugging
Common DNS issues include misconfigured service names, the wrong namespace, clients querying a name that has no record, Pods unable to reach the CoreDNS Pods, network policies blocking DNS, CoreDNS Pods not ready or failing, and named-port mismatches. You should know commands like kubectl get svc, kubectl get endpoints, kubectl get pods -n kube-system, and kubectl exec into a Pod to test DNS resolution (e.g. with nslookup or dig).
Network Communication inside Cluster
Pod-to-Pod communication must work across nodes. This typically uses Container Network Interface (CNI) plugin. The CNI plugin implements how pods connect, how container networking operates, how addresses are assigned, how overlay or routing works. You should know the basic flow of traffic: pods talking directly via pod IPs, services mapping via kube-proxy (iptables or IPVS), how traffic flows via Service virtual IP (ClusterIP) and how kube-proxy handles endpoints.
Kube-proxy Modes and Their Impacts
kube-proxy has modes such as iptables or IPVS. These control how service traffic is load balanced to endpoints. The performance differences are important. IPVS often gives better scaling. You should be able to check which mode your cluster is using, and know how service type and endpoints map to kube-proxy rules.
Ingress & Ingress Controllers: Layer 7 Routing
Ingress gives HTTP/S routing from outside into the cluster. You define rules based on host, path. An Ingress resource needs an Ingress Controller deployed to implement it (NGINX, Traefik, HAProxy, etc.). IngressClass may be used so different ingress controllers can coexist. Ingress allows TLS termination, rewrite rules, redirect, routing based on hostname or path.
Deploying an Ingress Controller
Before an Ingress resource works, you must install the controller, for example the NGINX ingress controller via Kubernetes manifests or Helm. Ensure RBAC permissions and a load balancer or NodePorts are set up. Once the controller is working you can create Ingress objects. Verify the controller's logs and status, and ensure that backend Services are reachable and ready.
Ingress Resource Syntax, Path Types, Annotations
Ingress resource spec has rules: host, paths, pathType (Prefix, Exact), backend service name and port. Annotations allow modifying behavior: rewrite-target, cert-manager annotations, redirect, proxy timeouts etc. Understanding these is vital because many exam questions include path-based routing, TLS, and specific annotations.
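A path-based Ingress exercising these fields might look like the sketch below; the host, class name, backend Service names, and the NGINX-specific rewrite annotation are all illustrative:

```yaml
# Ingress routing / to a frontend Service and /api to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # NGINX-controller specific
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 8080
```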
TLS & SSL in Ingress
Ingress supports TLS termination. You define a tls: block in the ingress spec giving hosts and secretName. Then you create a TLS secret with proper certificates (or self-signed in labs). Knowing how to apply TLS, what issues happen (wrong secret, wrong namespace, missing host match) matters. Also verifying that Ingress controller picks up TLS, mapping port, and that controller certificate paths are correct.
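Wiring a TLS secret into an Ingress can be sketched as follows; the names and host are illustrative, and the certificate data is a placeholder (in practice you'd run `kubectl create secret tls web-tls --cert=... --key=...`):

```yaml
# TLS secret plus the tls: block that references it.
apiVersion: v1
kind: Secret
metadata:
  name: web-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tls
spec:
  tls:
  - hosts:
    - app.example.com      # must match a host in the rules below
    secretName: web-tls    # must live in the same namespace as the Ingress
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
```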
Service Mesh vs Ingress vs Service Types
While Service Mesh (Istio, Linkerd) is outside main scope for many parts of CKA, knowing differences helps in larger architectures. But focus on plain Kubernetes constructs: services, ingress, network policies. Know that Ingress is a L7 router, Service types like NodePort/LoadBalancer are L4, etc.
Network Policies: Purpose & Basics
By default all pods can communicate with all other pods (flat networking). Network Policies allow restricting traffic: ingress, egress, or both. They are namespace-scoped objects. Enforcement depends on CNI plugin (e.g. Calico, Cilium, Antrea). If CNI does not support policies, they may be defined but not enforced.
Network Policy Objects: Key Fields
NetworkPolicy spec includes podSelector to select which pods the policy applies to. policyTypes: Ingress, Egress or both. ingress rules: from which pods/namespaces/IPBlocks, and ports/protocols. egress rules likewise: to which pods/namespaces/IPBlocks, ports/protocols. It may include namespaceSelector, podSelector, ipBlock. If you specify only ingress type, egress is not restricted, and vice versa.
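A policy combining these fields might look like this; the labels and port are illustrative:

```yaml
# Only Pods labelled app=api may reach app=db Pods on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels:
      app: db              # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432
```

Because policyTypes lists only Ingress, egress from the database Pods remains unrestricted.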
Crafting Default Deny Policies
Often you define “deny by default” policies. For example, one that denies all ingress traffic in a namespace by selecting all pods (empty podSelector) and having no ingress rules. Then define exceptions. Similarly for egress. This is important for hardening cluster or zero-trust scenarios.
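The default-deny pattern is remarkably small: an empty podSelector matches every Pod in the namespace, and listing Ingress with no rules allows nothing in.

```yaml
# Deny all ingress traffic to every Pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Additional allow policies are then layered on top; policies are additive, so any matching allow rule permits the traffic.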
Using NamespaceSelector, PodSelector, and IPBlock in Policies
You can combine selectors: select Pods in specific namespaces, or allow IP blocks (CIDR ranges) with optional exclusions. You must understand the syntax and semantics: namespaceSelector selects namespaces by label (optionally combined with a podSelector to filter Pods within them), a podSelector alone filters within the policy's own namespace, and ipBlock allows specific ranges. Note the limitations: within a single rule element you use either ipBlock or the selectors, not both, and some behavior varies per CNI.
Common Network Policy Examples
Allow only frontend pods to talk to backend pods on port X; deny all ingress from outside except from certain pods; restrict database pods to only accept traffic from API pods; allow DNS traffic (since CoreDNS needs port 53) across namespace/pods. Always remember to allow DNS so that name resolution works.
Service and Networking Practical Exercises
You will deploy a sample multi-tier application: frontend, backend, database. You will create ClusterIP services for backend and database, expose frontend via NodePort or LoadBalancer. You will set up an Ingress to route external host/domain to frontend. You will test host-based and path-based routing, TLS termination via secret. Then write network policies to restrict traffic: block all ingress to database except from backend, block external access to backend except via ingress/NodePort, allow DNS etc. Then simulate misconfigurations: wrong selectors, missing endpoints, no ingress class, pathType mismatch.
Troubleshooting Service & Networking Issues
If a Service returns no endpoints check selectors and pod labels. If DNS is failing check CoreDNS pods, logs, configuration. If Ingress is not routing traffic check Ingress controller is running, service backend ports are correct, service itself is exposed and healthy. If network policies block traffic inadvertently check the policy spec, namespace selectors, podSelectors, missing allowed ports (e.g. forgot port 53 for DNS). Check CNI plugin whether it supports network policies.
Security Best Practices in Services & Networking
Use minimal exposure. Prefer ClusterIP internally. Use NodePort or LoadBalancer only when necessary. Secure Ingress controllers: only open necessary ports, enable TLS. Avoid wildcard hosts unless needed. Use annotations to enforce redirect to HTTPS. Use network policies to limit lateral movement. Always allow DNS traffic in policies. Isolate namespaces. Use labels properly and consistently.
Kubernetes Gateway API (Brief Overview)
While the Gateway API is not always required for the core CKA exam, recent Kubernetes versions introduce it as a more flexible alternative to Ingress; the focus for now remains on stable Ingress controllers and Ingress resources. Knowing that the newer API exists may help if an exam scenario references advanced setups.
Key terms to review: Service types (ClusterIP, NodePort, LoadBalancer, ExternalName); Ingress, IngressClass, pathType, host, rules, and annotations; externalTrafficPolicy, kube-proxy, CNI plugins, EndpointSlices, DNS name resolution, default backend, TLS secrets, and path rewriting; NetworkPolicy, podSelector, namespaceSelector, ipBlock, policyTypes, and ingress and egress rules.
Prepaway's CKA: Certified Kubernetes Administrator video training course for passing certification exams is the only solution you need.
Pass CNCF CKA Exam in First Attempt Guaranteed!
Get 100% Latest Exam Questions, Accurate & Verified Answers As Seen in the Actual Exam!
30 Days Free Updates, Instant Download!
CKA Premium Bundle
- Premium File 23 Questions & Answers. Last update: Oct 19, 2025
- Training Course 138 Video Lectures
- Study Guide 268 Pages