Amazon AWS Certified Advanced Networking - Specialty ANS-C01 Exam Dumps & Practice Test Questions
Question No 1:
A company operates a high-availability application on Amazon EC2 instances distributed across multiple Availability Zones (AZs) to ensure fault tolerance. The application is behind a Network Load Balancer (NLB), which initially handled traffic correctly when instances were only in one AZ.
To improve availability, the Solutions Architect deployed additional EC2 instances in a second AZ and added them to the existing target group linked to the NLB. However, the Operations Team observed that traffic continues to be routed only to the instances in the original AZ, and the new instances in the second AZ are not receiving any traffic.
What is the most operationally efficient solution to ensure traffic is properly distributed across both Availability Zones?
A Enable the new Availability Zone on the NLB
B Create a new NLB for the instances in the second Availability Zone
C Enable proxy protocol on the NLB
D Create a new target group with the instances in both Availability Zones
Correct Answer: A
Explanation:
In AWS, when using a Network Load Balancer (NLB), simply adding instances from a new Availability Zone to a target group does not allow traffic distribution to that zone unless the NLB itself has that zone enabled.
To distribute traffic to the new EC2 instances in the second AZ, the NLB must explicitly be configured to include that AZ. This enables AWS to provision NLB nodes in the new zone and allows them to route traffic to targets located there.
This solution is operationally efficient because it uses the current NLB configuration and requires only a small update through the AWS console, CLI, or automation tools. It avoids unnecessary complexity or infrastructure duplication.
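As a concrete illustration, enabling the second AZ amounts to a single SetSubnets call on the Elastic Load Balancing v2 API. The ARN and subnet IDs below are placeholders, and the sketch only builds the call's parameters; with credentials configured it would be passed to boto3.

```python
# Sketch: enable a second AZ on an existing NLB by adding its subnet.
# The load balancer ARN and subnet IDs are placeholders, not real resources.

def build_set_subnets_params(nlb_arn, subnet_ids):
    """Build the parameters for the elbv2 SetSubnets API call.

    The call takes the full desired set of subnets (one per AZ) and
    replaces the load balancer's current AZ configuration with it.
    """
    return {
        "LoadBalancerArn": nlb_arn,
        "Subnets": subnet_ids,
    }

params = build_set_subnets_params(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123",
    ["subnet-aaaa1111", "subnet-bbbb2222"],  # original AZ + new AZ
)

# With credentials configured, the call itself would be:
#   import boto3
#   boto3.client("elbv2").set_subnets(**params)
print(params["Subnets"])
```

Once the call succeeds, AWS provisions an NLB node in the new AZ and health checks begin against the targets there.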
B would create an entirely separate NLB, increasing overhead and complicating management.
C is unrelated to traffic distribution—it only affects source IP visibility.
D does not resolve the issue because traffic won't reach the new instances unless the NLB supports the AZ they're in.
By enabling the new AZ on the NLB, the system will achieve true multi-AZ load balancing, improving both availability and performance.
Question No 2:
A network engineer is tasked with deploying a Linux-based network appliance on Amazon EC2 in an Auto Scaling group for high availability. The appliance requires two Elastic Network Interfaces (ENIs):
A primary ENI in a private subnet to manage internal traffic
A secondary ENI for external internet traffic, using a specific Elastic IP (EIP) from a BYOIP pool
The engineer is preparing a launch template to automate instance provisioning.
Which approach properly supports the design, maintains high availability, and configures networking correctly?
A Configure both network interfaces in the launch template. Assign the primary ENI to a private subnet and the secondary ENI to a public subnet. Use the BYOIP pool for the secondary ENI.
B Configure only the primary ENI in the launch template for a private subnet. Use a user data script to attach the secondary ENI in a public subnet with auto-assign public IP enabled.
C Use an AWS Lambda function triggered by a lifecycle hook in the Auto Scaling group to attach a network interface to an AWS Global Accelerator endpoint.
D Define the primary ENI in the launch template for a private subnet. In the user data script, allocate and attach a secondary ENI and associate an Elastic IP from the BYOIP pool.
Correct Answer: D
Explanation:
In an Auto Scaling group, the launch template can define only one network interface at launch—typically eth0. To use a second ENI, it must be created and attached after the instance is running. This can be automated using a script in the user data section of the launch template.
D is correct because it uses a user data script to:
Dynamically create or attach the secondary ENI in the correct subnet
Associate the required Elastic IP from the BYOIP pool to the secondary ENI
This approach ensures compatibility with Auto Scaling and maintains high availability without requiring manual steps or additional resources.
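A minimal sketch of what such a user data script could contain, rendered here from Python so the pieces are visible. The subnet, security group, and BYOIP allocation IDs are placeholders; in practice the instance would need an IAM role permitting these EC2 API calls.

```python
# Sketch: render a user-data script that creates and attaches a secondary
# ENI in a public subnet, then associates a BYOIP Elastic IP with it.
# All resource IDs below are illustrative placeholders.

def render_user_data(public_subnet_id, sg_id, byoip_allocation_id):
    return f"""#!/bin/bash
# Discover this instance's ID via IMDSv2
TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \\
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \\
  http://169.254.169.254/latest/meta-data/instance-id)

# Create the secondary ENI in the public subnet
ENI_ID=$(aws ec2 create-network-interface --subnet-id {public_subnet_id} \\
  --groups {sg_id} --query NetworkInterface.NetworkInterfaceId --output text)

# Attach it as device index 1 (eth1)
aws ec2 attach-network-interface --network-interface-id $ENI_ID \\
  --instance-id $INSTANCE_ID --device-index 1

# Associate the Elastic IP from the BYOIP pool with the new ENI
aws ec2 associate-address --allocation-id {byoip_allocation_id} \\
  --network-interface-id $ENI_ID
"""

script = render_user_data("subnet-0pub12345", "sg-0abc12345", "eipalloc-0byoip123")
```

Because the script runs on every launch, each instance the Auto Scaling group brings up configures its own secondary interface without manual steps.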
A is incorrect since launch templates for Auto Scaling do not support multiple ENIs at launch.
B uses auto-assign public IPs, which are managed by AWS and not compatible with BYOIP.
C is not applicable, as AWS Global Accelerator does not facilitate ENI or EIP assignments directly.
Option D provides the necessary automation and network configuration to support scaling while maintaining correct IP address assignments and interface roles.
Question No 3:
A company hosts several publicly accessible applications under a shared domain name (such as example.com) and uses Amazon Route 53 as the DNS provider. These applications follow a three-tier architecture within the AWS Cloud:
The frontend tier runs on EC2 instances in public subnets using Elastic IPs
The application and database tiers reside in private subnets within the same VPC and use RFC1918 private IP addresses
A network engineer is currently developing a new version of one application. For internal communication between application components, the engineer needs internal systems to resolve DNS names like app.example.com just as external users do. The solution must also allow DNS updates to occur without significant manual intervention.
Which three actions should the engineer take to meet the DNS requirements for both internal and external access?
A Add a geoproximity routing policy in Route 53
B Create a Route 53 private hosted zone for the same domain name and associate it with the VPC
C Enable DNS hostnames for the application's VPC
D Create records in the private hosted zone that map each hostname to the corresponding private IP address
E Create an EventBridge rule and Lambda function to sync changes from public to private hosted zone
F Add private IP addresses to records in the public hosted zone
Correct Answers: B, C, D
Explanation:
To support internal and external DNS resolution using the same domain names (e.g., app.example.com), a split-horizon DNS setup is needed. This allows internal AWS resources to resolve names differently than external users.
B Create a Route 53 private hosted zone:
This allows AWS services inside the VPC to resolve domain names to private IPs without affecting the public DNS resolution seen by external users. Associating the private hosted zone with the VPC ensures only internal services use it.
C Enable DNS hostnames for the VPC:
This setting allows EC2 instances and other AWS services within the VPC to resolve and use internal hostnames effectively. Together with DNS resolution (enabled by default on most VPCs), it is a prerequisite for Route 53 private hosted zones to function.

D Create records in the private hosted zone:
Adding DNS records that point to the internal private IPs ensures internal traffic resolves to local, non-public addresses while using the same domain names as public traffic.
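The resulting split-horizon behavior can be modeled in a few lines. The zone data below is made up for illustration; the point is that the same name yields a private answer inside the VPC and a public answer everywhere else.

```python
# Toy model of split-horizon DNS: the same name resolves differently
# depending on whether the query originates inside the VPC (private
# hosted zone) or from the internet (public hosted zone).
# All zone data is illustrative.

PUBLIC_ZONE = {"app.example.com": "203.0.113.10"}   # Elastic IP (public)
PRIVATE_ZONE = {"app.example.com": "10.0.2.25"}     # RFC1918 address

def resolve(name, from_inside_vpc):
    """Return the answer a client would see for this name."""
    if from_inside_vpc and name in PRIVATE_ZONE:
        # The private hosted zone associated with the VPC takes
        # precedence for queries originating inside the VPC.
        return PRIVATE_ZONE[name]
    return PUBLIC_ZONE[name]

print(resolve("app.example.com", from_inside_vpc=True))   # 10.0.2.25
print(resolve("app.example.com", from_inside_vpc=False))  # 203.0.113.10
```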
The remaining options are not appropriate:
A Geoproximity routing affects traffic direction based on geography but is not related to internal DNS resolution.
E Automating record synchronization between public and private zones via EventBridge and Lambda introduces unnecessary complexity and may cause inconsistencies.
F Including private IP addresses in public DNS records poses security and functionality risks since private IPs are not routable over the internet.
This solution offers a simple, secure, and scalable approach using native AWS features for managing DNS across both internal and external environments.
Question No 4:
A technology company is preparing to deploy a scalable microservices-based application using Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type. The application runs as multiple containers across the ECS cluster. Each container hosts workloads that must communicate securely using SSL (Secure Sockets Layer) connections.
The application must be accessible privately by consumers located in other AWS accounts. These consumers will not access the application publicly over the internet, so a private connectivity mechanism is essential. Additionally, the application must scale efficiently to handle increasing demand as more external AWS accounts start using the service.
Given these requirements:
SSL communication is mandatory.
Private access from other AWS accounts is required.
The solution must support scaling as usage increases.
Which solution best meets the above requirements?
A Use a Gateway Load Balancer (GLB) with a lifecycle hook to add ECS tasks dynamically. Configure VPC peering with other AWS accounts and update routing tables to enable access.
B Use an Application Load Balancer (ALB) with path-based routing. Create a VPC endpoint service for the ALB and share it with other AWS accounts for private access.
C Use an Application Load Balancer (ALB) with path-based routing. Set up VPC peering with other AWS accounts and update route tables to allow access to the ALB.
D Use a Network Load Balancer (NLB) and specify it in the ECS service. Create a VPC endpoint service for the NLB and share it with other AWS accounts for private access.
Correct Answer: D
Explanation:
The correct solution is D: Use a Network Load Balancer (NLB) and expose it through a VPC endpoint service, allowing private access from other AWS accounts.
Here's why:
Private Access Between AWS Accounts:
The requirement is to allow secure, private communication between ECS services and consumers in other AWS accounts. To achieve this, AWS PrivateLink is the most suitable option. PrivateLink allows services to be securely exposed as VPC endpoint services, which can be accessed over a private network without traversing the public internet.
Load Balancer Type:
To use PrivateLink, the load balancer must be a Network Load Balancer (NLB). Unlike Application Load Balancers (ALBs), NLBs are supported for creating VPC endpoint services, which are required for cross-account private connectivity.
Fargate Compatibility & SSL:
NLBs work seamlessly with ECS on Fargate, and they support SSL traffic. SSL termination can happen at the application container level, ensuring encrypted traffic flows end-to-end.
Scaling:
Amazon ECS with Fargate automatically handles task scaling based on the service’s configuration. As demand increases, ECS can launch more tasks, and the NLB automatically distributes incoming traffic to healthy targets.
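A rough sketch of the two API calls involved in exposing the NLB over PrivateLink and granting a consumer account access. The ARNs, service ID, and account ID are placeholders; only the request parameters are built here.

```python
# Sketch: parameters for exposing an NLB as a VPC endpoint service
# (AWS PrivateLink) and for allowing a consumer account to connect.
# All IDs and ARNs are illustrative placeholders.

def endpoint_service_params(nlb_arn, acceptance_required=True):
    """Parameters for ec2 CreateVpcEndpointServiceConfiguration."""
    return {
        "NetworkLoadBalancerArns": [nlb_arn],
        "AcceptanceRequired": acceptance_required,
    }

def allow_consumer_params(service_id, consumer_account_id):
    """Parameters for ec2 ModifyVpcEndpointServicePermissions."""
    return {
        "ServiceId": service_id,
        "AddAllowedPrincipals": [f"arn:aws:iam::{consumer_account_id}:root"],
    }

svc = endpoint_service_params(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/app-nlb/abc"
)
perm = allow_consumer_params("vpce-svc-0123456789abcdef0", "210987654321")

# With credentials, these would be passed to boto3:
#   ec2 = boto3.client("ec2")
#   ec2.create_vpc_endpoint_service_configuration(**svc)
#   ec2.modify_vpc_endpoint_service_permissions(**perm)
```

The consumer account then creates an interface VPC endpoint pointing at the service name, and all traffic stays on the AWS network.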
Other Options:
Option A uses GLB, which is primarily for traffic inspection and not suitable here.
Options B & C use ALBs, which do not support PrivateLink. While they allow path-based routing, they can’t be shared across AWS accounts via VPC endpoint services.
Summary:
Option D offers secure, scalable, and private cross-account connectivity, making it the most appropriate solution based on all technical requirements.
Question No 5:
A network engineer is working to modernize a company’s hybrid IT environment to support IPv6 in preparation for a new application launch. The application will be hosted in a Virtual Private Cloud (VPC) in the AWS Cloud. The company’s cloud infrastructure already includes multiple VPCs interconnected through a transit gateway. These VPCs are also connected to the on-premises network through AWS Direct Connect and AWS Site-to-Site VPN.
The on-premises infrastructure has been upgraded to support IPv6. On the AWS side, IPv6 has been enabled on the VPC by assigning an IPv6 CIDR block, and the subnets are configured to support both IPv4 and IPv6. New EC2 instances running the application have been launched in these IPv6-enabled subnets.
The network engineer needs to meet two important requirements:
No changes should be made to the existing infrastructure, including VPCs, VPNs, and routing configurations.
The new IPv6-enabled instances should not be directly accessible from the internet but must have outbound internet access over IPv6.
Given these requirements, which solution provides the most operational efficiency while meeting all technical constraints?
A. Configure a new VPN connection for IPv6 and configure a new Direct Connect virtual interface for IPv6
B. Update the Direct Connect transit VIF and configure BGP peering with the AWS-assigned IPv6 peering address. Update the existing VPN connection to support IPv6 connectivity. Add an egress-only internet gateway. Update any affected VPC security groups and route tables to provide connectivity within the VPC and between the VPC and the on-premises devices.
C. Create a new IPv6 NAT Gateway to handle the outbound traffic for the new EC2 instances.
D. Create a new VPC for the IPv6-enabled EC2 instances, configure new VPN connections and Direct Connect interfaces, and then migrate the instances to the new VPC.
Correct Answer: B
Explanation:
The most efficient solution updates the existing infrastructure in place rather than creating entirely new components, which best satisfies the requirement to avoid disruptive changes to the VPCs, VPN, and routing configurations.
In Option B, the Direct Connect transit virtual interface (VIF) is updated to support IPv6. This is done by configuring BGP peering with the IPv6 peering address provided by AWS. This allows for seamless IPv6 traffic between AWS and the on-premises network over Direct Connect.
Instead of creating a new VPN connection, the existing Site-to-Site VPN is updated to support IPv6, which is more efficient than deploying a new connection and ensures no major changes to the infrastructure.
To provide outbound internet access for the IPv6-enabled EC2 instances, an egress-only internet gateway (EIGW) is used. The EIGW allows IPv6 traffic to flow outbound to the internet while blocking inbound traffic, ensuring the security requirement of preventing direct internet access to the instances is met.
Lastly, security groups and route tables must be updated to accommodate IPv6 traffic. Security groups should allow outbound IPv6 traffic and block inbound access, while route tables must direct IPv6 traffic to the EIGW for internet access, while maintaining internal routing between the VPCs and on-premises resources.
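The routing piece reduces to a single default IPv6 route pointing at the egress-only internet gateway. The route table and gateway IDs below are placeholders; the sketch only builds the CreateRoute parameters.

```python
# Sketch: the route table entry that sends all outbound IPv6 traffic
# through an egress-only internet gateway. IDs are placeholders.

def eigw_default_route(route_table_id, eigw_id):
    """Parameters for ec2 CreateRoute: default IPv6 route via the EIGW."""
    return {
        "RouteTableId": route_table_id,
        "DestinationIpv6CidrBlock": "::/0",
        "EgressOnlyInternetGatewayId": eigw_id,
    }

route = eigw_default_route("rtb-0abc1234", "eigw-0def5678")

# boto3.client("ec2").create_route(**route) would apply it. Because the
# gateway is egress-only, return traffic for established outbound flows
# is allowed, but unsolicited inbound IPv6 connections are dropped.
```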
The other options are not suitable. Option A suggests creating a new VPN connection and Direct Connect interface, which would involve unnecessary new infrastructure. Option C mentions using a NAT Gateway, but it only supports IPv4 traffic, so it’s not suitable for IPv6. Option D proposes creating a new VPC and migrating instances, which is not necessary and contradicts the requirement to avoid changing existing infrastructure.
Thus, Option B is the most efficient solution that meets all technical requirements.
Question No 6:
A network engineer is tasked with improving the security of encrypted communications on Application Load Balancers (ALBs). The goal is to ensure that the encryption uses unique, randomly generated session keys for each connection.
What configuration change should the engineer make to meet this requirement?
A. Set the ALB security policy to use only the TLS 1.2 protocol
B. Utilize AWS Key Management Service (AWS KMS) for encrypting session keys
C. Attach an AWS WAF Web ACL to the ALBs and implement a rule to enforce forward secrecy (FS)
D. Modify the ALB security policy to enable support for forward secrecy (FS)
Correct Answer: D
Explanation:
Forward Secrecy (FS), also known as Perfect Forward Secrecy (PFS), is an important security feature in SSL/TLS encryption. It ensures that each session uses a unique, randomly generated key, which protects past sessions even if a server’s long-term private key is compromised. This feature is essential to protect the confidentiality of individual sessions and ensure that compromising one session’s key does not affect others.
To enable Forward Secrecy on AWS Application Load Balancers (ALBs), the network engineer should adjust the ALB’s security policy. AWS provides several predefined security policies that include cipher suites supporting ephemeral key exchanges, such as Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) or Diffie-Hellman Ephemeral (DHE). These algorithms generate a unique session key for each connection, ensuring forward secrecy.
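Whether a suite provides forward secrecy can be read off its key-exchange component. A toy Python check, using standard TLS 1.2 suite names chosen for illustration (not the exact contents of any particular ALB policy):

```python
# Toy check: identify which cipher suites in a list provide forward
# secrecy. A suite offers FS when its key exchange is ephemeral
# (ECDHE or DHE). The suite names are standard TLS 1.2 names.

def provides_forward_secrecy(cipher_suite):
    # Ephemeral key exchange (ECDHE/DHE) derives a fresh session key
    # per connection, so past traffic stays confidential even if the
    # server's long-term private key later leaks.
    return cipher_suite.startswith(("ECDHE-", "DHE-"))

suites = [
    "ECDHE-RSA-AES128-GCM-SHA256",   # ephemeral ECDH: FS
    "DHE-RSA-AES256-GCM-SHA384",     # ephemeral DH: FS
    "AES128-GCM-SHA256",             # static RSA key exchange: no FS
]
fs_suites = [s for s in suites if provides_forward_secrecy(s)]
print(fs_suites)
```

Selecting an ALB security policy whose cipher list contains only such ephemeral-key-exchange suites is what enforces FS in practice.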
Option D is the correct choice because modifying the ALB security policy to support Forward Secrecy will configure the ALB to use the appropriate cipher suites that ensure unique session keys are generated for each connection.
Options A and B are incorrect. While TLS 1.2 is secure, it does not automatically guarantee Forward Secrecy unless the appropriate cipher suites are used. AWS KMS (Option B) manages encryption keys but is not used to manage ephemeral session keys for TLS sessions. Option C is also incorrect because AWS WAF is used for filtering HTTP traffic and doesn’t manage encryption or enforce forward secrecy.
Thus, the best configuration to ensure unique, random session keys for encrypted communications on ALBs is to enable Forward Secrecy through the security policy, making Option D the correct solution.
Question No 7:
A global company has successfully deployed a software-defined wide area network (SD-WAN) to ensure secure, optimized, and resilient connectivity between its geographically distributed offices. As part of its cloud adoption strategy, the company is migrating key workloads to Amazon Web Services (AWS). To facilitate secure and manageable connectivity between the AWS cloud and the existing SD-WAN infrastructure, the company plans to use AWS Transit Gateway Connect with two SD-WAN virtual appliances hosted in AWS.
According to company policy, only one SD-WAN appliance should handle traffic from AWS workloads at any given time, creating a primary/standby configuration. The other appliance must remain on standby and only become active if the primary one fails.
The network engineer must now configure routing on AWS Transit Gateway and the SD-WAN appliances to enforce this policy.
What configuration should the network engineer implement to meet this requirement?
A Add a static default route in the transit gateway route table to point to the secondary SD-WAN virtual appliance. Add more specific routes pointing to the primary SD-WAN virtual appliance.
B Configure the BGP community tag 7224:7300 on the primary SD-WAN virtual appliance for BGP routes toward the transit gateway.
C Configure the AS_PATH prepend attribute on the secondary SD-WAN virtual appliance for BGP routes toward the transit gateway.
D Disable equal-cost multi-path (ECMP) routing on the transit gateway for Transit Gateway Connect.
Correct Answer: C
Explanation:
In this scenario, the goal is to use AWS Transit Gateway Connect with two SD-WAN virtual appliances while ensuring that only one appliance handles the traffic at any time. The company has a primary/standby setup for routing, where the primary appliance should handle the traffic under normal conditions, and the secondary appliance should only take over if the primary fails.
To implement this configuration, the network engineer needs to influence the routing decisions in favor of the primary appliance. AWS Transit Gateway Connect uses Border Gateway Protocol (BGP) to exchange routes between the SD-WAN appliances and the transit gateway. Both SD-WAN appliances advertise the same routes, so without additional configuration, traffic could potentially be sent to either appliance.
The AS_PATH prepend attribute is a well-known BGP technique used to make a route less preferred. This is done by artificially increasing the length of the AS path, which causes BGP to prefer a shorter AS path. By applying AS_PATH prepending to the routes advertised by the secondary appliance, the engineer ensures that the transit gateway prefers routes from the primary SD-WAN appliance, as it will have the shorter AS path, unless the primary appliance becomes unavailable.
This configuration ensures seamless failover, making the primary appliance the preferred path for all traffic under normal circumstances, while the secondary appliance is ready to handle traffic if the primary fails.
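The effect of prepending can be shown with a toy model of BGP best-path selection, which prefers the shortest AS_PATH when other attributes are equal. The ASNs are made up for illustration.

```python
# Toy model of BGP best-path selection by AS_PATH length, the attribute
# used here to steer traffic to the primary appliance. ASNs are made up.

def best_path(advertisements):
    """Pick the advertisement with the shortest AS_PATH (all other
    attributes assumed equal, as with two identical SD-WAN peers)."""
    return min(advertisements, key=lambda a: len(a["as_path"]))

primary = {"peer": "primary", "as_path": [64513]}
# The standby prepends its own ASN twice, lengthening its path:
standby = {"peer": "standby", "as_path": [64513, 64513, 64513]}

print(best_path([primary, standby])["peer"])  # primary

# If the primary's BGP session drops, its route is withdrawn and the
# standby's (longer) path becomes the only remaining candidate:
print(best_path([standby])["peer"])  # standby
```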
Let's analyze the other options:
Option A: Static routing is not ideal for dynamic cloud environments like this one. It is not scalable or flexible for handling changing conditions.
Option B: The BGP community tag 7224:7300 is used by AWS for enabling Equal-Cost Multi-Path (ECMP) routing, not for prioritizing one appliance over another.
Option D: Disabling ECMP routing simply stops load-sharing between multiple paths but does not ensure primary/standby behavior. The traffic could still be sent to either appliance based on other factors, not necessarily the desired routing preference.
Therefore, Option C, configuring AS_PATH prepending, is the most effective and compliant way to enforce the primary/standby behavior as required.
Question No 8:
A company is expanding its hybrid cloud infrastructure and wants to establish secure, scalable connectivity between its on-premises data center and multiple VPCs in AWS. Which solution is best suited for this scenario?
A. Use AWS Site-to-Site VPN connections for each VPC separately.
B. Deploy a NAT Gateway in each VPC and route all traffic through the internet.
C. Establish AWS Direct Connect with a Transit Gateway to route between on-premises and VPCs.
D. Set up VPC Peering between the on-premises network and each VPC.
Correct Answer: C
Explanation:
The most scalable and secure solution for connecting a company’s on-premises environment to multiple Amazon VPCs is to establish a Direct Connect connection and integrate it with AWS Transit Gateway. This approach simplifies complex network topologies, improves bandwidth, and ensures high availability and consistent performance.
AWS Direct Connect offers a dedicated, private connection from a customer’s data center to AWS. When used with Transit Gateway, it becomes possible to aggregate multiple VPCs and on-premises networks using a single connection, rather than creating redundant VPNs or peering links for each VPC. This design supports large-scale hybrid architectures, reduces administrative overhead, and centralizes traffic flow through a single routing domain.
Option A, while functional, involves multiple VPN tunnels, which become hard to manage and scale as more VPCs are added. It also offers less consistent performance compared to Direct Connect.
Option B is not appropriate because it sends sensitive traffic over the public internet, which introduces latency, cost, and security risks.
Option D misuses VPC Peering, which does not support transitive routing, meaning on-premises cannot route traffic to other peered VPCs through a single peering link.
Using Direct Connect with Transit Gateway is the recommended approach in most enterprise-level hybrid networking designs, ensuring secure and high-throughput connectivity across a growing cloud footprint.
Question No 9:
An enterprise needs to ensure that its web applications hosted across several AWS Regions are accessible with the lowest latency possible from various global user locations. Which AWS service should be used to meet this requirement?
A. Amazon Route 53 with latency-based routing
B. AWS Transit Gateway with cross-region peering
C. AWS Global Accelerator for TCP acceleration
D. Amazon CloudFront with regional edge caches
Correct Answer: A
Explanation:
To ensure global users connect to the lowest-latency application endpoint, the recommended solution is Amazon Route 53 with latency-based routing. This DNS-based routing policy evaluates latency measurements between the user's location and multiple AWS Regions, then returns the IP address of the Region offering the fastest response time.
This method is ideal for multi-region web applications, where identical copies of a service or application are deployed across several AWS Regions. Route 53 dynamically determines the closest Region and directs users accordingly, improving performance and user experience. Latency-based routing is fully integrated into AWS's infrastructure and is highly reliable for directing traffic intelligently based on geography and network conditions.
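In Route 53, this takes the form of one record per Region sharing a name, each tagged with a SetIdentifier and the Region whose latency is measured. The hosted zone ID and IP addresses below are placeholders; the sketch builds the change batch only.

```python
# Sketch: a latency-based routing group for the same name served from
# two Regions. The hosted zone ID and IPs are illustrative placeholders.

def latency_record(name, region, ip, set_id):
    """One member of a latency-based routing group in Route 53."""
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,   # distinguishes records sharing a name
        "Region": region,          # Region whose latency Route 53 measures
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }

changes = {
    "Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": latency_record(
            "app.example.com", "us-east-1", "198.51.100.10", "use1")},
        {"Action": "UPSERT", "ResourceRecordSet": latency_record(
            "app.example.com", "eu-west-1", "198.51.100.20", "euw1")},
    ]
}

# With credentials, the batch would be submitted as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZPLACEHOLDER123", ChangeBatch=changes)
```

A resolver in Europe would then typically receive the eu-west-1 answer, while one in North America would receive the us-east-1 answer.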
Option B, Transit Gateway, is intended for inter-VPC and hybrid connectivity, not for directing global users to low-latency application endpoints.
Option C, Global Accelerator, improves TCP and UDP performance but routes traffic to endpoints in specific AWS Regions. While it reduces latency through edge network paths, it doesn’t provide DNS-level routing logic across multiple Regions.
Option D, CloudFront, accelerates content delivery, not dynamic application traffic. It's better suited for static files like images or videos, not regional load balancing of web applications.
In conclusion, for multi-region web applications where response time and performance are critical for a globally distributed user base, Amazon Route 53 with latency-based routing is the most effective and scalable solution.
Question No 10:
A company wants to establish secure and reliable communication between its on-premises network and its AWS VPC. The company also requires minimal data transfer costs and high throughput for large-scale data transfer.
Which solution best meets the company's requirements?
A. Set up an AWS Site-to-Site VPN and use VPC Peering for communication.
B. Use AWS Direct Connect with a dedicated link and a Transit Gateway.
C. Establish an AWS Client VPN with a VPN tunnel for on-premises access.
D. Use AWS Internet Gateway with multiple NAT Gateways for traffic routing.
Correct Answer: B
Explanation:
To meet the requirements of secure and reliable communication with minimal data transfer costs and high throughput, the best solution is to use AWS Direct Connect with a dedicated link and integrate it with AWS Transit Gateway. This approach provides a private, high-bandwidth connection between your on-premises network and AWS, ensuring that large-scale data transfers occur with consistent low latency and without the limitations of public internet connections.
AWS Direct Connect provides dedicated, private connectivity, which ensures a more predictable and stable network performance compared to internet-based VPN connections. By using Direct Connect, data transfer costs are significantly reduced because the connection is dedicated and private, avoiding the usage of public internet and the associated costs with VPN connections that run over the internet.
When paired with Transit Gateway, Direct Connect enables centralized management of your network, allowing you to route traffic between multiple VPCs, on-premises networks, and remote locations through a single connection, simplifying the architecture and reducing network complexity. This solution is ideal for large enterprises that need scalable and high-performance network designs across AWS Regions.
Option A would use a VPN connection, which works over the internet and does not offer the high throughput or reliability of Direct Connect. It also incurs higher data transfer costs.
Option C, the AWS Client VPN, is typically used for remote user access, not for large-scale network-to-network connectivity.
Option D, using an Internet Gateway and NAT Gateways, doesn’t offer the high performance or low latency required for large-scale data transfers. It is better suited for internet-bound traffic from private subnets.
In conclusion, AWS Direct Connect combined with Transit Gateway is the optimal choice for enterprises needing a secure, high-performance connection with minimal data transfer costs, especially for large-scale or high-throughput applications.