Amazon AWS-SysOps Exam Dumps & Practice Test Questions

Question 1

You oversee infrastructure for several distinct projects, each operating under its own dedicated AWS account. To optimize cost control, you are responsible for ensuring that spending stays within the defined monthly budgets. You need a proactive system that alerts you before budget limits are exceeded.

What is the most effective strategy to ensure each account stays within its assigned budget?

A. Merge all accounts into a consolidated billing setup to simplify cost tracking
B. Use Auto Scaling and set CloudWatch alarms to monitor and alert when EC2 instance counts increase
C. Set CloudWatch billing alerts on specific tagged resources to warn when individual resource budgets are met
D. Create CloudWatch billing alerts at the account level to notify when 50%, 80%, and 90% of the budget is reached

Correct Answer: D

Explanation:

When managing multiple AWS accounts, maintaining strict cost control is crucial to ensure that each project or account does not exceed its allocated budget. The goal is to proactively monitor and receive notifications before the budget is breached. Let's break down the options to determine the most effective strategy for this scenario:

  • A. Merge all accounts into a consolidated billing setup to simplify cost tracking
    While this strategy helps streamline billing across multiple AWS accounts, it does not directly address proactive budget management or alerting. Consolidated billing allows for aggregated cost reporting and the application of discounts across accounts, but it does not enable specific alerting or budget tracking per individual account. Therefore, while useful for simplifying billing, it does not fulfill the proactive alerting requirement specified in the scenario. Thus, this option is not the most effective.

  • B. Use Auto Scaling and set CloudWatch alarms to monitor and alert when EC2 instance counts increase
    Auto Scaling and CloudWatch alarms are useful for managing EC2 instances and responding to traffic fluctuations, but this option does not directly address cost control or budget monitoring. Auto Scaling ensures that EC2 instances scale based on demand, but it doesn't provide direct monitoring of overall spending or budget thresholds. Therefore, this strategy is not directly related to the need for budget alerts.

  • C. Set CloudWatch billing alerts on specific tagged resources to warn when individual resource budgets are met
    Setting CloudWatch billing alerts for specific tagged resources can help with cost tracking for individual resources, but this approach does not provide a holistic view of the entire account’s budget. Additionally, managing alerts for individual resources could become complex and inefficient when you need to track spending across multiple projects and accounts. While resource tagging is useful, it’s more practical to set account-level alerts to track overall budget spending. This option is less effective than setting alerts at the account level.

  • D. Create CloudWatch billing alerts at the account level to notify when 50%, 80%, and 90% of the budget is reached
    This is the most effective strategy. By setting up CloudWatch billing alerts at the account level, you can monitor your spending against your defined budget. AWS allows you to configure budget alerts at the account level to notify you when spending reaches specific thresholds, such as 50%, 80%, and 90% of the allocated budget. These alerts provide early warnings, giving you ample time to adjust usage or take corrective action before the budget is exceeded. This proactive approach is critical for maintaining cost control across multiple accounts and ensuring that you stay within the defined budget.

The best solution is to set up CloudWatch billing alerts at the account level to notify you at predefined spending thresholds (e.g., 50%, 80%, and 90%) to ensure that each account stays within its budget.
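
For illustration, here is a minimal boto3 sketch of these alarms, assuming a hypothetical $1,000 monthly budget and an existing SNS topic (the ARN below is a placeholder). Note that the EstimatedCharges billing metric is published only in the us-east-1 region:

  import boto3

  # Billing metrics are only published to CloudWatch in us-east-1.
  cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

  BUDGET = 1000  # hypothetical monthly budget in USD
  SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # assumed to exist

  # One alarm per threshold: 50%, 80%, and 90% of the budget.
  for pct in (50, 80, 90):
      cloudwatch.put_metric_alarm(
          AlarmName=f"billing-{pct}pct-of-budget",
          Namespace="AWS/Billing",
          MetricName="EstimatedCharges",
          Dimensions=[{"Name": "Currency", "Value": "USD"}],
          Statistic="Maximum",
          Period=21600,  # EstimatedCharges is published roughly every six hours
          EvaluationPeriods=1,
          Threshold=BUDGET * pct / 100,
          ComparisonOperator="GreaterThanOrEqualToThreshold",
          AlarmActions=[SNS_TOPIC_ARN],
      )

Repeating this per account (each with its own budget value) gives every project progressively more urgent warnings as spending approaches its limit.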


Question 2

While using Amazon EBS, you initiate the process of capturing a snapshot of an existing volume. You want to understand how this impacts the functionality and availability of the volume that is currently in use by EC2 instances.

What is the behavior of an EBS volume while a snapshot is actively being created?

A. You cannot detach or reattach the volume until the snapshot is finished
B. The volume becomes read-only for the duration of the snapshot
C. The volume remains fully operational and accessible during snapshot creation
D. The volume is temporarily disabled and cannot be used until the snapshot process ends

Correct Answer: C

Explanation:

When working with Amazon Elastic Block Store (EBS) volumes and snapshots, understanding the impact of snapshot creation on the functionality and accessibility of the volume is crucial, especially in production environments. Let's analyze the options to determine the correct behavior when an EBS snapshot is being taken:

  • A. You cannot detach or reattach the volume until the snapshot is finished
    This is incorrect. You can still detach or reattach an EBS volume while a snapshot is in progress; AWS imposes no such restriction during snapshot creation. For an application-consistent snapshot it is best practice to quiesce I/O or unmount the volume before initiating it, but that is a consistency consideration, not an AWS-enforced limitation on attach and detach operations. Therefore, this behavior does not apply.

  • B. The volume becomes read-only for the duration of the snapshot
    This is also incorrect. The EBS volume does not become read-only during the snapshot process. It remains fully operational and read/write accessible to the attached EC2 instances. The snapshot operation is designed to be non-disruptive to the running volume. The snapshot process captures the state of the volume at a specific point in time, but it does not affect the ability of the volume to perform normal read/write operations.

  • C. The volume remains fully operational and accessible during snapshot creation
    This is the correct answer. When an EBS snapshot is being created, the volume remains fully operational and accessible for read and write operations. The snapshot captures the state of the volume at the moment it was initiated and is copied to Amazon S3 asynchronously in the background; writes made after initiation simply are not included in that snapshot. Snapshots are also incremental, saving only the blocks that changed since the previous snapshot, which keeps them fast and inexpensive. This non-disruptive, point-in-time design is one of the key benefits of EBS snapshots.

  • D. The volume is temporarily disabled and cannot be used until the snapshot process ends
    This is incorrect. The EBS volume remains accessible throughout the snapshot process. There is no temporary disabling of the volume, and it continues to function normally for any EC2 instances that are attached to it. The snapshot process is designed to be as non-intrusive as possible to ensure continuous access to the volume.

The correct behavior is that the EBS volume remains fully operational and accessible during the snapshot creation process. The snapshot does not interfere with the volume's functionality, allowing you to continue using the volume without disruption.
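
A minimal boto3 sketch illustrates the behavior (the volume ID is a placeholder): the snapshot call returns immediately, the copy proceeds asynchronously, and the volume stays attached and writable the whole time.

  import boto3

  ec2 = boto3.client("ec2")

  # Start the snapshot; this returns right away, and the volume remains
  # attached and fully readable/writable while the copy proceeds.
  snapshot = ec2.create_snapshot(
      VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
      Description="Point-in-time backup taken while the volume is in use",
  )

  # The snapshot completes asynchronously; its progress can be polled
  # without affecting the volume.
  resp = ec2.describe_snapshots(SnapshotIds=[snapshot["SnapshotId"]])
  print(resp["Snapshots"][0]["State"], resp["Snapshots"][0].get("Progress"))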


Question 3

You’re running ElastiCache with Memcached to speed up data access and manage session data. CloudWatch shows frequent cache evictions and a high number of misses, indicating that cache storage is under pressure or poorly optimized.

Which two actions would help decrease evictions and improve cache hit rates? (Choose two)

A. Add more nodes to expand cluster capacity
B. Adjust the max_item_size parameter for storing larger items
C. Reduce the number of cache nodes in your cluster
D. Upgrade to larger instance types for the nodes

Correct Answers: A, D

Explanation:

Cache evictions and high cache misses are common signs that your Memcached setup is not optimized for the workload, and it is crucial to address these issues to improve performance. Let's evaluate each option and see how they affect cache evictions and hit rates:

  • A. Add more nodes to expand cluster capacity
    This is a correct action. Adding more nodes to your ElastiCache Memcached cluster increases storage capacity and the cache's ability to handle a larger volume of data. This reduces the likelihood of cache evictions because the cache has more memory to store data. By expanding the cluster, you can better accommodate the dataset and reduce evictions, improving cache hit rates over time. Increasing cluster capacity is a common way to improve cache performance when faced with frequent evictions.

  • B. Adjust the max_item_size parameter for storing larger items
    This is not necessarily correct in the context of improving evictions and hit rates. The max_item_size parameter controls the maximum size of an individual cache item that Memcached will allow. If you adjust this parameter to store larger items, it might allow more data to be cached in a single item, but it doesn't address the underlying problem of cache storage pressure or insufficient memory. In fact, storing larger items could exacerbate memory issues if the total cache size is already under pressure. This option is unlikely to improve cache hit rates significantly or reduce evictions.

  • C. Reduce the number of cache nodes in your cluster
    This action would increase the likelihood of evictions and is counterproductive to improving cache hit rates. Reducing the number of nodes in the cluster would decrease the total cache memory available, putting more pressure on the remaining nodes and increasing the chances of evictions. Therefore, this is not an effective strategy for reducing cache evictions or improving hit rates.

  • D. Upgrade to larger instance types for the nodes
    This is another correct action. Upgrading to larger instance types (with more memory) for the cache nodes provides more memory for storing cached data. This can help reduce evictions because the system will have more space to hold data, leading to fewer items being removed from the cache prematurely. Larger instance types can also improve cache hit rates by allowing more data to be stored and retrieved faster.

To improve cache performance and reduce evictions in your Memcached setup, the best actions are to add more nodes to your cluster (which expands its capacity) and upgrade to larger instance types (which gives each node more memory). These actions will directly address the storage limitations causing evictions and improve cache hit rates.
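
As a rough sketch of both steps in boto3 (the cluster ID and target node count are hypothetical), you can confirm the eviction pressure in CloudWatch and then grow the Memcached cluster:

  from datetime import datetime, timedelta, timezone

  import boto3

  cloudwatch = boto3.client("cloudwatch")
  elasticache = boto3.client("elasticache")

  # Check the eviction count over the last hour for the cluster.
  stats = cloudwatch.get_metric_statistics(
      Namespace="AWS/ElastiCache",
      MetricName="Evictions",
      Dimensions=[{"Name": "CacheClusterId", "Value": "my-memcached"}],
      StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
      EndTime=datetime.now(timezone.utc),
      Period=300,
      Statistics=["Sum"],
  )
  print(sum(p["Sum"] for p in stats["Datapoints"]))

  # Add capacity by growing the Memcached cluster to four nodes.
  # (For larger nodes instead, you would create a cluster with a bigger
  # CacheNodeType and repoint clients at it.)
  elasticache.modify_cache_cluster(
      CacheClusterId="my-memcached",
      NumCacheNodes=4,
      ApplyImmediately=True,
  )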


Question 4

Your database is hosted on an EC2 instance with its storage on EBS. During high traffic periods, users face slow responses, and monitoring shows high disk I/O wait times.

Which two approaches would help enhance performance while still using EBS for persistent storage? (Choose two)

A. Upgrade to an instance type with SSD-based storage
B. Enable EBS-optimized features on the EC2 instance
C. Use Provisioned IOPS volumes (io1 or io2) for better throughput
D. Replace EBS with ephemeral instance store volumes on an m2.4xlarge EC2

Correct Answers: B, C

Explanation:

When dealing with high disk I/O wait times on an EC2 instance that uses EBS for storage, it's important to address both the EC2 instance and EBS configuration to improve performance. Let’s evaluate each option:

  • A. Upgrade to an instance type with SSD-based storage
    While upgrading to an EC2 instance type with local SSD storage (such as I3, or the "d" variants like M5d) can improve disk I/O performance for instance store volumes, this does not directly apply to EBS. EBS is network-attached storage, and its performance depends on the EBS volume type and the instance-to-EBS connection rather than the instance's local disks. This option does not address the performance issues related to EBS storage, so it is not one of the best solutions in this case.

  • B. Enable EBS-optimized features on the EC2 instance
    This is a correct answer. Enabling EBS optimization gives the EC2 instance dedicated bandwidth for traffic between the instance and its EBS volumes, so EBS I/O no longer competes with regular network traffic. This reduces contention and improves EBS throughput and latency, directly mitigating high I/O wait times. Note that most current-generation instance types are EBS-optimized by default, while older types require the feature to be enabled explicitly.

  • C. Use Provisioned IOPS volumes (io1 or io2) for better throughput
    This is a correct answer. Switching to Provisioned IOPS (io1 or io2) EBS volumes is one of the best ways to address high I/O wait times and slow disk performance. These volumes are specifically designed for high-performance, low-latency workloads and can provide up to 64,000 IOPS (for io2 volumes), making them ideal for applications like databases that require fast and consistent disk I/O. By using io1 or io2 volumes, you can provision the exact amount of IOPS required for your workload, significantly improving disk performance and reducing I/O wait times.

  • D. Replace EBS with ephemeral instance store volumes on an m2.4xlarge EC2
    While ephemeral instance store volumes (local storage) provide low-latency and high-throughput storage, they are not persistent—data is lost when the instance is stopped or terminated. Since the scenario specifically mentions using EBS for persistent storage, switching to ephemeral storage would not meet the requirement for persistent storage and would be an unsuitable option. Additionally, instance store volumes are limited in terms of both capacity and durability compared to EBS.

To address high disk I/O wait times and improve performance while still using EBS for persistent storage, the best actions are:

  1. Enable EBS-optimized features to dedicate more bandwidth to EBS traffic, reducing network contention.

  2. Use Provisioned IOPS volumes (io1 or io2) to ensure high throughput and low latency for applications with high I/O requirements.
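
A minimal boto3 sketch of both changes (the volume ID, instance ID, and IOPS target are placeholders); with Elastic Volumes, the volume-type change can be applied in place while the volume remains in use:

  import boto3

  ec2 = boto3.client("ec2")

  # Migrate the existing volume to io2 with provisioned IOPS, in place.
  ec2.modify_volume(
      VolumeId="vol-0123456789abcdef0",  # placeholder
      VolumeType="io2",
      Iops=8000,  # hypothetical target sized for the database workload
  )

  # Enable EBS optimization on the instance. Most current-generation types
  # are EBS-optimized by default; on older types the attribute must be set
  # explicitly, and changing it may require the instance to be stopped.
  ec2.modify_instance_attribute(
      InstanceId="i-0123456789abcdef0",  # placeholder
      EbsOptimized={"Value": True},
  )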


Question 5

You have a monitoring EC2 instance that performs regular health checks on your multi-tier application. If multiple issues are detected in a short period, CloudWatch alerts your team via email and SMS. Now, you need a reliable and straightforward way to monitor this monitoring instance itself.

What’s the most efficient and simple solution to detect failure of the monitoring instance?

A. Launch another instance to regularly ping the monitoring instance and notify on failure
B. Configure a CloudWatch alarm using EC2 system and instance-level health checks
C. Trigger an alarm based on CPU usage, simulating failure with an artificial load
D. Set up a failover heartbeat mechanism using SQS and a backup monitoring instance

Correct Answer: B

Explanation:

In this scenario, you need to monitor the health of the monitoring EC2 instance itself to ensure that the process of health-checking your multi-tier application does not fail. Let’s analyze the available options:

  • A. Launch another instance to regularly ping the monitoring instance and notify on failure
    This solution could work, but it is not the most efficient or simplest. Setting up a separate EC2 instance to ping the monitoring instance adds unnecessary complexity, additional resources, and extra maintenance. While this approach could provide redundancy, it involves extra steps and resources, making it more cumbersome than a direct solution. Moreover, it doesn’t leverage existing AWS tools effectively.

  • B. Configure a CloudWatch alarm using EC2 system and instance-level health checks
    This is the most efficient and simple solution. You can use CloudWatch to configure alarms that monitor the system and instance-level health of your EC2 instance. EC2 instances can report their system status (e.g., whether the operating system is functioning properly) and instance status (e.g., whether the EC2 instance is reachable and running correctly). By setting up CloudWatch alarms based on these health checks, you can automatically monitor the health of the monitoring instance itself. This solution integrates seamlessly with AWS and is very straightforward to configure without additional overhead or resources.

  • C. Trigger an alarm based on CPU usage, simulating failure with an artificial load
    While monitoring CPU usage might provide insight into the instance's activity, it is not a reliable indicator of failure: high CPU usage does not necessarily mean the instance is unhealthy, since some workloads legitimately consume significant CPU. Generating artificial load to simulate failure is contrived and inefficient; it does not accurately reflect the health of the instance and could lead to false alarms or missed failures.

  • D. Set up a failover heartbeat mechanism using SQS and a backup monitoring instance
    While this solution might be more redundant and fault-tolerant, it is not the simplest or most cost-effective solution for detecting failures in the monitoring instance itself. Setting up a heartbeat mechanism with a backup monitoring instance adds complexity and requires managing multiple resources. Additionally, this solution might be over-engineered for a simple health check requirement, and it doesn't directly leverage CloudWatch’s built-in capabilities to monitor EC2 instance health.

The most efficient and simple solution for monitoring the health of your monitoring EC2 instance is to use CloudWatch alarms that rely on EC2 system and instance-level health checks. This approach is natively supported, easy to configure, and integrates directly with your existing monitoring setup.
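
A minimal boto3 sketch of such an alarm (the instance ID and SNS topic ARN are placeholders). The StatusCheckFailed metric combines the system and instance status checks, reporting 0 when both pass and 1 when either fails:

  import boto3

  cloudwatch = boto3.client("cloudwatch")

  cloudwatch.put_metric_alarm(
      AlarmName="monitoring-instance-health",
      Namespace="AWS/EC2",
      MetricName="StatusCheckFailed",
      Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
      Statistic="Maximum",
      Period=60,
      EvaluationPeriods=2,  # require two consecutive failed minutes
      Threshold=1,
      ComparisonOperator="GreaterThanOrEqualToThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
  )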


Question 6

Your application is hosted on EC2 instances controlled by an Auto Scaling group. You’ve decided to switch to a new instance type for better cost or performance.

Where must you update the instance type to ensure newly launched instances reflect the change?

A. In the launch configuration associated with the Auto Scaling group
B. Directly in the Auto Scaling group settings
C. Within the Auto Scaling policy definitions
D. In the tags assigned to the Auto Scaling group

Correct Answer: A

Explanation:

When you want to change the instance type of EC2 instances launched by an Auto Scaling group, the instance type is defined in the launch configuration (or the newer launch template if you're using that option). Let's analyze each option to clarify the correct answer:

  • A. In the launch configuration associated with the Auto Scaling group
    This is the correct answer. The launch configuration is used by the Auto Scaling group to define the parameters for instances that are launched, including the instance type. To switch to a new instance type, you must update the launch configuration with the desired instance type. However, it's important to note that launch configurations cannot be modified after creation. If you want to change the instance type, you need to create a new launch configuration and update the Auto Scaling group to use the new one.

  • B. Directly in the Auto Scaling group settings
    This is incorrect. You cannot directly specify or change the instance type in the Auto Scaling group settings. The Auto Scaling group itself relies on the launch configuration (or launch template) to determine the instance type when launching new instances. The group settings mostly control scaling policies, instance health checks, and other scaling-related parameters, but not the instance type.

  • C. Within the Auto Scaling policy definitions
    This is incorrect. Auto Scaling policies are used to control scaling actions (such as when to add or remove instances based on metrics like CPU usage), but they don't control the instance type. The instance type is specified in the launch configuration or launch template, not in the scaling policies.

  • D. In the tags assigned to the Auto Scaling group
    This is incorrect. Tags are used for resource identification and organization purposes, but they do not influence the configuration of instances launched by an Auto Scaling group. Changing tags will not affect the instance type of new instances.

To change the instance type of EC2 instances managed by an Auto Scaling group, you need to update the launch configuration (or launch template) that is associated with the Auto Scaling group. After creating the new launch configuration, you must update the Auto Scaling group to use it.
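
A minimal boto3 sketch of the two-step change (the configuration names, AMI ID, instance type, and group name are placeholders):

  import boto3

  autoscaling = boto3.client("autoscaling")

  # Launch configurations are immutable, so create a new one with the
  # desired instance type...
  autoscaling.create_launch_configuration(
      LaunchConfigurationName="web-lc-v2",
      ImageId="ami-0123456789abcdef0",
      InstanceType="m5.large",
  )

  # ...then point the Auto Scaling group at it. Instances already running
  # keep the old type; only instances launched afterwards use the new one.
  autoscaling.update_auto_scaling_group(
      AutoScalingGroupName="web-asg",
      LaunchConfigurationName="web-lc-v2",
  )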


Question 7

You are unable to access an EC2 instance hosted in a VPC, even though it has an Elastic IP, correct security group rules, and an attached Internet Gateway.

Which VPC component should you check next to troubleshoot connectivity issues?

A. NAT instance configuration
B. Route table associated with the subnet
C. Internet Gateway configuration settings
D. Source/destination checking on the instance

Correct Answer: B

Explanation:

In this scenario, the EC2 instance has an Elastic IP, correct security group rules, and an attached Internet Gateway, so the basic network setup seems correct. To troubleshoot further, let's examine each component and its relevance to the issue:

  • A. NAT instance configuration
    A NAT instance is used to allow instances in a private subnet to access the internet (for example, for updates or external communication). Since the instance in this scenario appears to have an Elastic IP (typically used for instances in public subnets) and is intended to be accessed directly, it doesn't rely on a NAT instance. Therefore, the NAT instance configuration is not relevant in this case. This option is not the first thing to check.

  • B. Route table associated with the subnet
    This is the correct answer. The route table associated with the subnet in which the EC2 instance resides is crucial for determining how traffic is directed in and out of the instance. Specifically, you need to ensure that the route table has an entry directing outbound traffic to the Internet Gateway (IGW) for instances in the public subnet. If the route table is missing the necessary route, the EC2 instance will not be able to send or receive traffic from the internet, even with an attached IGW. Checking the route table will help you confirm if the routes are configured properly to allow traffic flow.

  • C. Internet Gateway configuration settings
    The Internet Gateway is attached to the VPC, but the issue might not lie in the IGW itself, as the other components like the Elastic IP and security group rules seem correct. The Internet Gateway configuration is unlikely to be the root cause unless it is improperly attached to the VPC or not correctly configured in the route table. However, this is generally not the next step after confirming the basics are in place, like the IGW attachment.

  • D. Source/destination checking on the instance
    Source/destination checking requires that an instance be the source or destination of any traffic it sends or receives; it is normally disabled only on instances that forward traffic on behalf of others, such as NAT instances or software routers. For a regular instance in a public subnet that is accessed directly, the default (enabled) setting is correct and would not block inbound connections. This makes it a less likely cause of the issue than a routing misconfiguration.

When troubleshooting connectivity issues with an EC2 instance in a VPC, especially when the instance has an Elastic IP, security group rules are correct, and an Internet Gateway is attached, the route table is the next component to check. A misconfigured route table is the most likely reason why the EC2 instance cannot access or be accessed from the internet.
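
A minimal boto3 sketch of that check (all IDs are placeholders; this assumes the subnet has an explicit route table association, since otherwise the VPC's main route table applies):

  import boto3

  ec2 = boto3.client("ec2")

  # Find the route table associated with the instance's subnet.
  tables = ec2.describe_route_tables(
      Filters=[{"Name": "association.subnet-id",
                "Values": ["subnet-0123456789abcdef0"]}]
  )["RouteTables"]
  print(tables[0]["Routes"])  # inspect the existing routes

  # A public subnet needs a default route to the Internet Gateway.
  # If it is missing, add it:
  ec2.create_route(
      RouteTableId=tables[0]["RouteTableId"],
      DestinationCidrBlock="0.0.0.0/0",
      GatewayId="igw-0123456789abcdef0",  # the VPC's attached IGW
  )
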

Question 8

You’re migrating a high-load Node.js application to AWS and need to align with your company's standards, including using Chef for configuration management. You also want to minimize manual work and automate deployment lifecycle operations.

Which deployment solution meets these needs while offering Chef support and reducing manual setup?

A. Use AWS OpsWorks to build a stack, define layers, and deploy the app
B. Use Elastic Beanstalk with a Node.js environment for managed deployment
C. Launch an EC2 instance using a pre-built AMI and manually configure the app
D. Install a Chef Server on EC2 and manage infrastructure via CLI scripting

Correct Answer: A

Explanation:

When migrating an application and aiming to integrate Chef for configuration management, it’s crucial to choose a solution that minimizes manual setup and allows automation for deployment and lifecycle management. Let’s analyze each option:

  • A. Use AWS OpsWorks to build a stack, define layers, and deploy the app
    This is the correct answer. AWS OpsWorks is a fully managed configuration management service that integrates seamlessly with Chef. It allows you to define stacks and layers, where you can deploy your application while using Chef for configuration management. With OpsWorks, you can automate the deployment process, manage the app's lifecycle, and handle the configurations of EC2 instances based on Chef recipes. It is a powerful and flexible solution that reduces manual work by automating the configuration, deployment, and operational tasks, making it ideal for high-load Node.js applications.

  • B. Use Elastic Beanstalk with a Node.js environment for managed deployment
    While Elastic Beanstalk is an excellent choice for managed deployments of Node.js applications, it does not natively support Chef for configuration management. Elastic Beanstalk automates scaling and application management well, but it is more abstracted than AWS OpsWorks, and integrating Chef would require custom workarounds. Since Chef is a company standard in this scenario, this option does not meet the configuration management requirement.

  • C. Launch an EC2 instance using a pre-built AMI and manually configure the app
    This option involves more manual intervention. While you could launch an EC2 instance using a pre-built AMI and configure your Node.js application manually, this would not minimize the manual work or automate the deployment lifecycle. The use of Chef for configuration management would also require you to configure it manually, which goes against the goal of automating deployment lifecycle operations. This solution is not the most efficient or scalable for high-load applications.

  • D. Install a Chef Server on EC2 and manage infrastructure via CLI scripting
    While this option allows you to install Chef and manage infrastructure, it still involves more manual setup than AWS OpsWorks. Setting up and maintaining a Chef server on EC2 requires more effort and overhead compared to using a managed service like OpsWorks, which integrates Chef support out of the box. You would need to manage the Chef server, configuration files, and scripting via the CLI, adding complexity without the built-in management tools that OpsWorks provides. This solution is more hands-on and could increase administrative work, making it less optimal than AWS OpsWorks.

Given the requirements to use Chef for configuration management and automate the deployment lifecycle with minimal manual work, AWS OpsWorks is the best option. It integrates Chef with a managed environment, allowing you to easily define your infrastructure and deploy applications while reducing manual setup.
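
As a rough sketch of the OpsWorks flow in boto3 (the role and instance profile ARNs, names, and cookbook repository URL are placeholders assumed to exist in the account):

  import boto3

  opsworks = boto3.client("opsworks")

  # Create a stack managed by Chef 12 with custom cookbooks.
  stack = opsworks.create_stack(
      Name="nodejs-app",
      Region="us-east-1",
      ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
      DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
      ConfigurationManager={"Name": "Chef", "Version": "12"},
      UseCustomCookbooks=True,
      CustomCookbooksSource={
          "Type": "git",
          "Url": "https://github.com/example/cookbooks.git",  # hypothetical repo
      },
  )

  # Define a layer for the app servers; instances in it run the layer's
  # Chef recipes at each lifecycle event (setup, configure, deploy, ...).
  opsworks.create_layer(
      StackId=stack["StackId"],
      Type="custom",
      Name="node-app",
      Shortname="node-app",
  )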


Question 9

You're tasked with automating backup and recovery processes while relying primarily on AWS-managed services. Some operations, however, require additional scripting for customization and control.

Which task is best suited for scripting using AWS CLI or automation scripts?

A. Schedule and rotate daily EBS snapshots with monthly retention
B. Configure daily RDS backups with long-term retention policies
C. Identify and shut down idle EC2 instances based on usage metrics
D. Automatically register Auto Scaling instances to a Load Balancer

Correct Answer: A

Explanation:

In this scenario, you are looking to use AWS-managed services for automation and backup while still needing additional scripting for customization and control. Let’s go through the options:

  • A. Schedule and rotate daily EBS snapshots with monthly retention
    This is the correct answer. While AWS offers managed services to automate backups (e.g., through Amazon Data Lifecycle Manager for EBS snapshots), the need to schedule, rotate, and manage retention policies with custom rules (like keeping daily snapshots for 30 days, weekly snapshots for 3 months, etc.) requires more granular control. AWS CLI or automation scripts can be used to manage EBS snapshot rotation and ensure that old snapshots are deleted according to custom retention policies. This task often requires scripting, especially if your retention policy deviates from what AWS-managed services directly support.

  • B. Configure daily RDS backups with long-term retention policies
    This is incorrect. Amazon RDS provides automated backups by default and supports point-in-time recovery, with the retention period configurable directly from the RDS console or API (up to 35 days for automated backups). Longer-term retention can be handled with manual snapshots or AWS Backup, again without custom scripts. Since these policies are largely managed by AWS, there is little need for AWS CLI or automation scripts here.

  • C. Identify and shut down idle EC2 instances based on usage metrics
    This task is better handled with AWS-native automation than with standalone CLI scripts. CloudWatch can monitor EC2 usage metrics and trigger an alarm action or a small Lambda function to stop idle instances, so detailed custom scripting is unnecessary for this use case.

  • D. Automatically register Auto Scaling instances to a Load Balancer
    This is incorrect. Auto Scaling groups automatically register newly launched instances with a Load Balancer (if configured to do so) without the need for extra scripting. AWS Auto Scaling integrates directly with Elastic Load Balancers (ELB) to ensure that new instances are automatically added to the load balancer. There is no need for custom scripting for this process as it’s part of the managed Auto Scaling feature.

Scheduling and rotating EBS snapshots with customized retention policies requires more detailed control than what AWS-managed services like Data Lifecycle Manager typically offer. Therefore, using the AWS CLI or automation scripts is the best way to implement a customized backup rotation strategy.
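
A minimal sketch of such a rotation script in boto3 (the volume ID and the 30-day retention window are hypothetical), suitable for running on a daily schedule:

  from datetime import datetime, timedelta, timezone

  import boto3

  ec2 = boto3.client("ec2")
  VOLUME_ID = "vol-0123456789abcdef0"  # placeholder
  RETENTION_DAYS = 30                  # hypothetical retention policy

  # Take today's snapshot.
  ec2.create_snapshot(VolumeId=VOLUME_ID, Description="daily-backup")

  # Rotate: delete this volume's snapshots older than the retention window.
  cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
  snapshots = ec2.describe_snapshots(
      OwnerIds=["self"],
      Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
  )["Snapshots"]
  for snap in snapshots:
      if snap["StartTime"] < cutoff:
          ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
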

Question 10

You're designing a fault-tolerant web application that serves users globally. You want to ensure high availability and low latency by routing users to the nearest healthy endpoint, even during partial outages.

Which AWS service and routing method should you choose to achieve global performance and reliability?

A. Use Route 53 with geolocation routing to direct users based on their region
B. Use ELB with a single-region setup and enable sticky sessions
C. Use CloudFront with static content only and manual failover scripts
D. Deploy in one Availability Zone and enable auto recovery for the EC2 instance

Correct Answer: A

Explanation:

In the context of designing a fault-tolerant web application that serves a global audience, it's essential to ensure high availability and low latency by routing users to the nearest healthy endpoint. Let’s evaluate each option:

  • A. Use Route 53 with geolocation routing to direct users based on their region
    This is the correct answer. Amazon Route 53 is a highly available and scalable DNS service that supports several routing policies, including geolocation routing. Geolocation routing directs users to resources based on their geographic location, and when combined with Route 53 health checks it returns only endpoints that are currently healthy. This keeps latency low by serving users from a nearby region and preserves availability during partial outages, since traffic destined for an unhealthy endpoint can fail over to another region. Together, these capabilities provide the fault tolerance and resilience required for a global setup.

  • B. Use ELB with a single-region setup and enable sticky sessions
    Elastic Load Balancing (ELB) with sticky sessions ensures that requests from a given user are routed to the same instance, providing session affinity. However, this setup only works within a single region and does not address global performance or low-latency routing. Sticky sessions help maintain user sessions across requests, but they do nothing to route users to the nearest endpoint or to fail over across regions. This solution will not provide the global coverage or fault tolerance required in this scenario.

  • C. Use CloudFront with static content only and manual failover scripts
    Amazon CloudFront is a Content Delivery Network (CDN) service designed to deliver static content with low latency. While CloudFront can handle static content and provide a fast user experience, it doesn’t fully address the requirement for routing users to healthy endpoints during partial outages. Additionally, the manual failover scripts mentioned would introduce more complexity and delay, and CloudFront does not handle dynamic content routing based on availability across multiple regions. This approach does not guarantee the same level of global performance and availability that Route 53 can provide with automated failover and geolocation routing.

  • D. Deploy in one Availability Zone and enable auto recovery for the EC2 instance
    Deploying your application in a single Availability Zone (AZ) would not provide high availability or fault tolerance, as it introduces a single point of failure. While auto recovery can help to restore the EC2 instance if it fails, the entire application is still vulnerable to the failure of that single AZ, making it unsuitable for a global, fault-tolerant setup. For a high-availability and low-latency architecture, a multi-AZ or multi-region approach is preferred, which this option does not support.

To achieve global performance and reliability for your web application, and to route users to the nearest healthy endpoint, Amazon Route 53 with geolocation routing is the best choice. This method offers automated, region-based routing, ensuring both low latency and high availability.
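
A minimal boto3 sketch of one such record (the hosted zone ID, health check ID, record name, and IP address are placeholders); a matching record with "CountryCode": "*" would act as the default for users outside the configured regions:

  import boto3

  route53 = boto3.client("route53")

  # Geolocation record for European users, tied to a health check so that
  # Route 53 stops returning this endpoint if it becomes unhealthy.
  route53.change_resource_record_sets(
      HostedZoneId="Z0123456789ABCDEFGHIJ",
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "app.example.com",
                  "Type": "A",
                  "SetIdentifier": "eu-endpoint",
                  "GeoLocation": {"ContinentCode": "EU"},
                  "TTL": 60,
                  "HealthCheckId": "abcdef01-2345-6789-abcd-ef0123456789",
                  "ResourceRecords": [{"Value": "203.0.113.10"}],
              },
          }]
      },
  )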