Amazon AWS Certified Solutions Architect - Professional SAP-C02 Exam Dumps & Practice Test Questions
Question No 1:
A company using AWS Organizations with full features enabled wants to enable its developers to access and purchase third-party software from AWS Marketplace. The organization follows a centralized procurement model where each Organizational Unit (OU) has a dedicated shared services account managed by procurement managers.
The company has implemented a policy that allows developers to only acquire third-party software from an approved list, which they intend to enforce using the Private Marketplace feature in AWS Marketplace. Additionally, they want to ensure that only users assuming a specific IAM role called procurement-manager-role can administer the Private Marketplace, with all other IAM entities (users, groups, roles, and administrators) explicitly denied administrative access.
What is the most efficient and secure way to implement this solution?
A. Create an IAM role named procurement-manager-role in all shared services accounts in the organization. Attach the AWS Private Marketplace Admin Full Access policy to the role. Create an organization-level Service Control Policy (SCP) to deny permissions to manage the Private Marketplace to everyone except the procurement-manager-role.
B. Create an IAM role named procurement-manager-role in all shared services accounts in the organization. Attach the AWS Private Marketplace Admin Full Access policy to the role. Use IAM policies to restrict access for all IAM identities in the organization.
C. Create an IAM role named procurement-manager-role in all shared services accounts in the organization. Attach the AWS Private Marketplace Admin Full Access policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization.
D. Create an IAM role named procurement-manager-role in all shared services accounts in the organization. Use IAM roles and permissions to restrict access for all IAM identities to manage the Private Marketplace. Use Service Control Policies (SCP) to deny permissions to create the procurement-manager-role.
Correct Answer: C.
Explanation:
In this case, the company needs a solution that enforces governance and access control efficiently while ensuring that only authorized personnel can manage the AWS Private Marketplace. The objective is to centralize software procurement while applying strict role-based access control (RBAC) and ensuring least privilege access.
Option C provides the most secure and efficient solution, and here's why:
IAM Role for Procurement Manager: By creating a dedicated IAM role (procurement-manager-role) in each shared services account and attaching the necessary AWS Private Marketplace Admin Full Access policy, the company grants the procurement team the necessary permissions to curate and manage the Private Marketplace. This ensures the procurement managers can perform their tasks without elevating permissions to unnecessary entities.
Service Control Policies (SCPs): The use of SCPs at the organization level provides a robust governance framework. The first SCP restricts everyone except the designated procurement-manager-role from managing the Private Marketplace. This effectively enforces the principle of least privilege across the organization, preventing unauthorized IAM entities from accessing or modifying marketplace configurations.
Second SCP for Role Integrity: The second SCP explicitly denies the creation of any IAM role named procurement-manager-role across the organization. This prevents unauthorized principals from provisioning a look-alike role that could pick up the administrative permissions carved out for the legitimate role, adding an additional layer of protection for role integrity.
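As a rough illustration (not the exact AWS-documented policy), the two root-level SCPs could be created and attached with boto3 along the following lines; the Private Marketplace action names, the root ID, and the policy names are placeholders:

```python
# Hypothetical sketch: creating and attaching the two root-level SCPs with boto3.
# Action names, IDs, and policy wording are illustrative placeholders.
import json
import boto3

org = boto3.client("organizations")

deny_pmp_admin_except_role = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPrivateMarketplaceAdminExceptProcurementRole",
        "Effect": "Deny",
        # Illustrative Private Marketplace administration actions
        "Action": [
            "aws-marketplace:AssociateProductsWithPrivateMarketplace",
            "aws-marketplace:DisassociateProductsFromPrivateMarketplace",
            "aws-marketplace:ListPrivateMarketplaceRequests",
            "aws-marketplace:DescribePrivateMarketplaceRequests",
        ],
        "Resource": "*",
        "Condition": {
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/procurement-manager-role"
            }
        },
    }],
}

deny_role_creation = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCreatingProcurementManagerRole",
        "Effect": "Deny",
        "Action": ["iam:CreateRole"],
        "Resource": "arn:aws:iam::*:role/procurement-manager-role",
    }],
}

for name, doc in [("deny-pmp-admin", deny_pmp_admin_except_role),
                  ("deny-procurement-role-creation", deny_role_creation)]:
    policy = org.create_policy(
        Content=json.dumps(doc),
        Description=f"Root-level SCP: {name}",
        Name=name,
        Type="SERVICE_CONTROL_POLICY",
    )
    # "r-exampleroot" stands in for the organization's actual root ID
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="r-exampleroot")
```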
This solution centralizes administration, minimizes risk, and is highly scalable, especially for large organizations with multiple AWS accounts. By combining IAM roles with SCPs, it ensures granular access control while preventing unnecessary administrative overhead.
The other options (A, B, and D) have limitations in scalability, security, or efficiency. Options A and B omit the second SCP, which is crucial for preventing unauthorized role creation. Option D relies on per-account IAM policies rather than an SCP to restrict Private Marketplace administration, which introduces inefficiency and greater complexity in managing IAM policies and roles across the organization.
Question No 2:
A company has a monolithic REST API hosted on five Amazon EC2 instances within public subnets in a Virtual Private Cloud (VPC). The application experiences unpredictable traffic spikes, which are not efficiently managed by the current architecture. The mobile clients use a domain name managed by Amazon Route 53, and Route 53 uses multivalue answer routing to return the IP addresses of these EC2 instances. You are tasked with re-architecting the solution to handle dynamic traffic increases and ensure minimal operational overhead.
Which solution best addresses these requirements?
A. Re-architect the application into separate AWS Lambda functions behind an Amazon API Gateway. Update Route 53 to point to the API Gateway endpoint.
B. Containerize the application and deploy it to an Amazon EKS cluster using EC2 nodes. Create a Kubernetes ingress and update Route 53 to use its address.
C. Add the EC2 instances to an Auto Scaling group and configure dynamic scaling based on CPU usage. Use a Lambda function to update Route 53 records accordingly.
D. Deploy an Application Load Balancer (ALB) in front of the EC2 instances (moved to private subnets). Register the instances as ALB targets and update the Route 53 record to point to the ALB.
Correct Answer: D.
Explanation:
The current infrastructure uses Route 53 with a multivalue answer routing policy, which provides only simple DNS-level distribution: it lacks load-aware routing, offers only limited health checking, and cannot scale the backend. These limitations prevent the system from handling high and unpredictable traffic spikes effectively.
Option D provides a robust and efficient solution for scaling and load balancing while simplifying the architecture:
Application Load Balancer (ALB): By introducing an ALB in front of the EC2 instances, traffic is intelligently routed to healthy instances. ALBs are highly capable of handling large traffic volumes with built-in load balancing and health checks. This ensures that only healthy EC2 instances receive traffic, improving reliability and reducing the risk of service degradation during traffic spikes.
Moving EC2 Instances to Private Subnets: Moving the EC2 instances to private subnets enhances security by preventing direct internet access to the instances. This protects the underlying infrastructure and limits exposure to the internet, making the system more secure.
Route 53 Integration: Updating Route 53 to point to the ALB’s DNS name ensures that traffic is directed to the load balancer, eliminating the need to manually manage individual EC2 instance IPs. This simplifies traffic management and removes the need for multivalued answer routing.
Auto Scaling Integration: The ALB can work in conjunction with an Auto Scaling group, which dynamically adjusts the number of EC2 instances based on traffic demand. This enables the application to scale in or out based on actual traffic, ensuring that the system remains responsive during high-demand periods.
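A minimal sketch of the target setup, assuming the ALB and its HTTP listener are created separately: the existing instances are registered in a target group and the public Route 53 record is converted to an alias pointing at the load balancer. All IDs, ARNs, and hostnames below are placeholders:

```python
# Hypothetical sketch: register the five instances behind an ALB target group and
# alias the Route 53 record to the load balancer. All identifiers are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

tg = elbv2.create_target_group(
    Name="api-targets", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/health",          # only healthy instances receive traffic
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": i} for i in ["i-aaa", "i-bbb", "i-ccc", "i-ddd", "i-eee"]],
)

# Replace the multivalue A records with a single alias record to the ALB
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",   # ALB's hosted zone ID (placeholder)
                "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```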
Compared to Option A (Lambda + API Gateway) and Option B (EKS), Option D involves less re-architecture and operational complexity, as it works with the existing monolithic application and enhances its scalability without requiring significant changes to the underlying architecture. Option C lacks the intelligent routing and health checks provided by an ALB, making it less efficient in managing unpredictable traffic.
Question No 3:
A large technology company has set up its AWS environment using AWS Organizations. To improve governance and track costs effectively, the company has created separate Organizational Units (OUs) for each engineering team, and each OU includes multiple AWS accounts. Across the entire organization, there are hundreds of AWS accounts in total.
The company's financial and operations teams need each engineering OU to have detailed visibility into AWS usage and costs. The teams should be able to analyze cost data across all accounts within their respective OU, ideally through a visual dashboard.
As a Solutions Architect, your task is to design a scalable and efficient solution to meet the cost visibility needs of each OU.
Which of the following solutions is the most appropriate and scalable way to provide the required cost visibility for each OU?
A. Create an AWS Cost and Usage Report (CUR) for each OU using AWS Resource Access Manager and allow each team to visualize the CUR via an Amazon QuickSight dashboard.
B. Generate an AWS Cost and Usage Report (CUR) from the AWS Organizations management account and allow each team to visualize the CUR through an Amazon QuickSight dashboard.
C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account and allow each team to visualize the CUR via an Amazon QuickSight dashboard.
D. Generate an AWS Cost and Usage Report (CUR) using AWS Systems Manager and allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
Correct Answer: B. Generate an AWS Cost and Usage Report (CUR) from the AWS Organizations management account and allow each team to visualize the CUR through an Amazon QuickSight dashboard.
Explanation:
The most efficient and scalable way to manage and analyze AWS costs across multiple accounts and organizational units (OUs) is to use the consolidated billing feature within AWS Organizations. By using the AWS Cost and Usage Report (CUR), companies can gather detailed information about their AWS service usage and associated costs.
Generating the CUR from the management account of AWS Organizations (formerly known as the master account) aggregates cost and usage data across all member accounts. This approach provides a centralized view of the entire organization's spending, making it easier to analyze costs by organizational unit (OU) or account. You can filter the data based on specific tags or account metadata to create detailed insights for each team.
Once the CUR is saved in an Amazon S3 bucket, you can leverage tools like Amazon Athena to query the data and Amazon QuickSight to create visual dashboards. These dashboards can be customized to show relevant data, and access can be securely controlled by sharing them only with the appropriate teams.
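As a hedged sketch, the organization-wide CUR could be defined from the management account with boto3 roughly as follows; the bucket, prefix, and report names are placeholders, and the Parquet/Athena settings are one common choice for feeding downstream QuickSight dashboards:

```python
# Hypothetical sketch: define an organization-wide CUR delivered to S3 in a format
# that Athena and QuickSight can consume. Names and bucket details are placeholders.
import boto3

cur = boto3.client("cur", region_name="us-east-1")   # the CUR API is served from us-east-1

cur.put_report_definition(
    ReportDefinition={
        "ReportName": "org-wide-cur",
        "TimeUnit": "DAILY",
        "Format": "Parquet",
        "Compression": "Parquet",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "example-billing-reports",
        "S3Prefix": "cur",
        "S3Region": "us-east-1",
        "AdditionalArtifacts": ["ATHENA"],       # makes the report queryable via Athena
        "RefreshClosedReports": True,
        "ReportVersioning": "OVERWRITE_REPORT",  # required when the Athena artifact is used
    }
)
```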
Option A is not optimal since AWS Resource Access Manager is used to share resources, not for generating or managing cost reports. Option C is not scalable because it requires generating individual CURs in every member account, making centralized analysis and reporting more challenging. Option D is incorrect because AWS Systems Manager OpsCenter is not designed for cost management or reporting, making it unsuitable for this use case.
Hence, Option B is the best choice for providing the required scalable and detailed cost visibility.
Question No 4:
A mid-sized organization is currently managing and storing its critical data on-premises using a Windows file server. The company generates about 5 GB of new data daily. Recently, the organization migrated a portion of its Windows-based workloads to AWS to leverage cloud scalability and reliability. To support this hybrid environment, the organization has set up a dedicated AWS Direct Connect link between its on-premises environment and AWS to ensure low-latency, high-throughput connectivity.
The organization now needs its on-premises data to be accessible via a cloud-native file system. This would allow seamless access and integration with its cloud-hosted applications. The requirement is to automate the daily synchronization of the newly generated on-premises data to the cloud file system, minimizing operational overhead.
Which data migration strategy best meets the company’s requirements?
A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server and point the existing file share to the new file gateway.
B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.
C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
Correct Answer: D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
Explanation:
AWS provides several services to assist with data migration and synchronization between on-premises storage and cloud-based solutions. AWS DataSync is a managed data transfer service designed to facilitate fast and secure data migration, particularly for large volumes of data. It supports seamless data transfer between on-premises storage systems and AWS storage services, including Amazon Elastic File System (EFS), which is an ideal cloud-native file system for this scenario.
In this case, the organization needs to synchronize 5 GB of new data daily from its on-premises Windows file server to the cloud, while keeping the data accessible through a scalable cloud file system. Amazon EFS is a fully managed, scalable, and highly available NFS-based file system, making it well-suited for this purpose. EFS integrates smoothly with AWS cloud applications and can handle growing data volumes efficiently.
By using AWS DataSync, the company can schedule daily synchronization tasks, ensuring that new data is transferred automatically with minimal operational overhead. DataSync also supports incremental transfers, optimizing bandwidth and improving performance. Additionally, DataSync includes encryption for data in transit, network optimization, and detailed logging, ensuring secure and efficient data synchronization.
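A minimal sketch of such a scheduled task, assuming the SMB (on-premises) and EFS DataSync locations have already been created; the location ARNs, cron expression, and option values are placeholders:

```python
# Hypothetical sketch: a daily DataSync task from the on-premises SMB share to EFS.
# Location ARNs, schedule, and options are placeholders.
import boto3

datasync = boto3.client("datasync")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-smb-example",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-efs-example",
    Name="daily-windows-to-efs-sync",
    # Run once per day; DataSync transfers only new or changed files (incremental)
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
    Options={"VerifyMode": "ONLY_FILES_TRANSFERRED", "OverwriteMode": "ALWAYS"},
)
print("Created DataSync task:", task["TaskArn"])
```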
Option A (Storage Gateway - File Gateway) is more suitable for cases where the organization wants to keep files in Amazon S3 and access them via SMB or NFS from on-premises systems. However, the need here is to use a cloud-native file system, which makes EFS the better option.
Option B is less appropriate in this framing because Amazon FSx provides purpose-built managed file systems (for example, FSx for Windows File Server for SMB workloads), whereas the stated requirement is a general-purpose, cloud-native file system for the cloud-hosted applications.
Option C involves using AWS Data Pipeline, but this service is designed for ETL tasks and structured data transformation, not for file synchronization, making it unsuitable for this use case.
Thus, Option D is the best choice for automating the transfer of data to a cloud-based file system with minimal overhead.
Question No 5:
A company has deployed a web application on AWS, which serves static content (like images, CSS, and JavaScript files) from an S3 bucket in the us-east-1 region. For disaster recovery and business continuity, a second S3 bucket in a different AWS region has been set up to replicate these static assets. The company’s solutions architect needs to create a multi-region architecture that ensures high availability and low operational overhead for content delivery.
Which of the following solutions would be the most effective for ensuring high availability across AWS regions with minimal maintenance and automatic failover?
A. Modify the application to upload each object to both S3 buckets and use Amazon Route 53 with a weighted routing policy for traffic distribution.
B. Set up an AWS Lambda function to automatically replicate new objects from the S3 bucket in us-east-1 to the second region and configure Amazon CloudFront with an origin group.
C. Enable S3 Cross-Region Replication (CRR) from the us-east-1 bucket to the secondary region and set up Amazon CloudFront with an origin group to handle failover between the two S3 buckets.
D. Enable cross-region replication and update application code manually to switch to the second region’s S3 bucket in case of failure.
Correct Answer: C
Explanation:
The most efficient solution is Option C, which combines two fully managed AWS services: S3 Cross-Region Replication (CRR) and Amazon CloudFront.
S3 Cross-Region Replication (CRR) automatically replicates new objects from the primary S3 bucket in the us-east-1 region to the secondary bucket in another region, providing redundancy. This eliminates the need for manual copying or Lambda functions, making it a hands-off solution with minimal operational complexity.
CloudFront, a global content delivery network (CDN), enhances this architecture by improving static content delivery speeds. By configuring origin groups, CloudFront attempts to fetch content from the primary S3 bucket. If the primary region fails, CloudFront automatically falls back to the secondary bucket in the other region, ensuring uninterrupted content availability.
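For illustration, the replication rule and the failover pairing might look roughly like the sketch below; bucket names, the replication role ARN, and origin IDs are placeholders, versioning is assumed to already be enabled on both buckets, and the origin-group dict is only a fragment of a full CloudFront distribution config:

```python
# Hypothetical sketch: enable CRR on the primary bucket and outline the CloudFront
# origin-group failover pair. All names, ARNs, and IDs are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="static-assets-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "replicate-static-assets",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},                      # replicate all new objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::static-assets-us-west-2"},
        }],
    },
)

# Fragment of a CloudFront DistributionConfig: fail over from the primary S3
# origin to the replica when the primary returns error responses.
origin_group_fragment = {
    "OriginGroups": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-failover-group",
            "FailoverCriteria": {"StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}},
            "Members": {"Quantity": 2, "Items": [
                {"OriginId": "primary-us-east-1"},
                {"OriginId": "secondary-us-west-2"},
            ]},
        }],
    }
}
```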
This solution is highly automated and provides robust fault tolerance, with CloudFront managing failover without requiring manual updates to application code, unlike Option D. It also avoids the operational overhead of managing Lambda functions (as in Option B) and ensures that the application is not forced to upload objects to both buckets (as suggested in Option A).
Thus, Option C is the best solution for low-maintenance, highly available content delivery across multiple AWS regions.
Question No 6:
A company operates a three-tier web application on its on-premises infrastructure, built using the .NET framework, and relies on a MySQL database. After a sudden surge in user traffic led to system failures and financial losses, the company has decided to migrate the application to AWS. The system must be scalable, resilient across multiple availability zones (AZs), and able to handle 200,000 daily users with minimal downtime.
Which approach should you adopt to design an optimal AWS architecture that satisfies these requirements?
A. Use AWS Elastic Beanstalk with a web server environment and a MySQL Multi-AZ Amazon RDS instance. Deploy a Network Load Balancer (NLB) in front of an EC2 Auto Scaling Group across multiple AZs and direct traffic via Route 53.
B. Use AWS CloudFormation to provision an Application Load Balancer (ALB) in front of an EC2 Auto Scaling Group across three AZs, and deploy an Amazon Aurora MySQL database with Multi-AZ support and a Retain deletion policy. Direct traffic via Route 53.
C. Use Elastic Beanstalk to create a web server environment in two separate regions, each with its own ALB, and implement an Aurora MySQL Multi-AZ cluster with a cross-region read replica. Use Route 53 geo-proximity routing to manage user traffic.
D. Use AWS CloudFormation to deploy an ALB in front of an ECS cluster of Spot Instances across three AZs, and launch an RDS MySQL instance with a Snapshot deletion policy. Route traffic via Route 53.
Correct Answer: B
Explanation:
Option B provides a scalable and highly available architecture that effectively addresses the company’s needs. Here’s why:
AWS CloudFormation allows you to define infrastructure as code, making it repeatable and easy to manage.
The Application Load Balancer (ALB) is ideal for web applications because it provides intelligent routing based on HTTP/HTTPS traffic, making it more suited to handling user requests than the Network Load Balancer (NLB), which is designed for Layer 4 traffic (TCP/UDP).
EC2 Auto Scaling Groups across three availability zones (AZs) ensure that the application can scale up or down automatically based on the incoming traffic load, thus maintaining both high availability and fault tolerance.
For the database layer, Amazon Aurora MySQL offers better performance than traditional MySQL and supports Multi-AZ deployments for high availability, automatic failover, and data redundancy across different AZs. This ensures that the database layer can withstand the loss of an AZ without impacting service availability.
Retain deletion policy on the database ensures that the data persists even if the CloudFormation stack is deleted, adding another layer of data durability.
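A hedged sketch of the database portion of such a template, expressed as a Python dict and deployed with boto3; the resource properties are illustrative rather than a complete production template:

```python
# Hypothetical sketch: the Aurora portion of the CloudFormation template with a
# Retain deletion policy. Properties and names are illustrative placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "DeletionPolicy": "Retain",               # data survives stack deletion
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "admin",
                "ManageMasterUserPassword": True,
                "DBSubnetGroupName": "app-db-subnets",  # placeholder subnet group
            },
        },
        "AuroraWriter": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "aurora-mysql",
                "DBClusterIdentifier": {"Ref": "AuroraCluster"},
                "DBInstanceClass": "db.r6g.large",
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="three-tier-app-db",
    TemplateBody=json.dumps(template),
)
```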
This solution meets the requirements for scalability, high availability, and minimal downtime by utilizing proven AWS services designed for production workloads.
Option A is less optimal because NLB doesn’t offer the same layer-7 routing features as ALB. Option C introduces unnecessary complexity and cost with a multi-region deployment, and Option D involves the use of Spot Instances, which are not reliable for production workloads due to their potential for interruption.
Thus, Option B is the best fit for the company's needs in terms of scalability, resilience, and minimal downtime.
Question No 7:
A large organization is using AWS Organizations to centrally manage multiple AWS accounts across various departments. In order to enhance security and monitoring, the organization requires that an Amazon Simple Notification Service (Amazon SNS) topic be automatically created in every member account for integration with a third-party alerting system.
To streamline this process at scale, the solutions architect decides to use an AWS CloudFormation template together with CloudFormation StackSets to automate the deployment of the SNS topic across each AWS account. Trusted access between CloudFormation StackSets and AWS Organizations has already been enabled.
Given this setup, what is the most efficient and scalable approach for the solutions architect to deploy the CloudFormation StackSets across all AWS accounts in the organization?
A) Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection.
B) Create stacks in the Organizations member accounts. Use self-service permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment.
C) Create a stack set in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment.
D) Create stacks in the Organizations management account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection.
Correct Answer: C
Explanation:
The best approach for deploying CloudFormation templates across multiple AWS accounts within an AWS Organization is to use CloudFormation StackSets with service-managed permissions, initiated from the management account of the AWS Organization. Here's why this is the most efficient and scalable solution:
Management Account as the Control Plane: StackSets must be created from the management account because this account has the necessary authority and visibility to manage resources across all member accounts. The management account acts as the control plane, making it the central point for deploying stacks throughout the organization.
Service-Managed Permissions: Using service-managed permissions simplifies the deployment process. AWS automatically manages the required IAM roles for each account, removing the need for manual role creation in each member account. This ensures a more streamlined deployment process, especially when new accounts are added to the organization in the future.
Deployment to the Entire Organization: With trusted access enabled between CloudFormation StackSets and AWS Organizations, the StackSets can be deployed to all organizational units (OUs) or the entire organization without the need for manual intervention for each individual account.
Automatic Deployment: Enabling automatic deployment ensures that any new accounts added to the organization will automatically receive the SNS topic configuration. This guarantees that the deployment remains consistent and compliant across the entire organization, reducing administrative overhead and potential errors.
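A minimal sketch of this approach from the management account, using boto3; the stack set name, template body, root/OU ID, and region are placeholders:

```python
# Hypothetical sketch: create the stack set with service-managed permissions and
# automatic deployment, then target the organization root. IDs are placeholders.
import boto3

SNS_TOPIC_TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AlertTopic:
    Type: AWS::SNS::Topic
"""

cfn = boto3.client("cloudformation")

cfn.create_stack_set(
    StackSetName="org-wide-sns-alerting",
    TemplateBody=SNS_TOPIC_TEMPLATE,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="org-wide-sns-alerting",
    DeploymentTargets={"OrganizationalUnitIds": ["r-exampleroot"]},  # whole organization
    Regions=["us-east-1"],
)
```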
Why the other options are not suitable:
Option A is incorrect because StackSets must be created from the management account, not the member accounts. Member accounts lack the necessary permissions and control to manage deployments across the organization.
Option B uses self-service permissions, which require manual IAM role setup in each member account. This approach is less scalable and more cumbersome compared to using service-managed permissions.
Option D mentions drift detection, which is useful for identifying configuration changes after deployment, but it doesn’t help with the initial or automatic deployment process. Drift detection is not relevant for ensuring a smooth deployment of the SNS topic across all accounts.
Therefore, Option C is the correct choice as it offers a secure, scalable, and automated solution that aligns with AWS best practices for multi-account infrastructure deployments.
Question No 8:
You are tasked with designing a multi-region architecture for a global e-commerce platform hosted on AWS. The platform needs to provide low-latency access to users across different regions, while ensuring data consistency across the regions. The platform must be highly available and resilient to regional failures.
Which solution is the best for meeting the low-latency and high availability requirements, while ensuring data consistency across regions?
A. Set up Amazon CloudFront in front of an S3 bucket and use cross-region replication.
B. Use Amazon Route 53 with weighted routing to distribute traffic across multiple EC2 instances in different regions, and use Amazon RDS with cross-region replication for database consistency.
C. Set up an Application Load Balancer (ALB) in each region, deploy EC2 instances in each region, and use Amazon Aurora Global Databases to handle cross-region data consistency.
D. Use AWS Global Accelerator to route traffic across multiple regions and leverage Amazon DynamoDB Global Tables for data consistency across regions.
Correct Answer: C.
Explanation:
For this architecture, the company needs to focus on achieving low-latency access, high availability, and cross-region data consistency for a global e-commerce platform. Here's why Option C is the best choice:
Application Load Balancer (ALB) in Each Region: An ALB in each region distributes traffic across the EC2 instances in that region. Combined with latency-based DNS routing (for example, in Amazon Route 53), users are directed to the closest healthy region, minimizing latency. The ALBs offer high availability and integrate with Auto Scaling to adjust the number of instances based on traffic demand.
Amazon Aurora Global Databases: Aurora Global Database (MySQL- and PostgreSQL-compatible) is specifically designed for globally distributed applications. Writes go to the primary region and are replicated to secondary regions with typically sub-second lag, giving every region low-latency local reads. This keeps data closely synchronized across regions while also enabling fast cross-region disaster recovery and high availability.
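As a rough sketch, an Aurora global database could be assembled with boto3 as follows, assuming the primary regional cluster already exists; identifiers, ARNs, and regions are placeholders:

```python
# Hypothetical sketch: create an Aurora global database from an existing primary
# cluster and attach a secondary-region cluster. All identifiers are placeholders.
import boto3

rds_primary = boto3.client("rds", region_name="us-east-1")
rds_secondary = boto3.client("rds", region_name="eu-west-1")

rds_primary.create_global_cluster(
    GlobalClusterIdentifier="ecommerce-global",
    Engine="aurora-mysql",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:ecommerce-primary",
)

# The secondary cluster joins the global database and serves low-latency reads
# in its own region; writes continue to go to the primary region.
rds_secondary.create_db_cluster(
    DBClusterIdentifier="ecommerce-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="ecommerce-global",
)
```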
Option A (CloudFront and S3) would improve content delivery but does not address data consistency requirements or database replication.
Option B (Route 53 and RDS cross-region replication) involves managing traffic distribution and database replication but does not leverage the scalability and performance benefits of Aurora Global Databases.
Option D (AWS Global Accelerator with DynamoDB Global Tables) is a good option for key-value data but does not address relational database needs for the e-commerce platform, which is likely to require complex querying and relational data structures.
In summary, Option C meets all the critical requirements: low-latency access, high availability, and consistent data across regions.
Question No 9:
A company operates an AWS environment with several AWS Lambda functions. The functions are invoked asynchronously by an Amazon S3 event and process data stored in S3 buckets. However, the company is experiencing issues with Lambda timeouts and throttling during peak periods, and sometimes, the processed data is not written back to S3.
Which solution should you implement to ensure that Lambda functions process the data reliably, even during peak traffic, without risking data loss?
A. Use Amazon SQS to queue the S3 event notifications and configure Lambda functions to process messages from the queue.
B. Increase the timeout and memory settings for the Lambda functions to handle peak traffic.
C. Enable AWS Lambda Destinations to capture failed invocations and configure a retry policy.
D. Use Amazon SNS to send event notifications to multiple Lambda functions to handle the peak traffic more efficiently.
Correct Answer: A.
Explanation:
The issue here is related to Lambda timeouts, throttling, and potential data loss. The best approach is to implement Amazon SQS to decouple the S3 event notifications and handle the Lambda invocations reliably.
Amazon SQS (Simple Queue Service): By using SQS to queue S3 event notifications, the system can buffer the incoming events and ensure that they are processed in a controlled manner. The Lambda function can read from the queue at its own pace, allowing it to handle bursts of traffic without hitting the concurrency limit or experiencing throttling issues.
Reliability and Scaling: SQS is designed to handle large volumes of messages and supports retrying message delivery if Lambda is unable to process them due to throttling or failures. This ensures that no events are lost and that Lambda functions have a more reliable mechanism for processing events, even during high traffic periods.
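A minimal sketch of the Lambda handler once S3 notifications are delivered through an SQS queue (via an SQS event source mapping); the transformation step and the output bucket name are placeholders:

```python
# Hypothetical sketch: Lambda handler reading S3 event notifications from SQS.
# The processing step and output bucket are placeholders.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # With an SQS event source mapping, each record's body is an S3 event notification
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            obj = s3.get_object(Bucket=bucket, Key=key)
            processed = obj["Body"].read().upper()        # placeholder transformation
            s3.put_object(Bucket="processed-output-bucket",
                          Key=key, Body=processed)
    # Unhandled exceptions make the messages visible again in SQS for retry
    # (or route them to a dead-letter queue after the configured receive count).
```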
Option B (Increasing Lambda timeout and memory settings) might help mitigate the issue in some cases, but it doesn't address the root cause, such as managing the rate of incoming requests or throttling.
Option C (Lambda Destinations) is useful for capturing failure information, but it doesn’t resolve the underlying problem of managing high throughput or ensuring data is processed reliably.
Option D (SNS and multiple Lambda functions) may increase capacity but could lead to complexities in ensuring proper event processing and scaling. It doesn't provide the queueing mechanism needed to handle peaks in traffic efficiently.
Therefore, Option A offers a more scalable and reliable solution by decoupling event processing through Amazon SQS.
Question No 10:
You are designing a solution for a financial company that requires encryption at rest and in transit for all data processed on AWS. The company must also be able to control and manage the encryption keys used across AWS services.
Which solution meets these requirements in the most secure and cost-effective manner?
A. Enable server-side encryption with AWS KMS-managed keys (SSE-KMS) for all data stored in Amazon S3, Amazon EBS, and Amazon RDS. Use Amazon CloudFront with HTTPS for encrypting data in transit.
B. Use AWS CloudHSM for all encryption key management and configure AWS services to use CloudHSM for encryption at rest and in transit.
C. Use AWS Key Management Service (KMS) to manage encryption keys for Amazon S3, Amazon EBS, and Amazon RDS. Enable encryption in transit using SSL/TLS for all communications between services.
D. Use AWS Secrets Manager to store encryption keys for all services and configure each service to retrieve keys from Secrets Manager for encryption.
Correct Answer: C.
Explanation:
For this scenario, the company needs encryption at rest and in transit, along with centralized control over encryption keys. Option C is the best solution for several reasons:
AWS KMS for Encryption Key Management: AWS KMS (Key Management Service) is the most appropriate service for managing encryption keys at scale. It integrates seamlessly with other AWS services like Amazon S3, EBS, and RDS, providing centralized control over encryption. KMS simplifies key management by handling key rotation, auditing, and access control.
Encryption at Rest: With KMS-managed keys, you can enable encryption for data stored in Amazon S3, EBS, and RDS. KMS supports strong encryption algorithms and integrates with AWS services to automatically encrypt data at rest.
Encryption in Transit: Using SSL/TLS for communication between services (including data in transit) ensures that data is encrypted during transmission. This is essential for meeting security and compliance requirements for sensitive data.
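As an illustration of both requirements on Amazon S3, a customer-managed KMS key can be set as the bucket's default encryption and a bucket policy can reject non-TLS requests; the bucket name, key alias, and policy wording below are placeholders:

```python
# Hypothetical sketch: default KMS encryption at rest plus a TLS-only bucket policy.
# Bucket name, key alias, and policy statement IDs are placeholders.
import json
import boto3

s3 = boto3.client("s3")

# Default encryption with a customer-managed KMS key (encryption at rest)
s3.put_bucket_encryption(
    Bucket="financial-data-bucket",
    ServerSideEncryptionConfiguration={"Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/financial-data-key",
        }
    }]},
)

# Bucket policy that rejects any request not made over TLS (encryption in transit)
s3.put_bucket_policy(
    Bucket="financial-data-bucket",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::financial-data-bucket",
                         "arn:aws:s3:::financial-data-bucket/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)
```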
Option A (SSE-KMS plus CloudFront HTTPS) covers encryption at rest with KMS-managed keys, but its in-transit coverage is limited to HTTPS between clients and CloudFront; it does not ensure that all service-to-service communication is encrypted, making it less comprehensive than Option C.
Option B (CloudHSM) provides high levels of security but is more complex and costly to manage compared to AWS KMS, which can fulfill the needs of most use cases without the overhead of managing dedicated hardware security modules (HSMs).
Option D (Secrets Manager) is ideal for managing sensitive data like API keys and passwords, but it's not designed for managing encryption keys across AWS services at scale.
Thus, Option C offers the most secure, scalable, and cost-effective solution for managing encryption keys and ensuring encryption at rest and in transit across AWS services.