AWS Certified Security - Specialty SCS-C02 Exam Dumps & Practice Test Questions
Question 1:
A large organization has implemented a multi-account architecture using AWS Organizations, managing hundreds of AWS accounts. The company operates exclusively in a single AWS region and has set up a dedicated security tooling AWS account as the delegated administrator for Amazon GuardDuty and AWS Security Hub.
In this setup, the organization has configured automatic enrollment of both GuardDuty and Security Hub for all existing and new accounts, with all threat alerts and findings centrally aggregated in the security tooling account.
While conducting a routine security check, the security team launched an Amazon EC2 instance in one of the member accounts and deliberately generated DNS queries to example.com to simulate malicious behavior in hopes of triggering a DNS-related finding in GuardDuty.
However, no expected findings appeared in the Security Hub console of the delegated administrator account. What is the most likely cause of this issue?
A. VPC flow logs were not enabled in the EC2 instance's VPC.
B. The VPC used a custom DNS resolver (e.g., OpenDNS) via DHCP option sets.
C. GuardDuty was not integrated with Security Hub in the EC2 instance’s account.
D. Cross-region aggregation in Security Hub was not enabled.
Answer: B
Explanation:
In this scenario, the organization has set up a multi-account architecture with Amazon GuardDuty and AWS Security Hub configured for centralized security management. The security team generated DNS queries from an EC2 instance in one of the member accounts, expecting a DNS-related finding in GuardDuty, which would then be aggregated in Security Hub. However, no expected findings appeared.
Let's evaluate each option to identify the most likely cause of this issue.
Key Components:
GuardDuty: This service monitors malicious or unauthorized behavior in AWS accounts and resources. It analyzes DNS queries, VPC traffic, and other data to detect potential threats.
Security Hub: It aggregates and visualizes findings from various security services, including GuardDuty, to provide a central point of visibility.
Evaluation of Options:
A. VPC flow logs were not enabled in the EC2 instance's VPC.
VPC flow logs are useful for monitoring traffic at the network interface level, but GuardDuty does not depend on customer-enabled flow logs. GuardDuty consumes VPC Flow Logs, CloudTrail event logs, and DNS logs directly as internal data sources, whether or not you have enabled flow logging yourself, and DNS-related findings are produced from DNS logs rather than flow logs.
This option is unlikely to be the cause.
B. The VPC used a custom DNS resolver (e.g., OpenDNS) via DHCP option sets.
This is the most likely cause of the issue.
If the VPC in question uses a custom DNS resolver, such as OpenDNS or another external DNS service, GuardDuty cannot monitor the DNS traffic. GuardDuty’s DNS log data source only covers queries resolved through the AWS-provided DNS servers (the Route 53 Resolver). When a DHCP option set points instances at a third-party resolver, queries bypass that logging entirely, so GuardDuty loses visibility into the DNS activity and cannot generate DNS-related findings.
This could explain why no findings were generated in GuardDuty or Security Hub.
C. GuardDuty was not integrated with Security Hub in the EC2 instance’s account.
While it’s possible that GuardDuty isn’t properly integrated with Security Hub, the question indicates that the organization has already configured automatic enrollment for GuardDuty and Security Hub across all accounts, and findings are supposed to be centrally aggregated. Therefore, this option is less likely, as GuardDuty should already be integrated with Security Hub in the member accounts.
D. Cross-region aggregation in Security Hub was not enabled.
The question specifies that the organization operates exclusively in a single AWS region. Therefore, cross-region aggregation isn’t required, and this option isn’t relevant to the issue.
The lack of findings is more likely due to an issue with GuardDuty’s ability to detect the DNS queries, rather than a problem with cross-region aggregation.
The most likely cause of the issue is that the VPC used a custom DNS resolver, such as OpenDNS, via DHCP option sets. GuardDuty may not be able to detect DNS-related findings if the traffic is routed through a non-AWS DNS resolver.
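To make the root cause concrete, here is a minimal sketch of a helper that flags a VPC whose DHCP option set points at non-Amazon DNS servers, which is exactly the condition that blinds GuardDuty's DNS findings. The input dict mirrors the shape of EC2's DescribeDhcpOptions response; the helper itself and the sample option sets are illustrative assumptions, and in a real account the data would be fetched with boto3.

```python
# Sketch: flag DHCP option sets whose "domain-name-servers" entry is not
# "AmazonProvidedDNS". Queries sent to a third-party resolver bypass the
# Route 53 Resolver, so GuardDuty's DNS-based findings will not fire.
# Input shape follows EC2's DescribeDhcpOptions response (illustrative data).

def uses_amazon_dns(dhcp_options: dict) -> bool:
    """Return True if the option set resolves DNS via AmazonProvidedDNS."""
    for cfg in dhcp_options.get("DhcpConfigurations", []):
        if cfg.get("Key") == "domain-name-servers":
            values = [v.get("Value") for v in cfg.get("Values", [])]
            return values == ["AmazonProvidedDNS"]
    # No explicit entry means the VPC falls back to the Amazon resolver.
    return True

# Hypothetical option set pointing at OpenDNS: GuardDuty would be blind here.
custom = {
    "DhcpConfigurations": [
        {"Key": "domain-name-servers",
         "Values": [{"Value": "208.67.222.222"}, {"Value": "208.67.220.220"}]},
    ]
}
default = {
    "DhcpConfigurations": [
        {"Key": "domain-name-servers", "Values": [{"Value": "AmazonProvidedDNS"}]},
    ]
}

print(uses_amazon_dns(custom))   # False
print(uses_amazon_dns(default))  # True
```

A real check would loop over `boto3.client("ec2").describe_dhcp_options()` results and report any VPC associated with a non-default option set.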
Question 2:
A company utilizes containerized services deployed on Amazon Elastic Container Service (Amazon ECS) for their web application. The container images are stored in Amazon Elastic Container Registry (Amazon ECR).
Following a security audit, the team identified vulnerabilities in some of the stored container images. To enhance security, they wish to implement both continuous image scanning and on-push scanning when new images are uploaded to ECR.
Additionally, they require a centralized dashboard to display all scanning results, with the ability to extend it for future security findings from other tools. Certain ECR repositories must be excluded from scanning due to business needs.
Which solution would best meet these security and monitoring requirements?
A. Enable Amazon Inspector, set inclusion rules in ECR for repositories to be scanned, and send the findings to AWS Security Hub.
B. Enable basic ECR scanning for container images, set inclusion rules for repositories to be scanned, and send findings to AWS Security Hub.
C. Enable basic ECR scanning for container images, set inclusion rules for repositories, and send findings to Amazon Inspector.
D. Enable Amazon Inspector, set inclusion rules in Inspector for repositories to be scanned, and send the findings to AWS Config.
Answer: A
Explanation:
The scenario involves ensuring security through container image scanning and creating a centralized monitoring dashboard for security findings. The solution should meet the following requirements:
Continuous and On-push Scanning: Scanning images as they are uploaded to ECR and continuously thereafter.
Centralized Dashboard: All findings should be displayed centrally, potentially extending to future security tools.
Exclusion Rules: Certain repositories need to be excluded from scanning due to business needs.
Let's evaluate the options based on these requirements.
Key Components:
Amazon Inspector: A security assessment service that helps to identify vulnerabilities in AWS workloads, including container images in ECR.
AWS Security Hub: A centralized security management service that aggregates, organizes, and prioritizes findings from various AWS services like Amazon Inspector and ECR.
Basic ECR Scanning: Amazon ECR supports basic vulnerability scanning for container images using Amazon ECR's own capabilities, but this may not provide the same depth as Amazon Inspector, particularly in terms of centralized findings and extensibility.
Exclusion Rules: The ability to exclude certain repositories from scanning is crucial in this case.
Evaluation of Options:
A. Enable Amazon Inspector, set inclusion rules in ECR for repositories to be scanned, and send the findings to AWS Security Hub.
This is the best option.
Amazon Inspector is a specialized tool for container security, offering deeper insights into container vulnerabilities and security issues than basic ECR scanning.
Inspector can be integrated with AWS Security Hub, which provides a centralized dashboard for viewing and managing security findings from various tools, including those from Amazon ECR.
You can set inclusion rules in ECR for which repositories should be scanned, and findings from Amazon Inspector can be automatically forwarded to AWS Security Hub, where they can be consolidated with other security findings. This provides a centralized, extendable solution for security monitoring across AWS services.
This solution meets the need for both scanning and exclusion rules and provides a centralized dashboard.
B. Enable basic ECR scanning for container images, set inclusion rules for repositories to be scanned, and send findings to AWS Security Hub.
While basic ECR scanning is useful for detecting common vulnerabilities at push time, it supports only scan-on-push and manual scans; it does not provide the continuous rescanning, depth of vulnerability detection, or extensibility of Amazon Inspector’s enhanced scanning. Inspector is better suited for deep, continuous scanning, and its findings flow into AWS Security Hub more directly than basic scan results do.
While this option could work, it doesn't provide the level of detail and extensibility required for future integrations with other tools.
C. Enable basic ECR scanning for container images, set inclusion rules for repositories, and send findings to Amazon Inspector.
This approach does not make sense, as findings from ECR scanning would not be sent to Amazon Inspector. Amazon Inspector is a vulnerability assessment service, but it’s not designed to receive findings from ECR. This approach would result in a disjointed system and does not meet the requirement of having a centralized dashboard for findings.
D. Enable Amazon Inspector, set inclusion rules in Inspector for repositories to be scanned, and send the findings to AWS Config.
While Amazon Inspector is an appropriate tool for scanning container images, AWS Config is a service that tracks configuration changes rather than acting as a centralized security dashboard. Config is focused on resource configurations and compliance, not security findings. Sending the findings to AWS Config wouldn’t provide the intended centralized security dashboard for monitoring and would complicate extending the solution to other security tools.
Therefore, using AWS Security Hub for centralized findings is the correct approach.
Option A best meets the requirements of continuous and on-push scanning, centralized monitoring, and exclusion rules for certain repositories. Amazon Inspector provides more in-depth scanning than basic ECR scanning, and integrating it with AWS Security Hub ensures a centralized, extendable security dashboard.
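As a sketch of how the inclusion rules in Option A look in practice, the snippet below builds the request body for ECR's PutRegistryScanningConfiguration API, which switches the registry to enhanced scanning (backed by Amazon Inspector) and scopes it with repository filters. The wildcard prefixes are hypothetical naming conventions; repositories matching no rule are effectively excluded from scanning.

```python
# Sketch of the PutRegistryScanningConfiguration request body: enhanced
# (Inspector-backed) scanning, limited to repositories matching the given
# wildcard filters. Prefixes like "app-*" are illustrative assumptions.

def enhanced_scanning_config(prefixes):
    return {
        "scanType": "ENHANCED",
        "rules": [
            {
                # CONTINUOUS_SCAN covers both on-push scans and ongoing
                # rescans as new CVEs are published.
                "scanFrequency": "CONTINUOUS_SCAN",
                "repositoryFilters": [
                    {"filter": p, "filterType": "WILDCARD"} for p in prefixes
                ],
            }
        ],
    }

config = enhanced_scanning_config(["app-*", "web-*"])
print(config["scanType"])  # ENHANCED
# In a real account (hypothetical usage):
#   boto3.client("ecr").put_registry_scanning_configuration(**config)
```

Repositories excluded for business reasons simply receive names outside the filter patterns, satisfying the exclusion requirement without any per-repository toggling.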
Question 3:
A company operates a single AWS account and uses Amazon EC2 instances for testing application code. The company recently discovered that one of its EC2 instances had been compromised and was serving malware. A forensic investigation revealed the compromise had occurred 35 days ago, which raised concerns about delayed threat detection and response.
To improve security and address this issue, a security engineer is tasked with implementing a continuous threat detection solution to automatically identify compromised EC2 instances, flag high-severity findings, and notify the security team by email.
Which three actions should the security engineer take to fulfill these requirements? (Choose THREE)
A. Enable AWS Security Hub in the AWS account.
B. Enable Amazon GuardDuty in the AWS account.
C. Set up an Amazon Simple Notification Service (SNS) topic and subscribe the security team’s email distribution list to it.
D. Set up an Amazon Simple Queue Service (SQS) queue and subscribe the security team’s email distribution list.
E. Create an Amazon EventBridge rule for GuardDuty findings of high severity and configure it to send messages to the SNS topic.
F. Create an EventBridge rule for Security Hub findings of high severity and configure it to send messages to the SQS queue.
Correct answers: B, C, E
Explanation:
To meet the company’s security goals of continuous threat detection, flagging high-severity alerts, and email notification to the security team, the best approach requires a combination of enabling managed detection services, automating event handling, and using messaging services to alert users. Let’s break down each requirement and evaluate the options.
1. Continuous Threat Detection for EC2:
Amazon GuardDuty is a managed threat detection service that continuously monitors AWS accounts, workloads, and data stored in Amazon S3 for malicious activity. It specifically identifies threats such as EC2 instance compromises, port scans, unauthorized access attempts, and communication with known malicious IPs.
Option B (Enable Amazon GuardDuty in the AWS account): This is essential. GuardDuty is designed for precisely this use case — continuously analyzing AWS CloudTrail logs, VPC Flow Logs, and DNS logs for indicators of compromise. Once GuardDuty is enabled, it will generate findings when suspicious activity is detected.
2. Notification via Email:
To notify the security team via email, Amazon SNS (Simple Notification Service) is the ideal choice. SNS is a fully managed pub/sub messaging service that supports email subscriptions.
Option C (Set up an Amazon Simple Notification Service (SNS) topic and subscribe the security team’s email distribution list to it): This step ensures that any critical security findings can be pushed via email to the appropriate personnel. The security team can subscribe their email distribution list to the SNS topic, enabling real-time alerts.
3. Flagging and Responding to High-Severity Findings:
Once GuardDuty detects a threat and generates a high-severity finding, it needs to be routed to a notification mechanism. Amazon EventBridge is used to capture these findings as events and take actions based on customizable rules.
Option E (Create an Amazon EventBridge rule for GuardDuty findings of high severity and configure it to send messages to the SNS topic): This connects GuardDuty’s detection capability to the SNS topic, ensuring that high-severity findings are immediately sent via email. EventBridge allows filtering based on severity level, so only the most critical findings will trigger alerts.
Now let’s analyze the incorrect options:
Option A (Enable AWS Security Hub): While Security Hub is valuable for aggregating and prioritizing findings across AWS services (including GuardDuty), it is not required to detect threats or send alerts. GuardDuty can operate independently and provides the core detection capabilities needed in this scenario.
Option D (Set up an SQS queue and subscribe to the security team’s email): Amazon SQS does not support direct email notifications. Unlike SNS, which pushes messages to email, SQS is a message queue primarily used for decoupled, asynchronous workloads. It would require additional configuration (e.g., a Lambda function to poll the queue and send email), which is unnecessarily complex for this use case.
Option F (Create an EventBridge rule for Security Hub findings and send to SQS): Again, Security Hub is not the primary detection mechanism here. Even if it were enabled, sending findings to SQS would not directly result in email notifications. It introduces an unnecessary and less efficient communication layer.
The company’s core requirements can be fulfilled by enabling GuardDuty to detect threats, using EventBridge to monitor high-severity findings, and integrating with SNS to deliver immediate email alerts to the security team. Therefore, the correct actions are:
B, C, and E.
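The EventBridge rule from Option E can be sketched as an event pattern. The pattern below matches GuardDuty findings whose severity falls in the High band (7.0 to 8.9 on GuardDuty's scale); the rule name and SNS topic ARN in the comments are hypothetical placeholders.

```python
import json

# Sketch of an EventBridge event pattern that matches only high-severity
# GuardDuty findings. EventBridge numeric matching keeps the rule from
# firing on low- and medium-severity findings.

def high_severity_guardduty_pattern():
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 7]}]},
    }

pattern = high_severity_guardduty_pattern()
print(json.dumps(pattern, indent=2))

# With boto3 (hypothetical names):
#   events = boto3.client("events")
#   events.put_rule(Name="guardduty-high", EventPattern=json.dumps(pattern))
#   events.put_targets(
#       Rule="guardduty-high",
#       Targets=[{"Id": "sns",
#                 "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts"}])
```

With the SNS topic from Option C as the rule target, every matching finding is pushed to the subscribed email distribution list in near real time.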
Question 4:
A company uses AWS Organizations to manage its cloud infrastructure, with separate accounts for departments like HR, finance, software development, and production. All developers work within the software development account.
Recently, the company found that developers were launching Amazon EC2 instances with unauthorized third-party software. To reduce security risks and ensure compliance, the company wants to restrict EC2 launches to pre-approved software, but only in the software development account. Developers should not be able to bypass this restriction.
What is the most effective solution to enforce this policy at scale?
A. In the software development account, create AMIs (Amazon Machine Images) with the approved software and use an AWS CloudFormation template referencing these AMI IDs for EC2 instance launches.
B. Create an Amazon EventBridge rule in the software development account that triggers on EC2 instance launches and uses AWS Systems Manager Run Command to install the approved software.
C. Use AWS Service Catalog to create a portfolio of EC2 products containing only approved AMIs, and allow developers to launch EC2 instances only through this catalog in the software development account.
D. Create approved AMIs in the management account, deploy them using AWS CloudFormation StackSets, and let developers launch these stacks from the management account.
Correct answer: C
Explanation:
The key challenge presented in this scenario is to restrict EC2 instance launches to only pre-approved software in a single AWS account (software development), and to do so in a way that developers cannot bypass. The solution must scale well and be manageable over time. Let's examine each option in detail.
Option A involves creating custom AMIs with approved software and referencing them in CloudFormation templates. While this allows for controlled launches, it doesn't prevent developers from bypassing the templates. Developers can still directly launch EC2 instances via the AWS Console or CLI using unauthorized AMIs unless additional controls are placed, which this option does not inherently provide. Therefore, this is not a complete solution for enforcement.
Option B relies on Amazon EventBridge to detect EC2 launches and then uses AWS Systems Manager Run Command to install approved software. This is a reactive strategy — the EC2 instance has already been launched, and potentially unauthorized software could be used before the remediation occurs. Furthermore, developers could interfere with or disable Systems Manager, making this method less secure and not enforceable in real time. It also adds complexity and isn't ideal for enforcing compliance.
Option C is the most appropriate and secure-by-design approach. AWS Service Catalog allows administrators to create pre-approved products — in this case, EC2 configurations using only approved AMIs. Administrators can restrict EC2 launches so that developers must use Service Catalog products rather than launching EC2 instances manually. By combining this with IAM permissions, the company can prevent developers from launching EC2 instances outside of the Service Catalog, effectively enforcing compliance and ensuring control over what software is deployed. This approach is scalable, integrates cleanly into multi-account environments, and ensures policy enforcement at the source of provisioning.
Option D involves using CloudFormation StackSets from a central management account. While this is a good option for deploying infrastructure across accounts, it does not prevent developers from launching EC2 instances independently within their account using other means. Unless all EC2 launch permissions are removed from developers — which is not indicated in this option — it would not fully enforce the use of approved AMIs. Also, launching stacks from a centralized account can introduce administrative overhead and limits developer autonomy in the software development account.
Why Option C is Best:
It prevents bypassing because IAM policies can restrict EC2 launches to only Service Catalog.
It ensures that only approved AMIs are used in a repeatable and auditable manner.
It's scalable and integrates well with AWS Organizations, allowing governance at the account level.
It gives developers flexibility within a controlled environment while maintaining security and compliance.
In conclusion, the most effective way to enforce the use of pre-approved software on EC2 instances in the software development account, without letting developers bypass the policy, is to use AWS Service Catalog.
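The IAM guardrail that makes the Service Catalog approach enforceable can be sketched as follows. This is an illustrative fragment, not a complete developer policy: developers lose direct ec2:RunInstances and keep only the Service Catalog actions, while the actual launch is performed by the product's launch constraint role, which the explicit deny on the developer does not affect.

```python
import json

# Sketch of a developer identity policy for Option C. The statement Sids
# are illustrative; the action names are real IAM actions.

developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Let developers browse and provision approved catalog products.
            "Sid": "AllowServiceCatalog",
            "Effect": "Allow",
            "Action": [
                "servicecatalog:SearchProducts",
                "servicecatalog:DescribeProduct",
                "servicecatalog:ProvisionProduct",
            ],
            "Resource": "*",
        },
        {   # Block direct EC2 launches outside the catalog; the launch
            # constraint role, not the developer, runs the instance.
            "Sid": "DenyDirectLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
        },
    ],
}

print(json.dumps(developer_policy, indent=2))
```

Because the deny is evaluated against the developer's own credentials only, provisioning through Service Catalog still succeeds via the launch role, which is exactly the bypass-proof behavior the question asks for.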
Question 5:
A company has enabled Amazon GuardDuty across all AWS regions to strengthen its cloud security monitoring. Within one of the company’s Virtual Private Clouds (VPCs), there is an Amazon EC2 instance acting as an FTP server that receives numerous connection requests from a wide range of global IP addresses, which is expected behavior.
However, GuardDuty is flagging these connections as a Brute Force Attack, generating multiple alerts. The company has reviewed the situation and confirmed that the activity is legitimate, marking the findings as false positives, but GuardDuty continues to trigger these alerts.
What is the most effective way to reduce the unnecessary alerts while maintaining visibility into genuine security threats?
A. Disable the FTP detection rule in GuardDuty for the region where the FTP server is located.
B. Add the FTP server to a trusted IP list and deploy this list in GuardDuty to stop further notifications.
C. Create a suppression rule in GuardDuty to automatically archive new findings that match the specified criteria.
D. Implement an AWS Lambda function with permissions to automatically delete any new findings whenever they are generated.
Correct answer: C
Explanation:
When Amazon GuardDuty is enabled, it continuously monitors your AWS environment for unusual or potentially malicious activity, generating findings for activities that match known threat models or anomalous patterns. However, there are scenarios where legitimate behavior — such as an FTP server receiving frequent global connection attempts — can resemble threat patterns like brute force attacks. In such cases, the activity can lead to repeated false positives, unnecessarily cluttering your security alerts and reducing operational efficiency.
The company's goal in this scenario is to reduce the noise from false positives while continuing to retain visibility into legitimate security threats. This balance of control and oversight is critical in a managed detection and response environment.
Let’s evaluate the provided options:
Option A suggests disabling the FTP detection rule in GuardDuty for the affected region. This is not a recommended or even a supported capability in GuardDuty — individual detection rules cannot be disabled. GuardDuty is a managed service with a fixed set of threat detection types that operate globally across your environment. Disabling a rule also carries the risk of missing legitimate brute force attacks in other parts of your infrastructure. So, this is not only technically invalid but also introduces a security gap.
Option B proposes adding the FTP server to a trusted IP list. However, trusted IP lists in GuardDuty apply to the source of traffic, not to destination endpoints like the EC2 instance in question. The trusted IP list tells GuardDuty to exclude findings that originate from known good IP sources — for example, IPs from your organization or partners. Since the false positives are triggered by connections from various global IPs to your FTP server, and not the server itself, this method wouldn’t stop GuardDuty from flagging the inbound traffic. Therefore, this option is technically incorrect for the use case.
Option C — creating a suppression rule — is the most effective and appropriate solution. Suppression rules in GuardDuty allow you to automatically archive findings that match specific criteria such as resource type, finding type, severity level, or even tags. In this case, you can define a suppression rule that targets the EC2 instance ID together with the specific brute force finding type that is known to be a false positive (copied from the actual finding, e.g. one of GuardDuty’s UnauthorizedAccess:EC2 brute force types). This ensures that similar alerts are automatically archived, significantly reducing noise. At the same time, GuardDuty continues to monitor and alert on other types of threats across your account and VPCs, preserving your overall threat visibility.
Option D proposes using a Lambda function to delete new findings as they occur. This is not only inefficient but also risky. Automating deletion of findings prevents any retrospective analysis and can lead to loss of critical security information, especially if legitimate threats are incorrectly deleted. Furthermore, GuardDuty findings are not meant to be deleted manually, as they are part of an immutable audit trail within the service. This is also a non-recommended practice from a security governance standpoint.
The best practice in this scenario is to use GuardDuty suppression rules, which are specifically designed to handle known, repeated false positives by archiving them automatically. This maintains the integrity and usefulness of the service while tailoring it to your environment.
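A suppression rule is created as a GuardDuty filter with Action=ARCHIVE. The sketch below builds the FindingCriteria that scopes the rule to one instance and one finding type; the instance ID is a hypothetical placeholder, and the exact finding type should be copied from the real false-positive finding rather than hard-coded.

```python
# Sketch of FindingCriteria for a GuardDuty suppression rule that
# auto-archives one known false-positive finding type for one instance.

def suppression_criteria(instance_id: str, finding_type: str) -> dict:
    return {
        "Criterion": {
            "type": {"Eq": [finding_type]},
            "resource.instanceDetails.instanceId": {"Eq": [instance_id]},
        }
    }

# Placeholder instance ID and finding type for illustration.
criteria = suppression_criteria(
    "i-0123456789abcdef0",
    "UnauthorizedAccess:EC2/SSHBruteForce",
)
print(sorted(criteria["Criterion"]))

# With boto3 (hypothetical names):
#   gd = boto3.client("guardduty")
#   gd.create_filter(DetectorId=detector_id, Name="ftp-false-positives",
#                    Action="ARCHIVE", FindingCriteria=criteria)
```

Because only findings matching both criteria are archived, GuardDuty keeps alerting on brute force activity against every other instance in the account.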
Question 6:
A company uses Amazon ECS with EC2 instances to run a set of internal microservices, and it relies on Amazon ECR private repositories to store container images. To increase the security of their environment, a security engineer is tasked with:
Encrypting the ECR repositories using AWS Key Management Service (KMS).
Implementing a solution to detect CVEs (Common Vulnerabilities and Exposures) in the container images.
What is the best solution to achieve both objectives?
A. Enable KMS encryption on the existing ECR repositories, install the Amazon Inspector agent on ECS container instances, and perform a CVE scan.
B. Recreate the ECR repositories with KMS encryption enabled and activate ECR image scanning for CVE detection after the next image push.
C. Recreate the ECR repositories with KMS encryption and scanning enabled, and deploy the AWS Systems Manager Agent on the ECS container instances to generate an inventory report.
D. Enable KMS encryption on the existing ECR repositories and use AWS Trusted Advisor to inspect ECS instances for CVEs.
Correct answer: B
Explanation:
The scenario requires the company to meet two distinct security objectives:
Encrypt Amazon ECR repositories using AWS KMS, and
Implement CVE scanning to detect known vulnerabilities in container images.
Let’s assess what’s required for each and examine the options against those needs.
Objective 1: Encrypting ECR repositories with AWS KMS
Amazon Elastic Container Registry (ECR) supports image encryption at rest using AWS KMS. When you create or update an ECR repository, you can specify a KMS key (either the default key managed by AWS or a customer-managed key) to protect image data.
If an existing repository was created without KMS encryption, you cannot retroactively change its encryption settings to KMS. You must recreate the repository with encryption enabled.
This makes recreating the repository a prerequisite for KMS-based encryption if it wasn't originally configured.
Objective 2: Scanning for CVEs in container images
Amazon ECR offers native image scanning for known CVEs: basic scanning checks pushed images against a vulnerability database, while enhanced scanning is powered by Amazon Inspector. For scanning to function:
Image scanning must be enabled per repository, either at creation or manually afterward.
Scanning is triggered when images are pushed to the repository.
Therefore, to meet this requirement, the engineer must enable image scanning and push images to the repository to initiate the vulnerability analysis.
Now let’s evaluate each option:
Option A: Suggests enabling KMS encryption on existing repositories and installing the Amazon Inspector agent. This is not feasible, because you cannot enable KMS encryption on existing repositories retroactively — they must be recreated. Also, while Amazon Inspector provides CVE scanning, you don’t need to install the Inspector agent on ECS container instances to scan ECR images. The scanning occurs in ECR, not within the running containers. Thus, this option is incorrect.
Option B: This correctly recommends:
Recreating ECR repositories with KMS encryption enabled (required for encryption),
Activating image scanning, and
Pushing new images to trigger CVE scanning.
This fulfills both encryption and CVE detection requirements in the most direct, AWS-supported way. Therefore, this is the correct answer.
Option C: Suggests using AWS Systems Manager Agent to generate an inventory report. While Systems Manager provides useful insights for EC2 instances, it does not scan container images for CVEs and is unrelated to image scanning in ECR. Additionally, as with Option B, recreating the repositories is valid, but the suggested scanning mechanism is incorrect. So, this option is partially correct but ultimately not valid.
Option D: Proposes enabling KMS encryption on existing repositories, which again is not possible. Furthermore, AWS Trusted Advisor does not provide CVE scanning for ECS or container images. Trusted Advisor is more suited for account-level best practices, such as cost optimization and general security recommendations. This option is technically invalid.
To satisfy both goals — enabling KMS encryption for ECR and performing CVE scans — the repositories must be recreated with encryption and scanning features enabled, and images must be pushed to initiate vulnerability analysis.
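The recreation step from Option B can be sketched as the parameters for ECR's CreateRepository API, which is the only point at which the encryption configuration can be set. The repository name and KMS key ARN below are hypothetical placeholders.

```python
# Sketch of CreateRepository parameters for Option B: KMS encryption is
# fixed at creation time, and scan-on-push triggers a CVE scan for every
# image pushed to the new repository.

def encrypted_repo_params(name: str, kms_key_arn: str) -> dict:
    return {
        "repositoryName": name,
        "encryptionConfiguration": {
            "encryptionType": "KMS",
            "kmsKey": kms_key_arn,  # omit to use the AWS-managed key for ECR
        },
        "imageScanningConfiguration": {
            "scanOnPush": True,  # scan runs on each image push
        },
    }

# Placeholder name and key ARN for illustration.
params = encrypted_repo_params(
    "internal/api",
    "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
print(params["encryptionConfiguration"]["encryptionType"])  # KMS
# In a real account: boto3.client("ecr").create_repository(**params)
```

After the repositories are recreated this way, re-pushing the existing images both migrates them under the KMS key and triggers the first round of CVE scans.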
Question 7:
A company needs to configure a contractor's IAM user account with access restricted to Amazon EC2 Console only. No other AWS services should be accessible, and the contractor should not be able to gain access to any additional services even if added to IAM groups or given additional permissions in the future.
What is the most secure and effective way to ensure this strict access control?
A. Attach an inline policy directly to the contractor's IAM user, granting EC2 access only.
B. Use a permissions boundary policy that restricts the contractor’s IAM user to EC2 access, and attach it to the user.
C. Place the contractor's IAM user in an IAM group with a policy that grants EC2 access only.
D. Create a role with EC2 access and no access to other services, and require the contractor to assume this role.
Correct answer: B
Explanation:
This scenario requires a highly restrictive IAM configuration to ensure a contractor can only access the Amazon EC2 Console, and cannot obtain broader permissions — even if mistakenly or maliciously added to an IAM group or given additional policies later. This level of control requires more than just basic allow policies; it requires a boundary on all possible permissions the user can ever receive.
Let’s evaluate the options:
Option A involves attaching an inline policy directly to the IAM user that grants access only to EC2. While inline policies are tightly scoped to the individual user, they do not prevent the user from gaining broader access later. If someone attaches an additional managed policy or adds the user to a group with broader permissions, those permissions will also apply, potentially violating the principle of least privilege. This solution does not fulfill the security requirement.
Option B is the correct and most secure approach. Permissions boundaries define the maximum permissions a user (or role) can have, no matter what additional policies are attached. Think of them as a safety guardrail — even if someone adds the user to a powerful IAM group, the user will not be able to perform any action outside what the boundary allows. In this case, by attaching a permissions boundary that only allows EC2 actions, the contractor is permanently restricted to EC2 access, regardless of future changes. This satisfies the condition: “even if added to IAM groups or given additional permissions in the future.”
A permissions boundary is not a typical allow/deny policy — it limits the effective permissions of the user by intersecting with whatever policies they have. This makes it ideal for enforcing strict access controls, especially for contractors, external users, or temporary accounts.
Option C puts the user in an IAM group with EC2-only access. While this controls access at the group level, it doesn't prevent future privilege escalation. The user could be added to another group with broader access, or have additional policies attached directly to the user. Therefore, this option does not ensure future-proof access restrictions, and thus is not secure enough for the scenario.
Option D recommends using a role that the contractor must assume. While using IAM roles can be an effective way to delegate access and manage permissions, it does not prevent the contractor's IAM user from being granted additional permissions directly or being added to groups with wider access. Additionally, enforcing that the contractor can only assume the role — and never use any directly assigned permissions — would require additional guardrails, such as restricting console access entirely until the role is assumed, which is not practical or foolproof for this use case.
Why Option B is Best:
Enforces a security boundary that limits what the user can ever do, regardless of group memberships or future policy attachments.
Prevents privilege escalation, intentional or accidental.
Is aligned with best practices for managing external users and high-risk IAM principals.
Is the only option that addresses the explicit requirement that the user should never gain access to more than EC2, even if IAM group or policy changes occur.
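As a concrete illustration, here is a minimal sketch of a permissions-boundary policy document that caps a user at EC2 actions only. The user name "contractor" in the usage note and the statement Sid are placeholders, not values from the scenario.

```python
import json

# Sketch of a permissions-boundary policy document limiting the maximum
# permissions to EC2 actions only. Attached as a boundary, this caps the
# user even if broader identity-based policies are added later.
boundary_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEC2Only",
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(boundary_policy, indent=2))
```

Once this document is created as a customer-managed policy, it can be attached as a boundary with something like `aws iam put-user-permissions-boundary --user-name contractor --permissions-boundary <policy-arn>`.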
Question 8:
A company uses AWS Organizations to manage multiple AWS accounts. They have configured centralized logging using AWS CloudTrail with logs forwarded to an Amazon S3 bucket in the management account. However, some member accounts are not sending their logs to this centralized bucket, and the security team is concerned that future accounts may also fail to comply. The security team needs to ensure three key things: (1) every current account has at least one active CloudTrail, (2) all new accounts automatically forward CloudTrail logs to the centralized S3 bucket, and (3) users are not allowed to stop or delete CloudTrail.
Which of the following actions should the security team take to meet these requirements with minimal management effort?
A. Create a new CloudTrail and configure it to forward logs to Amazon S3. Use Amazon EventBridge to trigger notifications if the trail is deleted or stopped.
B. Implement an AWS Lambda function in each account to check for existing trails and create one if necessary.
C. Modify the current trail in the management account to apply to all accounts in the organization.
D. Create a Service Control Policy (SCP) to deny cloudtrail:Delete* and cloudtrail:Stop* actions, and apply it to all accounts.
Answer: C
Explanation:
To meet the security team’s requirements with minimal overhead, it is important to understand the capabilities of AWS Organizations and AWS CloudTrail in a multi-account setup. Let’s break down each requirement and how option C addresses them:
1. Every current member account must have at least one active CloudTrail.
With AWS Organizations, administrators can create a single organization trail in the management (payer) account and configure it to apply to all existing and future member accounts. When an organization trail is created and set to be organization-wide, AWS automatically enables it across all accounts in the organization. This ensures that all current accounts are covered, regardless of whether they have their own local trails.
2. All new accounts must automatically forward CloudTrail logs to the central S3 bucket.
By design, an organization trail applies to new accounts that join the organization as long as they are in an Organizational Unit (OU) where the trail is enabled. When new accounts are added to the AWS Organization, CloudTrail automatically starts logging their activity and sends the logs to the centralized Amazon S3 bucket specified in the management account’s trail configuration. No additional setup or manual Lambda functions are required in each new account.
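For the centralized bucket to accept logs from the organization trail, its bucket policy must allow the CloudTrail service principal to write under the organization's log prefix. The sketch below follows the documented pattern for organization-trail bucket policies; the bucket name and organization ID are placeholders.

```python
import json

# Sketch of the S3 bucket policy a centralized CloudTrail bucket needs.
# "my-org-trail-bucket" and "o-exampleorgid" are placeholder values.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-org-trail-bucket",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-org-trail-bucket/AWSLogs/o-exampleorgid/*",
            # CloudTrail must grant the bucket owner full control of each object.
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```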
3. Users must not be able to stop or delete the CloudTrail configuration.
Although CloudTrail trails created at the organization level are visible to all accounts, only the management account has the ability to stop or delete the organization trail. Member accounts cannot delete or stop an organization-wide CloudTrail. This prevents users in individual accounts from disabling security logging. This built-in protection eliminates the need to deploy Service Control Policies (SCPs) solely for this purpose.
Let’s now look at why the other options are less suitable:
A suggests using EventBridge to monitor deletion or stopping of CloudTrail, but that approach is reactive rather than preventive. It also adds operational overhead by requiring alerts and responses to user actions, which is not ideal for enforcing compliance.
B involves deploying a Lambda function into every account, increasing complexity and administrative burden. It also only addresses the creation of trails, not protection from modification or the forwarding of logs to a centralized S3 bucket.
D involves using Service Control Policies (SCPs) to deny cloudtrail:Delete* and cloudtrail:Stop*. While SCPs are effective at preventing unauthorized actions, they are not sufficient on their own to ensure that CloudTrail is enabled or that logs are centrally collected. SCPs are also best used in conjunction with other tools, not as the sole enforcement mechanism.
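If the team did want to layer an SCP guardrail on top of the organization trail (as a complement to option C, not a replacement for it), a minimal sketch using the action wildcards from option D might look like this:

```python
import json

# Sketch of an SCP that denies stopping or deleting trails in member
# accounts. This complements an organization trail; on its own it does
# not ensure a trail exists or that logs reach a central bucket.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailTampering",
            "Effect": "Deny",
            "Action": ["cloudtrail:Delete*", "cloudtrail:Stop*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```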
In contrast, C provides a centralized, organization-wide solution that automatically includes all current and future accounts, ensures logging to a central location, and restricts modification rights to the management account. This approach meets all requirements with minimal operational complexity, making it the most efficient and secure solution.
Question 9:
A company runs an application across multiple AWS accounts and wants to ensure that all EC2 instances are continuously monitored for security vulnerabilities. They are particularly focused on making sure that newly launched EC2 instances are securely configured and automatically scanned.
What is the most efficient solution to continuously monitor and scan all EC2 instances in this multi-account environment for security vulnerabilities?
A. Enable Amazon Inspector across all accounts to scan all EC2 instances on a regular basis and create an AWS Lambda function to notify the security team when vulnerabilities are found.
B. Use AWS Config rules to enforce compliance for EC2 instances and automatically initiate a vulnerability scan whenever an instance is launched.
C. Set up a centralized Amazon Inspector configuration that scans all EC2 instances across accounts and sends findings to AWS Security Hub for centralized management.
D. Enable Amazon GuardDuty for all accounts to continuously monitor EC2 instances for malicious activity, while also using Amazon S3 to store instance metadata for later review.
Answer: C
Explanation:
To meet the requirement of continuous, automated vulnerability scanning of EC2 instances across multiple AWS accounts, and to do so in a way that is efficient and scalable, the company should rely on purpose-built AWS security services that are designed to work across AWS Organizations. Among the listed options, C provides the most comprehensive and efficient solution.
Let’s analyze each component of the problem and why C is optimal:
1. Centralized and continuous vulnerability scanning:
Amazon Inspector is a native AWS service designed to automatically scan Amazon EC2 instances (as well as container images in ECR) for known vulnerabilities (CVEs) and unintended network exposure. It performs automated, continuous scanning as long as it is enabled. With the updated Amazon Inspector (version 2), there is native multi-account support via AWS Organizations, allowing centralized configuration and consolidated findings management. This ensures that all EC2 instances — current and future — are automatically scanned without manual triggers or per-account setup.
2. Coverage for all accounts and instances, including new ones:
When Amazon Inspector is integrated with AWS Organizations, it supports delegated administration, where a single account (typically a security or management account) can manage Amazon Inspector settings across all member accounts. Once enabled, any new EC2 instance launched in any account within the organization is automatically included in the vulnerability scanning. This meets the requirement of automatically scanning new instances without human intervention.
3. Centralized visibility of findings:
The findings from Amazon Inspector can be routed to AWS Security Hub, another AWS service designed for aggregating, organizing, and prioritizing security findings from across AWS accounts and services. This allows the security team to centrally monitor and manage vulnerabilities from one place, making incident response more efficient and reducing operational overhead.
Now, let's consider why the other options are less effective:
A is a reasonable approach, but it introduces unnecessary overhead. While Amazon Inspector can notify about findings via Amazon EventBridge and AWS Lambda, creating a Lambda function for notifications is not required, since Security Hub (used in C) already provides a centralized alerting mechanism. Also, enabling Inspector “in each account” is less scalable than using delegated administration with centralized configuration.
B uses AWS Config, which is great for compliance checks, but AWS Config does not perform vulnerability scans. It tracks configuration changes and evaluates compliance against predefined rules but doesn’t detect CVEs or security flaws in packages running on EC2 instances. Config and Inspector serve different roles, and Config can't trigger a vulnerability scan on its own.
D refers to Amazon GuardDuty, which is focused on detecting threats and malicious activity, such as unusual API calls, port scans, and anomalous behavior. It does not perform vulnerability scanning or check for CVEs on EC2 instances. Also, storing instance metadata in Amazon S3 for review is not a valid approach for vulnerability management and introduces unnecessary complexity without solving the core issue.
Therefore, C delivers a scalable, centralized, automated solution tailored for vulnerability management across multiple AWS accounts. It leverages Amazon Inspector’s native multi-account capabilities and integrates seamlessly with AWS Security Hub for comprehensive visibility and alerting, making it the most efficient choice among the options.
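To route high-priority Inspector findings into a notification or remediation workflow, the security team could attach an EventBridge rule. The sketch below shows a plausible event pattern for Amazon Inspector (v2) findings; the severity values and detail-type string follow Inspector's published event shape but should be verified against the current documentation.

```python
import json

# Sketch of an EventBridge event pattern matching Amazon Inspector (v2)
# findings of HIGH or CRITICAL severity. Field values are assumptions
# based on Inspector's documented event format.
event_pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {"severity": ["HIGH", "CRITICAL"]},
}

print(json.dumps(event_pattern, indent=2))
```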
Question 10:
A company is using AWS Organizations to manage multiple AWS accounts and has set up IAM roles for cross-account access. However, recent security audits have raised concerns that their current method may expose sensitive resources to unauthorized access.
What is the most effective way to reduce the security risks associated with cross-account access while still allowing necessary access between accounts?
A. Use AWS Identity and Access Management (IAM) Access Analyzer to identify and mitigate overly permissive cross-account IAM roles.
B. Implement Service Control Policies (SCPs) to restrict cross-account role usage, ensuring only authorized accounts can access specific resources.
C. Create separate IAM policies for each account and use AWS Secrets Manager to securely store access credentials for each IAM role.
D. Use a custom AWS Lambda function to automatically revoke cross-account access to resources once the security audit is completed.
Answer: A
Explanation:
Cross-account access in AWS is a powerful capability that enables centralized services, shared resources, and centralized management of multiple accounts under AWS Organizations. However, improperly configured IAM roles used for cross-account access can lead to significant security vulnerabilities, including the risk of exposing sensitive resources to unauthorized access. This is particularly true when roles have overly permissive trust policies or attached permissions that are broader than necessary.
Among the provided options, A offers the most efficient and purpose-built approach to reduce the security risk without removing legitimate access that accounts may need. Let’s examine why this is the case and compare it to the other choices.
AWS IAM Access Analyzer is specifically designed to detect unintended access to resources across account boundaries. When enabled, Access Analyzer uses automated reasoning to analyze resource-based policies (like those on S3 buckets, IAM roles, KMS keys, etc.) and determine whether those policies allow access from external entities, including other AWS accounts, federated users, or public access.
Key benefits of using IAM Access Analyzer:
Proactive monitoring: It continuously monitors for newly created or modified resource policies that grant external access.
Actionable findings: It provides clear, detailed findings that show which resources are accessible and by whom, helping teams quickly identify and mitigate over-permissive configurations.
Tunable scope: You can scope analysis to your AWS Organization or to individual accounts, which is ideal in multi-account setups.
Integration: Findings from IAM Access Analyzer can be integrated into Security Hub, EventBridge, or custom workflows for automated remediation.
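The triage workflow implied by these benefits can be sketched as a short script. The finding shape below is a simplified stand-in for the real Access Analyzer API response, and the account IDs are placeholders.

```python
# Toy triage of Access Analyzer-style findings: flag any ACTIVE finding
# whose principal account is outside the organization's own accounts.
# The finding dicts are a simplified stand-in for the real API response.
ORG_ACCOUNTS = {"111111111111", "222222222222"}

findings = [
    {"resource": "arn:aws:iam::111111111111:role/Deploy",
     "principal_account": "222222222222", "status": "ACTIVE"},
    {"resource": "arn:aws:s3:::shared-data",
     "principal_account": "999999999999", "status": "ACTIVE"},
]

external = [f for f in findings
            if f["status"] == "ACTIVE"
            and f["principal_account"] not in ORG_ACCOUNTS]

for f in external:
    print(f["resource"], "is accessible from external account",
          f["principal_account"])
```

In this sketch only the S3 bucket is flagged, since the role's trust is granted to another in-organization account.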
In contrast:
B suggests using Service Control Policies (SCPs). SCPs are extremely powerful and provide guardrails across accounts, but they are not granular. While SCPs can restrict actions or services at an account level, they do not analyze IAM policies or trust relationships. SCPs are best for broad restrictions, not for fine-tuning specific cross-account role configurations.
C proposes managing separate IAM policies per account and storing credentials in AWS Secrets Manager. This approach is neither secure nor recommended. IAM roles should not rely on long-term credentials, and storing static credentials violates best practices. This method adds complexity and increases the attack surface.
D involves using a Lambda function to revoke access after an audit. This is a manual, reactive approach and does not ensure that cross-account access is reviewed or refined on an ongoing basis. Once access is revoked, it may need to be re-established later, introducing operational inefficiencies and human error risks.
Overall, A is the most security-focused, automated, and scalable solution for managing and refining cross-account IAM role access. It allows the company to maintain necessary access while ensuring that no resource is unintentionally exposed. By leveraging IAM Access Analyzer, security teams can continuously audit, receive alerts, and take corrective action — significantly reducing the risk of unauthorized cross-account access in a complex AWS environment.