
Amazon AWS Certified Developer - Associate DVA-C02 Exam Dumps & Practice Test Questions


Question No 1:

A developer is building an application that interacts with Amazon S3 to upload and retrieve files using HTTP. The application uses the PutObject API to upload files, and the company has a policy requiring that all data stored in S3 must be encrypted at rest using server-side encryption with Amazon S3 managed keys (SSE-S3). The developer needs to ensure this encryption is automatically applied when objects are uploaded.

What should the developer do to ensure compliance with this encryption policy?

A. Create a customer-managed AWS KMS key and associate it with the S3 bucket
B. Set the x-amz-server-side-encryption header with the value AES256 when calling the PutObject API operation
C. Provide the encryption key in the HTTP request header for each upload request
D. Use TLS (HTTPS) to encrypt the transmission of data between the application and S3

Correct Answer: B

Explanation:

To fulfill the requirement of encrypting objects at rest using SSE-S3, the developer must instruct Amazon S3 to apply server-side encryption using S3-managed keys. SSE-S3, which stands for server-side encryption with Amazon S3 managed keys, is the simplest method because Amazon handles the key management and encryption process automatically.

Amazon S3 supports three main types of server-side encryption:

  • SSE-S3, where S3 manages the encryption keys

  • SSE-KMS, which uses AWS Key Management Service for key control

  • SSE-C, where customers supply their own keys with each request

Since the question specifically requires SSE-S3, the correct method is to use the HTTP header x-amz-server-side-encryption with the value AES256. This header signals to S3 that it should encrypt the object using its internal key management system before storing the file.
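
As a rough illustration, most SDKs expose this header as a parameter rather than requiring you to build it by hand. The sketch below is a minimal boto3 example with a placeholder bucket name and key; the ServerSideEncryption argument is what emits the x-amz-server-side-encryption: AES256 header on the PutObject call.

```python
# Minimal sketch, assuming a placeholder bucket name and local file.
import boto3

s3 = boto3.client("s3")

with open("report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",        # placeholder bucket name
        Key="uploads/report.pdf",
        Body=f,
        ServerSideEncryption="AES256",  # sends x-amz-server-side-encryption: AES256 (SSE-S3)
    )
```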

Option A is incorrect because it applies to SSE-KMS. Creating and associating a KMS key enables more granular control and auditing, but it is not required or appropriate when SSE-S3 is mandated.
Option C is incorrect because providing an encryption key in the HTTP header relates to SSE-C. This option requires the user to manage their own encryption keys, which is not in line with the requirement of using Amazon-managed keys.
Option D focuses on encrypting data in transit using TLS/HTTPS. While this is a good security practice, it does not address encryption at rest, which is the specific concern here.

Therefore, by setting the appropriate HTTP header to AES256, the developer ensures that every object uploaded via PutObject is automatically encrypted using SSE-S3. This approach aligns with the security policy, meets compliance requirements, and avoids the complexity of managing encryption keys manually.

Question No 2:

A developer needs to simulate traffic from different parts of the world to test the performance of an API hosted on AWS. To do this, they must provision infrastructure across several AWS Regions. The goal is to automate the entire process without writing new application logic or custom code.

Which approach will allow the developer to achieve this goal in the most efficient and scalable way?

A. Deploy AWS Lambda functions in each target AWS Region. Configure each function to launch a CloudFormation stack in that Region upon invocation
B. Develop an AWS CloudFormation template for the infrastructure and use the AWS CLI create-stack-set command to deploy it across the required Regions
C. Write an AWS Systems Manager document that defines the resources, then use it to deploy infrastructure in each Region
D. Run the AWS CLI deploy command separately for each Region using the same CloudFormation template

Correct Answer: B

Explanation:

The most efficient and scalable way to deploy infrastructure across multiple AWS Regions without writing custom code is to use AWS CloudFormation StackSets. This feature allows users to create, update, and manage stacks across multiple accounts and Regions from a single template and interface.

Using the AWS CLI create-stack-set command, a developer can define the infrastructure in a CloudFormation template once and then deploy it simultaneously across multiple Regions. This approach eliminates the need for manual deployment steps or complex automation scripts. It also ensures consistency, as the same stack is replicated across all specified Regions.
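The CLI's create-stack-set command has a direct SDK equivalent. A minimal boto3 sketch, assuming a placeholder account ID, Region list, and template file name, might look like this:

```python
# Hypothetical sketch; account ID, Regions, and template file name are placeholders.
import boto3

cfn = boto3.client("cloudformation")

with open("load-test-infra.yaml") as f:  # assumed template file name
    template_body = f.read()

# Create the stack set once from the shared template.
cfn.create_stack_set(
    StackSetName="load-test-infra",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy stack instances of that stack set into several Regions in one call.
cfn.create_stack_instances(
    StackSetName="load-test-infra",
    Accounts=["111111111111"],  # placeholder account ID
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)
```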

Option B is optimal because it uses existing AWS tools to automate a multi-Region deployment without additional logic. It also supports centralized management and updates, making maintenance easier.

Option A adds unnecessary complexity by requiring a Lambda function in each Region to initiate stack creation. This setup needs extra permissions, coordination, and possibly monitoring logic to ensure that the stacks deploy correctly.

Option C involves AWS Systems Manager, which is typically used for configuration management, automation scripts, or operational tasks. It is not ideal for provisioning infrastructure at scale across multiple Regions, especially if the objective is load testing with resource creation like EC2 instances.

Option D is inefficient and error-prone because it requires manual execution of the deploy command for each Region. This approach doesn't scale well and increases the chances of inconsistencies between deployments.

By choosing option B, the developer can focus on defining infrastructure once, use AWS-native tooling to roll it out globally, and ensure uniformity across test environments. It is the most automated, scalable, and maintainable solution for conducting geographically distributed load testing on AWS.

Question No 3:

A developer is creating an application that provides a RESTful API using Amazon API Gateway in the us-east-2 (Ohio) region. To improve performance and ensure global accessibility, the developer wants to route the API through a custom domain using Amazon CloudFront. The developer already has an SSL/TLS certificate for the domain from a third-party certificate authority. 

What is the appropriate way to set up this custom domain for the API?

A. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same region as the API (us-east-2). Then create a DNS A record pointing to the custom domain.
B. Import the SSL/TLS certificate into Amazon CloudFront. Then create a DNS CNAME record pointing to the custom domain.
C. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the same region as the API. Then create a DNS CNAME record for the custom domain.
D. Import the SSL/TLS certificate into AWS Certificate Manager (ACM) in the us-east-1 Region. Then create a DNS CNAME record for the custom domain.

Correct Answer: D

Explanation:

To configure a custom domain name for an API hosted in Amazon API Gateway, it is essential to understand how the backend infrastructure operates. When a developer sets up a custom domain for an API Gateway endpoint, API Gateway automatically uses Amazon CloudFront to distribute the API globally. Because of this integration, the certificate requirements for API Gateway must match those required by CloudFront.

Amazon CloudFront only supports SSL/TLS certificates from AWS Certificate Manager that are in the us-east-1 region, regardless of where the API Gateway API is deployed. Even if the API is running in us-east-2 (Ohio), the certificate used for the custom domain must reside in ACM in the us-east-1 region. Importing the certificate into any other region, including the one where the API resides, will result in an invalid configuration, and the custom domain will not function as intended.

In this case, the developer must take the SSL/TLS certificate obtained from the third-party certificate authority and import it into ACM in the us-east-1 region. After doing so, they can create a custom domain name in API Gateway and associate it with the API. When this is done, API Gateway provisions the underlying CloudFront distribution that will deliver the API content. The next step is to update the DNS configuration. The developer needs to create a DNS CNAME record that maps their custom domain (for example, api.example.com) to the CloudFront distribution domain name provided by API Gateway.
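A hedged boto3 sketch of that sequence, with placeholder domain and certificate file names, could look like the following; the key detail is that the ACM client targets us-east-1 while the API Gateway custom domain is created alongside the API in us-east-2:

```python
# Hypothetical sketch; domain name and certificate file paths are placeholders.
import boto3

# The certificate must be imported into ACM in us-east-1 for CloudFront to use it.
acm = boto3.client("acm", region_name="us-east-1")
with open("cert.pem", "rb") as cert, open("key.pem", "rb") as key, open("chain.pem", "rb") as chain:
    cert_arn = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )["CertificateArn"]

# The custom domain itself is created in the API's Region (us-east-2 here).
apigw = boto3.client("apigateway", region_name="us-east-2")
domain = apigw.create_domain_name(
    domainName="api.example.com",               # placeholder custom domain
    certificateArn=cert_arn,                    # must reference the us-east-1 certificate
    endpointConfiguration={"types": ["EDGE"]},
)

# Create a DNS CNAME from api.example.com to this CloudFront distribution domain name.
print(domain["distributionDomainName"])
```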

This setup ensures secure communication via HTTPS and global content distribution. The other options listed do not align with how API Gateway and CloudFront work together:

Option A incorrectly suggests importing the certificate into the us-east-2 region, which CloudFront cannot use.
Option B suggests importing the certificate directly into CloudFront, which is not possible for API Gateway-managed distributions.
Option C repeats the mistake of importing the certificate into the API’s region, which, again, CloudFront does not support.

Therefore, only option D ensures compatibility between API Gateway’s custom domain feature and CloudFront’s certificate requirements.

Question No 4:

A developer is assigned to deploy a serverless application using an infrastructure-as-code strategy. The architecture includes Amazon API Gateway for HTTP access, AWS Lambda for backend processing, and Amazon DynamoDB for data storage. The developer wants to write the infrastructure code in YAML and use a tool that simplifies serverless deployments while integrating with AWS CloudFormation.

Which AWS service should the developer use?

A. CloudFormation serverless intrinsic functions
B. AWS Elastic Beanstalk
C. AWS Serverless Application Model (AWS SAM)
D. AWS Cloud Development Kit (AWS CDK)

Correct Answer: C

Explanation:

The best option for deploying a serverless application on AWS using YAML format is AWS Serverless Application Model (AWS SAM). This framework is tailored for developers working with serverless technologies such as AWS Lambda, API Gateway, and DynamoDB. It allows infrastructure and application components to be described in a concise and readable way using YAML syntax that extends AWS CloudFormation.

AWS SAM simplifies the declaration of commonly used serverless resources by using a shorthand syntax. For example, to define a Lambda function, developers can use the AWS::Serverless::Function resource type, which is far more compact than the full equivalent in standard CloudFormation. Similarly, the AWS::Serverless::Api and AWS::Serverless::SimpleTable resource types help define API Gateway and DynamoDB components in a few lines.

A major advantage of SAM is its tight integration with CloudFormation. Under the hood, SAM templates are transformed into standard CloudFormation templates. This allows developers to leverage all the benefits of CloudFormation, such as infrastructure version control, stack management, and rollback support. Moreover, SAM provides a command line tool, the SAM CLI, that enables local testing, debugging, and deployment of serverless applications. This toolchain enhances developer productivity by simulating the cloud environment locally before pushing to AWS.
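As a rough sketch, the template below is a minimal SAM example (placeholder names, trivial inline handler) embedded as a string and deployed through the standard CloudFormation API; in practice it would usually live in template.yaml and be deployed with the SAM CLI.

```python
# Hypothetical sketch: a minimal SAM template deployed via plain CloudFormation APIs.
import boto3

SAM_TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ItemsTable:
    Type: AWS::Serverless::SimpleTable    # shorthand for a DynamoDB table
  ApiFunction:
    Type: AWS::Serverless::Function       # shorthand for a Lambda function plus its role
    Properties:
      Handler: index.handler
      Runtime: python3.12
      InlineCode: |
        def handler(event, context):
            return {"statusCode": 200, "body": "ok"}
      Events:
        GetItems:
          Type: Api                        # implicitly creates an API Gateway API
          Properties:
            Path: /items
            Method: get
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="sam-demo",  # placeholder stack name
    TemplateBody=SAM_TEMPLATE,
    # The Serverless transform expands into plain CloudFormation resources,
    # which requires these capabilities.
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
)
```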

Now, let’s analyze why the other options are less suitable:

A refers to intrinsic functions in CloudFormation, which are specific syntax elements used within templates. They are not a deployment tool or framework on their own and do not simplify the definition of serverless resources.
B AWS Elastic Beanstalk is more appropriate for traditional web applications and managed environments, not serverless applications. It abstracts infrastructure management but does not support event-driven architectures like Lambda and API Gateway.
D AWS Cloud Development Kit (AWS CDK) is a powerful option but relies on programming languages like Python, JavaScript, or TypeScript, not YAML. While CDK can define serverless infrastructure, it does not match the question’s requirement for YAML-based configuration.

Given these factors, AWS SAM is the most appropriate and streamlined solution for defining and deploying serverless infrastructure in YAML format.

Question No 5:

A developer needs to create a serverless solution that captures metadata from newly uploaded files to an Amazon S3 bucket and automatically stores that metadata in an Amazon DynamoDB table. 

The goal is to build an event-driven, scalable solution that requires minimal maintenance and setup. Which approach meets these requirements?

A. Use Amazon EventBridge to monitor the S3 bucket and trigger a rule that inserts data into the DynamoDB table
B. Configure the S3 bucket to trigger an AWS Lambda function upon object creation, and use that function to insert the data into the DynamoDB table
C. Develop an AWS Lambda function that regularly polls the S3 bucket to check for new files and insert the metadata into the DynamoDB table
D. Set up a scheduled cron job using Amazon CloudWatch Events to insert records into the DynamoDB table at regular intervals

Correct Answer: B

Explanation:

The most effective and scalable approach for this use case is to configure Amazon S3 to automatically invoke an AWS Lambda function when a new object is created. This solution is serverless, event-driven, and has virtually no operational overhead. It leverages the native integration between S3 and Lambda, allowing real-time responses to new uploads.

When an object is uploaded to the S3 bucket, S3 invokes the Lambda function automatically. The event payload carries metadata about the object, such as the bucket name, object key, size, and time of upload; additional details, such as object tags, can be fetched with a follow-up API call if needed. The function can then parse this information and insert it directly into a DynamoDB table. This design pattern is recommended because it offers immediate execution, scales with the number of uploaded files, and does not require persistent infrastructure management.
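A minimal handler sketch, assuming a placeholder table name supplied through a TABLE_NAME environment variable and a simple table schema keyed on the object key, might look like this:

```python
# Hypothetical Lambda handler sketch; table name and attribute names are assumptions.
import os
import urllib.parse

import boto3

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "file-metadata"))

def handler(event, context):
    # One S3 event notification can carry multiple records.
    for record in event["Records"]:
        s3_info = record["s3"]
        key = urllib.parse.unquote_plus(s3_info["object"]["key"])
        table.put_item(
            Item={
                "object_key": key,                      # partition key (assumed schema)
                "bucket": s3_info["bucket"]["name"],
                "size_bytes": s3_info["object"]["size"],
                "uploaded_at": record["eventTime"],
            }
        )
    return {"processed": len(event["Records"])}
```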

Option A is less efficient because EventBridge receives S3 object-level events only after EventBridge notifications are enabled on the bucket, and an EventBridge rule cannot write to DynamoDB directly; it would still need a target such as a Lambda function. Using EventBridge for this purpose adds unnecessary indirection and cost when S3 already supports Lambda triggers directly.

Option C is inefficient. Polling requires periodic checks, which may miss events or introduce delays. It also consumes resources constantly, whether or not new files have been added. This goes against the event-driven architecture principle and can result in increased AWS costs and reduced responsiveness.

Option D relies on scheduled time-based invocations rather than reacting in real time to new file uploads. A cron job would run at fixed intervals, potentially missing uploads or causing delays in processing. It is also more complex to manage and does not offer the scalability of event-based solutions.

In conclusion, option B adheres to AWS best practices for building serverless, scalable, and event-driven applications using S3, Lambda, and DynamoDB. It ensures timely processing with minimal setup and operational burden.

Question No 6:

A development team uses a single AWS CloudFormation template to deploy a web application that includes EC2 instances and an RDS database. This stack is deployed across environments like development, test, and production. Recently, a developer made a change in the development environment that caused the RDS database to be deleted and recreated, resulting in data loss. The team now wants to ensure that such accidental deletions cannot happen in future deployments. 

What are the two best solutions to prevent this?

A. Add a DeletionPolicy attribute with the value Retain to the database resource in the CloudFormation template
B. Update the CloudFormation stack policy to prevent updates to the database
C. Modify the RDS database configuration to use a Multi-AZ deployment
D. Create a CloudFormation StackSet for deploying the web application and database resources
E. Add a DeletionPolicy attribute with the value Retain to the CloudFormation stack

Correct Answers: A, B

Explanation:

To protect a critical resource like an RDS database from accidental deletion or replacement during stack updates or deletions, AWS CloudFormation provides several built-in safeguards.

Option A is correct because the DeletionPolicy attribute set to Retain ensures that the specific resource, in this case the RDS database, will not be deleted when the CloudFormation stack is deleted or updated. This policy causes the resource to remain in place even if other parts of the stack are removed, preserving the data and configuration of the database. This is especially useful in environments like development or production, where data integrity is important.

Option B is also correct. A stack policy can be defined to restrict changes to specific resources within a CloudFormation stack. By applying a policy that denies updates to the RDS resource, you can prevent unintentional changes such as updates or replacements. This acts as a permissions boundary at the stack level, ensuring that even if a developer attempts to update the RDS database, the action is explicitly blocked.
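As a hedged illustration, the fragment below shows the DeletionPolicy attribute on the database resource, and the boto3 call applies a stack policy that denies updates to it; the stack name and the logical resource ID Database are placeholders.

```python
# Hypothetical sketch; stack name and logical resource ID are placeholders.
import json

import boto3

TEMPLATE_FRAGMENT = """
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain      # keep the DB instance even if the stack is deleted
    Properties:
      # ... existing database properties unchanged ...
"""

stack_policy = {
    "Statement": [
        # Block any update (including replacement) of the database resource.
        {"Effect": "Deny", "Action": "Update:*", "Principal": "*",
         "Resource": "LogicalResourceId/Database"},
        # Allow updates to everything else in the stack.
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*",
         "Resource": "*"},
    ]
}

boto3.client("cloudformation").set_stack_policy(
    StackName="web-app-dev",  # placeholder stack name
    StackPolicyBody=json.dumps(stack_policy),
)
```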

Option C helps with availability and failover by enabling replication across Availability Zones, but it does not prevent deletion or unintended changes to the database. Therefore, while useful, it does not directly address the issue in this scenario.

Option D involves deploying stacks across multiple accounts or regions and is unrelated to resource protection within a single stack. It does not offer deletion prevention features and adds unnecessary complexity for this use case.

Option E is incorrect because the DeletionPolicy attribute must be applied to individual resources within the CloudFormation template, not at the stack level. Applying it to the entire stack would not work as intended and would not provide the desired protection for the RDS database specifically.

In summary, combining resource-level retention through DeletionPolicy and access control via stack policies provides a comprehensive strategy to prevent accidental deletion or modification of critical database resources in AWS CloudFormation deployments.

Question No 7:

A developer needs to allow several AWS accounts to retrieve data from an S3 bucket using the GetObject operation. The S3 bucket uses AWS KMS for encryption at rest, and the company also wants to ensure that all data access is encrypted in transit. To meet these requirements, the developer must enforce that all requests use HTTPS.

What is the best way to ensure that all S3 GetObject requests use secure transport?

A. Attach a resource-based policy to the S3 bucket that denies access if the request does not use secure transport by checking if aws:SecureTransport is false
B. Attach a resource-based policy to the S3 bucket that allows access only if the request uses insecure transport by checking if aws:SecureTransport is false
C. Attach a role-based policy to the IAM roles in the other AWS accounts that denies access if aws:SecureTransport is false
D. Attach a resource-based policy to the KMS key that denies access if aws:SecureTransport is false

Correct Answer: A

Explanation:

In Amazon S3, it is common practice to enforce encryption in transit by using policies that reject any requests made over insecure protocols. AWS provides a policy condition key called aws:SecureTransport that can be used within a bucket policy to help enforce this requirement.

The aws:SecureTransport condition key is false when a request does not use HTTPS. By creating a bucket policy with a Deny effect for requests where aws:SecureTransport is false, you ensure that only encrypted (HTTPS) requests are processed. This approach effectively blocks any data access attempts made over HTTP, regardless of which AWS account the request comes from.
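A minimal sketch of such a bucket policy, applied with boto3 and using a placeholder bucket name and account ID, might look like this; the Deny statement is the part that enforces HTTPS:

```python
# Hypothetical sketch; bucket name and the allowed account ID are placeholders.
import json

import boto3

bucket = "example-data-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountGetObject",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder account
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```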

A resource-based policy on the bucket is particularly suitable here because it allows you to define rules that apply across all accounts, including those that do not belong to your organization. This method is more flexible and robust than trying to enforce the condition through role-based policies in external accounts, which may not be manageable or enforceable if you don't control those accounts.

On the other hand, option B misinterprets the logic of the condition. It talks about allowing access when aws:SecureTransport is false, which would permit insecure requests, exactly the opposite of what is intended. Option C only applies to IAM roles in other accounts and does not offer a global enforcement mechanism. Additionally, role-based policies are limited to the scope of the role and require coordination with external account holders. Option D focuses on the KMS key rather than the S3 bucket. Although KMS is part of the encryption at rest strategy, a KMS key policy governs calls to the KMS API, not the S3 GetObject request itself, so it is not the right place to enforce HTTPS for access to the bucket.

Therefore, the most appropriate and secure solution is to use a resource-based policy on the S3 bucket that denies any GetObject request not made over a secure (HTTPS) connection. This ensures compliance with encryption in transit requirements across all accessing accounts.

Question No 8:

A web application hosted on an Amazon EC2 instance is supposed to display a list of files from an S3 bucket. The application uses the EC2 instance's IAM role to access AWS services. During testing, the app fails to show any files, although the bucket contains objects. 

There are no error messages. How should this issue be resolved in the most secure way?

A. Modify the IAM instance profile attached to the EC2 instance to grant s3:* permission on the S3 bucket
B. Modify the IAM instance profile attached to the EC2 instance to include the s3:ListBucket permission for the S3 bucket
C. Modify the developer's IAM user policy to include s3:ListBucket permission on the S3 bucket
D. Modify the S3 bucket policy to include s3:ListBucket permission with the Principal set to the EC2 instance's account number

Correct Answer: B

Explanation:

Applications running on Amazon EC2 instances access AWS services by assuming the permissions of the IAM role attached to the instance via an instance profile. This avoids the need for hardcoded credentials and follows AWS security best practices.

When an application tries to list files in an S3 bucket, it must have the s3:ListBucket permission granted on the bucket itself. Without this permission, the call to list objects is denied (even if individual objects are readable with s3:GetObject); if the application does not surface the error, the result appears as an empty file list, as in this scenario.

Option B correctly addresses this issue by recommending an update to the IAM role attached to the EC2 instance, adding the s3:ListBucket permission. This grants the application the exact access it needs—nothing more, nothing less—and adheres to the principle of least privilege.
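As a rough sketch, an inline policy added to the instance role might look like the following (role, policy, and bucket names are placeholders); note that s3:ListBucket is granted on the bucket ARN, while object-level actions use the /* form:

```python
# Hypothetical sketch; role name, policy name, and bucket name are placeholders.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket",   # ListBucket applies to the bucket ARN
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*", # object-level actions use /*
        },
    ],
}

boto3.client("iam").put_role_policy(
    RoleName="web-app-instance-role",   # role referenced by the instance profile
    PolicyName="list-app-bucket",
    PolicyDocument=json.dumps(policy),
)
```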

Option A, while functional, is insecure because it grants full access (s3:*) to all S3 actions on the bucket. This includes operations such as deleting objects or modifying access controls, which are not necessary for the application to function. Over-permissioned roles pose a security risk, especially in production environments.

Option C is not relevant to this use case because the application is running under the EC2 instance's IAM role, not the IAM user policy of the developer. Changing the developer's policy has no impact on the application’s ability to access S3.

Option D suggests editing the bucket policy to allow list permissions for the EC2 instance’s account number. While bucket policies can be used for access control, they are generally not the recommended way to manage access for EC2-based applications. IAM roles provide a more scalable, secure, and manageable method for granting access, especially when permissions are specific to a compute resource like an EC2 instance.

In summary, modifying the EC2 instance's IAM role to include s3:ListBucket permission is the most secure and effective way to resolve the problem while adhering to best practices for access control and privilege minimization.

Question No 9:

When deploying a serverless application on AWS Lambda that interacts with DynamoDB, which method ensures that the Lambda function has the minimum permissions required to access the table?

A. Attach a policy granting full DynamoDB access to the Lambda execution role.
B. Use a policy with specific DynamoDB table and action permissions in the Lambda execution role.
C. Allow public access to the DynamoDB table for Lambda to read/write data.
D. Grant the Lambda function temporary security credentials manually for DynamoDB access.

Correct Answer: B

Explanation:

This question addresses the principle of least privilege in AWS when allowing a Lambda function to interact with DynamoDB. The best practice is always to grant the Lambda execution role only the specific permissions needed, not full or overly broad access.

Option A suggests giving full DynamoDB permissions to the Lambda role, which is not secure or efficient. Broad permissions increase the risk of unintended actions or security breaches.

Option B is the correct approach. You create an IAM policy that specifies the exact DynamoDB actions the Lambda function needs, such as GetItem, PutItem, or UpdateItem, and restricts it to the specific table(s). This follows the principle of least privilege, enhancing security and compliance.
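A minimal sketch of such a scoped policy, with a placeholder table ARN and execution role name, might look like this:

```python
# Hypothetical sketch; table ARN, role name, and policy name are placeholders.
import json

import boto3

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem"],
            # Scope to one specific table rather than all DynamoDB resources.
            "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/Orders",
        }
    ],
}

boto3.client("iam").put_role_policy(
    RoleName="orders-function-execution-role",  # the Lambda function's execution role
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```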

Option C—making the DynamoDB table publicly accessible—is a major security risk. DynamoDB tables should never be exposed publicly to protect sensitive data.

Option D involves manual credential management, which is not recommended since Lambda execution roles and IAM policies automate permissions and improve security. Temporary credentials are better managed through AWS IAM roles.

Ensuring that Lambda has only the minimum required permissions improves your application's security posture and aligns with AWS best practices. IAM roles for Lambda functions simplify permission management, reduce the risk of credential leakage, and enforce granular control over resource access.

Question No 10:

Which AWS service allows a developer to automate deployment pipelines with integration to AWS Lambda, CloudFormation, and CodeCommit repositories?

A. AWS CodeBuild
B. AWS CodePipeline
C. AWS CodeDeploy
D. AWS CloudWatch

Correct Answer: B

Explanation:

This question focuses on choosing the AWS service that facilitates continuous integration and continuous deployment (CI/CD) pipelines integrating Lambda, CloudFormation, and CodeCommit.

Option A refers to AWS CodeBuild, a build service that compiles source code and runs tests but does not manage full deployment pipelines.

Option B is the correct answer. AWS CodePipeline automates the entire release process, integrating source control (like CodeCommit), build (CodeBuild), and deployment (CodeDeploy, Lambda, CloudFormation). It enables developers to define pipelines that automatically move code through different stages, improving release speed and quality.
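As a hedged sketch, a three-stage pipeline of this kind could be created with boto3 roughly as follows; the repository, build project, stack, artifact bucket, and role ARNs are all placeholders:

```python
# Hypothetical sketch; every name and ARN below is a placeholder.
import boto3

codepipeline = boto3.client("codepipeline")

def action(name, category, provider, configuration, inputs=None, outputs=None):
    """Small helper to keep the stage definitions readable."""
    a = {
        "name": name,
        "actionTypeId": {"category": category, "owner": "AWS",
                         "provider": provider, "version": "1"},
        "configuration": configuration,
    }
    if inputs:
        a["inputArtifacts"] = [{"name": n} for n in inputs]
    if outputs:
        a["outputArtifacts"] = [{"name": n} for n in outputs]
    return a

codepipeline.create_pipeline(
    pipeline={
        "name": "serverless-release",
        "roleArn": "arn:aws:iam::111111111111:role/codepipeline-service-role",  # placeholder
        "artifactStore": {"type": "S3", "location": "example-artifact-bucket"},  # placeholder
        "stages": [
            {"name": "Source", "actions": [action(
                "CodeCommitSource", "Source", "CodeCommit",
                {"RepositoryName": "app-repo", "BranchName": "main"},
                outputs=["SourceOutput"])]},
            {"name": "Build", "actions": [action(
                "CodeBuild", "Build", "CodeBuild",
                {"ProjectName": "app-build"},
                inputs=["SourceOutput"], outputs=["BuildOutput"])]},
            {"name": "Deploy", "actions": [action(
                "CloudFormationDeploy", "Deploy", "CloudFormation",
                {"ActionMode": "CREATE_UPDATE",
                 "StackName": "app-stack",
                 "TemplatePath": "BuildOutput::packaged.yaml",
                 "Capabilities": "CAPABILITY_IAM",
                 "RoleArn": "arn:aws:iam::111111111111:role/cfn-deploy-role"},  # placeholder
                inputs=["BuildOutput"])]},
        ],
    }
)
```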

Option C AWS CodeDeploy automates deployments to EC2 instances or Lambda but is more narrowly focused on deployment rather than orchestrating an entire pipeline.

Option D AWS CloudWatch provides monitoring and alerting, not deployment automation.

Using CodePipeline, developers can build complex workflows that deploy serverless applications, infrastructure as code (CloudFormation), and application code from repositories, all in an automated, repeatable process. This approach reduces manual errors and accelerates delivery.