Google Associate Cloud Engineer Exam Dumps & Practice Test Questions

Question 1:

You are building an application hosted on Google App Engine that must send and receive data through Cloud Pub/Sub. However, the Cloud Pub/Sub API is currently not active in your Google Cloud project. You intend to authenticate the application using a service account.

What is the most appropriate step to enable communication between App Engine and Cloud Pub/Sub?

A. Manually activate the Cloud Pub/Sub API through the API Library in the Google Cloud Console.
B. No action is needed; the API will auto-enable when accessed using the service account.
C. Use Deployment Manager to deploy your app and let it handle enabling necessary APIs.
D. Assign the Pub/Sub Admin role to the App Engine default service account, allowing it to activate the API upon initial use.

Correct Answer: A

Explanation:

In order to enable communication between Google App Engine and Cloud Pub/Sub, the Cloud Pub/Sub API must first be enabled in the Google Cloud project. Here’s an explanation of each option:

The Correct Answer:

A. Manually activate the Cloud Pub/Sub API through the API Library in the Google Cloud Console.
The correct first step is to manually enable the Cloud Pub/Sub API through the API Library in the Google Cloud Console. This is because APIs in Google Cloud are typically not enabled by default for new projects. To use the Cloud Pub/Sub API, the administrator or project owner must enable it via the console. Once enabled, the App Engine application can authenticate using a service account to interact with Cloud Pub/Sub.
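Besides the API Library page in the Console, the same activation can be done from the command line. A minimal sketch, assuming the gcloud CLI is installed and authenticated against the target project:

```shell
# Enable the Cloud Pub/Sub API for the current project
gcloud services enable pubsub.googleapis.com

# Confirm the API is now active
gcloud services list --enabled --filter="name:pubsub.googleapis.com"
```

Either route satisfies the requirement; the key point is that enabling the API is an explicit step that must happen before the App Engine app can call Pub/Sub.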

Why the Other Options are Incorrect:

B. No action is needed; the API will auto-enable when accessed using the service account.
This option is incorrect because Google Cloud services do not automatically enable APIs when accessed by a service account. APIs must be explicitly enabled, either manually or through an automation tool (like Deployment Manager). Without enabling the Cloud Pub/Sub API, the App Engine application will not be able to send or receive data through Cloud Pub/Sub.

C. Use Deployment Manager to deploy your app and let it handle enabling necessary APIs.
While Deployment Manager can be used to automate infrastructure management, it does not automatically enable APIs like Cloud Pub/Sub unless explicitly configured to do so in the deployment scripts. The Cloud Pub/Sub API still needs to be manually enabled or explicitly listed in the configuration for the deployment.

D. Assign the Pub/Sub Admin role to the App Engine default service account, allowing it to activate the API upon initial use.
This option is incorrect because assigning roles (like Pub/Sub Admin) to the service account does not automatically enable the Cloud Pub/Sub API. It provides the necessary permissions to interact with Cloud Pub/Sub, but API activation is a separate step that must be done manually through the Google Cloud Console.

To enable communication between App Engine and Cloud Pub/Sub, the Cloud Pub/Sub API must be manually activated first in the Google Cloud project. Therefore, the correct answer is A.



Question 2:

You are managing infrastructure resources distributed across several GCP projects. To consolidate logging and metrics into a single view using Google Cloud Monitoring, you aim to centralize monitoring into one unified dashboard.

Which strategy best supports this objective?

A. Configure a Shared VPC across the projects and enable Monitoring in one project.
B. Establish individual Monitoring accounts per project, then grant cross-project access using service accounts.
C. Create one central Monitoring workspace and link all projects to it.
D. Set up Monitoring in one project and use resource groups to include the other projects based on their names.

Correct Answer: C

Explanation:

In Google Cloud, centralizing logging and metrics for resources spread across multiple projects is a common need for administrators, especially when managing large or complex infrastructures. To achieve this in Google Cloud Monitoring, the best approach is to utilize a centralized workspace that can collect data from all projects and give you a unified view.

Correct Strategy:

C. Create one central Monitoring workspace and link all projects to it.
This is the recommended method for consolidating monitoring data from multiple projects into a single view. A Monitoring workspace (now called a metrics scope) acts as the central point where metrics, logs, and alerts are aggregated. By linking all of your GCP projects to one central workspace, you can monitor resources across every project from a single dashboard. This capability is part of Google Cloud's Operations Suite (formerly Stackdriver) and is the most straightforward, scalable way to centralize monitoring across several projects.
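Projects are usually linked to a workspace through the Console (Monitoring > Settings), but the Cloud Monitoring Metrics Scopes API can script it. A hypothetical sketch — the project numbers below are placeholders, and you should verify the API surface for your SDK version:

```shell
# Add project 222222222222 to the metrics scope of the central
# (scoping) project 111111111111 — both project numbers are placeholders
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v1/locations/global/metricsScopes/111111111111/projects" \
  -d '{"name": "locations/global/metricsScopes/111111111111/projects/222222222222"}'
```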

Why the Other Options are Incorrect:

A. Configure a Shared VPC across the projects and enable Monitoring in one project.
While a Shared VPC can help with network connectivity between projects, it does not address the centralized monitoring of resources across multiple projects. Google Cloud Monitoring is separate from VPC configurations and requires linking to a centralized workspace for consolidated monitoring. Shared VPC focuses on network traffic, not monitoring or logging.

B. Establish individual Monitoring accounts per project, then grant cross-project access using service accounts.
This approach is inefficient because it creates multiple individual Monitoring accounts per project, making it harder to manage and consolidate data in one place. Granting cross-project access using service accounts also adds unnecessary complexity and is not the best way to unify monitoring. A better approach is to use a central Monitoring workspace as mentioned in option C.

D. Set up Monitoring in one project and use resource groups to include the other projects based on their names.
While resource groups can help organize resources within Google Cloud, they are not the ideal method for centralizing monitoring across projects. Google Cloud Monitoring requires linking projects directly to a central workspace, and using resource groups based on names does not provide the level of aggregation or functionality needed for cross-project monitoring.

The most appropriate strategy for centralizing monitoring across multiple GCP projects is to create one central Monitoring workspace and link all the projects to it, which ensures that data from all projects is collected and displayed in a single dashboard. This approach is scalable, easier to manage, and aligned with best practices for Google Cloud Monitoring. Therefore, the correct answer is C.


Question 3:

You need to deploy a highly available application to Compute Engine using a managed instance group. This application must always run exactly one instance, ensuring no duplication, while automatically recovering if it crashes.

Which setup will guarantee this behavior?

A. Turn on autoscaling and set both minimum and maximum instance count to 1.
B. Keep autoscaling off and set the instance count range to exactly 1.
C. Enable autoscaling with 1 as the minimum and 2 as the maximum instance count.
D. Turn off autoscaling, but configure the group to allow up to 2 instances.

Correct Answer: B

Explanation:

When using a Managed Instance Group (MIG) in Google Cloud Compute Engine, it's important to ensure that the application always runs exactly one instance, and can recover automatically in case of a failure. Here's an explanation of the different options:

Correct Strategy:

B. Keep autoscaling off and set the instance count range to exactly 1.
This setup is the most straightforward and effective solution for ensuring that exactly one instance is always running, with no duplication and automatic recovery in case of failure. By turning off autoscaling and setting the instance count range to exactly 1, the managed instance group will always maintain exactly one instance, regardless of load or other factors. If the instance crashes, the MIG will automatically attempt to recreate the instance to keep the desired state of 1 instance.
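The configuration described above can be sketched with the gcloud CLI. The template name, machine type, zone, and health-check settings are illustrative assumptions, not requirements:

```shell
# Instance template the MIG will stamp out (machine config is illustrative)
gcloud compute instance-templates create app-template \
    --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud

# Managed instance group with a fixed size of 1 and no autoscaler;
# the MIG recreates the instance if it is deleted or crashes
gcloud compute instance-groups managed create app-mig \
    --zone=us-central1-a --template=app-template --size=1

# Optional: autohealing via a health check, so a hung (not just crashed)
# instance is also replaced
gcloud compute health-checks create http app-hc --port=80
gcloud compute instance-groups managed update app-mig \
    --zone=us-central1-a --health-check=app-hc --initial-delay=300
```

Note that no autoscaler is attached at any point; the fixed `--size=1` alone gives the "exactly one instance, auto-recovered" behavior.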

Why the Other Options are Incorrect:

A. Turn on autoscaling and set both minimum and maximum instance count to 1.
While this might seem like a good option because the minimum and maximum instance count are both set to 1, autoscaling is unnecessary in this scenario. Autoscaling is designed to dynamically scale the number of instances based on load. Since you want exactly one instance always running, autoscaling is not needed, and keeping it on could add complexity. Disabling autoscaling (as in option B) is a cleaner and more appropriate approach.

C. Enable autoscaling with 1 as the minimum and 2 as the maximum instance count.
This option introduces autoscaling and allows up to 2 instances to run, which is not what is required. Even though the minimum is set to 1, allowing for 2 instances means that the application could scale up to 2 instances under certain conditions, violating the requirement to always run exactly one instance. It also introduces unnecessary complexity with autoscaling.

D. Turn off autoscaling, but configure the group to allow up to 2 instances.
Turning off autoscaling is a step in the right direction, but allowing up to 2 instances introduces the possibility of having duplicate instances, which contradicts the requirement. The goal is to ensure exactly one instance is running at all times, so the configuration should enforce this by limiting the number of instances to exactly 1 (as in option B).

The best approach to guarantee that exactly one instance of the application runs at all times, with automatic recovery if the instance crashes, is to turn off autoscaling and set the instance count range to exactly 1. This configuration ensures that only one instance is maintained and provides the necessary automatic recovery in case of failure. Therefore, the correct answer is B.


Question 4:

You are performing an IAM audit for a GCP project named "my-project" and want a complete overview of which users, groups, or service accounts hold specific roles within the project.

What is the most effective and thorough method to collect this information?

A. Execute gcloud iam roles list and inspect the output for user-role associations.
B. Use gcloud iam service-accounts list to review roles linked to service accounts.
C. Access the IAM page in the GCP Console for "my-project" and view the assigned roles per member.
D. Check the Roles tab in the Console to examine available roles and their use.

Correct Answer: C

Explanation:

When performing an IAM audit in Google Cloud, the goal is to gather detailed information about the users, groups, and service accounts assigned to specific roles within the project. Let's evaluate the options:

Correct Strategy:

C. Access the IAM page in the GCP Console for "my-project" and view the assigned roles per member.
The most effective and thorough method for conducting an IAM audit in Google Cloud is to access the IAM page in the Console. This page allows you to view all IAM members (users, groups, service accounts) and the roles they are assigned within the project. It provides a clear overview of who has access to what in terms of permissions, and it enables you to quickly review and export this information for auditing purposes. The Console presents this data in a centralized, easy-to-understand format, making it the ideal choice for a comprehensive IAM audit.
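If you also want the same member-to-role data in scriptable form (for example, to export the audit results), the project's IAM policy can be flattened from the CLI:

```shell
# One row per (role, member) pair for the project "my-project"
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --format="table(bindings.role, bindings.members)"
```

This complements the Console view rather than replacing it; the IAM page remains the quickest way to browse assignments interactively.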

Why the Other Options are Incorrect:

A. Execute gcloud iam roles list and inspect the output for user-role associations.
The gcloud iam roles list command lists roles available in the project but does not provide details about which users, groups, or service accounts are assigned to those roles. This command focuses on the roles themselves, not the assignments of those roles to members, making it an incomplete method for auditing IAM permissions.

B. Use gcloud iam service-accounts list to review roles linked to service accounts.
While the gcloud iam service-accounts list command is useful for listing service accounts within a project, it does not provide a comprehensive overview of roles assigned to those service accounts. It only provides information about the service accounts themselves, and you would need additional steps or commands (like gcloud projects get-iam-policy) to correlate roles with specific service accounts.

D. Check the Roles tab in the Console to examine available roles and their use.
The Roles tab in the GCP Console displays the available roles in a project, but it does not show the members or users who have been assigned those roles. This makes it insufficient for an IAM audit where you need to know who has been granted specific roles rather than just understanding the roles themselves.

The most thorough and effective method for conducting an IAM audit in Google Cloud is to use the IAM page in the GCP Console, where you can view the roles assigned to all members (users, groups, service accounts) within the project. This provides the most comprehensive view for auditing permissions and access control in the project. Therefore, the correct answer is C.



Question 5:

You have been asked to create a new billing account on GCP and connect it to an existing project.

Which combination of permissions and steps is required to complete this action?

A. Have the Project Billing Manager role for the project, then link it to an existing billing account.
B. As a Project Billing Manager, create a new billing account and associate it with the current project.
C. With Billing Account Administrator privileges, create a new project and link it to the existing billing account.
D. As a Billing Account Administrator, create a new billing account and attach it to the existing project.

Correct Answer: D

Explanation:

To complete the task of creating a new billing account and linking it to an existing project in Google Cloud Platform (GCP), the required steps and permissions depend on the specific actions involved. Here's a breakdown of the correct approach and why it is the best solution:

Correct Strategy:

D. As a Billing Account Administrator, create a new billing account and attach it to the existing project.
To create a new billing account in GCP, you must have the Billing Account Administrator role. This role allows you to manage billing accounts, including creating new ones. Once the billing account is created, you can then attach it to the existing project. The Billing Account Administrator can link the billing account to an existing project, ensuring that the project's costs are tracked under the newly created account.
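Billing accounts themselves are created in the Console (Billing > Create account), but the linking step can be scripted. A sketch, assuming the gcloud CLI (older SDKs expose these commands under `gcloud beta billing`); the account ID is a placeholder:

```shell
# List billing accounts you can administer, to find the new account's ID
gcloud billing accounts list

# Attach the existing project to the new billing account
# (0X0X0X-0X0X0X-0X0X0X is a placeholder billing account ID)
gcloud billing projects link my-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X
```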

Why the Other Options are Incorrect:

A. Have the Project Billing Manager role for the project, then link it to an existing billing account.
While the Project Billing Manager role allows the user to link a project to an existing billing account, it does not have the permissions necessary to create a new billing account. This option is only useful if the billing account already exists, but the task here specifies creating a new billing account, which requires the Billing Account Administrator role.

B. As a Project Billing Manager, create a new billing account and associate it with the current project.
The Project Billing Manager role does not have the permissions to create a billing account. It only allows the user to link a project to an existing billing account. To create a new billing account, the user needs the Billing Account Administrator role, not just the Project Billing Manager.

C. With Billing Account Administrator privileges, create a new project and link it to the existing billing account.
The Billing Account Administrator role does allow for creating and managing billing accounts, but it does not have the ability to create projects. The task here involves creating a billing account and attaching it to an existing project, not creating a new project. Therefore, this option is not relevant to the task.

To create a new billing account and link it to an existing project, you need the Billing Account Administrator role. This role gives the necessary permissions to create the billing account and then associate it with the project. Therefore, the correct answer is D.


Question 6:

In your GCP environment, you have two projects: one for managing service accounts (proj-sa) and another for running Compute Engine VMs (proj-vm). You want a service account from proj-sa to have permission to take VM snapshots in proj-vm, following IAM best practices.

What is the most suitable way to accomplish this securely?

A. Download the service account’s private key and add it to each VM’s custom metadata.
B. Add the private key to the SSH configuration of each VM.
C. Assign the Compute Storage Admin role to the service account within the proj-vm project.
D. Set the Compute Engine API scope to read/write during VM creation.

Correct Answer: C

Explanation:

To ensure that a service account from one project (proj-sa) can perform actions (such as taking snapshots) in another project (proj-vm), it’s essential to follow best practices for identity and access management (IAM) in Google Cloud. Specifically, this involves using roles and permissions that grant the necessary access to the service account in the correct project, without resorting to less secure or improper configurations.

Correct Strategy:

C. Assign the Compute Storage Admin role to the service account within the proj-vm project.
To allow the service account in proj-sa to manage VM snapshots in proj-vm, the most appropriate method is to assign the Compute Storage Admin role to that service account within proj-vm. This role provides the necessary permissions to manage snapshots, such as creating and deleting snapshots, while adhering to IAM best practices. Importantly, this approach uses role-based access control (RBAC), where the service account from proj-sa gets access to perform specific actions in proj-vm without needing to expose private keys or modify VM metadata.
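The cross-project grant is a single IAM binding on proj-vm. A sketch — the service account name `snapshot-sa` is a placeholder:

```shell
# Grant the proj-sa service account the Compute Storage Admin role
# on the proj-vm project (cross-project IAM binding)
gcloud projects add-iam-policy-binding proj-vm \
    --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"
```

No keys are downloaded or distributed; the VM snapshot operations authenticate as the service account through IAM alone.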

Why the Other Options are Incorrect:

A. Download the service account’s private key and add it to each VM’s custom metadata.
This is not a secure or recommended approach. Private keys should never be exposed or shared across projects or VMs, as this introduces significant security risk. Storing sensitive keys in VM metadata leaves them vulnerable to exposure and violates best practices for credential management. This method also scales poorly and invites credential mismanagement.

B. Add the private key to the SSH configuration of each VM.
This is similar to option A and also not recommended. Embedding private keys into SSH configurations is an insecure practice, as it would give unauthorized users access if the SSH configuration is compromised. This method is not scalable and goes against the principles of least privilege and secure access.

D. Set the Compute Engine API scope to read/write during VM creation.
Setting API scopes is used for controlling access to Google Cloud APIs from the VM, but it does not provide the specific permissions required to take snapshots. API scopes determine what the VM itself can access but do not grant permissions to a service account in a separate project (proj-sa) for actions in another project (proj-vm). This option does not address the problem correctly.

The most secure and effective way to allow a service account in proj-sa to take VM snapshots in proj-vm is to assign it the Compute Storage Admin role within the proj-vm project. This follows IAM best practices, ensures the principle of least privilege, and uses role-based access control to manage permissions. Therefore, the correct answer is C.



Question 7:

You initially deployed a Google App Engine application in the us-central region. Later, you decide that users in Asia would be better served if the app ran in the asia-northeast1 region.

What is the correct way to move the application to this new region?

A. Change the default region for the current project to asia-northeast1.
B. Reconfigure the existing App Engine app to run in the new region.
C. Add a second App Engine deployment in the new region within the same project.
D. Create a new project and redeploy the App Engine application to the desired region.

Correct Answer: D

Explanation:

Google App Engine (GAE) does not support changing the region of an existing application after it has been deployed. The region in which the application is deployed is set when the application is first created, and it cannot be modified later.

Here’s the explanation of the answer choices:

Correct Strategy:

D. Create a new project and redeploy the App Engine application to the desired region.
Since an App Engine application's region is fixed at creation and each Google Cloud project can host only one App Engine application, the only way to run the app in asia-northeast1 is to create a new project, create the App Engine application there with asia-northeast1 as its region, and redeploy your code. You can then migrate or copy any necessary configuration and data to the new deployment; the region of the already existing application can never be changed.
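The redeployment boils down to three gcloud commands. The project ID `my-app-asia` and the presence of an `app.yaml` are assumptions for illustration:

```shell
# New project to host the relocated app (project ID is a placeholder)
gcloud projects create my-app-asia --name="My App Asia"

# The region is fixed here, at app creation time, and cannot change later
gcloud app create --project=my-app-asia --region=asia-northeast1

# Redeploy the application code into the new region
gcloud app deploy app.yaml --project=my-app-asia
```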

Why the Other Options are Incorrect:

A. Change the default region for the current project to asia-northeast1.
This option is incorrect because Google App Engine does not allow the region of an already-deployed app to be changed. The region is set when the app is initially deployed and cannot be updated later. Changing the "default region" for a project doesn't impact the existing App Engine application.

B. Reconfigure the existing App Engine app to run in the new region.
This is not possible. Once an App Engine app is deployed to a specific region, it cannot be reconfigured to run in another region. The region is set during the initial deployment process and is tied to that deployment. You would need to create a new app in the desired region.

C. Add a second App Engine deployment in the new region within the same project.
Although you can deploy multiple services and versions of an App Engine application within the same project, they all run in the application's single region, which is fixed when the app is created. A project cannot host a second App Engine application in a different region, so there is no way to "add" a deployment in asia-northeast1 alongside the existing one.

To move your Google App Engine application to a different region, you must redeploy it in the new region (asia-northeast1). Since the region cannot be changed for an existing deployment, you should create a new project or use the same project and deploy the app again in the desired region. Therefore, the correct answer is D.



Question 8:

You need to provide three users with the ability to read and modify table data in a specific Cloud Spanner instance.

What is the most appropriate way to grant these permissions?

A. Run gcloud iam roles describe roles/spanner.databaseUser and apply the role to each individual.
B. Use gcloud iam roles describe roles/spanner.databaseUser, create a user group, assign users to the group, and then apply the role to that group.
C. Execute gcloud iam roles describe roles/spanner.viewer --project my-project and grant each user that role.
D. Assign the roles/spanner.viewer role to a group after adding the users, using the described command.

Correct Answer: B

Explanation:

In this scenario, the goal is to grant the necessary permissions to three users to allow them to read and modify table data in Cloud Spanner. The role that gives the appropriate permissions to read and modify table data in Cloud Spanner is roles/spanner.databaseUser, which allows users to perform administrative and data manipulation tasks such as reading, writing, and modifying data in Cloud Spanner databases.

Let's examine the answer choices:

A. Run gcloud iam roles describe roles/spanner.databaseUser and apply the role to each individual.
This is not the most efficient solution. While this approach will work (applying the role directly to each individual user), it is not the most scalable or manageable solution, especially when you have multiple users. It would be better to group the users into a user group and apply the role to the group instead of individual users.

B. Use gcloud iam roles describe roles/spanner.databaseUser, create a user group, assign users to the group, and then apply the role to that group.
This is the most efficient approach. By creating a user group and assigning the roles/spanner.databaseUser role to that group, you can manage permissions more effectively. This way, all users in the group inherit the same permissions, making it easier to manage and update permissions for multiple users at once. Assigning roles to groups is considered a best practice for scalability and ease of management in large organizations.
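Once the group exists (created in Google Workspace or Cloud Identity), the role can be granted at the instance level in one command. The instance name and group address are placeholders:

```shell
# Grant read/write data access on a specific Spanner instance to the group
gcloud spanner instances add-iam-policy-binding my-instance \
    --member="group:spanner-users@example.com" \
    --role="roles/spanner.databaseUser"
```

Adding or removing a user from the group later changes their access immediately, with no further IAM edits.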

C. Execute gcloud iam roles describe roles/spanner.viewer --project my-project and grant each user that role.
The roles/spanner.viewer role only provides read-only access to Cloud Spanner resources, which doesn't meet the requirement of allowing users to modify table data. The appropriate role for both read and modify permissions is roles/spanner.databaseUser, not roles/spanner.viewer.

D. Assign the roles/spanner.viewer role to a group after adding the users, using the described command.
This option is not correct for the same reason as option C: the roles/spanner.viewer role is read-only, and users need the ability to modify table data, which is not allowed with the viewer role.

The correct answer is B, because creating a user group and applying the necessary roles/spanner.databaseUser role to the group allows for better management and scalability when granting the appropriate permissions to users.


Question 9:

Your company uses multiple GCP services, and you are tasked with enforcing encryption of data at rest using customer-managed encryption keys (CMEK). You want to apply this policy across all Cloud Storage buckets.

What is the best way to enforce CMEK for Cloud Storage?

A. Create a CMEK in Cloud KMS and manually assign it to each bucket.
B. Enable CMEK on the billing account, and it will apply to all buckets automatically.
C. Apply a uniform bucket-level policy to force all uploads to use CMEK.
D. Create an organization policy that enforces CMEK usage for Cloud Storage.

Correct Answer: D

Explanation:

In Google Cloud, data stored in Cloud Storage is encrypted by default, but you have the option to use customer-managed encryption keys (CMEK) to manage encryption and decryption operations. To enforce the use of CMEK across all Cloud Storage buckets in your organization, you need a scalable, organization-wide policy that applies uniformly to all buckets without requiring individual configurations for each one.

Let's break down each option:

A. Create a CMEK in Cloud KMS and manually assign it to each bucket.
While this option would allow you to use CMEK for specific Cloud Storage buckets, it is not efficient for applying the policy across all buckets in an organization. Manually assigning a CMEK to each bucket could be error-prone and difficult to manage at scale, especially when dealing with multiple buckets and projects.

B. Enable CMEK on the billing account, and it will apply to all buckets automatically.
This is incorrect. CMEK configurations cannot be applied at the billing account level for Cloud Storage. Enabling CMEK at the billing account level does not automatically enforce it on all buckets. You would still need to apply CMEK at the individual bucket or project level.

C. Apply a uniform bucket-level policy to force all uploads to use CMEK.
While uniform bucket-level policies allow you to manage bucket-level settings consistently, this option does not fully enforce the use of CMEK. It mainly applies to managing access control and other bucket settings, not encryption policies across all buckets in an organization.

D. Create an organization policy that enforces CMEK usage for Cloud Storage.
This is the correct and most efficient approach. By creating an organization policy, you can enforce the use of CMEK across all Cloud Storage buckets in the entire organization. Organization policies are powerful tools that allow you to centrally enforce compliance and security requirements at the organization or project level. Using an organization policy ensures that all buckets within the organization follow the same encryption policy and prevents users from creating buckets without CMEK.
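In practice this is done with the predefined list constraint `gcp.restrictNonCmekServices`, whose denied values are the services that must use CMEK. A sketch, assuming the `gcloud org-policies` command group; the organization ID is a placeholder:

```shell
# policy.yaml — require CMEK for Cloud Storage across the organization
# (123456789012 is a placeholder organization ID)
cat > policy.yaml <<'EOF'
name: organizations/123456789012/policies/gcp.restrictNonCmekServices
spec:
  rules:
    - values:
        deniedValues:
          - storage.googleapis.com
EOF

gcloud org-policies set-policy policy.yaml
```

With this policy in place, attempts to create Cloud Storage resources protected only by Google-managed keys are rejected organization-wide.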

The best way to enforce customer-managed encryption keys (CMEK) for all Cloud Storage buckets is by creating an organization policy that enforces CMEK usage, ensuring consistent application across all buckets in your organization. Therefore, the correct answer is D.


Question 10:

You are managing GCP resources through Terraform and want to ensure that no developer can create Compute Engine instances without using specific labels. This requirement must be enforced at the organization level.

Which approach best enforces this policy?

A. Define a Terraform module that includes labels, and make developers use it.
B. Create an organization policy constraint that requires specific labels for Compute Engine resources.
C. Use Cloud Logging to alert if any instance is created without labels.
D. Set IAM conditions to allow instance creation only when labels are specified.

Correct Answer: B

Explanation:

When managing resources at the organization level in Google Cloud, especially using tools like Terraform, it is essential to enforce compliance policies to ensure consistency and governance. In this case, the requirement is to ensure that all Compute Engine instances have specific labels applied, and this needs to be enforced at the organization level.

Let's examine each option:

A. Define a Terraform module that includes labels, and make developers use it.
This approach involves defining a Terraform module with predefined labels and requiring developers to use this module when provisioning resources. While this approach can ensure that labels are used within Terraform-managed infrastructure, it is not enforceable at the organization level. Developers could bypass this requirement by directly provisioning instances without using the prescribed module, or they might use other tools besides Terraform.

B. Create an organization policy constraint that requires specific labels for Compute Engine resources.
This is the most appropriate approach. Google Cloud provides organization policies, which allow administrators to enforce rules across all resources in an organization. By creating an organization policy constraint, you can specify requirements for labels on Compute Engine instances, ensuring that all instances must comply with the label policy. This method is central and ensures consistent enforcement across all developers and Terraform configurations, making it the most effective way to meet the requirement.
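Label requirements on Compute Engine instances can be expressed as a custom organization policy constraint with a CEL condition. A hypothetical sketch — the organization ID, constraint name, and the `env` label key are all placeholders:

```shell
# constraint.yaml — custom constraint requiring an "env" label on
# newly created Compute Engine instances (IDs/names are placeholders)
cat > constraint.yaml <<'EOF'
name: organizations/123456789012/customConstraints/custom.requireEnvLabel
resourceTypes:
  - compute.googleapis.com/Instance
methodTypes:
  - CREATE
condition: "'env' in resource.labels"
actionType: ALLOW
displayName: Require an env label on VM instances
EOF

gcloud org-policies set-custom-constraint constraint.yaml

# enforce.yaml — turn the custom constraint on for the organization
cat > enforce.yaml <<'EOF'
name: organizations/123456789012/policies/custom.requireEnvLabel
spec:
  rules:
    - enforce: true
EOF

gcloud org-policies set-policy enforce.yaml
```

Because the check runs at the API layer, it applies to instances created through Terraform, gcloud, the Console, or any other client.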

C. Use Cloud Logging to alert if any instance is created without labels.
While Cloud Logging can be used to monitor and alert on resources that do not comply with specific requirements, it is a reactive approach. It only provides visibility after the fact and does not prevent the creation of resources without labels. It would be better to have a proactive enforcement mechanism, such as an organization policy, rather than relying on alerts alone.

D. Set IAM conditions to allow instance creation only when labels are specified.
IAM conditions are used to enforce access control policies based on specific conditions, such as the presence of labels. However, IAM conditions are not suitable for enforcing specific labels on resources directly. IAM roles control who can perform actions, but they don't allow for the enforcement of resource configuration requirements, such as requiring specific labels on instances.

The most effective and enforceable method to ensure that no developer can create Compute Engine instances without using specific labels at the organization level is to create an organization policy constraint that mandates specific labels for Compute Engine resources. Therefore, the correct answer is B.