Google Associate Cloud Engineer Practice Test - Questions Answers, Page 27

Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?

A.
* Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning. * Use the human approver's IAM account for the provisioning.
B.
* Attach a single service account to the compute instances. * Add minimal rights to the service account. * Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.
C.
* Attach a single service account to the compute instances. * Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.
D.
* Create multiple service accounts, one for each pipeline, with the appropriate minimal Identity and Access Management (IAM) permissions. * Use a secret manager service to store the key files of the service accounts. * Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Suggested answer: B

Explanation:

The best option is to attach a single service account to the compute instances and add minimal rights to the service account. Then, allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources. This way, the service account can use short-lived access tokens to authenticate to Google Cloud APIs without needing to manage service account keys. This option follows the principle of least privilege and reduces the risk of credential leakage and misuse.

Option A is not recommended because it requires human intervention, which can slow down the CI/CD pipeline and introduce human errors. Option C is not secure because it grants all required IAM permissions to a single service account, which can increase the impact of a compromised key. Option D is not cost-effective because it requires creating and managing multiple service accounts and keys, as well as using a secret manager service.

1: https://cloud.google.com/iam/docs/impersonating-service-accounts

2: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys

3: https://cloud.google.com/iam/docs/understanding-service-accounts
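
As a sketch of the impersonation pattern the suggested answer describes (in practice, Google Cloud impersonation is granted between service accounts via the Service Account Token Creator role; all project and account names below are placeholders):

```shell
# Placeholders: PROJECT_ID, ci-runner (minimal SA), deployer (privileged SA).

# Attach the minimal service account to the pipeline's instances:
gcloud compute instances create ci-runner-1 \
    --service-account=ci-runner@PROJECT_ID.iam.gserviceaccount.com \
    --scopes=cloud-platform

# Let the minimal account mint short-lived tokens for the deployer account:
gcloud iam service-accounts add-iam-policy-binding \
    deployer@PROJECT_ID.iam.gserviceaccount.com \
    --member="serviceAccount:ci-runner@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"

# The pipeline then provisions infrastructure with short-lived credentials:
gcloud compute instances list \
    --impersonate-service-account=deployer@PROJECT_ID.iam.gserviceaccount.com
```

No service account keys are downloaded at any point; the tokens expire on their own, which is what makes this approach preferable to key files.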

You recently discovered that your developers are using many service account keys during their development process. While you work on a long-term improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:

* All service accounts that require a key should be created in a centralized project called pj-sa.

* Service account keys should only be valid for one day.

You need a Google-recommended solution that minimizes cost. What should you do?

A.
Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.
B.
Implement a Kubernetes Cronjob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
C.
Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
D.
Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
Suggested answer: C

Explanation:

According to the Google Cloud documentation, you can use organization policy constraints to control the creation and expiration of service account keys. The constraints are:

constraints/iam.disableServiceAccountKeyCreation: This boolean constraint blocks the creation of service account keys wherever it is enforced. By enforcing it at the organization level and adding an exception for the pj-sa project, you can prevent developers from creating service account keys in any other project.

constraints/iam.serviceAccountKeyExpiryHours: This constraint limits the lifetime of newly created service account keys. By allowing only the value 24h at the organization level, you can ensure that all new service account keys expire after one day.

These constraints are recommended by Google Cloud as best practices to minimize the risk of service account key misuse or compromise. They also help you reduce the cost of managing service account keys, as you do not need to implement a custom solution to rotate or delete them.

1: Associate Cloud Engineer Certification Exam Guide | Learn - Google Cloud

5: Create and delete service account keys - Google Cloud

Organization policy constraints for service accounts
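
The two constraints can be applied with gcloud roughly as follows (a sketch using the legacy resource-manager org-policies commands; ORG_ID is a placeholder, and the exact flag spellings should be checked against the current CLI reference):

```shell
# Block service account key creation everywhere in the organization:
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation --organization=ORG_ID

# Exempt the centralized pj-sa project:
gcloud resource-manager org-policies disable-enforce \
    iam.disableServiceAccountKeyCreation --project=pj-sa

# Limit the lifetime of newly created keys to 24 hours:
gcloud resource-manager org-policies allow \
    iam.serviceAccountKeyExpiryHours 24h --organization=ORG_ID
```

Both policies take effect immediately and require no scheduled rotation jobs, which is why this approach minimizes cost.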

You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?

A.
Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
B.
Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
C.
Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
D.
Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.
Suggested answer: C

Explanation:

The best option is to instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Then, add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key. This way, you can grant the consultant access to the instance without requiring a Google account or exposing the instance's public IP address. This option also follows the best practice of using user-managed SSH keys instead of service account keys for SSH access [1].

Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance [2]. Option B is not secure because it exposes the instance's public IP address, which can increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant. Sharing the private key with anyone else can compromise the security of the SSH connection [3].

1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys

2: https://cloud.google.com/iap/docs/using-tcp-forwarding

3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
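
The key exchange in answer C can be sketched as follows (the instance name, zone, username, and internal IP are placeholders):

```shell
# Consultant's machine: generate a key pair; only the PUBLIC key is shared.
ssh-keygen -t ed25519 -f consultant-key -C consultant

# Your machine: add the consultant's public key to the instance metadata.
gcloud compute instances add-metadata INSTANCE --zone=ZONE \
    --metadata="ssh-keys=consultant:$(cat consultant-key.pub)"

# Consultant, over the VPN: connect with the private key.
ssh -i consultant-key consultant@INTERNAL_IP
```

The private key never leaves the consultant's machine, and no Google account is involved in the connection.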

You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do before you run the gcloud compute instances list command?

Choose 2 answers

A.
Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.
B.
Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.
C.
Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.
D.
Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
E.
Run gcloud config set project $my_project to set the default project for gcloud CLI.
Suggested answer: A, E

Explanation:

Before you run the gcloud compute instances list command, you need to do two things: authenticate with your user account and set the default project for gcloud CLI.

To authenticate with your user account, you need to run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI. This will authorize the gcloud CLI to access Google Cloud resources on your behalf [1].

To set the default project for gcloud CLI, you need to run gcloud config set project $my_project, where $my_project is the ID of the project that contains the instances you want to list. This will save you from having to specify the project flag for every gcloud command [2].

Option B is not recommended, because using a service account key increases the risk of credential leakage and misuse. It is also not necessary, because you can use your user account to authenticate to the gcloud CLI [3]. Option C is not correct, because there is no such thing as a Cloud Identity user account key. Cloud Identity is a service that provides identity and access management for Google Cloud users and groups [4]. Option D is not required, because the gcloud compute instances list command does not depend on the default zone. You can list instances from all zones or filter by a specific zone using the --filter flag [5].

1: https://cloud.google.com/sdk/docs/authorizing

2: https://cloud.google.com/sdk/gcloud/reference/config/set

3: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys

4: https://cloud.google.com/identity/docs/overview

5: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
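
The two required steps, as a sketch (my-project is a placeholder project ID):

```shell
gcloud auth login                     # authenticate with your user account
gcloud config set project my-project  # set the default project
gcloud compute instances list         # now lists instances in my-project
```

A zone default is optional here; without one, the list command simply returns instances from every zone in the project.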


During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

A.
Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.
B.
Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.
C.
Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.
D.
Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.
Suggested answer: D

Explanation:

https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints The domain restricted sharing constraint (constraints/iam.allowedPolicyMemberDomains) defines the set of identities that can be added to IAM policies. When it is enforced with an allowed list, only members from the allowed domains can be granted roles on your resources. Enforcing the constraint has no effect on existing IAM bindings, so users who were added before the policy took effect remain in place. You therefore need to set the constraint to block new mismatched users and then retroactively remove the existing mismatched users, which is why option D is correct.
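
As a sketch, the constraint can be enforced and an existing mismatched binding removed like this (note that iam.allowedPolicyMemberDomains takes Google Workspace customer IDs rather than raw domain names; CUSTOMER_ID, ORG_ID, my-project, and the example member are placeholders):

```shell
# Allow only identities belonging to your Workspace customer:
gcloud resource-manager org-policies allow \
    iam.allowedPolicyMemberDomains CUSTOMER_ID --organization=ORG_ID

# Retroactively remove an existing mismatched user:
gcloud projects remove-iam-policy-binding my-project \
    --member="user:external.user@gmail.com" --role="roles/viewer"
```

After the constraint is in place, any attempt to grant a role to an out-of-domain identity fails, so no ongoing audit for new mismatches is needed.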

You are responsible for a web application on Compute Engine. You want your support team to be notified automatically if users experience high latency for at least 5 minutes. You need a Google-recommended solution with no development cost. What should you do?

A.
Create an alert policy to send a notification when the HTTP response latency exceeds the specified threshold.
B.
Implement an App Engine service which invokes the Cloud Monitoring API and sends a notification in case of anomalies.
C.
Use the Cloud Monitoring dashboard to observe latency and take the necessary actions when the response latency exceeds the specified threshold.
D.
Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor your web application's latency.
Suggested answer: A

Explanation:

https://cloud.google.com/monitoring/alerts#alerting-example
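
A flag-based sketch of such an alert policy (these flags belong to the alpha gcloud surface and should be verified against the current reference; CHANNEL_ID and the 500 ms threshold are placeholders):

```shell
gcloud alpha monitoring policies create \
    --display-name="Web app high latency" \
    --combiner=OR \
    --condition-display-name="Backend latency > 500 ms for 5 min" \
    --condition-filter='metric.type="loadbalancing.googleapis.com/https/backend_latencies" resource.type="https_lb_rule"' \
    --if="> 500" \
    --duration=300s \
    --notification-channels=CHANNEL_ID
```

The 5-minute duration means the condition must hold continuously before the support team is notified, which filters out short latency spikes.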

Your team is building a website that handles votes from a large user population. The incoming votes will arrive at various rates. You want to optimize the storage and processing of the votes. What should you do?

A.
Save the incoming votes to Firestore. Use Cloud Scheduler to trigger a Cloud Functions instance to periodically process the votes.
B.
Use a dedicated instance to process the incoming votes. Send the votes directly to this instance.
C.
Save the incoming votes to a JSON file on Cloud Storage. Process the votes in a batch at the end of the day.
D.
Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud Functions instance to process the votes.
Suggested answer: D

Explanation:

Pub/Sub is a scalable and reliable messaging service that can handle large volumes of data from different sources at different rates. It allows you to decouple the producers and consumers of the data, and provides a durable and persistent storage for the messages until they are delivered. Cloud Functions is a serverless platform that can execute code in response to events, such as messages published to a Pub/Sub topic. It can scale automatically based on the load, and you only pay for the resources you use. By using Pub/Sub and Cloud Functions, you can optimize the storage and processing of the votes, as you can handle the variable rates of incoming votes, process them in real time or near real time, and avoid managing servers or VMs. Reference:

Pub/Sub documentation

Cloud Functions documentation

Choosing a messaging service for Google Cloud
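
A deployment sketch of this pattern (topic, function, and runtime names are placeholders; the function source is assumed to live in the local directory):

```shell
# Buffer incoming votes in a topic:
gcloud pubsub topics create votes

# Deploy a function that runs once per published vote:
gcloud functions deploy process-votes \
    --runtime=python311 \
    --trigger-topic=votes \
    --entry-point=process_vote \
    --source=.

# The website publishes each vote as a message:
gcloud pubsub topics publish votes --message='{"candidate":"A"}'
```

Because the topic absorbs bursts and the function scales with the backlog, the architecture tolerates arbitrary fluctuations in the incoming vote rate.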

Your team has developed a stateless application which requires it to be run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?

A.
Deploy the application on a managed instance group and configure autoscaling.
B.
Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.
C.
Deploy the application on Cloud Functions and configure the maximum number of instances.
D.
Deploy the application on Cloud Run and configure autoscaling.
Suggested answer: A

Explanation:

A managed instance group (MIG) is a group of identical virtual machines (VMs) that you can manage as a single entity. You can use a MIG to deploy and maintain a stateless application that runs directly on VMs. A MIG can automatically scale the number of VMs based on the load or a schedule. A MIG can also automatically heal the VMs if they become unhealthy or unavailable. A MIG is suitable for applications that need to run on VMs rather than containers or serverless platforms.

B is incorrect because Kubernetes Engine is a managed service for running containerized applications on a cluster of nodes. It is not necessary to use Kubernetes Engine if the application does not use containers and can run directly on VMs.

C is incorrect because Cloud Functions is a serverless platform for running event-driven code in response to triggers. It does not run the application directly on virtual machines, which is a stated requirement.

D is incorrect because Cloud Run is a serverless platform for running stateless containerized applications. It is not suitable for applications that do not use containers and can run directly on VMs.

Managed instance groups documentation

Choosing a compute option for Google Cloud
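
A minimal sketch of answer A (the template, group, zone, and sizing values are placeholders):

```shell
# Define what each VM looks like:
gcloud compute instance-templates create app-template \
    --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud

# Create the managed instance group from the template:
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=2 --zone=us-central1-a

# Scale between 2 and 10 VMs based on CPU utilization:
gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```

Because the application is stateless, the autoscaler can add or remove identical VMs freely as traffic fluctuates.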

A colleague handed over a Google Cloud project for you to maintain. As part of a security checkup, you want to review who has been granted the Project Owner role. What should you do?

A.
In the Google Cloud console, validate which SSH keys have been stored as project-wide keys.
B.
Navigate to Identity-Aware Proxy and check the permissions for these resources.
C.
Enable Audit logs on the IAM & admin page for all resources, and validate the results.
D.
Use the gcloud projects get-iam-policy command to view the current role assignments.
Suggested answer: D

Explanation:

The gcloud projects get-iam-policy command displays the IAM policy for a project, which includes the roles and members assigned to those roles. The Project Owner role grants full access to all resources and actions in the project. By using this command, you can review who has been granted this role and make any necessary changes.Reference:

1: Associate Cloud Engineer Certification Exam Guide | Learn - Google Cloud

2: gcloud projects get-iam-policy | Cloud SDK Documentation | Google Cloud

3: Understanding roles | Cloud IAM Documentation | Google Cloud
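
For example, the Owner bindings can be listed directly (my-project is a placeholder project ID):

```shell
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner" \
    --format="value(bindings.members)"
```

The --flatten and --filter flags reduce the full IAM policy to just the members holding roles/owner, one per line.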

You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?

A.
Use SSL proxy load balancing for the MIG and an A record in your DNS private zone with the load balancer's IP address.
B.
Use SSL proxy load balancing for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address.
C.
Use HTTP(S) load balancing for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address.
D.
Use HTTP(S) load balancing for the MIG and an A record in your DNS public zone with the load balancer's IP address.
Suggested answer: D

Explanation:

HTTP(S) load balancing is a Google-recommended practice for distributing web traffic across multiple regions and zones, and providing high availability, scalability, and security for web applications. It supports both IPv4 and IPv6 addresses, and can handle SSL/TLS termination and encryption. It also integrates with Cloud CDN, Cloud Armor, and Cloud Identity-Aware Proxy for enhanced performance and protection. A MIG can be used as a backend service for HTTP(S) load balancing, and can automatically scale and heal the VM instances that host the web application.

To configure DNS for HTTP(S) load balancing, you need to create an A record in your DNS public zone with the load balancer's IP address. This will map your domain name to the load balancer's IP address, and allow users to access your web application using the domain name. A CNAME record is not appropriate here, because a CNAME maps a name to another domain name rather than to an IP address, so it cannot point directly at the load balancer's IP. A private zone is not suitable, as it is only visible within your VPC network, and not to the public internet.

HTTP(S) Load Balancing documentation

Setting up DNS records for HTTP(S) load balancing

Choosing a load balancer
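
The DNS half of answer D can be sketched as follows (the zone name, domain, and LB_IP are placeholders):

```shell
# Publish the load balancer's IP as an A record in a PUBLIC zone:
gcloud dns record-sets create www.example.com. \
    --zone=my-public-zone --type=A --ttl=300 \
    --rrdatas=LB_IP
```

With the A record in a public zone, clients anywhere can resolve the domain to the load balancer's global anycast IP.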

Total 289 questions