Google Associate Cloud Engineer Practice Test - Questions Answers, Page 22
You created a cluster.yaml file containing:

resources:
- name: cluster
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: My GCP ACE cluster
      initialNodeCount: 2

You want to use Cloud Deployment Manager to create this cluster in GKE. What should you do?

A.
gcloud deployment-manager deployments create my-gcp-ace-cluster --config cluster.yaml
B.
gcloud deployment-manager deployments create my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml
C.
gcloud deployment-manager deployments apply my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml
D.
gcloud deployment-manager deployments apply my-gcp-ace-cluster --config cluster.yaml
Suggested answer: A

Explanation:

gcloud deployment-manager deployments create creates a deployment from the configuration file (infrastructure as code). All the configuration for the resources lives in that file, so this command correctly creates the cluster from the provided cluster.yaml. Note that apply is not a valid gcloud deployment-manager deployments action; existing deployments are modified with update.

Ref:https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --labels=app=prod. Your Kubernetes cluster is also used by a number of other deployments. How can you find the identifier of the pods for this nginx deployment?

A.
kubectl get deployments --output=pods
B.
gcloud get pods --selector="app=prod"
C.
kubectl get pods -l "app=prod"
D.
gcloud list gke-deployments -filter={pod }
Suggested answer: C

Explanation:

This command correctly lists pods that have the label app=prod. When creating the deployment, we applied the label app=prod, so listing pods with this label retrieves the pods belonging to the nginx deployment. You can list pods with the Kubernetes CLI: kubectl get pods.

Ref:https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/

Ref:https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-filtering-by-pod-label
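The label-selector behavior described above can be sketched in plain Python. This is an illustration of the matching logic only, not the real Kubernetes API; the pod names and labels are made up, reusing the nginx pod name style from the next question.

```python
# Pods with made-up names and labels, for illustration only.
pods = [
    {"name": "nginx-84748895c4-nqqmt", "labels": {"app": "prod"}},
    {"name": "redis-7d757c9f5-x2lkp", "labels": {"app": "cache"}},
    {"name": "nginx-84748895c4-k6bzl", "labels": {"app": "prod"}},
]

def select(pods, selector):
    """Return names of pods whose labels match every key=value pair
    in a selector string like "app=prod" (comma-separated pairs)."""
    wanted = dict(pair.split("=") for pair in selector.split(","))
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in wanted.items())]

print(select(pods, "app=prod"))  # only the two pods labeled app=prod
```

This mirrors what `kubectl get pods -l "app=prod"` does server-side: the other deployments' pods are filtered out because they carry different labels.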

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-nqqmt 1/1 Running 0 9m41s

$ kubectl delete pod nginx-84748895c4-nqqmt

pod nginx-84748895c4-nqqmt deleted

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-k6bzl 1/1 Running 0 25s

What should you do to delete the deployment and avoid the pod getting recreated?

A.
kubectl delete deployment nginx
B.
kubectl delete --deployment=nginx
C.
kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
D.
kubectl delete inginx
Suggested answer: A

Explanation:

This command correctly deletes the deployment. Pods are managed by Kubernetes workloads (such as Deployments). When a pod is deleted, the Deployment detects that the pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is to delete the Deployment itself, using the kubectl delete deployment command.

$ kubectl delete deployment nginx

deployment.apps nginx deleted

Ref:https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
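The reconciliation behavior behind this can be sketched as a toy control loop in Python. This is not Kubernetes code; the pod-naming scheme and data structures are invented for illustration. It shows why deleting a pod is not enough: the Deployment's desired replica count causes a replacement, while deleting the Deployment removes the desired state itself.

```python
import itertools

counter = itertools.count()

def new_pod():
    # Invented naming scheme for illustration.
    return f"nginx-pod-{next(counter)}"

deployments = {"nginx": {"replicas": 1}}   # desired state
pods = {"nginx": [new_pod()]}              # actual state

def reconcile():
    """Bring each deployment's pod count back up to its desired replicas."""
    for name, spec in deployments.items():
        while len(pods.get(name, [])) < spec["replicas"]:
            pods[name].append(new_pod())

# Deleting a pod alone: the next reconcile recreates it under a new name.
pods["nginx"].pop()
reconcile()
assert len(pods["nginx"]) == 1

# Deleting the deployment removes the desired state; nothing is recreated.
del deployments["nginx"]
pods["nginx"].clear()
reconcile()
assert pods["nginx"] == []
```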

You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems. Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended practices. What should you suggest?

A.
Cloud Run for Anthos
B.
Cloud Run
C.
App Engine Standard
D.
Cloud Functions
Suggested answer: D

Explanation:

Cloud Functions is Google Cloud's event-driven serverless compute platform that lets you run your code without having to provision servers. Cloud Functions scales up or down automatically, so you pay only for the compute resources you use. Cloud Functions has excellent integration with Cloud Pub/Sub, scales down to zero, and is recommended by Google as the ideal serverless platform to use with Cloud Pub/Sub: 'If you're building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions.'

Ref:https://cloud.google.com/serverless-options
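A minimal sketch of what such a Pub/Sub-triggered function looks like (1st-gen Python background-function signature). The function name, topic name, and payload here are hypothetical; the deploy command in the comment is the standard pattern, and the fake event at the bottom just simulates the trigger locally.

```python
# Hypothetical Pub/Sub-triggered Cloud Function (1st-gen Python runtime).
# Deployed with, e.g.:
#   gcloud functions deploy handle_message --runtime python39 \
#       --trigger-topic my-topic
import base64

def handle_message(event, context):
    """Entry point: `event` carries the Pub/Sub message, base64-encoded
    in event["data"]; `context` holds event metadata."""
    payload = base64.b64decode(event["data"]).decode("utf-8")
    print(f"Received: {payload}")
    return payload

# Local simulation of the trigger with a fake event (illustrative payload):
fake_event = {"data": base64.b64encode(b"order-created").decode("utf-8")}
```

Because the function only runs when a message arrives on the topic, there is nothing billed between messages, which is the scale-to-zero property the question asks for.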

You have been asked to migrate a Docker application from your datacenter to the cloud. Your solution architect has suggested uploading the Docker images to GCR in one project and running the application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

A.
Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace
B.
Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1
C.
Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace
D.
Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1
Suggested answer: B

Explanation:

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1 is the right answer.

This command correctly tags the image as acme_track_n_trace:v1 and uploads the image to the img-278322 project.

Ref:https://cloud.google.com/sdk/gcloud/reference/builds/submit

You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1 hour. You want to follow Google recommended practices. What should you do?

A.
Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.
B.
Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///**.
C.
Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.
D.
Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///***
Suggested answer: B

Explanation:

This command correctly specifies, via the -d flag, the duration for which the signed URL is valid. The default is 1 hour, so omitting the -d flag would have produced the same outcome. Times may be specified with no suffix (default: hours), or with s = seconds, m = minutes, h = hours, d = days. The maximum duration allowed is 7d.

Ref:https://cloud.google.com/storage/docs/gsutil/commands/signurl
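The duration syntax described above (no suffix means hours, s/m/h/d suffixes, 7-day maximum) can be sketched in Python. This is an illustration of the rules as stated, not gsutil's actual implementation.

```python
from datetime import timedelta

_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}
_MAX = timedelta(days=7)  # signurl's documented upper bound

def parse_duration(spec: str) -> timedelta:
    """Parse a signurl-style duration such as '1h', '60m', or '2'."""
    if spec[-1] in _UNITS:
        value, unit = spec[:-1], _UNITS[spec[-1]]
    else:
        value, unit = spec, "hours"  # no suffix defaults to hours
    duration = timedelta(**{unit: int(value)})
    if duration > _MAX:
        raise ValueError("signed URL duration may not exceed 7 days")
    return duration
```

For the question's requirement, `parse_duration("1h")` and `parse_duration("60m")` yield the same one-hour validity window.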

You have a managed instance group comprised of preemptible VMs. All of the VMs keep deleting and recreating themselves every minute. What is a possible cause of this behavior?

A.
Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity. Try deploying your group to another zone.
B.
You have hit your instance quota for the region.
C.
Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.
D.
Your managed instance group's health check is repeatedly failing, due either to a misconfigured health check or to misconfigured firewall rules not allowing the health check to access the instances.
Suggested answer: D

Explanation:

The instances (normal or preemptible) are terminated and relaunched when the health check fails, either because the application is not configured properly or because the instances' firewall rules do not allow the health check to reach them.

GCP provides health check systems that connect to virtual machine (VM) instances on a configurable, periodic basis. Each connection attempt is called a probe. GCP records the success or failure of each probe.

Health checks and load balancers work together. Based on a configurable number of sequential successful or failed probes, GCP computes an overall health state for each VM in the load balancer. VMs that respond successfully for the configured number of times are considered healthy. VMs that fail to respond successfully for a separately configured number of times are considered unhealthy.

GCP uses the overall health state of each VM to determine its eligibility for receiving new requests. In addition to being able to configure probe frequency and health state thresholds, you can configure the criteria that define a successful probe.
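The threshold logic described above can be sketched as a small Python function. The threshold values and the starting state are illustrative defaults, not GCP's actual numbers; the point is that the health state only flips after a configured run of consecutive same-result probes.

```python
def health_state(probes, healthy_threshold=2, unhealthy_threshold=3):
    """Replay a list of probe results (True = success) and return the
    resulting overall state, 'HEALTHY' or 'UNHEALTHY'.

    A VM starts unhealthy and becomes healthy only after
    `healthy_threshold` consecutive successes; it becomes unhealthy
    again only after `unhealthy_threshold` consecutive failures.
    """
    state = "UNHEALTHY"
    streak = 0
    last = None
    for ok in probes:
        streak = streak + 1 if ok == last else 1
        last = ok
        if ok and streak >= healthy_threshold:
            state = "HEALTHY"
        elif not ok and streak >= unhealthy_threshold:
            state = "UNHEALTHY"
    return state
```

In the failing-MIG scenario of this question, every probe fails (e.g. blocked by a firewall), so each VM stays unhealthy and the autohealer keeps recreating it.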

You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389 and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer. What should you do?

A.
Expose the application by using an external TCP Network Load Balancer.
B.
Expose the application by using a TCP Proxy Load Balancer.
C.
Expose the application by using an SSL Proxy Load Balancer.
D.
Expose the application by using an internal TCP Network Load Balancer.
Suggested answer: A

Explanation:

An external TCP Network Load Balancer is a passthrough load balancer, so packets reach the backends with the original client IP address preserved. Proxy-based load balancers (TCP Proxy and SSL Proxy) terminate client connections, so backends see the proxy's address instead, and TCP Proxy supports only a fixed set of ports that does not include 389. An internal load balancer would not expose the application to the internet.

You are building a multi-player gaming application that will store game information in a database. As the popularity of the application increases, you are concerned about delivering consistent performance. You need to ensure an optimal gaming performance for global users, without increasing the management complexity. What should you do?

A.
Use Cloud SQL database with cross-region replication to store game statistics in the EU, US, and APAC regions.
B.
Use Cloud Spanner to store user data mapped to the game statistics.
C.
Use BigQuery to store game statistics with a Redis on Memorystore instance in the front to provide global consistency.
D.
Store game statistics in a Bigtable database partitioned by username.
Suggested answer: B

Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process. What should you do?


A.
In the Google Cloud console, visualize the costs related to the projects in the Reports section.
B.
In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.
C.
In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.
D.
Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.
Suggested answer: D
Total 289 questions