Associate Cloud Engineer
Vendor: Google

Associate Cloud Engineer Exam Questions: 296

The Associate Cloud Engineer exam is crucial for IT professionals aiming to validate their skills in managing and securing cloud solutions on the Google Cloud Platform. To increase your chances of passing, practicing with real exam questions shared by those who have succeeded can be invaluable. In this guide, we’ll provide you with practice test questions and answers offering insights directly from candidates who have already passed the exam.

Exam Details:

  • Exam Name: Associate Cloud Engineer

  • Length of test: 2 hours (120 minutes)

  • Exam Format: Multiple-choice and multiple-select questions

  • Exam Language: English

  • Number of questions in the actual exam: 50-60 questions

  • Passing Score: 70%

Why Use Associate Cloud Engineer Practice Test?

  • Real Exam Experience: Our practice tests accurately replicate the format and difficulty of the actual Associate Cloud Engineer exam, providing you with a realistic preparation experience.

  • Identify Knowledge Gaps: Practicing with these tests helps you identify areas where you need more study, allowing you to focus your efforts effectively.

  • Boost Confidence: Regular practice with exam-like questions builds your confidence and reduces test anxiety.

  • Track Your Progress: Monitor your performance over time to see your improvement and adjust your study plan accordingly.

Key Features of Associate Cloud Engineer Practice Test:

  • Up-to-Date Content: Our community ensures that the questions are regularly updated to reflect the latest exam objectives and technology trends.

  • Detailed Explanations: Each question comes with detailed explanations, helping you understand the correct answers and learn from any mistakes.

  • Comprehensive Coverage: The practice tests cover all key topics of the Associate Cloud Engineer exam, including cloud infrastructure, security, and deployment strategies.

  • Customizable Practice: Create your own practice sessions based on specific topics or difficulty levels to tailor your study experience to your needs.

Use the member-shared Associate Cloud Engineer Practice Tests to ensure you're fully prepared for your certification exam. Start practicing today and take a significant step towards achieving your certification goals!

Related questions

You've deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do?

A. Store the database password inside the Docker image of the container, not in the YAML file.

B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.

C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.

D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.

Suggested answer: B

Explanation:

https://cloud.google.com/config-connector/docs/how-to/secrets#gcloud
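A minimal sketch of the Secret approach (the Secret and key names here are illustrative, not taken from the question's YAML): create the Secret with kubectl, then have the Deployment read the DB_PASSWORD environment variable from it via secretKeyRef.

# Create a Secret holding the database password (name and key are placeholders).
kubectl create secret generic myapp1-db --from-literal=password='s3cr3t'

# Fragment of the refactored Deployment YAML, printed via a heredoc so the
# snippet stays in shell form; the env entry pulls the value from the Secret.
cat <<'EOF'
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: myapp1-db
      key: password
EOF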

asked 18/09/2024 by Sterling White

You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pod and observe that one of them is still in Pending status:

What is the most likely cause?

A. The pending Pod's resource requests are too large to fit on a single node of the cluster.

B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.

C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.

D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods' status. It is currently being rescheduled on a new node.

Suggested answer: B

Explanation:

Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod, so B is the right answer.

When a Deployment has some Pods running and others stuck in Pending, it is most often a problem with resources on the nodes. In the sample output for this scenario, the scheduler reports insufficient CPU on the Kubernetes nodes, so you have to either enable autoscaling or manually scale up the nodes.
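A quick way to confirm this (the Pod name below is a placeholder) is to describe the pending Pod and read the Events section, which typically shows a FailedScheduling event such as "Insufficient cpu":

# List Pods to find the one stuck in Pending, then inspect its recent events.
kubectl get pods
kubectl describe pod myapp-pending-pod | tail -n 20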

asked 18/09/2024 by Re na

You need to create a copy of a custom Compute Engine virtual machine (VM) to facilitate an expected increase in application traffic due to a business acquisition. What should you do?

A. Create a Compute Engine snapshot of your base VM. Create your images from that snapshot.

B. Create a Compute Engine snapshot of your base VM. Create your instances from that snapshot.

C. Create a custom Compute Engine image from a snapshot. Create your images from that image.

D. Create a custom Compute Engine image from a snapshot. Create your instances from that image.

Suggested answer: D

Explanation:

A custom image belongs only to your project. To create an instance with a custom image, you must first have a custom image.

Preparing your instance for an image

You can create an image from a disk even while it is attached to a running VM instance. However, your image will be more reliable if you put the instance in a state that is easier for the image to capture. Use one of the following processes to prepare your boot disk for the image:

  • Stop the instance so that it can shut down and stop writing any data to the persistent disk.

  • If you can't stop your instance before you create the image, minimize the amount of writes to the disk and sync your file system:

    • Pause apps or operating system processes that write data to that persistent disk.

    • Run an app flush to disk if necessary. For example, MySQL has a FLUSH statement. Other apps might have similar processes.

    • Stop your apps from writing to your persistent disk.

    • Run sudo sync.

After you prepare the instance, create the image.

https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#prepare_instance_for_image
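A minimal gcloud sketch of the suggested answer, with placeholder disk, snapshot, image, and zone names: snapshot the base VM's boot disk, create a custom image from the snapshot, and then create new instances from that image.

# Snapshot the boot disk of the base VM (names and zone are placeholders).
gcloud compute disks snapshot base-vm --zone=us-central1-a --snapshot-names=base-vm-snap

# Create a custom image from the snapshot.
gcloud compute images create base-vm-image --source-snapshot=base-vm-snap

# Create an additional instance from the custom image.
gcloud compute instances create app-vm-2 --zone=us-central1-a --image=base-vm-image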

asked 18/09/2024 by Paul Tierney

Your Dataproc cluster runs in a single Virtual Private Cloud (VPC) network in a single subnet with range 172.16.20.128/25. There are no private IP addresses available in the VPC network. You want to add new VMs to communicate with your cluster using the minimum number of steps. What should you do?

A. Modify the existing subnet range to 172.16.20.0/24.

B. Create a new Secondary IP Range in the VPC and configure the VMs to use that range.

C. Create a new VPC network for the VMs. Enable VPC Peering between the VMs' VPC network and the Dataproc cluster VPC network.

D. Create a new VPC network for the VMs with a subnet of 172.32.0.0/16. Enable VPC network Peering between the Dataproc VPC network and the VMs VPC network. Configure a custom Route exchange.

Suggested answer: A

Explanation:

Expanding the existing subnet's primary range from /25 to /24 is a single in-place step (a subnet range can be expanded but not shrunk), and it frees up new private IP addresses for the VMs while keeping them in the same network as the Dataproc cluster.

  • /25 (current): 172.16.20.128/25, netmask 255.255.255.128, first IP 172.16.20.128, last IP 172.16.20.255, 128 addresses.

  • /24 (expanded): 172.16.20.0/24, netmask 255.255.255.0, first IP 172.16.20.0, last IP 172.16.20.255, 256 addresses.
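A minimal sketch of that single step (the subnet and region names are placeholders); gcloud expands the primary range in place to the new prefix length:

# Expand the existing subnet's primary range from /25 to /24.
gcloud compute networks subnets expand-ip-range dataproc-subnet \
    --region=us-central1 \
    --prefix-length=24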

asked 18/09/2024 by Ryan John Ricafranca

You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?

A. Coldline Storage

B. Nearline Storage

C. Regional Storage

D. Multi-Regional Storage

Suggested answer: A

Explanation:

Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Since we have a requirement to access data once a quarter and want to go with the most cost-efficient option, we should select Coldline Storage.

Ref:https://cloud.google.com/storage/docs/storage-classes#coldline
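For illustration (the bucket name and location are placeholders), a bucket can be created with Coldline as its default storage class so archived objects land in Coldline automatically:

# Create a bucket whose default storage class is Coldline.
gcloud storage buckets create gs://dw-archive-bucket \
    --location=us-central1 \
    --default-storage-class=COLDLINE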

asked 18/09/2024 by Simone Somacal

Users of your application are complaining of slowness when loading the application. You realize the slowness is because the App Engine deployment serving the application is deployed in us-central whereas all users of this application are closest to europe-west3. You want to change the region of the App Engine application to europe-west3 to minimize latency. What's the best way to change the App Engine region?

A. Create a new project and create an App Engine instance in europe-west3.

B. Use the gcloud app region set command and supply the name of the new region.

C. From the console, under the App Engine page, click edit, and change the region drop-down.

D. Contact Google Cloud Support and request the change.

Suggested answer: A

Explanation:

App Engine is a regional service, which means the infrastructure that runs your app(s) is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project, create an App Engine instance in europe-west3, send all user traffic to this instance, and delete the App Engine instance in us-central.

Ref:https://cloud.google.com/appengine/docs/locations
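A minimal sketch of the first step in the new project (the project ID is a placeholder); the region is fixed at creation time and cannot be changed afterwards:

# Create the App Engine application in the target region inside the new project.
gcloud app create --project=my-new-project --region=europe-west3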

asked 18/09/2024 by sailakshmi KM

You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You want to diagnose the problem. What should you do?

A. Navigate to Cloud Logging and view the application logs.

B. Connect to the instance's serial console and read the application logs.

C. Configure a Health Check on the instance and set a Low Healthy Threshold value.

D. Install and configure the Cloud Logging Agent and view the logs from Cloud Logging.

Suggested answer: D

Explanation:

Cloud Logging knows nothing about applications installed on the system unless an agent collects their logs. Using the serial console is not a best practice and is impractical at large scale.

The VM images for Compute Engine and Amazon Elastic Compute Cloud (EC2) don't include the Logging agent, so you must complete these steps to install it on those instances. The agent runs under both Linux and Windows. Source: https://cloud.google.com/logging/docs/agent/logging/installation
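A sketch of the agent installation on a Linux instance, following the linked installation guide (the script URL below is the one documented there and may change over time):

# Download and run the Logging agent installation script on the VM.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install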

asked 18/09/2024 by Mark David

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

A. Create an instance template for the instances. Set the 'Automatic Restart' to on. Set the 'On-host maintenance' to Migrate VM instance. Add the instance template to an instance group.

B. Create an instance template for the instances. Set 'Automatic Restart' to off. Set 'On-host maintenance' to Terminate VM instances. Add the instance template to an instance group.

C. Create an instance group for the instances. Set the 'Autohealing' health check to healthy (HTTP).

D. Create an instance group for the instance. Verify that the 'Advanced creation options' setting for 'do not retry machine creation' is set to off.

Suggested answer: A

Explanation:

Ref:https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#autorestart

Enabling the Migrate VM Instance option migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally, most instances should not notice any difference. This is ideal for instances that require constant uptime and can tolerate a short period of decreased performance. Ref:https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#live_migrate
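A minimal sketch of the template-plus-group setup (the template, group, machine type, and zone names are placeholders):

# Instance template with automatic restart and live migration during maintenance.
gcloud compute instance-templates create ha-template \
    --machine-type=e2-medium \
    --maintenance-policy=MIGRATE \
    --restart-on-failure

# Managed instance group of 10 instances built from the template.
gcloud compute instance-groups managed create ha-group \
    --template=ha-template \
    --size=10 \
    --zone=us-central1-a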

asked 18/09/2024 by Raed Alshehri

You have downloaded and installed the gcloud command line interface (CLI) and have authenticated with your Google Account. Most of your Compute Engine instances in your project run in the europe-west1-d zone. You want to avoid having to specify this zone with each CLI command when managing these instances. What should you do?

A. Set the europe-west1-d zone as the default zone using the gcloud config subcommand.

B. In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.

C. In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.

D. Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.

Suggested answer: A

Explanation:

Setting a default zone with the gcloud config subcommand means you no longer have to pass --zone with every command. You can also change your default zone and region in the metadata server (note: this only applies to the default configuration) by making a request to the metadata server. For example:

gcloud compute project-info add-metadata \
    --metadata google-compute-default-region=europe-west1,google-compute-default-zone=europe-west1-b

The gcloud command-line tool only picks up the new default zone and region after you rerun the gcloud init command, so after updating your default metadata, run gcloud init to reinitialize your default configuration.

https://cloud.google.com/compute/docs/gcloud-compute#change_your_default_zone_and_region_in_the_metadata_server
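A minimal example of the suggested answer (the zone value comes from the question):

# Set the default Compute Engine zone for the active gcloud configuration.
gcloud config set compute/zone europe-west1-d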

asked 18/09/2024 by Sairam Emmidishetti

You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax. What should you do?

A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.

B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.

C. Export your transactions to a local file, and perform analysis with a desktop tool.

D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

Suggested answer: D

Explanation:

'...we recommend that you enable Cloud Billing data export to BigQuery at the same time that you create a Cloud Billing account. ' https://cloud.google.com/billing/docs/how-to/export-data-bigquery

https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4
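An illustrative query against the billing export table, run here with the bq CLI (the project, dataset, and table names are placeholders; the standard export schema includes service.description, cost, and usage_start_time):

# Daily cost per service, which can then be rolled up monthly or projected forward.
bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  DATE(usage_start_time) AS usage_day,
  SUM(cost) AS daily_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY service, usage_day
ORDER BY usage_day, service'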

asked 18/09/2024 by David Hartnett