Question 127 - Professional Cloud DevOps Engineer discussion


Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?

A.
* Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server.
* Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group.
* Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
B.
* Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
* Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data and one that monitors Cloud Storage for new batches of data.
* Stop the processing service when there are no batches of data to process.
C.
* Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
* Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group.
* Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
D.
* Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket.
* Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data.
* Set the Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
Suggested answer: C

Explanation:

The best option is C: provide a Cloud Storage bucket with appropriate Identity and Access Management (IAM) access so that third parties can upload batches of data securely; create a Cloud Function with a google.storage.object.finalize trigger; write code so that the function scales up a Compute Engine autoscaling managed instance group; and use an image pre-loaded with the data processing software that terminates the instances when processing completes.

A Cloud Storage bucket lets third parties upload batches of data securely and conveniently, and IAM roles and policies control who can read from or write to the bucket, so there is no SFTP server to run and secure yourself.

A Cloud Function is a serverless function that executes code in response to an event, such as a change in a Cloud Storage bucket. The google.storage.object.finalize trigger fires when a new object is created (or an existing object is overwritten) in the bucket, so the function runs whenever a new batch of data is uploaded.

The function can then scale up a Compute Engine autoscaling managed instance group, which automatically adjusts the number of VM instances based on load or custom metrics. Because the instances boot from an image pre-loaded with the data processing software and terminate themselves when processing completes, they run only while there is data to process. This minimizes costs while ensuring that each batch is processed as quickly as possible.
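To make the finalize-triggered scaling concrete, here is a minimal sketch of what such a Cloud Function could look like. The sizing constants (MIG_MAX_SIZE, BYTES_PER_INSTANCE) and the MIG/project names are illustrative assumptions, not values from the question, and the actual resize call (via the Compute Engine instanceGroupManagers.resize method) is left as a commented stub.

```python
# Sketch of a google.storage.object.finalize-triggered Cloud Function that
# scales up a managed instance group (MIG) based on uploaded batch size.
# All names and constants below are illustrative assumptions.

MIG_MAX_SIZE = 10                 # assumed cost/quota cap on instances
BYTES_PER_INSTANCE = 5 * 2**30    # assume one instance per ~5 GiB of data


def target_instance_count(batch_bytes: int, current_size: int) -> int:
    """Return the MIG size needed to absorb a new batch of `batch_bytes`.

    Adds enough instances for the batch on top of the current size,
    capped at MIG_MAX_SIZE, with at least one instance per batch.
    """
    needed = max(1, -(-batch_bytes // BYTES_PER_INSTANCE))  # ceiling division
    return min(MIG_MAX_SIZE, current_size + needed)


def on_batch_uploaded(event: dict, context=None) -> None:
    """Entry point for the google.storage.object.finalize trigger.

    `event` carries the Cloud Storage object metadata, including its size
    as a string. The real resize would call the Compute Engine API
    (instanceGroupManagers.resize); shown here as a commented stub.
    """
    batch_bytes = int(event.get("size", 0))
    new_size = target_instance_count(batch_bytes, current_size=0)
    # resize_mig("my-project", "us-central1-a", "batch-mig", new_size)
    print(f"Would resize MIG to {new_size} instance(s) for a "
          f"{batch_bytes / 2**30:.1f} GiB batch")
```

Keeping the sizing logic in a pure function like target_instance_count makes it easy to test without touching the Compute Engine API; the instances themselves would run a shutdown step (for example, deleting themselves from the MIG) when their batch finishes, so the group drains back down when no data is queued.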

asked 18/09/2024
Tresor Garcia