Google Professional Cloud Architect Practice Test - Questions Answers, Page 7

Question list
Search
Search

List of questions

Search

Related questions











Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in.

Which technology should they use for this?

A. Google Cloud Dataproc
B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery
Suggested answer: B

Explanation:

Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises needed.

References: https://cloud.google.com/dataflow/
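As a rough illustration of using the one service for both modes, the sketch below launches a Google-provided streaming template with gcloud; the job, project, topic, and table names are placeholders, and the hourly jobs could run as ordinary batch pipelines on the same service.

# Minimal sketch, assuming a Pub/Sub stream and a BigQuery sink;
# all resource names below are illustrative.
gcloud dataflow jobs run stream-ingest \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates-us-central1/latest/PubSub_to_BigQuery \
    --parameters=inputTopic=projects/my-project/topics/events,outputTableSpec=my-project:analytics.events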

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update.

What strategy should you take?

A. Work with your ISP to diagnose the problem
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem
Suggested answer: C

Explanation:

Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). Its API also allows ingestion of custom log data from any source.

Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs, and you can analyze all that log data in real time.

References: https://cloud.google.com/logging/
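The rollback itself is just a traffic-routing change. A minimal sketch (service and version names are illustrative):

# List deployed versions, then route all traffic back to the last known
# good one; diagnose the slow release afterwards in a staging environment.
gcloud app versions list --service=default
gcloud app services set-traffic default --splits=v42-good=1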

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.

How can you remediate the problem with the least amount of downtime?

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service
Suggested answer: A

Explanation:

On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.

Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.

sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]

where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device whose file system you are resizing.

References: https://cloud.google.com/compute/docs/disks/add-persistent-disk
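A minimal end-to-end sketch, assuming an ext4 file system directly on an unpartitioned secondary disk (the disk, zone, and device names are illustrative):

# Grow the disk online; no VM shutdown or restart is required.
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a
# Then, on the VM, extend the file system to use the new space:
sudo resize2fs /dev/sdb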

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.

How should you design your architecture?

A. Create a tokenizer service and store only tokenized data
B. Create separate projects that only process credit card data
C. Create separate subnetworks and isolate the components that process credit card data
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
Suggested answer: A

Explanation:

Reference:

https://www.sans.org/reading-room/whitepapers/compliance/ways-reduce-pci-dss-audit-scope-tokenizing-cardholder-data-33194
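As a toy illustration of the idea (not a compliant implementation), the tokenizer swaps the card number for an opaque surrogate, so analytics systems never hold cardholder data and fall outside the PCI audit scope:

# Toy sketch only: a real tokenizer runs as an isolated, PCI-scoped service.
pan="4111111111111111"   # well-known test card number
token=$(uuidgen)          # opaque surrogate with no mathematical link to the PAN
# The pan-to-token mapping stays inside the tokenizer's vault; downstream
# systems store and analyze only the token plus payment-method metadata.
echo "analytics record: token=${token}, payment_method=visa"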

You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.

Which storage infrastructure should you choose?

A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore
Suggested answer: B

Explanation:

Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.

Good for:
Low-latency read/write access
High-throughput analytics
Native time series support

Common workloads:
IoT, finance, adtech
Personalization, recommendations
Monitoring
Geospatial datasets
Graphs

Incorrect Answers:

C: Google Cloud Storage is a scalable, fully-managed, highly reliable, and cost-efficient object/blob store.

Good for:
Images, pictures, and videos
Objects and blobs
Unstructured data

D: Google Cloud Datastore is a scalable, fully-managed NoSQL document database for your web and mobile applications.

Good for:
Semi-structured application data
Hierarchical data
Durable key-value data

Common workloads:
User profiles
Product catalogs
Game state

References: https://cloud.google.com/storage-options/
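As a rough sketch of the write path using the cbt CLI (the instance, table, and row-key layout are illustrative assumptions):

# Create a table and column family, then write one click event.
cbt -instance=clicks-instance createtable clicks
cbt -instance=clicks-instance createfamily clicks events
# Row key prefixes the site ID so per-site scans stay contiguous.
cbt -instance=clicks-instance set clicks "site123#20240101T120000" \
    events:url=/home events:referrer=search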

You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.

What should you do?

A. Write a lifecycle management rule in XML and push it to the bucket with gsutil
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron
Suggested answer: B
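A minimal sketch of the chosen approach (the bucket name is illustrative): the rule is written in JSON and applied with gsutil, after which Cloud Storage deletes aging objects automatically, with no cron host or script to maintain.

# Define a delete-after-90-days rule and push it to the bucket.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 90}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://backups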

Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.

Which product should you use?

A. Google Cloud Dataflow
B. Google Cloud Dataproc
C. Google Compute Engine
D. Google Kubernetes Engine
Suggested answer: B

Explanation:

Google Cloud Dataproc is a fast, easy-to-use, low-cost and fully managed service that lets you run the Apache Spark and Apache Hadoop ecosystem on Google Cloud Platform. Cloud Dataproc provisions big or small clusters rapidly, supports many popular job types, and is integrated with other Google Cloud Platform services, such as Google Cloud Storage and Stackdriver Logging, thus helping you reduce TCO.

References: https://cloud.google.com/dataproc/docs/resources/faq
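Existing Spark jobs can typically be resubmitted unchanged. A hedged sketch (cluster, bucket, and class names are illustrative):

# Provision a cluster in minutes, submit the existing job, then tear down.
gcloud dataproc clusters create spark-cluster --region=us-central1 --num-workers=4
gcloud dataproc jobs submit spark --cluster=spark-cluster --region=us-central1 \
    --class=com.example.PerfStats --jars=gs://my-bucket/perf-stats.jar
gcloud dataproc clusters delete spark-cluster --region=us-central1 --quiet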

The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.

What should they change to get better performance from this system?

A. Increase the virtual machine's memory to 64 GB
B. Create a new virtual machine running PostgreSQL
C. Dynamically resize the SSD persistent disk to 500 GB
D. Migrate their performance metrics warehouse to BigQuery
E. Modify all of their batch jobs to use bulk inserts into the database
Suggested answer: C
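Persistent disk IOPS and throughput limits scale with provisioned size, so growing the 80 GB volume to 500 GB raises its performance ceiling without downtime. A minimal sketch (the disk and zone names are illustrative):

# Grow the disk online, then extend the file system on the VM
# (e.g. resize2fs for ext4).
gcloud compute disks resize db-disk --size=500GB --zone=us-central1-a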

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.

Where should you store the data?

A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage
Suggested answer: C

Explanation:

Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.

Good for:
Low-latency read/write access
High-throughput analytics
Native time series support

Common workloads:
IoT, finance, adtech
Personalization, recommendations
Monitoring
Geospatial datasets
Graphs

References: https://cloud.google.com/storage-options/
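As an illustrative sketch of the ingest path (instance, table, and row-key scheme are assumptions), a sensor-first row key keeps each device's readings contiguous while spreading write load across the 50,000 sensors:

# One reading: row key = sensor ID + timestamp, value in a 'data' family.
cbt -instance=weather-instance set readings \
    "sensor01042#20240101T120000" data:reading=21.7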

Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal, and it meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once the additional user load is introduced.

What should you do?

A. Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones
D. Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive the estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of expected load
Suggested answer: B
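A minimal sketch of the "chaos" step (zone and filter values are illustrative), run while synthetic load keeps the autoscaling logic engaged:

# Terminate one randomly chosen instance in each zone.
for zone in us-central1-a us-central1-b; do
  victim=$(gcloud compute instances list \
      --filter="zone:${zone}" --format="value(name)" | shuf -n 1)
  gcloud compute instances delete "${victim}" --zone="${zone}" --quiet
done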