
Google Professional Cloud Architect Practice Test - Questions Answers, Page 8

List of questions

Question 71


One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.

[Dockerfile image not reproduced in this export]

You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality.

Which two actions should you take? Choose 2 answers.

A. Remove Python after running pip
B. Remove dependencies from requirements.txt
C. Use a slimmed-down base image like Alpine Linux
D. Use larger machine types for your Google Container Engine node pools
E. Copy the source after the package dependencies (Python and pip) are installed
Suggested answer: C, E

Explanation:

Deployment time can be reduced by limiting the size of the uploaded app, keeping the build that the Dockerfile performs as simple as possible, and ensuring a fast and reliable internet connection.

Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource-efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB, and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully fledged Linux environment, but also a large selection of packages from the repository.

References: https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU https://www.alpinelinux.org/about/
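
The two chosen optimizations can be illustrated with a minimal Dockerfile sketch (the base image tag, file names, and entry point below are assumptions, since the original Dockerfile image is not reproduced here): a slim Alpine-based Python image keeps the image small, and copying the source only after the dependencies are installed lets Docker reuse the cached dependency layer when only application code changes.

    # Slimmed-down base image (answer C): Alpine-based Python image.
    FROM python:3.11-alpine

    WORKDIR /app

    # Install dependencies first (answer E): this layer is cached and
    # only rebuilt when requirements.txt itself changes.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the application source last, so code changes do not
    # invalidate the dependency layer above.
    COPY . .

    CMD ["python", "main.py"]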


Question 72


Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.

What should you do?

A. Deploy fewer changes to production
B. Deploy smaller changes to production
C. Increase the load on your test and staging environments
D. Deploy changes to a small subset of users before rolling out to production
Suggested answer: D

Question 73


A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases.

What should you do?

A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
Suggested answer: D

Explanation:

References: https://cloud.google.com/trace/docs/quickstart#find_a_trace
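
As a rough illustration of answer D, each microservice can be instrumented to record its own span within the request's trace; the sketch below uses the OpenTelemetry SDK with the Cloud Trace exporter (Stackdriver Trace is now Cloud Trace), and the service and span names are purely hypothetical.

    # Requires: pip install opentelemetry-sdk opentelemetry-exporter-gcp-trace
    from opentelemetry import trace
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    # Export spans from this service to Cloud Trace.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    def handle_request(order_id: str) -> None:
        # Each service wraps its work in a span; the trace waterfall then
        # shows which service contributes most to the request latency.
        with tracer.start_as_current_span("lookup-inventory"):
            pass  # call the downstream service / database here

With every service instrumented this way, a single slow request appears as one trace whose longest span points at the service responsible.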


Question 74


During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.

What should you do?

A. Use a different database
B. Choose larger instances for your database
C. Create snapshots of your database more regularly
D. Implement routinely scheduled failovers of your databases
Suggested answer: D

Question 75


Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.

Which approach should you use?

A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
Suggested answer: B

Explanation:

Stackdriver Logging provides the ability to filter, search, and view logs from your cloud and open-source application services. It lets you define metrics based on log contents that are incorporated into dashboards and alerts, and it enables you to export logs to BigQuery, Google Cloud Storage, and Pub/Sub.

Reference: https://cloud.google.com/stackdriver/
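
A minimal sketch of creating such an export with the Cloud Logging Python client (the project, sink, and dataset names are assumptions):

    # Requires: pip install google-cloud-logging
    from google.cloud import logging

    client = logging.Client(project="my-project")  # hypothetical project

    # Export log entries to a BigQuery dataset for long-term retention;
    # with no filter supplied, the sink exports all entries in the project.
    sink = client.sink(
        "metrics-retention-sink",  # hypothetical sink name
        destination="bigquery.googleapis.com/projects/my-project/datasets/retained_logs",
    )
    sink.create()

BigQuery then retains the exported data for as long as the dataset allows, which is what makes the 5-year requirement workable.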


Question 76


Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires private address space communication.

Which networking approach should you use?

A. Google Cloud Dedicated Interconnect
B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network
Suggested answer: A

Explanation:

Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google's network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.

Benefits:

Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.

Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.

You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).

The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google's network.

Reference: https://cloud.google.com/interconnect/docs/details/dedicated


Question 77


Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process.

What should you do?

A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
Suggested answer: B
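
For context on answer B: once Admin Activity audit logs are exported to BigQuery, a view can restrict what the auditor sees to IAM policy changes only. A rough sketch with the BigQuery Python client follows; the project, dataset, and table names assume the default audit-log export naming and are not part of the original question.

    # Requires: pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client(project="my-audit-project")  # hypothetical project

    # View over exported Admin Activity audit logs, limited to IAM changes;
    # the auditor is granted access to this view rather than the raw tables.
    view = bigquery.Table("my-audit-project.auditor_views.iam_policy_changes")
    view.view_query = """
        SELECT
          timestamp,
          protopayload_auditlog.authenticationInfo.principalEmail AS actor,
          protopayload_auditlog.resourceName AS resource
        FROM `my-audit-project.audit_logs.cloudaudit_googleapis_com_activity_*`
        WHERE protopayload_auditlog.methodName = 'SetIamPolicy'
    """
    client.create_table(view)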

Question 78


You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.

Where should you store the credentials?

A. In the source code
B. In an environment variable
C. In a secret management system
D. In a config file that has restricted access through ACLs
Suggested answer: C

Explanation:

References: https://cloud.google.com/kms/docs/secret-management
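
As one concrete example of such a system, Google Secret Manager lets each microservice fetch its database credentials at startup and controls access per service via IAM; a minimal sketch (the project and secret names are hypothetical):

    # Requires: pip install google-cloud-secret-manager
    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()

    # Fetch the latest version of the credential at startup; nothing
    # sensitive is baked into source code, images, or config files.
    name = "projects/my-project/secrets/db-password/versions/latest"  # hypothetical
    response = client.access_secret_version(request={"name": name})
    db_password = response.payload.data.decode("UTF-8")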


Question 79


A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager.

What are two business risks of migrating to Cloud Deployment Manager? Choose 2 answers.

A. Cloud Deployment Manager uses Python
B. Cloud Deployment Manager APIs could be deprecated in the future
C. Cloud Deployment Manager is unfamiliar to the company's engineers
D. Cloud Deployment Manager requires a Google APIs service account to run
E. Cloud Deployment Manager can be used to permanently delete cloud resources
F. Cloud Deployment Manager only supports automation of Google Cloud resources
Suggested answer: B, F

Question 80


A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:

Be based on open-source technology for cloud portability
Dynamically scale compute capacity based on demand
Support continuous software delivery
Run multiple segregated copies of the same application stack
Deploy application bundles using dynamic templates
Route network traffic to specific services based on URL

Which combination of technologies will meet all of his requirements?

A. Google Kubernetes Engine, Jenkins, and Helm
B. Google Kubernetes Engine and Cloud Load Balancing
C. Google Kubernetes Engine and Cloud Deployment Manager
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing
Suggested answer: D

Explanation:

Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines. Kubernetes Engine is a hosted version of Kubernetes, a powerful cluster manager and orchestration system for containers.

When you need to set up a continuous delivery (CD) pipeline, deploying Jenkins on Kubernetes Engine provides important benefits over a standard VM-based deployment.

Incorrect Answers:

A: Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.

Use Helm to:

Find and use popular software packaged as Kubernetes charts

Share your own applications as Kubernetes charts

Create reproducible builds of your Kubernetes applications

Intelligently manage your Kubernetes manifest files

Manage releases of Helm packages

References: https://cloud.google.com/solutions/jenkins-on-kubernetes-engine
