
Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 4


You are using a 10-Gbps direct peering connection to Google together with the gsutil tool to upload files to Cloud Storage buckets from on-premises servers. The on-premises servers are 100 milliseconds away from the Google peering point. You notice that your uploads are not using the full 10-Gbps bandwidth available to you. You want to optimize the bandwidth utilization of the connection.

What should you do on your on-premises servers?

A. Tune TCP parameters on the on-premises servers.
B. Compress files using utilities like tar to reduce the size of data being sent.
C. Remove the -m flag from the gsutil command to enable single-threaded transfers.
D. Use the perfdiag parameter in your gsutil command to enable faster performance: gsutil perfdiag gs://[BUCKET NAME].
Suggested answer: A

Explanation:

https://cloud.google.com/solutions/tcp-optimization-for-network-performance-in-gcp-and-hybrid

https://cloud.google.com/blog/products/gcp/5-steps-to-better-gcp-network-performance?hl=ml
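Answer A follows from the bandwidth-delay product (BDP): a 10-Gbps link with a 100 ms round trip can only be kept full if the TCP window covers at least BDP bytes, which is far larger than default Linux buffer sizes. A quick back-of-the-envelope check (the sysctl lines in the comments are a sketch of the kind of tuning involved, not exact recommended values):

```shell
# Bandwidth-delay product: (bits/s ÷ 8) × RTT(s) = bytes in flight
bandwidth_bps=10000000000   # 10 Gbps
rtt_ms=100
bdp_bytes=$(( bandwidth_bps / 8 * rtt_ms / 1000 ))
echo "$bdp_bytes"           # 125000000 bytes (~119 MiB)

# Illustrative tuning only (values are examples, not recommendations):
#   sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
#   sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
```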

You work for a multinational enterprise that is moving to GCP.

These are the cloud requirements:

• On-premises data centers located in the United States in Oregon and New York, with Dedicated Interconnects connected to Cloud regions us-west1 (primary HQ) and us-east4 (backup)

• Multiple regional offices in Europe and APAC

• Regional data processing is required in europe-west1 and australia-southeast1

• Centralized Network Administration Team

Your security and compliance team requires a virtual inline security appliance to perform L7 inspection for URL filtering. You want to deploy the appliance in us-west1.

What should you do?

A.
• Create 2 VPCs in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Host Project.
• Attach NIC0 in VPC #1 us-west1 subnet of the Host Project.
• Attach NIC1 in VPC #2 us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
B.
• Create 2 VPCs in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Service Project.
• Attach NIC0 in VPC #1 us-west1 subnet of the Host Project.
• Attach NIC1 in VPC #2 us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
C.
• Create 1 VPC in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Host Project.
• Attach NIC0 in us-west1 subnet of the Host Project.
• Attach NIC1 in us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
D.
• Create 1 VPC in a Shared VPC Service Project.
• Configure a 2-NIC instance in zone us-west1-a in the Service Project.
• Attach NIC0 in us-west1 subnet of the Service Project.
• Attach NIC1 in us-west1 subnet of the Service Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
Suggested answer: A

Explanation:

https://cloud.google.com/vpc/docs/shared-vpc
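A sketch of the instance-creation step from option A, run in the Shared VPC host project. All names (VPC networks, subnets, instance) are hypothetical placeholders; each NIC of a multi-NIC VM must attach to a different VPC network, and IP forwarding must be enabled for the appliance to pass traffic:

```shell
# Placeholder names throughout; run in the Shared VPC host project.
gcloud compute instances create l7-inspection-appliance \
    --zone=us-west1-a \
    --can-ip-forward \
    --network-interface=network=vpc-1,subnet=vpc1-us-west1,no-address \
    --network-interface=network=vpc-2,subnet=vpc2-us-west1,no-address
```

After deployment, custom routes with the appliance as next hop (and matching firewall rules) steer traffic through it.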

You are designing a Google Kubernetes Engine (GKE) cluster for your organization. The current cluster size is expected to host 10 nodes, with 20 Pods per node and 150 services. Because of the migration of new services over the next 2 years, there is a planned growth for 100 nodes, 200 Pods per node, and 1500 services. You want to use VPC-native clusters with alias IP ranges, while minimizing address consumption.

How should you design this topology?

A. Create a subnet of size /25 with 2 secondary ranges of: /17 for Pods and /21 for Services. Create a VPC-native cluster and specify those ranges.
B. Create a subnet of size /28 with 2 secondary ranges of: /24 for Pods and /24 for Services. Create a VPC-native cluster and specify those ranges. When the services are ready to be deployed, resize the subnets.
C. Use gcloud container clusters create [CLUSTER NAME] --enable-ip-alias to create a VPC-native cluster.
D. Use gcloud container clusters create [CLUSTER NAME] to create a VPC-native cluster.
Suggested answer: A

Explanation:

The Services secondary range of a VPC-native cluster is permanent and cannot be changed. Please see

https://stackoverflow.com/questions/60957040/how-to-increase-the-service-address-range-of-agke-cluster

The correct answer is A: growth is expected up to 100 nodes (a /25 provides 128 addresses), up to 200 Pods per node (100 × 200 = 20,000 Pods, and a /17 provides 32,768 addresses), and up to 1,500 services (a /21 provides 2,048 addresses).

https://docs.netgate.com/pfsense/en/latest/book/network/understanding-cidr-subnet-masknotation.html
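The sizing in option A can be sanity-checked by counting the addresses in each range (2^(32 − prefix length)):

```shell
# Addresses available per CIDR prefix: 2^(32 - prefix)
nodes=$((    2 ** (32 - 25) ))   # /25 primary (node) range
pods=$((     2 ** (32 - 17) ))   # /17 Pod secondary range
services=$(( 2 ** (32 - 21) ))   # /21 Services secondary range
echo "$nodes $pods $services"    # 128 32768 2048
# Planned growth fits: 100 nodes <= 128;
# 100 nodes * 200 Pods = 20000 <= 32768; 1500 services <= 2048.
```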

Your company has recently expanded their EMEA-based operations into APAC. Globally distributed users report that their SMTP and IMAP services are slow. Your company requires end-to-end encryption, but you do not have access to the SSL certificates.

Which Google Cloud load balancer should you use?

A. SSL proxy load balancer
B. Network load balancer
C. HTTPS load balancer
D. TCP proxy load balancer
Suggested answer: D

Explanation:

SMTP and IMAP are not HTTP protocols, and without access to the SSL certificates the load balancer cannot terminate TLS, which rules out the HTTPS and SSL proxy load balancers. A TCP proxy load balancer forwards the still-encrypted traffic to globally distributed users while keeping encryption end-to-end.

https://cloud.google.com/security/encryption-in-transit/ — Automatic encryption between GFEs and backends: for the following load balancer types, Google automatically encrypts traffic between Google Front Ends (GFEs) and your backends that reside within Google Cloud VPC networks: HTTP(S) Load Balancing, TCP Proxy Load Balancing, and SSL Proxy Load Balancing.

Your company is working with a partner to provide a solution for a customer. Both your company and the partner organization are using GCP. There are applications in the partner's network that need access to some resources in your company's VPC. There is no CIDR overlap between the VPCs.

Which two solutions can you implement to achieve the desired results without compromising the security? (Choose two.)

A. VPC peering
B. Shared VPC
C. Cloud VPN
D. Dedicated Interconnect
E. Cloud NAT
Suggested answer: A, C

Explanation:

Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization. Cloud VPN can likewise connect the two VPCs over IPsec tunnels, keeping traffic private without exposing either network more broadly.
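A sketch of the peering setup from your company's side, with hypothetical project and network names; the partner must create the matching peering from their side before the connection becomes active:

```shell
# Placeholder project/network names; peering works across organizations.
gcloud compute networks peerings create to-partner \
    --network=company-vpc \
    --peer-project=partner-project-id \
    --peer-network=partner-vpc
# The partner runs the mirror command in their project, pointing back
# at company-vpc, to complete the peering.
```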

You have a storage bucket that contains the following objects:

- folder-a/image-a-1.jpg

- folder-a/image-a-2.jpg

- folder-b/image-b-1.jpg

- folder-b/image-b-2.jpg

Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands.

What should you do?

A. Add an appropriate lifecycle rule on the storage bucket.
B. Issue a cache invalidation command with pattern /folder-a/*.
C. Make sure that all the objects with prefix folder-a are not shared publicly.
D. Disable Cloud CDN on the storage bucket. Wait 90 seconds. Re-enable Cloud CDN on the storage bucket.
Suggested answer: B

Explanation:

Cloud CDN supports invalidating cached objects by path pattern, so a single invalidation with /folder-a/* removes both cached copies in one command. The same concept is described for AWS CloudFront at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
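On Google Cloud, cache invalidation is issued against the load balancer's URL map rather than the bucket itself; a sketch with a hypothetical URL map name:

```shell
# 'cdn-url-map' is a placeholder for the URL map of the load balancer
# fronting the backend bucket; one command covers both folder-a objects.
gcloud compute url-maps invalidate-cdn-cache cdn-url-map \
    --path "/folder-a/*"
```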

Your company is running out of network capacity to run a critical application in the on-premises data center. You want to migrate the application to GCP. You also want to ensure that the Security team does not lose their ability to monitor traffic to and from Compute Engine instances.

Which two products should you incorporate into the solution? (Choose two.)

A. VPC flow logs
B. Firewall logs
C. Cloud Audit logs
D. Stackdriver Trace
E. Compute Engine instance system logs
Suggested answer: A, B

Explanation:

A: VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.

https://cloud.google.com/vpc/docs/using-flow-logs

B: Firewall Rules Logging allows you to audit, verify, and analyze the effects of your firewall rules. For example, you can determine whether a firewall rule designed to deny traffic is functioning as intended, or how many connections are affected by a given rule. You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log; it is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule.

https://cloud.google.com/vpc/docs/firewall-rules-logging
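Both log sources can be enabled with gcloud; subnet, region, and rule names below are placeholders:

```shell
# Enable VPC Flow Logs on the subnet hosting the migrated instances
gcloud compute networks subnets update app-subnet \
    --region=us-west1 \
    --enable-flow-logs

# Enable logging on an existing firewall rule
gcloud compute firewall-rules update allow-app-ingress \
    --enable-logging
```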

You want to apply a new Cloud Armor policy to an application that is deployed in Google Kubernetes Engine (GKE). You want to find out which target to use for your Cloud Armor policy.

Which GKE resource should you use?

A. GKE Node
B. GKE Pod
C. GKE Cluster
D. GKE Ingress
Suggested answer: D

Explanation:

Cloud Armor policies are applied at the external HTTP(S) load balancer, which GKE provisions through an Ingress resource, so the Ingress is the target to configure.

https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features — Security policy features: Google Cloud Armor security policies can be used with external HTTP(S) load balancers in either Premium Tier or Standard Tier, optionally together with the QUIC protocol, and with GKE via the default Ingress controller.
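In practice the policy is attached to the Ingress-created backend through a BackendConfig resource; a sketch assuming a Cloud Armor policy named my-armor-policy already exists (all names are placeholders):

```shell
# Attach an existing Cloud Armor policy to the backend behind the Ingress
kubectl apply -f - <<'EOF'
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: armor-backendconfig
spec:
  securityPolicy:
    name: my-armor-policy
EOF

# Then reference it from the Service that the Ingress routes to, via the
# annotation: cloud.google.com/backend-config: '{"default": "armor-backendconfig"}'
```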

You need to establish network connectivity between three Virtual Private Cloud networks, Sales, Marketing, and Finance, so that users can access resources in all three VPCs. You configure VPC peering between the Sales VPC and the Finance VPC. You also configure VPC peering between the Marketing VPC and the Finance VPC. After you complete the configuration, some users cannot connect to resources in the Sales VPC and the Marketing VPC. You want to resolve the problem.

What should you do?

A. Configure VPC peering in a full mesh.
B. Alter the routing table to resolve the asymmetric route.
C. Create network tags to allow connectivity between all three VPCs.
D. Delete the legacy network and recreate it to allow transitive peering.
Suggested answer: A

Explanation:

VPC Network Peering is not transitive: peering Sales–Finance and Marketing–Finance does not let Sales and Marketing reach each other, so a third peering pair between Sales and Marketing (a full mesh) is required.

https://cloud.google.com/vpc/docs/using-vpc-peering

You create multiple Compute Engine virtual machine instances to be used as TFTP servers.

Which type of load balancer should you use?

A. HTTP(S) load balancer
B. SSL proxy load balancer
C. TCP proxy load balancer
D. Network load balancer
Suggested answer: D

Explanation:

"TFTP is a UDP-based protocol. Servers listen on port 69 for the initial client-to-server packet to establish the TFTP session, then use a port above 1023 for all further packets during that session.

Clients use ports above 1023" https://docstore.mik.ua/orelly/networking_2ndEd/fire/ch17_02.htm Besides, Google Cloud external TCP/UDP Network Load Balancing (after this referred to as Network Load Balancing) is a regional, non-proxied load balancer. Network Load Balancing distributes traffic among virtual machine (VM) instances in the same region in a Virtual Private Cloud (VPC) netw
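Because TFTP is UDP on port 69, only the passthrough Network Load Balancer applies (the HTTP(S), SSL proxy, and TCP proxy options handle TCP only). A sketch using a target pool for brevity; all names, the region, and zone are placeholders:

```shell
# Placeholder names; instances tftp-1/tftp-2 are the TFTP server VMs.
gcloud compute target-pools create tftp-pool --region=us-west1
gcloud compute target-pools add-instances tftp-pool \
    --instances=tftp-1,tftp-2 --instances-zone=us-west1-a
gcloud compute forwarding-rules create tftp-rule \
    --region=us-west1 --ip-protocol=UDP --ports=69 \
    --target-pool=tftp-pool
```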

Total 215 questions