Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 16
List of questions
You have provisioned a Dedicated Interconnect connection of 20 Gbps with a VLAN attachment of 10 Gbps. You recently noticed a steady increase in ingress traffic on the Interconnect connection from the on-premises data center. You need to ensure that your end users can achieve the full 20 Gbps throughput as quickly as possible. Which two methods can you use to accomplish this? (Choose two.)
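As a rough illustration of the capacity math involved here (not the answer itself): end-to-end throughput is limited by the smaller of the Interconnect capacity and the total capacity of its VLAN attachments, so a single 10-Gbps attachment caps a 20-Gbps Dedicated Interconnect. A minimal Python sketch:

```python
def effective_throughput_gbps(interconnect_gbps: float, attachment_gbps: list[float]) -> float:
    """Ingress is capped by both the physical Interconnect and its VLAN attachments."""
    return min(interconnect_gbps, sum(attachment_gbps))

print(effective_throughput_gbps(20, [10]))       # 10 -> the attachment is the bottleneck
print(effective_throughput_gbps(20, [10, 10]))   # 20 -> e.g. a second 10-Gbps attachment
print(effective_throughput_gbps(20, [20]))       # 20 -> e.g. the attachment resized to 20 Gbps
```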
Your company has a Virtual Private Cloud (VPC) with two Dedicated Interconnect connections in two different regions: us-west1 and us-east1. Each Dedicated Interconnect connection is attached to a Cloud Router in its respective region by a VLAN attachment. You need to configure a high availability failover path. By default, all ingress traffic from the on-premises environment should flow to the VPC using the us-west1 connection. If us-west1 is unavailable, you want traffic to be rerouted to us-east1.
How should you configure the multi-exit discriminator (MED) values to enable this failover path?
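For context on the mechanism this question relies on: BGP prefers the path with the lowest multi-exit discriminator (MED), so advertising a lower MED on the us-west1 session makes it the primary ingress path and leaves us-east1 as the backup. A minimal, self-contained sketch of that selection logic (the MED values are illustrative only):

```python
def preferred_path(advertised_meds: dict[str, int]) -> str:
    """Return the BGP session whose advertised MED is lowest (lower MED wins)."""
    return min(advertised_meds, key=advertised_meds.get)

# Hypothetical MED values: lower on us-west1 so it is preferred while available.
meds = {"us-west1": 100, "us-east1": 200}
print(preferred_path(meds))              # us-west1
print(preferred_path({"us-east1": 200})) # us-east1 once the us-west1 path is withdrawn
```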
You have the following private Google Kubernetes Engine (GKE) cluster deployment:
You have a virtual machine (VM) deployed in the same VPC in the subnetwork kubernetesmanagement with internal IP address 192.168.40.2/24 and no external IP address assigned. You need to communicate with the cluster master using kubectl. What should you do?
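A quick way to reason about this scenario is to check whether the management VM's internal address falls inside the CIDR blocks authorized to reach the private cluster's control plane. A minimal sketch using only the standard library (the authorized CIDR is a hypothetical placeholder, since the cluster's configuration appears in the omitted deployment listing):

```python
import ipaddress

vm_ip = ipaddress.ip_address("192.168.40.2")

# Hypothetical master authorized networks configured on the private cluster.
authorized_cidrs = ["10.0.0.0/24"]

reachable = any(vm_ip in ipaddress.ip_network(cidr) for cidr in authorized_cidrs)
print("VM can reach the control plane" if reachable
      else "Add the VM's subnet (192.168.40.0/24) to the master authorized networks")
```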
You have the networking configuration shown in the diagram. Two VLAN attachments associated with two Dedicated Interconnect connections terminate on the same Cloud Router (mycloudrouter). The Interconnect connections terminate on two separate on-premises routers. You advertise the same prefixes from the Border Gateway Protocol (BGP) sessions associated with each of the VLAN attachments.
You notice an asymmetric traffic flow between the two Interconnect connections. Which of the following actions should you take to troubleshoot the asymmetric traffic flow?
You are in the process of deploying an internal HTTP(S) load balancer for your web server virtual machine (VM) instances. What two prerequisite tasks must be completed before creating the load balancer?
(Choose two.)
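One of the prerequisites this question is probing for is a proxy-only subnet in the load balancer's region. As a hedged sketch, assuming the google-cloud-compute client library and hypothetical project, network, and CIDR values, creating one might look roughly like this:

```python
from google.cloud import compute_v1

# Hypothetical identifiers for illustration only.
project, region = "my-project", "us-central1"

subnet = compute_v1.Subnetwork(
    name="proxy-only-subnet",
    ip_cidr_range="10.129.0.0/23",
    network=f"projects/{project}/global/networks/my-vpc",
    purpose="REGIONAL_MANAGED_PROXY",  # reserved for Envoy-based load balancer proxies
    role="ACTIVE",
)

operation = compute_v1.SubnetworksClient().insert(
    project=project, region=region, subnetwork_resource=subnet
)
operation.result()  # wait for the subnet to be created
```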
You want Cloud CDN to serve the https://www.example.com/images/spacetime.png static image file that is hosted in a private Cloud Storage bucket. You are using the USE_ORIGIN_HEADERS cache mode. You receive an HTTP 403 error when opening the file in your browser, and you see that the HTTP response has a Cache-Control: private, max-age=0 header. How should you correct this issue?
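With the USE_ORIGIN_HEADERS cache mode, Cloud CDN caches only responses whose origin headers allow it, so a Cache-Control: private, max-age=0 header returned by the bucket prevents caching. Part of correcting this involves setting cacheable Cache-Control metadata on the object; a minimal sketch with the google-cloud-storage client (the bucket name is a hypothetical placeholder):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-com-static-assets")  # hypothetical bucket name
blob = bucket.blob("images/spacetime.png")

# Replace the restrictive header the origin is currently returning.
blob.cache_control = "public, max-age=86400"
blob.patch()  # persist the metadata change

print(blob.cache_control)
```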
You are deploying an application that runs on Compute Engine instances. You need to determine how to expose your application to a new customer. You must ensure that your application meets the following requirements:
* Maps multiple existing reserved external IP addresses to the instance
* Processes IP Encapsulating Security Payload (ESP) traffic
What should you do?
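For background on the kind of configuration these requirements point toward: protocol forwarding with a target instance lets you attach several regional external forwarding rules (one per reserved IP address) to a single VM and forward non-TCP/UDP protocols such as ESP. A hedged sketch, assuming the google-cloud-compute client library and hypothetical resource names:

```python
from google.cloud import compute_v1

# Hypothetical identifiers for illustration only.
project, region, zone = "my-project", "us-central1", "us-central1-a"

# Target instance that wraps the existing VM for protocol forwarding.
target = compute_v1.TargetInstance(
    name="vpn-gw-target",
    instance=f"projects/{project}/zones/{zone}/instances/my-app-vm",
)
compute_v1.TargetInstancesClient().insert(
    project=project, zone=zone, target_instance_resource=target
).result()

# One forwarding rule per reserved external IP address, forwarding ESP traffic.
rule = compute_v1.ForwardingRule(
    name="esp-rule-1",
    I_p_address="203.0.113.10",  # one of the reserved external addresses
    I_p_protocol="ESP",
    target=f"projects/{project}/zones/{zone}/targetInstances/vpn-gw-target",
)
compute_v1.ForwardingRulesClient().insert(
    project=project, region=region, forwarding_rule_resource=rule
).result()
```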
You are planning to use Terraform to deploy the Google Cloud infrastructure for your company. The design must meet the following requirements:
* Each Google Cloud project must represent an internal project that your team will work on
* After an internal project is finished, the infrastructure must be deleted
* Each internal project must have its own Google Cloud project owner to manage the Google Cloud resources
* You have 10-100 projects deployed at a time
While you are writing the Terraform code, you need to ensure that the deployment is simple and the code is reusable with centralized management. What should you do?
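The requirements here essentially describe one reusable Terraform module instantiated once per internal project from a central place. Purely as an illustration of that parameterization (the module path, variable names, and project data are hypothetical, and this is not meant to replace native Terraform constructs such as for_each), a small Python helper that renders one call to the shared module per internal project could look like this:

```python
# Hypothetical list of internal projects and their owners.
internal_projects = [
    {"name": "proj-analytics", "owner": "alice@example.com"},
    {"name": "proj-billing", "owner": "bob@example.com"},
]

MODULE_TEMPLATE = '''module "{name}" {{
  source        = "../modules/internal-project"   # hypothetical reusable module
  project_name  = "{name}"
  project_owner = "{owner}"
}}
'''

# Render one call to the shared module per internal project.
rendered = "\n".join(MODULE_TEMPLATE.format(**p) for p in internal_projects)
print(rendered)
```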
Your company recently migrated to Google Cloud in a single region. You configured separate Virtual Private Cloud (VPC) networks for two departments: Department A and Department B. Department A has requested access to resources that are part of Department B's VPC. You need to configure the traffic from private IP addresses to flow between the VPCs using multi-NIC virtual machines (VMs) to meet security requirements. Your configuration must also:
* Support both TCP and UDP protocols
* Provide fully automated failover
* Include health checks
* Require minimal manual intervention in the client VMs
Which approach should you take?
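For reference on the building block these requirements usually lead to: an internal passthrough load balancer used as the next hop of a custom route can carry TCP and UDP traffic to a health-checked, automatically failing-over backend of multi-NIC VMs, with no per-client changes. A hedged sketch of the route piece, assuming the google-cloud-compute client library and hypothetical names (the load balancer, instance groups, and health checks would be created separately):

```python
from google.cloud import compute_v1

# Hypothetical identifiers for illustration only.
project = "my-project"

route = compute_v1.Route(
    name="dept-a-to-dept-b",
    network=f"projects/{project}/global/networks/dept-a-vpc",
    dest_range="10.20.0.0/16",  # hypothetical Department B range
    # Forwarding rule of an internal passthrough load balancer fronting the multi-NIC VMs.
    next_hop_ilb=f"projects/{project}/regions/us-central1/forwardingRules/dept-b-ilb",
    priority=1000,
)

compute_v1.RoutesClient().insert(project=project, route_resource=route).result()
```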
Your company is planning a migration to Google Kubernetes Engine. Your application team informed you that they require a minimum of 60 Pods per node and a maximum of 100 Pods per node. Which Pod per node CIDR range should you use?
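A worked version of the sizing rule behind this question: GKE assigns each node a Pod CIDR containing at least twice as many addresses as the node's maximum number of Pods, so the required per-node range follows directly from the maximum Pod count. A small Python check:

```python
import math

def pod_cidr_prefix(max_pods_per_node: int) -> int:
    """Smallest per-node Pod range whose address count is at least 2 x max Pods."""
    needed_addresses = 2 * max_pods_per_node
    return 32 - math.ceil(math.log2(needed_addresses))

for max_pods in (60, 100):
    prefix = pod_cidr_prefix(max_pods)
    print(f"max {max_pods} Pods/node -> /{prefix} ({2 ** (32 - prefix)} addresses)")
# max 60 Pods/node -> /25 (128 addresses)
# max 100 Pods/node -> /24 (256 addresses)
```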