Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 16
You have provisioned a Dedicated Interconnect connection of 20 Gbps with a VLAN attachment of 10 Gbps. You recently noticed a steady increase in ingress traffic on the Interconnect connection from the on-premises data center. You need to ensure that your end users can achieve the full 20 Gbps throughput as quickly as possible. Which two methods can you use to accomplish this? (Choose two.)

A.
Configure an additional VLAN attachment of 10 Gbps in another region. Configure the on-premises router to advertise routes with the same multi-exit discriminator (MED).
B.
Configure an additional VLAN attachment of 10 Gbps in the same region. Configure the on-premises router to advertise routes with the same multi-exit discriminator (MED).
C.
From the Google Cloud Console, modify the bandwidth of the VLAN attachment to 20 Gbps.
D.
From the Google Cloud Console, request a new Dedicated Interconnect connection of 20 Gbps, and configure a VLAN attachment of 10 Gbps.
E.
Configure Link Aggregation Control Protocol (LACP) on the on-premises router to use the 20-Gbps Dedicated Interconnect connection.
Suggested answer: C, E
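
For option C, the attachment capacity change can also be made from the CLI. A hedged sketch (the attachment name and region are placeholders for this scenario, and the available --bandwidth values depend on your attachment type and gcloud version):

```shell
# Sketch: raise an existing Dedicated Interconnect VLAN attachment to 20 Gbps.
# "my-attachment" and "us-west1" are placeholder names.
gcloud compute interconnects attachments dedicated update my-attachment \
    --region us-west1 \
    --bandwidth 20g
```

Option E (LACP bundling of the two 10-Gbps circuits) is configured on the on-premises router and is vendor-specific, so no single command applies.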

Your company has a Virtual Private Cloud (VPC) with two Dedicated Interconnect connections in two different regions: us-west1 and us-east1. Each Dedicated Interconnect connection is attached to a Cloud Router in its respective region by a VLAN attachment. You need to configure a high availability failover path. By default, all ingress traffic from the on-premises environment should flow to the VPC using the us-west1 connection. If us-west1 is unavailable, you want traffic to be rerouted to us-east1.

How should you configure the multi-exit discriminator (MED) values to enable this failover path?

A.
Use regional routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1.
B.
Use global routing. Set the us-east1 Cloud Router to a base priority of 100, and set the us-west1 Cloud Router to a base priority of 1.
C.
Use regional routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1.
D.
Use global routing. Set the us-east1 Cloud Router to a base priority of 1000, and set the us-west1 Cloud Router to a base priority of 1.
Suggested answer: A
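
Cloud Router derives the advertised MED from the base priority set on each BGP session, and the on-premises side prefers the lower MED. A hedged CLI sketch of setting the per-session priorities (router and peer names are placeholders):

```shell
# Prefer us-west1 (lower MED wins) and keep us-east1 as the failover path.
gcloud compute routers update-bgp-peer west-router \
    --peer-name onprem-peer-west \
    --advertised-route-priority 1 \
    --region us-west1

gcloud compute routers update-bgp-peer east-router \
    --peer-name onprem-peer-east \
    --advertised-route-priority 100 \
    --region us-east1
```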

You have the following private Google Kubernetes Engine (GKE) cluster deployment:

You have a virtual machine (VM) deployed in the same VPC in the subnetwork kubernetes-management with internal IP address 192.168.40.2/24 and no external IP address assigned. You need to communicate with the cluster master using kubectl. What should you do?

A.
Add the network 192.168.40.0/24 to the masterAuthorizedNetworksConfig. Configure kubectl to communicate with the endpoint 192.168.38.2.
B.
Add the network 192.168.38.0/28 to the masterAuthorizedNetworksConfig. Configure kubectl to communicate with the endpoint 192.168.38.2.
C.
Add the network 192.168.36.0/24 to the masterAuthorizedNetworksConfig. Configure kubectl to communicate with the endpoint 192.168.38.2.
D.
Add an external IP address to the VM, and add this IP address in the masterAuthorizedNetworksConfig. Configure kubectl to communicate with the endpoint 35.224.37.17.
Suggested answer: A
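
For option A, the authorized network can be added from the CLI. A hedged sketch (the cluster name and zone are placeholders; the 192.168.40.0/24 range is the subnetwork that contains the management VM):

```shell
# Authorize the management subnet (which contains the VM at 192.168.40.2)
# to reach the private control plane endpoint at 192.168.38.2.
gcloud container clusters update cluster-1 \
    --zone us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks 192.168.40.0/24
```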

You have the networking configuration shown in the diagram. Two VLAN attachments associated with two Dedicated Interconnect connections terminate on the same Cloud Router (mycloudrouter). The Interconnect connections terminate on two separate on-premises routers. You advertise the same prefixes from the Border Gateway Protocol (BGP) sessions associated with each of the VLAN attachments.

You notice an asymmetric traffic flow between the two Interconnect connections. Which of the following actions should you take to troubleshoot the asymmetric traffic flow?

A.
From the Google Cloud console, navigate to Hybrid Connectivity, select the Cloud Router, and view the BGP sessions.
B.
From the Cloud CLI, run gcloud compute routers get-status mycloudrouter --region REGION and review the results.
C.
From the Google Cloud console, navigate to Cloud Logging to view VPC Flow Logs and review the results.
D.
From the Cloud CLI, run gcloud compute routers describe mycloudrouter --region REGION and review the results.
Suggested answer: A

You are in the process of deploying an internal HTTP(S) load balancer for your web server virtual machine (VM) instances. What two prerequisite tasks must be completed before creating the load balancer?

Choose 2 answers

A.
Choose a region.
B.
Create firewall rules for health checks.
C.
Reserve a static IP address for the load balancer.
D.
Determine the subnet mask for a proxy-only subnet.
E.
Determine the subnet mask for Serverless VPC Access.
Suggested answer: B, C

Explanation:

The correct answer is B and C. You must create firewall rules for health checks and reserve a static IP address for the load balancer before creating the internal HTTP(S) load balancer.

The other options are not correct because:

Option A is not a prerequisite task. You can choose a region when you create the load balancer, but you do not need to do it beforehand.

Option D is not a prerequisite task. You can determine the subnet mask for a proxy-only subnet when you create the subnet, but you do not need to do it beforehand.

Option E is not related to the internal HTTP(S) load balancer. Serverless VPC Access is a feature that allows you to connect your serverless applications to your VPC network, but it is not required for the load balancer.
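
The two suggested prerequisites map to concrete commands. A hedged sketch (network, subnet, and resource names are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges):

```shell
# B: allow Google health-check probes to reach the backend VMs.
gcloud compute firewall-rules create allow-health-checks \
    --network my-vpc \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:80 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16

# C: reserve a static internal IP address for the load balancer's
# forwarding rule.
gcloud compute addresses create ilb-ip \
    --region us-west1 \
    --subnet my-subnet
```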

You want Cloud CDN to serve the https://www.example.com/images/spacetime.png static image file that is hosted in a private Cloud Storage bucket. You are using the USE_ORIGIN_HEADERS cache mode. You receive an HTTP 403 error when opening the file in your browser, and you see that the HTTP response has a Cache-Control: private, max-age=0 header. How should you correct this issue?

A.
Configure a Cloud Storage bucket permission that gives the Storage Legacy Object Reader role.
B.
Change the cache mode to cache all content.
C.
Increase the default time-to-live (TTL) for the backend service.
D.
Enable negative caching for the backend bucket.
Suggested answer: A

Explanation:

The correct answer is A. Configure a Cloud Storage bucket permission that gives the Storage Legacy Object Reader role.

This answer is based on the following facts:

Cloud CDN can serve private content from Cloud Storage buckets, but you need to grant the appropriate permissions to the Google-managed service account that represents your load balancer.

The Storage Legacy Object Reader role grants read access to objects in a bucket.

The Cache-Control: private header indicates that the object is not publicly readable and requires authentication.

The USE_ORIGIN_HEADERS cache mode instructs Cloud CDN to cache responses based on the Cache-Control and Expires headers from the origin server. Changing the cache mode, increasing the TTL, or enabling negative caching will not affect the 403 error.
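
For option A, the role grant can be sketched as follows (the bucket name is a placeholder, and which member you grant — for example, allUsers for publicly cacheable objects — depends on your security requirements):

```shell
# Grant the Storage Legacy Object Reader role on the bucket so origin
# fetches by Cloud CDN succeed. Bucket name and member are placeholders.
gcloud storage buckets add-iam-policy-binding gs://my-images-bucket \
    --member=allUsers \
    --role=roles/storage.legacyObjectReader
```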

You are deploying an application that runs on Compute Engine instances. You need to determine how to expose your application to a new customer. You must ensure that your application meets the following requirements:

* Maps multiple existing reserved external IP addresses to the instance

* Processes IP Encapsulating Security Payload (ESP) traffic

What should you do?

A.
Configure a target pool, and create protocol forwarding rules for each external IP address.
B.
Configure a backend service, and create an external network load balancer for each external IP address.
C.
Configure a target instance, and create a protocol forwarding rule for each external IP address to be mapped to the instance.
D.
Change the Compute Engine instances' network interface external IP address from None to Ephemeral. Add as many external IP addresses as required.
Suggested answer: C

Explanation:

The correct answer is C. Configure a target instance, and create a protocol forwarding rule for each external IP address to be mapped to the instance.

This answer is based on the following facts:

A target instance is a Compute Engine instance that handles traffic from one or more forwarding rules. You can use target instances to forward traffic to a single VM instance from one or more external IP addresses.

A protocol forwarding rule specifies the IP protocol and port range for the traffic that you want to forward. You can use protocol forwarding rules to forward traffic of any IP protocol, including ESP.

The other options are not correct because:

Option A is not possible. You cannot create protocol forwarding rules for a target pool. A target pool is a group of instances that receives traffic from a network load balancer.

Option B is not suitable. You do not need to create an external network load balancer for each external IP address. An external network load balancer distributes traffic among multiple backend instances based on the destination IP address and port. You can use a single load balancer with multiple forwarding rules to map multiple external IP addresses to the same backend service.

Option D is not feasible. You cannot add multiple external IP addresses to a single network interface of a Compute Engine instance. Each network interface can have only one external IP address that is either ephemeral or static. You can use alias IP ranges to assign multiple internal IP addresses to a single network interface, but not external IP addresses.
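
For option C, the target instance and per-address protocol forwarding rules can be sketched like this (instance, address, and zone names are placeholders; repeat the forwarding-rule command for each reserved external IP address):

```shell
# Wrap the VM in a target instance that forwarding rules can point at.
gcloud compute target-instances create my-target \
    --instance my-vm \
    --zone us-central1-a

# One protocol forwarding rule per reserved external IP address,
# forwarding ESP traffic to the same target instance.
gcloud compute forwarding-rules create esp-rule-1 \
    --region us-central1 \
    --ip-protocol ESP \
    --address reserved-ip-1 \
    --target-instance my-target \
    --target-instance-zone us-central1-a
```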

You are planning to use Terraform to deploy the Google Cloud infrastructure for your company. The design must meet the following requirements:

* Each Google Cloud project must represent an internal project that your team will work on.

* After an internal project is finished, the infrastructure must be deleted.

* Each internal project must have its own Google Cloud project owner to manage the Google Cloud resources.

* You have 10-100 projects deployed at a time.

While you are writing the Terraform code, you need to ensure that the deployment is simple, and the code is reusable with centralized management. What should you do?

A.
Create a single project and additional VPCs for each internal project.
B.
Create a single project and a single VPC for each internal project.
C.
Create a single Shared VPC and attach each Google Cloud project as a service project.
D.
Create a Shared VPC and service project for each internal project.
Suggested answer: C

Explanation:

The correct answer is C. Create a single Shared VPC and attach each Google Cloud project as a service project.

This answer is based on the following facts:

A Shared VPC allows you to share one or more VPC networks across multiple Google Cloud projects. This simplifies the deployment and management of the network infrastructure, as you only need to create and maintain one VPC network for all your internal projects.

A Shared VPC consists of a host project that owns the VPC network and one or more service projects that use the VPC network. You can attach and detach service projects as needed, depending on the lifecycle of your internal projects. You can also delete service projects without affecting the host project or other service projects.

A Shared VPC allows you to delegate administrative roles to different project owners. You can grant the Shared VPC Admin role to the owner of the host project, who can manage the VPC network and its subnets. You can also grant the Service Project Admin role to the owners of the service projects, who can manage the Google Cloud resources in their own projects.

The other options are not correct because:

Option A is not suitable. Creating a single project and additional VPCs for each internal project will increase the complexity and cost of the network infrastructure. You will need to create and maintain multiple VPC networks, firewall rules, routes, and VPN tunnels. You will also have a limit on the number of VPC networks per project.

Option B is not feasible. Creating a single project and single VPC for each internal project will not meet the requirement of having separate project owners for each internal project. You will have only one project owner who can manage all the Google Cloud resources in the same project.

Option D is not optimal. Creating a Shared VPC and service project for each internal project will not meet the requirement of having a simple and reusable code with centralized management. You will need to create and maintain multiple Shared VPCs, which will increase the complexity and cost of the network infrastructure. You will also have more Terraform code to write and manage for each Shared VPC.
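
The Shared VPC attachment itself is scriptable (or expressible with Terraform's google_compute_shared_vpc_host_project and google_compute_shared_vpc_service_project resources). A hedged gcloud sketch with placeholder project IDs:

```shell
# Enable the host project once, then attach each internal project's
# service project as it is created (and detach it when the project ends).
gcloud compute shared-vpc enable host-project-id

gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project host-project-id
```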

Your company recently migrated to Google Cloud in a single region. You configured separate Virtual Private Cloud (VPC) networks for two departments, Department A and Department B. Department A has requested access to resources that are part of Department B's VPC. You need to configure the traffic from private IP addresses to flow between the VPCs using multi-NIC virtual machines (VMs) to meet security requirements. Your configuration also must:

* Support both TCP and UDP protocols

* Provide fully automated failover

* Include health checks

* Require minimal manual intervention in the client VMs

Which approach should you take?

A.
Create the VMs in the same zone, and configure static routes with IP addresses as next hops.
B.
Create the VMs in different zones, and configure static routes with instance names as next hops.
C.
Create an instance template and a managed instance group. Configure a single internal load balancer, and define a custom static route with the internal TCP/UDP load balancer as the next hop.
D.
Create an instance template and a managed instance group. Configure two separate internal TCP/UDP load balancers for each protocol (TCP/UDP), and configure the client VMs to use the internal load balancers' virtual IP addresses.
Suggested answer: D

Explanation:

The correct answer is D. Create an instance template and a managed instance group. Configure two separate internal TCP/UDP load balancers for each protocol (TCP/UDP), and configure the client VMs to use the internal load balancers' virtual IP addresses.

This answer is based on the following facts:

Using multi-NIC VMs as network virtual appliances (NVAs) allows you to route traffic between different VPC networks. You can use NVAs to implement custom network policies and security requirements.

Using an instance template and a managed instance group allows you to create and manage multiple identical NVAs. You can also use health checks and autoscaling policies to ensure high availability and reliability of your NVAs.

Using internal TCP/UDP load balancers allows you to distribute traffic from client VMs to NVAs based on the protocol and port. You can also use health checks and failover policies to ensure that only healthy NVAs receive traffic.

Configuring the client VMs to use the internal load balancers' virtual IP addresses allows you to simplify the routing configuration and avoid manual intervention. You do not need to create static routes or update them when NVAs are added or removed.

The other options are not correct because:

Option A is not suitable. Creating the VMs in the same zone does not provide high availability or failover. Using static routes with IP addresses as next hops requires manual intervention when NVAs are added or removed.

Option B is not optimal. Creating the VMs in different zones provides high availability, but not failover. Using static routes with instance names as next hops requires manual intervention when NVAs are added or removed.

Option C is not feasible. Creating an instance template and a managed instance group provides high availability and reliability, but using a single internal load balancer does not support both TCP and UDP protocols. You cannot define a custom static route with an internal load balancer as the next hop.
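
For option D, each protocol gets its own internal passthrough load balancer in front of the NVA managed instance group. A hedged sketch of the TCP side (resource names, region, and network are placeholders; the UDP side repeats the same pattern with --protocol UDP and a second forwarding rule):

```shell
# Health check and backend service for the NVA managed instance group.
gcloud compute health-checks create tcp nva-hc --port 22

gcloud compute backend-services create nva-bs-tcp \
    --load-balancing-scheme INTERNAL \
    --protocol TCP \
    --health-checks nva-hc \
    --region us-west1

gcloud compute backend-services add-backend nva-bs-tcp \
    --instance-group nva-mig \
    --instance-group-zone us-west1-a \
    --region us-west1

# Forwarding rule whose virtual IP the client VMs use to reach the NVAs.
gcloud compute forwarding-rules create nva-fr-tcp \
    --load-balancing-scheme INTERNAL \
    --ip-protocol TCP \
    --ports ALL \
    --backend-service nva-bs-tcp \
    --network dept-a-vpc \
    --subnet dept-a-subnet \
    --region us-west1
```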

Your company is planning a migration to Google Kubernetes Engine. Your application team informed you that they require a minimum of 60 Pods per node and a maximum of 100 Pods per node. Which Pod-per-node CIDR range should you use?

A.
/24
B.
/25
C.
/26
D.
/28
Suggested answer: B

Explanation:

The correct answer is B. /25.

This answer is based on the following facts:

The Pod-per-node CIDR range determines the size of the IP address range that is assigned to each node for Pods. The Pods that run on a node are allocated IP addresses from the node's assigned CIDR range.

The size of the CIDR range corresponds to the maximum number of Pods per node. For example, a /24 CIDR range allows up to 256 IP addresses, but the default maximum number of Pods per node for Standard clusters is 110. A /25 CIDR range allows up to 128 IP addresses, which is enough for 100 Pods per node.

The other options are not correct because:

Option A is too large. A /24 CIDR range allows more IP addresses than needed for 100 Pods per node. This could result in inefficient use of the IP address space and limit the number of nodes that can be created in the cluster.

Option C is too small. A /26 CIDR range allows only 64 IP addresses, which cannot accommodate the maximum of 100 Pods per node. This could result in insufficient capacity to schedule Pods on the nodes.

Option D is also too small. A /28 CIDR range allows only 16 IP addresses, which is far below the minimum requirement of 60 Pods per node. This could result in Pod scheduling failures and poor performance.
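
The address counts above follow directly from the prefix lengths; a quick sanity check:

```shell
# Addresses available in each Pod CIDR size: 2^(32 - prefix length).
for prefix in 24 25 26 28; do
  echo "/$prefix -> $((2 ** (32 - prefix))) addresses"
done
```

This prints 256, 128, 64, and 16 addresses for /24, /25, /26, and /28 respectively.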

Total 215 questions