Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 20

Your organization is deploying a mission-critical application with components in different regions due to strict compliance requirements. There are latency issues between different applications that reside in us-central1 and us-east4. The application team suspects the Google Cloud network as the source of the excessive latency despite using the Premium Network Service Tier. You need to use Google-recommended practices with the least amount of effort to verify the inter-region latency by investigating network performance. What should you do?

A. Set up the Performance Dashboard in Network Intelligence Center. Select the traffic type (cross-zonal), the metric (latency - RTT), the time period, the desired regions (us-central1 and us-east4), and the network tier.

B. Enable VPC Flow Logs for the VPC. Identify major bottlenecks from the application level using Flow Analyzer.

C. Configure two Linux VMs in each zone for each region. Install the application, and run a load test using each zone from different regions.

D. Configure a VM with a probe in Network Intelligence Center in each zone for each region. Choose the traffic type (cross-zonal), metric (latency - RTT), desired regions (us-central1 and us-east4), and the network tier.

Suggested answer: A

Explanation:

The Performance Dashboard in Network Intelligence Center exposes pre-collected latency (round-trip time) and packet loss metrics for traffic between Google Cloud zones and regions. By selecting the traffic type, the latency (RTT) metric, the time period, the two regions, and the network tier, you can verify inter-region latency without deploying any additional infrastructure, which makes it the Google-recommended, lowest-effort way to determine whether the Google network is the source of the excessive latency.

You are configuring the firewall endpoints as part of the Cloud Next Generation Firewall (Cloud NGFW) intrusion prevention service in Google Cloud. You have configured a threat prevention security profile, and you now need to create an endpoint for traffic inspection. What should you do?

A. Attach the profile to the VPC network, create a firewall endpoint within the zone, and use a firewall policy rule to apply the L7 inspection.

B. Create a firewall endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

C. Create a firewall endpoint within the region, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

D. Create a Private Service Connect endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

Suggested answer: B

Explanation:

Cloud NGFW firewall endpoints are zonal resources. You create a firewall endpoint in each zone where traffic needs to be inspected, associate the endpoint with the VPC network, and then reference the threat prevention security profile (through its security profile group) from a firewall policy rule so that matching traffic is intercepted for Layer 7 (L7) inspection.
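
As a rough gcloud sketch of that workflow (the organization ID, project, zone, VPC, policy, and profile group names below are placeholders, and exact flag spellings may differ between gcloud releases, so verify against the Cloud NGFW documentation):

# Create the zonal firewall endpoint (an organization-level resource).
gcloud network-security firewall-endpoints create my-fw-endpoint \
    --zone=us-central1-a \
    --organization=123456789012 \
    --billing-project=my-project

# Associate the endpoint with the VPC network whose traffic must be inspected.
gcloud network-security firewall-endpoint-associations create my-fw-assoc \
    --zone=us-central1-a \
    --network=my-vpc \
    --endpoint=organizations/123456789012/locations/us-central1-a/firewallEndpoints/my-fw-endpoint \
    --project=my-project

# Reference the threat prevention security profile group from a firewall
# policy rule so that matching traffic is redirected for L7 inspection.
gcloud compute network-firewall-policies rules create 1000 \
    --firewall-policy=my-fw-policy \
    --global-firewall-policy \
    --direction=INGRESS \
    --action=apply_security_profile_group \
    --security-profile-group=//networksecurity.googleapis.com/organizations/123456789012/locations/global/securityProfileGroups/my-profile-group \
    --src-ip-ranges=0.0.0.0/0 \
    --layer4-configs=tcp:443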

Your company's current network architecture has three VPC Service Controls perimeters:

One perimeter (PERIMETER_PROD) to protect production storage buckets

One perimeter (PERIMETER_NONPROD) to protect non-production storage buckets

One perimeter (PERIMETER_VPC) that contains a single VPC (VPC_ONE)

In this single VPC (VPC_ONE), the IP_RANGE_PROD is dedicated to the subnets of the production workloads, and the IP_RANGE_NONPROD is dedicated to subnets of non-production workloads. Workloads cannot be created outside those two ranges. You need to ensure that production workloads can access only production storage buckets and non-production workloads can access only non-production storage buckets with minimal setup effort. What should you do?

A. Develop a design that uses the IP_RANGE_PROD and IP_RANGE_NONPROD ranges to create two access levels, with each access level referencing a single range. Create two ingress access policies with each access policy referencing one of the two access levels. Update the PERIMETER_PROD and PERIMETER_NONPROD perimeters.

B. Develop a design that removes the PERIMETER_VPC perimeter. Update the PERIMETER_NONPROD perimeter to include the project containing VPC_ONE. Remove the PERIMETER_PROD perimeter.

C. Develop a design that creates a new VPC (VPC_NONPROD) in the same project as VPC_ONE. Migrate all the non-production workloads from VPC_ONE to the PERIMETER_NONPROD perimeter. Remove the PERIMETER_VPC perimeter. Update the PERIMETER_PROD perimeter to include VPC_ONE and the PERIMETER_NONPROD perimeter to include VPC_NONPROD.

D. Develop a design that removes the PERIMETER_VPC perimeter. Update the PERIMETER_PROD perimeter to include the project containing VPC_ONE. Remove the PERIMETER_NONPROD perimeter.

Suggested answer: A

Explanation:

Using IP range-based access levels for VPC Service Controls allows segmentation of production and non-production resources within the same VPC. By creating separate access levels and ingress policies for each IP range, you ensure that only production subnets access production buckets and non-production subnets access non-production buckets, providing the required isolation.
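
A minimal sketch of the access-level part of this design, assuming a hypothetical access policy ID and example IP ranges; the perimeter ingress policy YAML follows the schema described in the VPC Service Controls documentation:

# prod-spec.yaml: basic access level condition for the production range.
cat > prod-spec.yaml <<'EOF'
- ipSubnetworks:
  - 10.10.0.0/16   # IP_RANGE_PROD (placeholder value)
EOF

# Create the access level in the organization's access policy.
gcloud access-context-manager levels create prod_range_level \
    --policy=123456789 \
    --title="Production subnet range" \
    --basic-level-spec=prod-spec.yaml

# Repeat for IP_RANGE_NONPROD, then reference each access level from an
# ingress policy attached to the matching perimeter, for example:
#   gcloud access-context-manager perimeters update PERIMETER_PROD \
#       --policy=123456789 --set-ingress-policies=prod-ingress.yaml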

Your organization recently exposed a set of services through a global external Application Load Balancer. After conducting some testing, you observed that responses would intermittently yield a non-HTTP 200 response. You need to identify the error. What should you do? (Choose 2 answers)

A. Access a VM in the VPC through SSH, and try to access a backend VM directly. If the request is successful from the VM, increase the quantity of backends.

B. Enable and review the health check logs. Review the error responses in Cloud Logging.

C. Validate the health of the backend service. Enable logging on the load balancer, and identify the error response in Cloud Logging. Determine the cause of the error by reviewing the statusDetails log field.

D. Delete the load balancer and backend services. Create a new passthrough Network Load Balancer. Configure a failover group of VMs for the backend.

E. Validate the health of the backend service. Enable logging for the backend service, and identify the error response in Cloud Logging. Determine the cause of the error by reviewing the statusDetails log field.

Suggested answer: B, C

Explanation:

To identify errors with intermittent non-HTTP 200 responses:

Enable and review health check logs for your backend to identify potential issues with backend availability or connectivity (Option B).

Enable logging on the load balancer and review Cloud Logging, particularly the statusDetails field, to gather insights on error types and sources (Option C).

These steps allow for precise error identification by leveraging both health checks and detailed logging features available through Google Cloud's external load balancer diagnostics.
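
A brief sketch of how the logging in options B and C can be enabled and queried, assuming a hypothetical global backend service named web-backend and a health check named web-hc:

# Enable request logging on the backend service behind the load balancer.
gcloud compute backend-services update web-backend \
    --global \
    --enable-logging \
    --logging-sample-rate=1.0

# Enable health check logging.
gcloud compute health-checks update http web-hc --enable-logging

# Query the load balancer request logs for non-200 responses and show the
# statusDetails field, which explains why the response was returned.
gcloud logging read \
    'resource.type="http_load_balancer" AND httpRequest.status!=200' \
    --limit=20 \
    --format="table(httpRequest.status, jsonPayload.statusDetails)"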

Your organization recently created a sandbox environment for a new cloud deployment. To have parity with the production environment, a pair of Compute Engine instances with multiple network interfaces (NICs) were deployed. These Compute Engine instances have a NIC in the Untrusted VPC (10.0.0.0/23) and a NIC in the Trusted VPC (10.128.0.0/9). An HA VPN connection has been established to the on-premises environment from the Untrusted VPC. Through this pair of VPN tunnels, the on-premises environment receives the route advertisements for the Untrusted and Trusted VPCs. In return, the on-premises environment advertises a number of CIDR ranges to the Untrusted VPC. However, when you tried to access one of the test services in the Trusted VPC from the on-premises environment, you received no response. You need to configure a highly available solution to enable the on-premises users to connect to the services in the Trusted VPC. What should you do?

A. Add both multi-NIC VMs to a new unmanaged instance group, named nva-uig. Create an internal passthrough Network Load Balancer in the Untrusted VPC, named ilb-untrusted, with the nva-uig unmanaged instance group designated as the backend. Create a custom static route in the Untrusted VPC for destination 10.123.0.0/9 and the next hop ilb-untrusted. Create an internal passthrough Network Load Balancer in the Trusted VPC, named ilb-trusted, with the nva-uig unmanaged instance group designated as the backend. Create a custom static route in the Trusted VPC for destination 0.0.0.0/0 and the next hop ilb-trusted.

B. Add both multi-NIC VMs to a new unmanaged instance group, named nva-uig. Create an internal passthrough Network Load Balancer in the Untrusted VPC, named ilb-untrusted, with the nva-uig unmanaged instance group designated as the backend. Create a custom static route in the Untrusted VPC for destination 10.128.0.0/9 and the next hop ilb-untrusted. Create an internal passthrough Network Load Balancer in the Trusted VPC, named ilb-trusted, with the nva-uig unmanaged instance group designated as the backend. Create a custom static route in the Trusted VPC for destination 10.0.0.0/23 and the next hop ilb-trusted.

C. Add both multi-NIC VMs to a new unmanaged instance group, named nva-uig0. Create an internal passthrough Network Load Balancer in the Untrusted VPC, named ilb-untrusted, with the nva-uig0 as backend. Create a custom static route in the Untrusted VPC for destination 10.128.0.0/9 and the next hop ilb-untrusted. Add both multi-NIC VMs to a new unmanaged instance group, named nva-uig1. Create an internal passthrough Network Load Balancer in the Trusted VPC, named ilb-trusted, with the nva-uig1 as backend. Create a custom static route in the Trusted VPC for destination 0.0.0.0/0 and the next hop ilb-trusted.

D. Add both multi-NIC VMs to a new unmanaged instance group, named nva-uig. Create two custom static routes in the Untrusted VPC for destination 10.128.0.0/9 and set each of the VMs' NIC as the next hop. Create two custom static routes in the Trusted VPC for destination 10.0.0.0/23 and set each of the VMs' NIC as the next hop.

Suggested answer: B

Explanation:

Placing both network virtual appliances behind an internal passthrough Network Load Balancer in each VPC, and using those load balancers as the next hops of custom static routes, provides a highly available path through the pair of NVAs. The route in the Untrusted VPC sends traffic destined for the Trusted range (10.128.0.0/9) to ilb-untrusted, and the route in the Trusted VPC sends traffic destined for the Untrusted range (10.0.0.0/23) back through ilb-trusted, which lets the on-premises users reach the services in the Trusted VPC through the Untrusted VPC.
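
For reference, custom static routes that use an internal passthrough Network Load Balancer as the next hop can be created as sketched below; the VPC, forwarding rule, and region names are placeholders for this scenario:

# Untrusted VPC: send traffic for the Trusted range through the untrusted-side ILB.
gcloud compute routes create route-to-trusted \
    --network=untrusted-vpc \
    --destination-range=10.128.0.0/9 \
    --next-hop-ilb=ilb-untrusted-fwd-rule \
    --next-hop-ilb-region=us-central1

# Trusted VPC: send traffic for the Untrusted range back through the trusted-side ILB.
gcloud compute routes create route-to-untrusted \
    --network=trusted-vpc \
    --destination-range=10.0.0.0/23 \
    --next-hop-ilb=ilb-trusted-fwd-rule \
    --next-hop-ilb-region=us-central1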

You are configuring the firewall endpoints as part of the Cloud Next Generation Firewall (Cloud NGFW) intrusion prevention service in Google Cloud. You have configured a threat prevention security profile, and you now need to create an endpoint for traffic inspection. What should you do?

A. Create a Private Service Connect endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

B. Create a firewall endpoint within the region, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

C. Create a firewall endpoint within the zone, associate the endpoint to the VPC network, and use a firewall policy rule to apply the L7 inspection.

D. Attach the profile to the VPC network, create a firewall endpoint within the zone, and use a firewall policy rule to apply the L7 inspection.

Suggested answer: C

Explanation:

To apply Layer 7 (L7) inspection for intrusion prevention, you must create a firewall endpoint within the zone where the traffic inspection is required. This endpoint is then associated with the VPC network, and a firewall policy rule is applied for the L7 inspection.

Your company's current network architecture has two VPCs that are connected by a dual-NIC instance that acts as a bump-in-the-wire firewall between the two VPCs. Flows between pairs of subnets across the two VPCs are working correctly. Suddenly, you receive an alert that none of the flows between the two VPCs are working anymore. You need to troubleshoot the problem. What should you do? (Choose 2 answers)

A. Verify that the dual-NIC instance has not been added to a backend service.

B. Verify that a public IP address has not been assigned to any network interface of the dual-NIC instance.

C. Use Cloud Logging to verify that there were no modifications to the VPC firewall rules or policies that were applied to the two network interfaces of the dual-NIC instance.

D. Verify that a VPC Service Controls perimeter has not been enabled for the project that contains the two VPCs and the dual-NIC instance.

E. Verify that the dual-NIC instance has the --can-ip-forward attribute enabled.

Suggested answer: C, E

Explanation:

You should check the audit logs in Cloud Logging to see whether any VPC firewall rules or policies that apply to the two network interfaces were recently modified, because such a change can block all flows between the VPCs. Additionally, the --can-ip-forward attribute must be enabled on the dual-NIC instance so that it is allowed to forward packets that are not addressed to its own interfaces; without IP forwarding, the instance cannot operate as a bump-in-the-wire firewall.
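
Two quick checks that support answers C and E, using a hypothetical instance name fw-nva and zone; the audit-log filter below is only an example pattern:

# Confirm IP forwarding is enabled on the NVA. The canIpForward setting is
# immutable, so a value of False means the instance must be recreated.
gcloud compute instances describe fw-nva \
    --zone=us-central1-a \
    --format="get(canIpForward)"

# Look for recent changes to VPC firewall rules in Cloud Audit Logs.
gcloud logging read \
    'resource.type="gce_firewall_rule" AND protoPayload.methodName:"firewalls"' \
    --freshness=7d \
    --limit=20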

Your team deployed two applications in GKE that are exposed through an external Application Load Balancer. When queries are sent to www.abc123.com/sales and www.abc123.com/get-an-analysis, the correct pages are displayed. However, you have received complaints that www.abc123.com yields a 404 error. You need to resolve this error. What should you do?

A. Review the Ingress YAML file. Define the default backend. Reapply the YAML.

B. Review the Ingress YAML file. Add a new path rule for the * character that directs to the base service. Reapply the YAML.

C. Review the Service YAML file. Define a default backend. Reapply the YAML.

D. Review the Service YAML file. Add a new path rule for the * character that directs to the base service. Reapply the YAML.

Suggested answer: A

Explanation:

The 404 error is occurring because there is no default backend defined for requests to the root URL. Defining the default backend in the Ingress YAML file ensures that requests to www.abc123.com are routed to the correct service.
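
A sketch of an Ingress manifest with a default backend; the Service names (base-service, sales-service, analysis-service) are assumed placeholders rather than names given in the question:

cat > ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:            # serves www.abc123.com/ and any unmatched path
    service:
      name: base-service
      port:
        number: 80
  rules:
  - host: www.abc123.com
    http:
      paths:
      - path: /sales
        pathType: Prefix
        backend:
          service:
            name: sales-service
            port:
              number: 80
      - path: /get-an-analysis
        pathType: Prefix
        backend:
          service:
            name: analysis-service
            port:
              number: 80
EOF
kubectl apply -f ingress.yaml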

Your organization has resources in two different VPCs, each in different Google Cloud projects, and requires connectivity between the resources in the two VPCs. You have already determined that there is no IP address overlap; however, one VPC uses privately used public IP (PUPI) ranges. You would like to enable connectivity between these resources by using a lower cost and higher performance method. What should you do?

A. Create an HA VPN between the two VPCs that includes the PUPI ranges in the custom route advertisements of the Cloud Router. Create the necessary ingress VPC firewall rules that target the specific resources by using IP ranges as the source filter.

B. Create a VPC Network Peering connection between the two VPCs that allows the export and import of custom routes for public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using service accounts as the source filter.

C. Create a VPC Network Peering connection between the two VPCs that allows the export and import of subnet routes with public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using IP ranges as the source filter.

D. Create a VPC Network Peering connection between the two VPCs that allows the export and import of subnet routes with public IP addresses. Create the necessary ingress VPC firewall rules that target the specific resources by using network tags as the source filter.

Suggested answer: C

Explanation:

VPC Network Peering is the most cost-effective and high-performance method for connecting two VPCs. Since one VPC uses privately used public IP (PUPI) ranges, you need to configure peering to allow the export and import of subnet routes with public IP addresses. Firewall rules can be used to control traffic between the resources.
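
A minimal sketch of one side of the peering, with placeholder project and network names; the mirror command must also be run from the other project so both sides of the peering are configured:

gcloud compute networks peerings create peer-a-to-b \
    --network=vpc-a \
    --peer-project=project-b \
    --peer-network=vpc-b \
    --export-subnet-routes-with-public-ip \
    --import-subnet-routes-with-public-ip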

You have several VMs across multiple VPCs in your cloud environment that require access to internet endpoints. These VMs cannot have public IP addresses due to security policies, so you plan to use Cloud NAT to provide outbound internet access. Within your VPCs, you have several subnets in each region. You want to ensure that only specific subnets have access to the internet through Cloud NAT. You want to avoid any unintentional configuration issues caused by other administrators and align to Google-recommended practices. What should you do?

A. Deploy Cloud NAT in each VPC and configure a custom source range that includes the allowed subnets. Configure Cloud NAT rules to only permit the allowed subnets to egress through Cloud NAT.

B. Create a firewall rule in each VPC at priority 500 that targets all instances in the network and denies egress to the internet (0.0.0.0/0). Create a firewall rule at priority 300 that targets all instances in the network, has a source filter that maps to the allowed subnets, and allows egress to the internet (0.0.0.0/0). Deploy Cloud NAT and configure all primary and secondary subnet source ranges.

C. Create a firewall rule in each VPC at priority 500 that targets all instances in the network and denies egress to the internet (0.0.0.0/0). Create a firewall rule at priority 300 that targets all instances in the network, has a source filter that maps to the allowed subnets, and allows egress to the internet (0.0.0.0/0). Deploy Cloud NAT and configure a custom source range that includes the allowed subnets.

D. Create a constraints/compute.restrictCloudNATUsage organizational policy constraint. Attach the constraint to a folder that contains the associated projects. Configure the allowedValues to only contain the subnets that should have internet access. Deploy Cloud NAT and select only the allowed subnets.

Suggested answer: D

Explanation:

Using an organizational policy with the restrictCloudNATUsage constraint allows you to limit Cloud NAT usage to specific subnets, ensuring that only the necessary subnets can access the internet. This method aligns with Google-recommended practices for controlling Cloud NAT configurations across multiple VPCs and regions.
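
A sketch of the constraint being set at a folder, assuming placeholder folder, project, region, and subnet names; check the documentation for constraints/compute.restrictCloudNATUsage for the exact allowed-value format:

# policy.yaml: restrict Cloud NAT usage to the allowed subnets only.
cat > policy.yaml <<'EOF'
name: folders/123456789/policies/compute.restrictCloudNATUsage
spec:
  rules:
  - values:
      allowedValues:
      - projects/my-project/regions/us-central1/subnetworks/allowed-subnet
EOF

gcloud org-policies set-policy policy.yaml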
