Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 19

Recently, your networking team enabled Cloud CDN for one of the external-facing services that is exposed through an external Application Load Balancer. The application team has already defined which content should be cached within the responses. Upon testing the load balancer, you did not observe any change in performance after the Cloud CDN enablement. You need to resolve the issue. What should you do?

A.

Configure the CACHE_MAX_STATIC caching mode on Cloud CDN to ensure Cloud CDN caches content depending on responses from the backends.

B.

Configure the USE_ORIGIN_HEADERS caching mode on Cloud CDN to ensure Cloud CDN caches content based on response headers from the backends.

C.

Configure the CACHE_ALL_STATIC caching mode on Cloud CDN to ensure Cloud CDN caches all static content as well as content defined by the backends.

D.

Configure the FORCE_CACHE_ALL caching mode on Cloud CDN to ensure all appropriate content is cached.

Suggested answer: B

Explanation:

For Cloud CDN to follow the caching directives that the application team has already defined, configure the USE_ORIGIN_HEADERS caching mode. In this mode, Cloud CDN caches content strictly according to the cache control headers returned by the backends, so the application-defined caching rules dictate what gets cached. This mode is appropriate when, as in this scenario, specific caching directives are already set by the application.
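
As a minimal sketch, the caching mode can be switched on the load balancer's backend service with gcloud (the backend service name `web-backend` is assumed):

```shell
# Hypothetical backend service name; replace with the service behind your
# external Application Load Balancer.
gcloud compute backend-services update web-backend \
    --global \
    --enable-cdn \
    --cache-mode=USE_ORIGIN_HEADERS
```

With this mode, responses are cached only when the backend emits cacheable directives such as `Cache-Control: public, max-age=3600`.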

Your organization is developing a landing zone architecture with the following requirements:

No communication between production and non-production environments.

Communication between applications within an environment may be necessary.

Network administrators should centrally manage all network resources, including subnets, routes, and firewall rules.

Each application should be billed separately.

Developers of an application within a project should have the autonomy to create their compute resources.

Up to 1000 applications are expected per environment.

What should you do?

A.

Create a design that has a Shared VPC for each project. Implement hierarchical firewall policies to apply micro-segmentation between VPCs.

B.

Create a design where each project has its own VPC. Ensure all VPCs are connected by a Network Connectivity Center hub that is centrally managed by the network team.

C.

Create a design that implements a single Shared VPC. Use VPC firewall rules with secure tags to enforce micro-segmentation between environments.

D.

Create a design that has one host project with a Shared VPC for the production environment, another host project with a Shared VPC for the non-production environment, and a service project that is associated with the corresponding host project for each initiative.

Suggested answer: D

Explanation:

Using separate Shared VPCs for production and non-production environments in different host projects (Option D) meets all requirements. This design allows network administrators to centrally manage resources within each Shared VPC while ensuring isolation between environments and separate billing. By associating service projects with each host project, developers can manage resources within their project without affecting the overall VPC network structure.
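
A sketch of the Shared VPC wiring, assuming hypothetical project IDs (`prod-host-project`, `nonprod-host-project`, `app1-prod-project`):

```shell
# Designate one host project per environment; the network team manages
# subnets, routes, and firewall rules in these projects.
gcloud compute shared-vpc enable prod-host-project
gcloud compute shared-vpc enable nonprod-host-project

# Attach an application's service project to its environment's host project.
# Developers create compute resources in the service project (billed there),
# while the network stays centrally managed in the host project.
gcloud compute shared-vpc associated-projects add app1-prod-project \
    --host-project=prod-host-project
```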

You need to enable Private Google Access for some subnets within your Virtual Private Cloud (VPC). Your security team set up the VPC to send all internet-bound traffic back to the on-premises data center for inspection before egressing to the internet, and is also implementing VPC Service Controls for API-level security control. You have already enabled the subnets for Private Google Access. What configuration changes should you make to enable Private Google Access while adhering to your security team's requirements?

A.

Create a private DNS zone with a CNAME record for *.googleapis.com to private.googleapis.com, with an A record pointing to Google's private API address range. Change the custom route that points the default route (0/0) to the default internet gateway as the next hop.

B.

Create a private DNS zone with a CNAME record for *.googleapis.com to private.googleapis.com, with an A record pointing to Google's private API address range. Create a custom route that points Google's private API address range to the default internet gateway as the next hop.

C.

Create a private DNS zone with a CNAME record for *.googleapis.com to restricted.googleapis.com, with an A record pointing to Google's restricted API address range. Create a custom route that points Google's restricted API address range to the default internet gateway as the next hop.

D.

Create a private DNS zone with a CNAME record for *.googleapis.com to restricted.googleapis.com, with an A record pointing to Google's restricted API address range. Change the custom route that points the default route (0/0) to the default internet gateway as the next hop.

Suggested answer: D

Explanation:

When VPC Service Controls are in use, direct API traffic to restricted.googleapis.com, which serves only the Google APIs that support VPC Service Controls and enforces service perimeters. The private DNS zone and routing configuration send all API traffic to the restricted endpoints, preserving Private Google Access while keeping other internet-bound traffic on the on-premises inspection path.
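
One way this configuration can be sketched with gcloud (zone, network, and route names are hypothetical; 199.36.153.4/30 is the documented restricted VIP range):

```shell
# Private zone that overrides public resolution of googleapis.com.
gcloud dns managed-zones create googleapis-zone \
    --description="Override googleapis.com with restricted VIPs" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=my-vpc

# CNAME all API hostnames to the restricted endpoint.
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis-zone --type=CNAME --ttl=300 \
    --rrdatas=restricted.googleapis.com.

# A record for the restricted VIP range.
gcloud dns record-sets create restricted.googleapis.com. \
    --zone=googleapis-zone --type=A --ttl=300 \
    --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

# Route only the restricted range through the default internet gateway;
# the existing 0/0 path to on-premises inspection is left in place.
gcloud compute routes create restricted-apis \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway
```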

You reviewed the user behavior for your main application, which uses an external global Application Load Balancer, and found that the backend servers were overloaded due to erratic spikes in client requests. You need to limit concurrent sessions and return an HTTP 429 'Too Many Requests' response back to the client while following Google-recommended practices. What should you do?

A.

Create a Cloud Armor security policy, and apply the predefined Open Worldwide Application Security Project (OWASP) rules to automatically implement the rate limit per client IP address.

B.

Configure the load balancer to accept only the defined amount of requests per client IP address, increase the backend servers to support more traffic, and redirect traffic to a different backend to burst traffic.

C.

Configure a VM with Linux, implement the rate limit through iptables, and use a firewall rule to send an HTTP 429 response to the client application.

D.

Create a Cloud Armor security policy, and associate the policy with the load balancer. Configure the security policy's settings as follows: action: throttle, conform-action: allow, exceed-action: deny-429.

Suggested answer: D

Explanation:

To control traffic spikes and enforce rate limits, configure Cloud Armor with throttle and deny-429 actions. This allows you to set rate limits per client IP and ensures that excess traffic receives an HTTP 429 response, effectively controlling overload situations per Google best practices.
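
A sketch of the throttle rule, with hypothetical policy and backend service names and an illustrative limit of 100 requests per client IP per 60 seconds:

```shell
# Create the security policy.
gcloud compute security-policies create throttle-policy \
    --description="Per-client-IP rate limiting"

# Throttle rule: conforming requests pass, excess gets HTTP 429.
gcloud compute security-policies rules create 1000 \
    --security-policy=throttle-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP

# Attach the policy to the backend service behind the load balancer.
gcloud compute backend-services update web-backend \
    --global \
    --security-policy=throttle-policy
```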

Your organization has a new security policy that requires you to monitor all egress traffic payloads from your virtual machines in the us-west2 region. You deployed an intrusion detection system (IDS) virtual appliance in the same region to meet the new policy. You now need to integrate the IDS into the environment to monitor all egress traffic payloads from us-west2. What should you do?

A.

Enable firewall logging and forward all filtered egress firewall logs to the IDS.

B.

Create an internal HTTP(S) load balancer for Packet Mirroring, and add a packet mirroring policy filter for egress traffic.

C.

Create an internal TCP/UDP load balancer for Packet Mirroring, and add a packet mirroring policy filter for egress traffic.

D.

Enable VPC Flow Logs. Create a sink in Cloud Logging to send filtered egress VPC Flow Logs to the IDS.

Suggested answer: C

Explanation:

Packet Mirroring with an internal TCP/UDP load balancer allows for comprehensive monitoring of egress traffic, which includes payloads. This is required for integration with an IDS for detailed inspection of traffic payloads, meeting the security policy needs for monitoring and detection.
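
A sketch of the mirroring policy, assuming the IDS appliances sit behind an internal passthrough (TCP/UDP) load balancer whose forwarding rule is `ids-fr` (all names hypothetical):

```shell
# Mirror egress packets from the us-west2 subnet to the IDS collector ILB.
gcloud compute packet-mirrorings create ids-mirroring \
    --region=us-west2 \
    --network=my-vpc \
    --collector-ilb=ids-fr \
    --mirrored-subnets=us-west2-subnet \
    --filter-direction=egress
```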

You are configuring the final elements of a migration effort where resources have been moved from on-premises to Google Cloud. While reviewing the deployed architecture, you noticed that DNS resolution is failing when queries are being sent to the on-premises environment. You log in to a Compute Engine instance, try to resolve an on-premises hostname, and the query fails. DNS queries are not arriving at the on-premises DNS server. You need to use managed services to reconfigure Cloud DNS to resolve the DNS error. What should you do?

A.

Validate that the Compute Engine instances are using the Metadata Service IP address as their resolver. Configure an outbound forwarding zone for the on-premises domain pointing to the on-premises DNS server. Configure Cloud Router to advertise the Cloud DNS proxy range to the on-premises network.

B.

Validate that there is network connectivity to the on-premises environment and that the Compute Engine instances can reach other on-premises resources. If errors persist, remove the VPC Network Peerings and recreate the peerings after validating the routes.

C.

Review the existing Cloud DNS zones, and validate that there is a route in the VPC directing traffic destined to the IP address of the DNS servers. Recreate the existing DNS forwarding zones to forward all queries to the on-premises DNS servers.

D.

Ensure that the operating systems of the Compute Engine instances are configured to send DNS queries to the on-premises DNS servers directly.

Suggested answer: A

Explanation:

To resolve DNS resolution issues for on-premises domains from Google Cloud, you should use Cloud DNS outbound forwarding zones. This setup forwards DNS requests for specific domains to on-premises DNS servers. Cloud Router is needed to advertise the range for the DNS proxy service back to the on-premises environment, ensuring that DNS queries from Compute Engine instances reach the on-premises DNS servers.
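
The two pieces can be sketched as follows, with hypothetical names (`corp.example.com` as the on-premises domain, `10.10.0.53` as its DNS server, `my-router` as the hub Cloud Router); 35.199.192.0/19 is the documented Cloud DNS forwarding source range:

```shell
# Outbound forwarding zone: queries for the on-prem domain are forwarded
# to the on-prem DNS server.
gcloud dns managed-zones create onprem-fwd \
    --description="Forward on-prem domain to on-prem DNS" \
    --dns-name=corp.example.com. \
    --visibility=private \
    --networks=my-vpc \
    --forwarding-targets=10.10.0.53

# Advertise the Cloud DNS forwarding range to on-premises so replies
# can return over the hybrid link.
gcloud compute routers update my-router \
    --region=us-west2 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=35.199.192.0/19
```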

Your organization wants to seamlessly migrate a global external web application from Compute Engine to GKE. You need to deploy a simple, cloud-first solution that exposes both applications and sends 10% of the requests to the new application. What should you do?

A.

Configure a global external Application Load Balancer with a Service Extension that points to an application running in a VM, which controls which requests go to each application.

B.

Configure a global external Application Load Balancer with weighted traffic splitting.

C.

Configure two separate global external Application Load Balancers, and use Cloud DNS geolocation routing policies.

D.

Configure a global external Application Load Balancer with weighted request mirroring.

Suggested answer: B

Explanation:

Weighted traffic splitting allows you to gradually route a percentage of traffic to the new GKE application while still serving the majority of requests through the Compute Engine instance. This gradual transition minimizes risks and ensures seamless traffic distribution during migration.
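
A minimal sketch of the 90/10 split via a URL map import (project and backend service names are assumed; both backend services must already exist on the load balancer):

```shell
# Hypothetical project and backend service names.
cat > web-map.yaml <<'EOF'
name: web-map
defaultRouteAction:
  weightedBackendServices:
  - backendService: projects/my-project/global/backendServices/gce-backend
    weight: 90
  - backendService: projects/my-project/global/backendServices/gke-backend
    weight: 10
EOF

gcloud compute url-maps import web-map --source=web-map.yaml --global
```

Raising the GKE weight over time completes the migration without changing DNS or client configuration.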

Your organization has distributed geographic applications with significant data volumes. You need to create a design that exposes the HTTPS workloads globally and keeps traffic costs to a minimum. What should you do?

A.

Deploy a regional external Application Load Balancer with Standard Network Service Tier.

B.

Deploy a regional external Application Load Balancer with Premium Network Service Tier.

C.

Deploy a global external proxy Network Load Balancer with Standard Network Service Tier.

D.

Deploy a global external Application Load Balancer with Premium Network Service Tier.

Suggested answer: D

Explanation:

A global external Application Load Balancer with the Premium Network Service Tier serves HTTPS workloads worldwide from a single anycast IP address, terminating client connections at the Google edge location closest to each user and carrying traffic over Google's backbone. A single global configuration also avoids the cost and operational overhead of maintaining multiple regional load balancers while delivering reliable performance for a globally distributed user base.
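
As a sketch, the tier is set on the global forwarding rule (proxy and rule names are hypothetical, and the target HTTPS proxy is assumed to already exist):

```shell
# Global forwarding rule in the Premium Tier: clients enter Google's
# backbone at the nearest edge location.
gcloud compute forwarding-rules create https-rule \
    --global \
    --network-tier=PREMIUM \
    --target-https-proxy=web-proxy \
    --ports=443
```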

Your organization has a hub and spoke architecture with VPC Network Peering, and hybrid connectivity is centralized at the hub. The Cloud Router in the hub VPC is advertising subnet routes, but the on-premises router does not appear to be receiving any subnet routes from the VPC spokes. You need to resolve this issue. What should you do?

A.

Create custom learned routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

B.

Create custom routes at the Cloud Router in the spokes to advertise the subnets of the VPC spokes.

C.

Create a BGP route policy at the Cloud Router, and ensure the subnets of the VPC spokes are being announced towards the on-premises environment.

D.

Create custom routes at the Cloud Router in the hub to advertise the subnets of the VPC spokes.

Suggested answer: A

Explanation:

By default, a Cloud Router advertises only the subnet routes of the VPC network it resides in. Subnets reached over VPC Network Peering from the spokes are not re-advertised automatically, so the spoke subnet ranges must be added to the route advertisements of the hub's Cloud Router. This centralizes route configuration at the hub and ensures all spoke subnet routes are propagated to the on-premises network.
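
A sketch of the advertisement change at the hub, with a hypothetical router name and example spoke CIDRs:

```shell
# Advertise the hub's own subnets plus the spoke subnet ranges that were
# learned over VPC Network Peering (example CIDRs).
gcloud compute routers update hub-router \
    --region=us-west2 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.1.0.0/16,10.2.0.0/16
```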



Your organization has an on-premises data center. You need to provide connectivity from the on-premises data center to Google Cloud. Bandwidth must be at least 1 Gbps, and the traffic must not traverse the internet. What should you do?

A.

Configure HA VPN by using high availability gateways and tunnels.

B.

Configure Dedicated Interconnect by creating a VLAN attachment, activate the connection, and submit the pairing key to your service provider.

C.

Configure Cross-Cloud Interconnect by creating a VLAN attachment, activate the connection, and then submit the pairing key to your service provider.

D.

Configure Partner Interconnect by creating a VLAN attachment, submit the pairing key to your service provider, and activate the connection.

Suggested answer: D

Explanation:

For private connectivity of at least 1 Gbps that does not traverse the public internet, Partner Interconnect is the suitable choice when you do not need the 10 Gbps minimum circuit size of Dedicated Interconnect. With Partner Interconnect, you create a VLAN attachment, submit the generated pairing key to a supported service provider, and the provider provisions the connection between your on-premises network and Google Cloud. This solution supports connections as low as 50 Mbps and up to 10 Gbps.
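
The ordering in option D can be sketched with gcloud (attachment and router names are hypothetical; the pairing key from the describe output is what you hand to the provider):

```shell
# Create the partner VLAN attachment on an existing Cloud Router.
gcloud compute interconnects attachments partner create onprem-attach \
    --region=us-west2 \
    --router=cloud-router \
    --edge-availability-domain=availability-domain-1

# Retrieve the pairing key and submit it to the service provider.
gcloud compute interconnects attachments describe onprem-attach \
    --region=us-west2 --format="value(pairingKey)"

# After the provider completes provisioning, activate the attachment.
gcloud compute interconnects attachments partner update onprem-attach \
    --region=us-west2 --admin-enabled
```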

Total 215 questions