Google Professional Cloud Network Engineer Practice Test - Questions Answers, Page 17

You are designing an IP address scheme for new private Google Kubernetes Engine (GKE) clusters. Due to IP address exhaustion of the RFC 1918 address space in your enterprise, you plan to use privately used public IP space for the new clusters. You want to follow Google-recommended practices. What should you do after designing your IP scheme?

A. Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the pods across multiple private GKE clusters.

B. Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the services across multiple private GKE clusters.

C. Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --enable-ip-alias and --enable-private-nodes.

D. Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes.

Suggested answer: D

Explanation:

The correct answer is D. Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes.

This answer is based on the following facts:

Privately used public IP (PUPI) addresses are any public IP addresses not owned by Google that a customer can use privately on Google Cloud. You can use PUPI addresses for GKE pods and services in private clusters to mitigate address exhaustion.

A private GKE cluster is a cluster that has no public IP addresses on the nodes. You can use private clusters to isolate your workloads from the public internet and enhance security.

The --disable-default-snat option disables source network address translation (SNAT) for the cluster. This option allows you to use PUPI addresses without conflicting with other public IP addresses on the internet.

The --enable-ip-alias option enables alias IP ranges for the cluster. This option allows you to use separate subnet ranges for nodes, pods, and services, and to specify the size of those ranges.

The --enable-private-nodes option enables private nodes for the cluster. This option ensures that the nodes have no public IP addresses and can only communicate with other Google Cloud resources in the same VPC network or peered networks.

The other options are not correct because:

Option A is not suitable. Creating RFC 1918 primary and secondary subnet IP ranges for the clusters does not solve the problem of address exhaustion. Re-using the secondary address range for pods across multiple private GKE clusters can cause IP conflicts and routing issues.

Option B is also not suitable. Creating RFC 1918 primary and secondary subnet IP ranges for the clusters does not solve the problem of address exhaustion. Re-using the secondary address range for services across multiple private GKE clusters can cause IP conflicts and routing issues.

Option C is not feasible. Creating privately used public IP primary and secondary subnet ranges for the clusters is a valid step, but creating a private GKE cluster with only --enable-ip-alias and --enable-private-nodes options is not enough. You also need to disable default SNAT to avoid IP conflicts with other public IP addresses on the internet.
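A hedged sketch of option D with gcloud. The VPC, subnet, and secondary range names below are hypothetical, and the PUPI subnet with its secondary ranges is assumed to already exist; this is illustrative, not a reference implementation:

```shell
# Sketch (hypothetical names/ranges): private GKE cluster using PUPI
# secondary ranges, with default SNAT disabled so Pod IPs are preserved
# when traffic leaves the cluster's VPC network.
gcloud container clusters create pupi-cluster \
    --region us-central1 \
    --network enterprise-vpc \
    --subnetwork pupi-subnet \
    --enable-ip-alias \
    --cluster-secondary-range-name pods-pupi \
    --services-secondary-range-name services-pupi \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --disable-default-snat
```

The three flags from the suggested answer appear together with the secondary-range flags that attach the PUPI ranges to the cluster.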


Your company runs an enterprise platform on-premises using virtual machines (VMs). Your internet customers have created tens of thousands of DNS domains pointing to the public IP addresses allocated to the VMs. Typically, your customers hard-code your IP addresses in their DNS records. You are now planning to migrate the platform to Compute Engine, and you want to use Bring your own IP (BYOIP). You want to minimize disruption to the platform. What should you do?

A. Create a VPC and request static external IP addresses from Google Cloud. Assign the IP addresses to the Compute Engine instances. Notify your customers of the new IP addresses so they can update their DNS records.

B. Verify ownership of your IP addresses. After the verification, Google Cloud advertises and provisions the IP prefix for you. Assign the IP addresses to the Compute Engine instances.

C. Create a VPC with the same IP address range as your on-premises network. Assign the IP addresses to the Compute Engine instances.

D. Verify ownership of your IP addresses. Use live migration to import the prefix. Assign the IP addresses to Compute Engine instances.

Suggested answer: D

Explanation:

The correct answer is D because it allows you to use your own public IP addresses in Google Cloud without disrupting the platform or requiring your customers to update their DNS records. Option A is incorrect because it involves changing the IP addresses and notifying the customers, which can cause disruption and errors. Option B is incorrect because it does not use live migration, which is a feature that lets you control when Google starts advertising routes for your prefix. Option C is incorrect because it does not involve bringing your own IP addresses, but rather using Google-provided IP addresses.

Bring your own IP addresses

Professional Cloud Network Engineer Exam Guide
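The BYOIP flow in option D can be sketched with gcloud. Resource names are hypothetical, and 203.0.113.0/24 (a documentation prefix) stands in for your own address block:

```shell
# Sketch: advertise your own verified prefix from Google Cloud.
# Ownership is verified via a DNS record pointing at the verification IP.
gcloud compute public-advertised-prefixes create platform-prefix \
    --ip-cidr-range 203.0.113.0/24 \
    --dns-verification-ip 203.0.113.1

# Delegate the prefix so its addresses can be assigned to resources.
gcloud compute public-delegated-prefixes create platform-use \
    --public-advertised-prefix platform-prefix \
    --range 203.0.113.0/24 \
    --global
```

With live migration, you control when Google begins advertising the prefix, which is what lets the cutover happen without customers changing their DNS records.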


You are planning to use Terraform to deploy the Google Cloud infrastructure for your company. The design must meet the following requirements:

* Each Google Cloud project must represent an internal project that your team will work on.

* After an internal project is finished, the infrastructure must be deleted.

* Each internal project must have its own Google Cloud project owner to manage the Google Cloud resources.

* You have 10-100 projects deployed at a time.

While you are writing the Terraform code, you need to ensure that the deployment is simple and the code is reusable with centralized management. What should you do?

A. Create a single project and additional VPCs for each internal project.

B. Create a single Shared VPC and attach each Google Cloud project as a service project.

C. Create a single project and a single VPC for each internal project.

D. Create a Shared VPC and service project for each internal project.

Suggested answer: D

Explanation:

The correct answer is D because it meets the following requirements:

Each internal project has its own Google Cloud project, which can be easily created and deleted by Terraform using the google_project resource.

Each internal project has its own Google Cloud project owner, which can be assigned by Terraform using the google_project_iam_member resource.

The deployment is simple and the code is reusable with centralized management, because a Shared VPC allows you to connect multiple service projects to a single host project that contains the network resources. This way, you can use Terraform modules to create and manage the network resources in the host project, and then reference them in the service projects.

Option A is incorrect because it does not create separate Google Cloud projects for each internal project, which makes it harder to delete the infrastructure and assign project owners. Option B is incorrect because it does not create a separate Google Cloud project, with its own owner, for each internal project. Option C is incorrect because it does not use a Shared VPC, which means that each internal project has to create and manage its own network resources, which increases complexity and reduces reusability.

google_project - Terraform Registry

Managing infrastructure as code with Terraform, Cloud Build, and GitOps | Google Cloud

Automating your automation by Creating Google Cloud Projects Automatically
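A minimal Terraform sketch of option D, using the resources named in the explanation. Variable names and the Shared VPC host project are hypothetical assumptions; each internal project would instantiate a module like this, and `terraform destroy` on that instance removes its infrastructure:

```hcl
# Sketch (hypothetical variables): one module instantiation per internal
# project. The host project that owns the Shared VPC is managed centrally.

resource "google_project" "internal" {
  name            = var.project_name
  project_id      = var.project_id
  billing_account = var.billing_account
}

# Give the internal project its own owner.
resource "google_project_iam_member" "owner" {
  project = google_project.internal.project_id
  role    = "roles/owner"
  member  = "user:${var.owner_email}"
}

# Attach the internal project to the Shared VPC host project
# as a service project.
resource "google_compute_shared_vpc_service_project" "attach" {
  host_project    = var.shared_vpc_host_project
  service_project = google_project.internal.project_id
}
```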

Your team is developing an application that will be used by consumers all over the world. Currently, the application sits behind a global external application load balancer. You need to protect the application from potential application-level attacks. What should you do?

A. Enable Cloud CDN on the backend service.

B. Create multiple firewall deny rules to block malicious users, and apply them to the global external application load balancer.

C. Create a Google Cloud Armor security policy with web application firewall rules, and apply the security policy to the backend service.

D. Create a VPC Service Controls perimeter with the global external application load balancer as the protected service, and apply it to the backend service.

Suggested answer: C

Explanation:

The correct answer is C because it meets the requirement of protecting the application from potential application-level attacks. Google Cloud Armor security policies are sets of rules that match on attributes from Layer 3 to Layer 7 to protect externally facing applications. Web application firewall (WAF) rules are predefined rules that detect and mitigate common web attacks such as cross-site scripting (XSS), SQL injection, remote file inclusion, and more. By applying a Google Cloud Armor security policy with WAF rules to the backend service, you can filter out malicious requests before they reach your application.

Option A is incorrect because Cloud CDN is a content delivery network that caches static content at the edge of Google's network, but it does not provide any protection against application-level attacks. Option B is incorrect because firewall rules are applied at the VPC network level, not at the load balancer level. Firewall rules also only match on Layer 3 and 4 attributes, not on the Layer 7 attributes that are relevant for application-level attacks. Option D is incorrect because a VPC Service Controls perimeter helps you secure your data from unauthorized access by users outside your organization, but it does not protect your application from external attacks.

Security policy overview | Google Cloud Armor

Web application firewall (WAF) rules | Google Cloud Armor

Cloud CDN overview | Google Cloud

Using firewall rules | VPC

VPC Service Controls overview | Google Cloud
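A hedged gcloud sketch of option C. The policy and backend-service names are hypothetical; 'sqli-stable' is one of the preconfigured WAF rule sets:

```shell
# Sketch: create a Cloud Armor security policy with a preconfigured
# WAF rule and attach it to the load balancer's backend service.
gcloud compute security-policies create web-waf-policy \
    --description "WAF rules for the global external load balancer"

# Deny requests that match the preconfigured SQL-injection signatures.
gcloud compute security-policies rules create 1000 \
    --security-policy web-waf-policy \
    --expression "evaluatePreconfiguredExpr('sqli-stable')" \
    --action deny-403

# Apply the policy to the backend service behind the load balancer.
gcloud compute backend-services update web-backend \
    --security-policy web-waf-policy --global
```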

You are a network administrator at your company planning a migration to Google Cloud, and you need to finish the migration as quickly as possible. To ease the transition, you decided to use the same architecture as your on-premises network: a hub-and-spoke model. Your on-premises architecture consists of over 50 spokes. Each spoke does not have connectivity to the other spokes, and all traffic is sent through the hub for security reasons. You need to ensure that the Google Cloud architecture matches your on-premises architecture. You want to implement a solution that minimizes management overhead and cost, and uses default networking quotas and limits. What should you do?

A. Connect all the spokes to the hub with Cloud VPN.

B. Connect all the spokes to the hub with VPC Network Peering.

C. Connect all the spokes to the hub with Cloud VPN. Use a third-party network appliance as a default gateway to prevent connectivity between the spokes.

D. Connect all the spokes to the hub with VPC Network Peering. Use a third-party network appliance as a default gateway to prevent connectivity between the spokes.

Suggested answer: D

Explanation:

The correct answer is D because it meets the following requirements:

It matches the hub-and-spoke model of the on-premises network, where each spoke is a separate VPC network that is connected to a central hub VPC network.

It minimizes management overhead and cost, because VPC Network Peering is a simple and low-cost way to connect VPC networks without using any external IP addresses or VPN gateways.

It uses default networking quotas and limits, because VPC Network Peering does not consume any quota or limit for VPN tunnels, external IP addresses, or forwarding rules.

It prevents connectivity between the spokes, because VPC Network Peering is non-transitive by default, meaning that a spoke can only communicate with the hub, not with other spokes. To enforce this restriction, a third-party network appliance can be used as a default gateway in each spoke VPC network, which can filter out any traffic destined for other spokes.

Option A is incorrect because it does not minimize cost, as Cloud VPN charges for egress traffic and requires external IP addresses for the VPN gateways. Option B is incorrect because it does not prevent connectivity between the spokes, as VPC Network Peering allows direct communication between peered VPC networks by default. Option C is incorrect because it does not minimize cost or use default quotas and limits, for the same reasons as option A.

VPC Network Peering overview | VPC

Quotas and limits | VPC

Hub-and-spoke network architecture | Cloud Architecture Center

Cloud VPN overview | Google Cloud
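The hub-and-spoke peering described above can be sketched with gcloud. Network names are hypothetical, and both sides of each peering must be created; because peering is non-transitive, spokes peered only with the hub cannot reach each other:

```shell
# Sketch: peer one spoke VPC with the hub VPC (repeat per spoke).
gcloud compute networks peerings create hub-to-spoke1 \
    --network hub-vpc --peer-network spoke1-vpc

gcloud compute networks peerings create spoke1-to-hub \
    --network spoke1-vpc --peer-network hub-vpc
```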

Your company is planning a migration to Google Kubernetes Engine. Your application team informed you that they require a minimum of 60 Pods per node and a maximum of 100 Pods per node. Which Pod per node CIDR range should you use?

A. /24

B. /25

C. /26

D. /28

Suggested answer: B

Explanation:

To determine the Pod per node CIDR range, you need to calculate how many IP addresses are required for each node, and then choose the smallest CIDR range that can accommodate that number. A CIDR range of /n means that there are 2^(32-n) IP addresses available in that range. For example, a /24 range has 2^(32-24) = 256 IP addresses.

According to the question, the application team requires a minimum of 60 Pods per node and a maximum of 100 Pods per node. Therefore, you need to choose a CIDR range that can provide at least 100 IP addresses per node, but not more than necessary. A /25 range has 2^(32-25) = 128 IP addresses, which is enough for 100 Pods per node. A /26 range has 2^(32-26) = 64 IP addresses, which is not enough for 100 Pods per node. A /24 range has 256 IP addresses, which is more than needed and wastes IP address space. A /28 range has 2^(32-28) = 16 IP addresses, which is far too small for any node.

Therefore, the best option is B: /25. This is also consistent with the Google Kubernetes Engine documentation, which states that each node is allocated a /24 range of IP addresses for Pods by default, but the maximum number of Pods per node is 110. This means that there are approximately twice as many available IP addresses as possible Pods, which is similar to the ratio of 128 to 100 in the /25 range.

1: Configure maximum Pods per node | Google Kubernetes Engine (GKE) | Google Cloud
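The 2^(32-n) arithmetic above can be checked directly in a shell:

```shell
# Address count for each candidate prefix length: 2^(32-n).
for n in 24 25 26 28; do
  echo "/$n -> $((2 ** (32 - n))) addresses"
done
# prints 256, 128, 64, and 16 addresses respectively
```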

You are designing an IP address scheme for new private Google Kubernetes Engine (GKE) clusters. Due to IP address exhaustion of the RFC 1918 address space in your enterprise, you plan to use privately used public IP space for the new clusters. You want to follow Google-recommended practices. What should you do after designing your IP scheme?

A. Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the pods across multiple private GKE clusters.

B. Create the minimum usable RFC 1918 primary and secondary subnet IP ranges for the clusters. Re-use the secondary address range for the services across multiple private GKE clusters.

C. Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected and

D. Create privately used public IP primary and secondary subnet ranges for the clusters. Create a private GKE cluster with the following options selected: --disable-default-snat, --enable-ip-alias, and --enable-private-nodes.

Suggested answer: D

Explanation:

This answer follows the Google-recommended practices for using privately used public IP (PUPI) addresses for GKE Pod address blocks. The benefits of this approach are:

It allows you to use any public IP addresses that are not owned by Google or your organization for your Pods, which can help mitigate address exhaustion in your enterprise.

It prevents any external traffic from reaching your Pods, as Google Cloud does not route PUPI addresses to the internet or to other VPC networks by default.

It enables you to use VPC Network Peering to connect your GKE cluster to other VPC networks that use different PUPI addresses, as long as you enable the export and import of custom routes for the peering connection.

It preserves the fully integrated network model of GKE, where Pods can communicate with nodes and other resources in the same VPC network without NAT.

The options that you need to select when creating a private GKE cluster with PUPI addresses are:

--disable-default-snat: This option disables source NAT for outbound traffic from Pods to destinations outside the cluster's VPC network. This is necessary to prevent Pods from using RFC 1918 addresses as their source IP addresses, which could cause conflicts with other networks that use the same address space.

--enable-ip-alias: This option enables alias IP ranges for Pods and Services, which allows you to use separate subnet ranges for them. This is required to use PUPI addresses for Pods.

--enable-private-nodes: This option creates a private cluster, where nodes do not have external IP addresses and can only communicate with the control plane through a private endpoint. This enhances the security and privacy of your cluster.

Option A is incorrect because it does not use PUPI addresses for Pods, but rather RFC 1918 addresses. This does not solve the problem of address exhaustion in your enterprise. Option B is incorrect because it reuses the secondary address range for Services across multiple private GKE clusters, which could cause IP conflicts and routing issues. Option C is incorrect because it does not specify the options that are needed to create a private GKE cluster with PUPI addresses.

1: Configuring privately used public IPs for GKE | Kubernetes Engine | Google Cloud
2: Using Cloud NAT with GKE | Kubernetes Engine | Google Cloud
3: Private clusters | Kubernetes Engine | Google Cloud
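When PUPI Pod ranges must be reachable from a peered VPC, the route exchange mentioned above has to be enabled explicitly; gcloud exposes flags for subnet routes that carry public IPs. A sketch with hypothetical network and peering names, assuming the peering already exists:

```shell
# Sketch: exchange PUPI subnet routes across an existing VPC Network
# Peering (subnet routes with public IPs are not exchanged by default).
gcloud compute networks peerings update cluster-to-shared \
    --network cluster-vpc \
    --export-subnet-routes-with-public-ip \
    --import-subnet-routes-with-public-ip
```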

You have the networking configuration shown in the diagram. A pair of redundant Dedicated Interconnect connections (int-lga1 and int-lga2) terminate on the same Cloud Router. The Interconnect connections terminate on two separate on-premises routers. You are advertising the same prefixes from the Border Gateway Protocol (BGP) sessions associated with the Dedicated Interconnect connections. You need to configure one connection as active for both ingress and egress traffic. If the active Interconnect connection fails, you want the passive Interconnect connection to automatically begin routing all traffic. Which two actions should you take to meet this requirement? (Choose two.)

A. Configure the advertised route priority > 10,200 on the active Interconnect connection.

B. Advertise a lower MED on the passive Interconnect connection from the on-premises router.

C. Configure the advertised route priority as 200 for the BGP session associated with the active Interconnect connection.

D. Configure the advertised route priority as 200 for the BGP session associated with the passive Interconnect connection.

E. Advertise a lower MED on the active Interconnect connection from the on-premises router.
Suggested answer: C, E

Explanation:

This answer meets the requirement of configuring one connection as Active for both ingress and egress traffic, and enabling automatic failover to the passive connection in case of failure. The reason is:

The advertised route priority is the value that Cloud Router uses as the MED when advertising routes to your on-premises router. The lower the value, the more preferred the route, and the default base priority is 100. Configuring a priority of 200 on one of the two BGP sessions makes the routes learned over that session less preferred, so the on-premises router deterministically favors a single Interconnect connection for ingress traffic instead of load-balancing across both.

The MED (Multi-Exit Discriminator) is a value that your on-premises router uses to indicate its preference for receiving traffic from Cloud Router; again, the lower the value, the higher the preference. By advertising a lower MED on the active connection from your on-premises router, you ensure that Cloud Router prefers sending traffic over the active connection for egress traffic.

If the active connection fails, Cloud Router stops receiving routes from it and starts using the routes from the remaining connection for egress traffic. Similarly, your on-premises router falls back to the routes learned over the remaining BGP session for ingress traffic. This achieves automatic failover without any manual intervention.

Option A is incorrect because setting the advertised route priority above 10,200 on the active connection would deprioritize it globally in your VPC network. Option B is incorrect because advertising a lower MED on the passive connection would make Cloud Router prefer sending traffic to it over the active one, which is the opposite of what you want. Option D is incorrect because it applies the non-default priority to the passive connection's session rather than the active one.

Update the base route priority | Cloud Router | Google Cloud

Configuring BGP sessions | Cloud Router | Google Cloud
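The priority change discussed above is configured on the Cloud Router's BGP session; a sketch with hypothetical router, peer, and region names:

```shell
# Sketch: set the advertised route priority (used as the BGP MED) for the
# BGP session of one Interconnect connection; lower values are preferred
# by the on-premises router.
gcloud compute routers update-bgp-peer cr-interconnect \
    --peer-name bgp-int-lga2 \
    --advertised-route-priority 200 \
    --region us-east4
```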

Your company's logo is published as an image file across multiple websites that are hosted by your company. You have implemented Cloud CDN; however, you want to improve the cache hit ratio associated with this image file. What should you do?

A. Configure custom cache keys for the backend service that holds the image file, and clear the Host and Protocol checkboxes.

B. Configure Cloud Storage as a custom origin backend to host the image file, and select multi-region as the location type.

C. Configure versioned URLs for each domain to serve users the image file before the cache entry expires.

D. Configure the default time to live (TTL) as 0 for the image file.

Suggested answer: A

Explanation:

This answer meets the requirement of improving the performance of the cache hit ratio associated with the image file. The reason is:

Custom cache keys allow you to control which parts of the request URL are used to build the cache key. The cache key is a unique identifier that Cloud CDN uses to store and retrieve cached content.

By default, Cloud CDN uses the complete request URL, including the protocol (http or https) and the host (the domain name), to build the cache key. This means that if the same image file is requested from different domains or protocols, Cloud CDN will cache multiple copies of it, which reduces the cache hit ratio.

By clearing the Host and Protocol checkboxes, you can tell Cloud CDN to ignore these parts of the request URL when building the cache key. This way, Cloud CDN will cache only one copy of the image file, regardless of which domain or protocol it is requested from, which improves the cache hit ratio.

Option B is incorrect because configuring Cloud Storage as a custom origin backend does not affect the cache hit ratio. It only affects how Cloud CDN retrieves the content from the origin if it is not cached. Option C is incorrect because configuring versioned URLs for each domain does not improve the cache hit ratio. It actually worsens it, because it creates more variations of the request URL that Cloud CDN has to cache separately. Option D is incorrect because configuring the default TTL as 0 for the image file means that Cloud CDN will not cache it at all, which defeats the purpose of using Cloud CDN.

Custom cache keys | Cloud CDN | Google Cloud
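The console checkboxes described above correspond to cache-key flags on the backend service in gcloud; a sketch with a hypothetical backend-service name:

```shell
# Sketch: exclude host and protocol from the Cloud CDN cache key so a
# single cached copy of the logo serves all domains and both protocols.
gcloud compute backend-services update logo-backend \
    --global \
    --no-cache-key-include-host \
    --no-cache-key-include-protocol
```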

You are responsible for designing a new connectivity solution between your organization's on-premises data center and your Google Cloud Virtual Private Cloud (VPC) network. Currently, there is no end-to-end connectivity. You must ensure a service level agreement (SLA) of 99.99% availability. What should you do?

A. Use one Dedicated Interconnect connection in a single metropolitan area. Configure one Cloud Router and enable global routing in the VPC.

B. Use a Direct Peering connection between your on-premises data center and Google Cloud. Configure Classic VPN with two tunnels and one Cloud Router.

C. Use two Dedicated Interconnect connections in a single metropolitan area. Configure one Cloud Router and enable global routing in the VPC.

D. Use HA VPN. Configure one tunnel from each interface of the VPN gateway to connect to the corresponding interfaces on the peer gateway on-premises. Configure one Cloud Router and enable global routing in the VPC.

Suggested answer: D

Explanation:

For Dedicated Interconnects: At least four Dedicated Interconnect connections, two connections in one metropolitan area (metro) and two connections in another metro. Connections that are in the same metro must be placed in different edge availability domains (metro availability zones) to achieve 99.99% availability.

For HA VPN:

* HA VPN to peer VPN gateways: connect an HA VPN gateway to one or two separate peer VPN devices (99.99% availability).

* HA VPN between two Google Cloud networks: connect two Google Cloud VPC networks in a single region by using an HA VPN gateway in each network (99.99% availability).
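Option D's topology can be sketched with gcloud. All names, the region, and the external VPN gateway resource for the on-premises device are hypothetical assumptions; an HA VPN gateway has two interfaces, and one tunnel per interface is what qualifies for the 99.99% SLA:

```shell
# Sketch: create the HA VPN gateway, then one tunnel per interface
# (the tunnel for interface 1 is analogous and omitted for brevity).
gcloud compute vpn-gateways create ha-vpn-gw \
    --network prod-vpc --region us-central1

gcloud compute vpn-tunnels create tunnel-0 \
    --vpn-gateway ha-vpn-gw --interface 0 \
    --peer-external-gateway on-prem-gw \
    --peer-external-gateway-interface 0 \
    --router cr-central1 --ike-version 2 \
    --shared-secret "example-secret" --region us-central1
```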

