Google Professional Cloud Security Engineer Practice Test - Questions Answers, Page 7

A customer wants to deploy a large number of 3-tier web applications on Compute Engine.

How should the customer ensure authenticated network separation between the different tiers of the application?

A. Run each tier in its own Project, and segregate using Project labels.
B. Run each tier with a different Service Account (SA), and use SA-based firewall rules.
C. Run each tier in its own subnet, and use subnet-based firewall rules.
D. Run each tier with its own VM tags, and use tag-based firewall rules.
Suggested answer: B

Explanation:

Google's VPC design best practices recommend that you "Isolate VMs using service accounts when possible": "even though it is possible to use tags for target filtering in this manner, we recommend that you use service accounts where possible. Target tags are not access-controlled and can be changed by someone with the instanceAdmin role while VMs are in service. Service accounts are access-controlled, meaning that a specific user must be explicitly authorized to use a service account. There can only be one service account per instance, whereas there can be multiple tags. Also, service accounts assigned to a VM can only be changed when the VM is stopped." https://cloud.google.com/solutions/best-practices-vpc-design#isolate-vms-service-accounts
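As a hedged illustration of answer B, the sketch below creates an ingress rule that only admits traffic from VMs running as a web-tier service account to VMs running as an app-tier service account. The project, network, service-account names, and port are assumptions, not part of the question.

```python
# Sketch: service-account-based firewall rule (all names are assumptions).
from google.cloud import compute_v1

firewall = compute_v1.Firewall(
    name="allow-web-to-app",
    network="global/networks/prod-vpc",
    direction="INGRESS",
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["8080"])],
    # Only packets from VMs running as the web-tier SA...
    source_service_accounts=["web-tier@my-project.iam.gserviceaccount.com"],
    # ...may reach VMs running as the app-tier SA.
    target_service_accounts=["app-tier@my-project.iam.gserviceaccount.com"],
)

operation = compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=firewall
)
operation.result()  # block until the rule is created
```

Because a VM's service account can only be changed while the VM is stopped, this separation is authenticated rather than merely declarative.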

A manager wants to start retaining security event logs for 2 years while minimizing costs. You write a filter to select the appropriate log entries.

Where should you export the logs?

A. BigQuery datasets
B. Cloud Storage buckets
C. Stackdriver Logging
D. Cloud Pub/Sub topics
Suggested answer: B
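
Explanation:

Cloud Storage is the low-cost destination for multi-year log retention: BigQuery and Pub/Sub are priced for querying and streaming rather than archival, and Stackdriver Logging retains entries for far less than 2 years by default. A minimal sketch of a log sink that exports the filtered entries to a bucket, assuming hypothetical project, bucket, and filter values:

```python
# Sketch: export filtered security logs to a Cloud Storage bucket.
from google.cloud import logging as gcp_logging

client = gcp_logging.Client(project="my-project")
sink = client.sink(
    "security-logs-sink",
    filter_='logName:"cloudaudit.googleapis.com"',  # assumed filter
    destination="storage.googleapis.com/security-logs-archive",
)
sink.create()
# The sink's writer identity must then be granted write access on the bucket.
```

A lifecycle rule moving objects to a colder storage class can further reduce cost over the 2-year window.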

For compliance reasons, an organization needs to ensure that in-scope PCI Kubernetes Pods reside on "in-scope" Nodes only. These Nodes can only contain the "in-scope" Pods.

How should the organization achieve this objective?

A. Add a nodeSelector field to the pod configuration to only use the Nodes labeled inscope: true.
B. Create a node pool with the label inscope: true and a Pod Security Policy that only allows the Pods to run on Nodes with that label.
C. Place a taint on the Nodes with the label inscope: true and effect NoSchedule and a toleration to match in the Pod configuration.
D. Run all in-scope Pods in the namespace "in-scope-pci".
Suggested answer: A

Explanation:

nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify. (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)

Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function. (https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
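To make the nodeSelector approach concrete, a minimal sketch using the Kubernetes Python client follows; the Pod name, image, and namespace are assumptions. Note that nodeSelector alone keeps in-scope Pods on labeled Nodes; a taint on those Nodes (with matching tolerations only on in-scope Pods) is what keeps other Pods off them.

```python
# Sketch: pin an in-scope PCI Pod to Nodes labeled inscope=true.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pci-worker"),
    spec=client.V1PodSpec(
        node_selector={"inscope": "true"},  # only schedule on labeled Nodes
        containers=[
            client.V1Container(name="app", image="gcr.io/my-project/pci-app:latest")
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```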

To bring your company's messaging app into compliance with FIPS 140-2, a decision was made to use GCP compute and network services. The messaging app architecture includes a Managed Instance Group (MIG) that controls a cluster of Compute Engine instances. The instances use Local SSDs for data caching and UDP for instance-to-instance communications. The app development team is willing to make any changes necessary to comply with the standard.

Which options should you recommend to meet the requirements?

A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module.
B. Set Disk Encryption on the Instance Template used by the MIG to customer-managed key and use BoringSSL for all data transit between instances.
C. Change the app instance-to-instance communications from UDP to TCP and enable BoringSSL on clients' TLS connections.
D. Set Disk Encryption on the Instance Template used by the MIG to Google-managed Key and use BoringSSL library on all instance-to-instance communications.
Suggested answer: A

Explanation:

https://cloud.google.com/security/compliance/fips-140-2-validated

Google Cloud Platform uses a FIPS 140-2 validated encryption module called BoringCrypto (certificate 3318) in our production environment. This means that both data in transit to the customer and between data centers, and data at rest are encrypted using FIPS 140-2 validated encryption. The module that achieved FIPS 140-2 validation is part of our BoringSSL library.

A customer has an analytics workload running on Compute Engine that should have limited internet access.

Your team created an egress firewall rule to deny (priority 1000) all traffic to the internet.

The Compute Engine instances now need to reach out to the public repository to get security updates. What should your team do?

A. Create an egress firewall rule to allow traffic to the CIDR range of the repository with a priority greater than 1000.
B. Create an egress firewall rule to allow traffic to the CIDR range of the repository with a priority less than 1000.
C. Create an egress firewall rule to allow traffic to the hostname of the repository with a priority greater than 1000.
D. Create an egress firewall rule to allow traffic to the hostname of the repository with a priority less than 1000.
Suggested answer: B

Explanation:

https://cloud.google.com/vpc/docs/firewalls#priority_order_for_firewall_rules
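
VPC firewall rule priorities run from 0 to 65535, and a numerically lower priority is evaluated first, so an allow rule with a priority less than 1000 takes precedence over the priority-1000 deny rule. Rules match CIDR ranges, not hostnames, which is why options C and D cannot work. A hedged sketch, with the project, network, and repository CIDR as assumptions:

```python
# Sketch: egress allow rule that outranks the deny-all rule at priority 1000.
from google.cloud import compute_v1

rule = compute_v1.Firewall(
    name="allow-repo-egress",
    network="global/networks/prod-vpc",
    direction="EGRESS",
    priority=900,  # lower number = higher precedence than the 1000 deny rule
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
    destination_ranges=["203.0.113.0/24"],  # assumed repository CIDR
)
operation = compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=rule
)
operation.result()
```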

You want data on Compute Engine disks to be encrypted at rest with keys managed by Cloud Key Management Service (KMS). Cloud Identity and Access Management (IAM) permissions to these keys must be managed in a grouped way because the permissions should be the same for all keys.

What should you do?

A. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the Key level.
B. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the KeyRing level.
C. Create a KeyRing per persistent disk, with each KeyRing containing a single Key. Manage the IAM permissions at the Key level.
D. Create a KeyRing per persistent disk, with each KeyRing containing a single Key. Manage the IAM permissions at the KeyRing level.
Suggested answer: B

Explanation:

https://cloud.netapp.com/blog/gcp-cvo-blg-how-to-use-google-cloud-encryption-with-a-persistent-disk
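
IAM policies set on a KeyRing are inherited by every CryptoKey inside it, so a single binding at the KeyRing level covers all disk keys at once. A minimal sketch, assuming a hypothetical project, location, and member:

```python
# Sketch: one KeyRing for all disk keys, IAM managed at the KeyRing level.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = client.common_location_path("my-project", "us-central1")

key_ring = client.create_key_ring(
    request={"parent": parent, "key_ring_id": "disk-keys", "key_ring": {}}
)

# Grant once on the KeyRing; every key created inside inherits the permission.
policy = client.get_iam_policy(request={"resource": key_ring.name})
policy.bindings.add(
    role="roles/cloudkms.cryptoKeyEncrypterDecrypter",
    members=["serviceAccount:service-123@compute-system.iam.gserviceaccount.com"],
)
client.set_iam_policy(request={"resource": key_ring.name, "policy": policy})
```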

A company is backing up application logs to a Cloud Storage bucket shared with both analysts and the administrator. Analysts should only have access to logs that do not contain any personally identifiable information (PII). Log files containing PII should be stored in another bucket that is only accessible by the administrator.

What should you do?

A. Use Cloud Pub/Sub and Cloud Functions to trigger a Data Loss Prevention scan every time a file is uploaded to the shared bucket. If the scan detects PII, have the function move the file into a Cloud Storage bucket only accessible by the administrator.
B. Upload the logs to both the shared bucket and the bucket only accessible by the administrator. Create a job trigger using the Cloud Data Loss Prevention API. Configure the trigger to delete any files from the shared bucket that contain PII.
C. On the bucket shared with both the analysts and the administrator, configure Object Lifecycle Management to delete objects that contain any PII.
D. On the bucket shared with both the analysts and the administrator, configure a Cloud Storage Trigger that is only triggered when PII data is uploaded. Use Cloud Functions to capture the trigger and delete such files.
Suggested answer: A

Explanation:

https://codelabs.developers.google.com/codelabs/cloud-storage-dlp-functions#0
https://www.youtube.com/watch?v=0TmO1f-Ox40
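
A hedged sketch of this pipeline as a background Cloud Function on the shared bucket's upload event; the project, bucket names, and infoTypes are assumptions, and a production version would use DLP inspection jobs for large files instead of inline inspect_content:

```python
# Sketch: scan each uploaded log with Cloud DLP; quarantine files with PII.
from google.cloud import dlp_v2, storage

dlp = dlp_v2.DlpServiceClient()
gcs = storage.Client()
RESTRICTED_BUCKET = "logs-pii-admin-only"  # assumed admin-only bucket

def scan_uploaded_log(event, context):
    bucket = gcs.bucket(event["bucket"])
    blob = bucket.blob(event["name"])

    response = dlp.inspect_content(
        request={
            "parent": "projects/my-project",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            },
            "item": {
                "byte_item": {
                    "type_": dlp_v2.ByteContentItem.BytesType.TEXT_UTF8,
                    "data": blob.download_as_bytes(),
                }
            },
        }
    )

    if response.result.findings:  # PII detected: move to the restricted bucket
        bucket.copy_blob(blob, gcs.bucket(RESTRICTED_BUCKET), event["name"])
        blob.delete()
```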

A customer terminates an engineer and needs to make sure the engineer's Google account is automatically deprovisioned.

What should the customer do?

A. Use the Cloud SDK with their directory service to remove their IAM permissions in Cloud Identity.
B. Use the Cloud SDK with their directory service to provision and deprovision users from Cloud Identity.
C. Configure Cloud Directory Sync with their directory service to provision and deprovision users from Cloud Identity.
D. Configure Cloud Directory Sync with their directory service to remove their IAM permissions in Cloud Identity.
Suggested answer: C

Explanation:

https://cloud.google.com/identity/solutions/automate-user-provisioning#cloud_identity_automated_provisioning

'Cloud Identity has a catalog of automated provisioning connectors, which act as a bridge between Cloud Identity and third-party cloud apps.'

An organization is evaluating the use of Google Cloud Platform (GCP) for certain IT workloads. A well-established directory service is used to manage user identities and lifecycle management. The organization must continue to use this directory service as the "source of truth" directory for identities.

Which solution meets the organization's requirements?

A. Google Cloud Directory Sync (GCDS)
B. Cloud Identity
C. Security Assertion Markup Language (SAML)
D. Pub/Sub
Suggested answer: A

Explanation:

With Google Cloud Directory Sync (GCDS), you can synchronize the data in your Google Account with your Microsoft Active Directory or LDAP server. GCDS doesn't migrate any content (such as email messages, calendar events, or files) to your Google Account. You use GCDS to synchronize your Google users, groups, and shared contacts to match the information in your LDAP server.

https://support.google.com/a/answer/106368?hl=en

Which international compliance standard provides guidelines for information security controls applicable to the provision and use of cloud services?

A. ISO 27001
B. ISO 27002
C. ISO 27017
D. ISO 27018
Suggested answer: C

Explanation:

https://cloud.google.com/security/compliance/iso-27017
