Google Professional Cloud Security Engineer Practice Test - Questions Answers, Page 23


Your company conducts clinical trials and needs to analyze the results of a recent study that are stored in BigQuery. The interval when the medicine was taken contains start and stop dates. The interval data is critical to the analysis, but specific dates may identify a particular batch and introduce bias. You need to obfuscate the start and end dates for each row and preserve the interval data.

What should you do?

A.
Use bucketing to shift values to a predetermined date based on the initial value.
B.
Extract the date using TimePartConfig from each date field and append a random month and year.
C.
Use date shifting with the context set to the unique ID of the test subject.
D.
Use the FFX mode of format-preserving encryption (FPE) and maintain data consistency.
Suggested answer: C

Explanation:

'Date shifting techniques randomly shift a set of dates but preserve the sequence and duration of a period of time. Shifting dates is usually done in context to an individual or an entity. That is, each individual's dates are shifted by an amount of time that is unique to that individual.'
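The date-shifting approach in the explanation can be sketched as a Cloud DLP de-identify configuration. This is a minimal illustration following the DLP API's record-transformation shape; the field names (start_date, end_date, subject_id) and the shift bounds are assumptions, not part of the question.

```python
# Sketch of a Cloud DLP de-identify request body using date shifting.
# Field names follow the DLP API's RecordTransformations; the column
# names and shift bounds are illustrative.
deidentify_config = {
    "recordTransformations": {
        "fieldTransformations": [
            {
                # Shift both interval endpoints with the same transformation...
                "fields": [{"name": "start_date"}, {"name": "end_date"}],
                "primitiveTransformation": {
                    "dateShiftConfig": {
                        # Random shift of up to ~6 months in either direction.
                        "upperBoundDays": 180,
                        "lowerBoundDays": -180,
                        # ...keyed on the subject ID, so every row for the same
                        # subject is shifted by the same amount and the
                        # start/end interval is preserved.
                        "context": {"name": "subject_id"},
                    }
                },
            }
        ]
    }
}
```

Because the context field ties the shift amount to the test subject, the distance between start and end dates survives the obfuscation, which is exactly the property the analysis needs.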

You are setting up a new Cloud Storage bucket in your environment that is encrypted with a customer-managed encryption key (CMEK). The CMEK is stored in Cloud Key Management Service (KMS) in project 'prj-a', and the Cloud Storage bucket will use project 'prj-b'. The key is backed by a Cloud Hardware Security Module (HSM) and resides in the region europe-west3. Your storage bucket will be located in the region europe-west1. When you create the bucket, you cannot access the key, and you need to troubleshoot why.

What has caused the access issue?

A.
A firewall rule prevents the key from being accessible.
B.
Cloud HSM does not support Cloud Storage.
C.
The CMEK is in a different project than the Cloud Storage bucket.
D.
The CMEK is in a different region than the Cloud Storage bucket.
Suggested answer: D

Explanation:

When you use a customer-managed encryption key (CMEK) to secure a Cloud Storage bucket, the key and the bucket must be located in the same region. In this case, the key is in europe-west3 and the bucket is in europe-west1, which is why you're unable to access the key.
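The co-location rule behind answer D can be sketched with a small helper that compares the location component of a KMS key resource name against the bucket's region. The key name below is a placeholder matching the scenario; the helper itself is illustrative, not a Google API.

```python
# Minimal sketch of the CMEK co-location rule: Cloud Storage only
# accepts a key whose Cloud KMS location matches the bucket's location.
def kms_key_location(key_name: str) -> str:
    """Extract the location from a key resource name of the form
    projects/<p>/locations/<loc>/keyRings/<ring>/cryptoKeys/<key>."""
    parts = key_name.split("/")
    return parts[parts.index("locations") + 1]

def cmek_usable_for_bucket(key_name: str, bucket_region: str) -> bool:
    # Cross-project use (key in prj-a, bucket in prj-b) works once IAM
    # is granted; cross-region use does not.
    return kms_key_location(key_name) == bucket_region

key = "projects/prj-a/locations/europe-west3/keyRings/r/cryptoKeys/k"
print(cmek_usable_for_bucket(key, "europe-west1"))  # prints False
```

Note that the project mismatch (option C) is a red herring: CMEK works across projects as long as the Cloud Storage service agent has the Encrypter/Decrypter role on the key.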

You are deploying regulated workloads on Google Cloud. The regulation has data residency and data access requirements. It also requires that support be provided from the same geographical location where the data resides.

What should you do?

A.
Enable Access Transparency logging.
B.
Deploy resources only to regions permitted by data residency requirements.
C.
Use Data Access logging and Access Transparency logging to confirm that no users are accessing data from another region.
D.
Deploy Assured Workloads.
Suggested answer: D

Explanation:

Assured Workloads for Google Cloud allows you to deploy regulated workloads with data residency, access, and support requirements. It helps you configure your environment in a manner that aligns with specific compliance frameworks and standards.

You are migrating an application into the cloud. The application will need to read data from a Cloud Storage bucket. Due to local regulatory requirements, you need to hold the key material used for encryption fully under your control, and you require a valid rationale for accessing the key material.

What should you do?

A.
Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys. Configure an IAM deny policy for unauthorized groups.
B.
Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys backed by a Cloud Hardware Security Module (HSM). Enable data access logs.
C.
Generate a key in your on-premises environment and store it in a Hardware Security Module (HSM) that is managed on-premises. Use this key as an external key in the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and set the external key system to reject unauthorized accesses.
D.
Generate a key in your on-premises environment to encrypt the data before you upload the data to the Cloud Storage bucket. Upload the key to the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and have the external key system reject unauthorized accesses.
Suggested answer: C

Explanation:

By generating a key in your on-premises environment and storing it in an HSM that you manage, you're ensuring that the key material is fully under your control. Using the key as an external key in Cloud KMS allows you to use the key with Google Cloud services without having the key stored on Google Cloud. Activating Key Access Justifications (KAJ) provides a reason every time the key is accessed, and you can configure the external key system to reject unauthorized access attempts.
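The external-key setup in option C can be sketched as the Cloud KMS resource bodies involved. Field names follow the Cloud KMS REST API; the external key URI is a placeholder for your on-premises key manager.

```python
# Sketch of a Cloud EKM key: the key material stays in the external
# (on-premises) key manager, and Google Cloud references it by URI.
crypto_key = {
    "purpose": "ENCRYPT_DECRYPT",
    "versionTemplate": {
        # EXTERNAL means Google never holds the key material; each
        # cryptographic operation is forwarded to the external manager.
        "protectionLevel": "EXTERNAL",
        "algorithm": "EXTERNAL_SYMMETRIC_ENCRYPTION",
    },
}

crypto_key_version = {
    "externalProtectionLevelOptions": {
        # URI of the key inside the external key manager (placeholder).
        "externalKeyUri": "https://ekm.example.com/v0/keys/my-key"
    }
}
```

With Key Access Justifications enabled, every access request to this key carries a justification code, and the external key manager can be configured to reject codes you do not accept.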

Your organization develops software involved in many open source projects and is concerned about software supply chain threats. You need to deliver provenance for the build to demonstrate the software is untampered.

What should you do?

A.
1. Generate Supply Chain Levels for Software Artifacts (SLSA) level 3 assurance by using Cloud Build. 2. View the build provenance in the Security insights side panel within the Google Cloud console.
B.
1. Review the software process. 2. Generate private and public key pairs and use Pretty Good Privacy (PGP) protocols to sign the output software artifacts together with a file containing the address of your enterprise and point of contact. 3. Publish the PGP-signed attestation to your public web page.
C.
1. Publish the software code on GitHub as open source. 2. Establish a bug bounty program, and encourage the open source community to review, report, and fix the vulnerabilities.
D.
1. Hire an external auditor to review and provide provenance. 2. Define the scope and conditions. 3. Get support from the Security department or a representative. 4. Publish the attestation to your public web page.
Suggested answer: A

Explanation:

Cloud Build can generate SLSA level 3 build provenance for your builds, and you can view that provenance in the Security insights side panel in the Google Cloud console.

https://cloud.google.com/build/docs/securing-builds/view-build-provenance

You control network traffic for a folder in your Google Cloud environment. Your folder includes multiple projects and Virtual Private Cloud (VPC) networks. You want to enforce at the folder level that egress connections are limited to IP range 10.58.5.0/24 and allowed only from the VPC network 'dev-vpc'. You want to minimize implementation and maintenance effort.

What should you do?

A.
1. Attach external IP addresses to the VMs in scope. 2. Configure a VPC firewall rule in 'dev-vpc' that allows egress connectivity to IP range 10.58.5.0/24 for all source addresses in this network.
B.
1. Attach external IP addresses to the VMs in scope. 2. Define and apply a hierarchical firewall policy at the folder level to deny all egress connections and to allow egress to IP range 10.58.5.0/24 from network 'dev-vpc'.
C.
1. Leave the network configuration of the VMs in scope unchanged. 2. Create a new project including a new VPC network 'new-vpc'. 3. Deploy a network appliance in 'new-vpc' to filter access requests and only allow egress connections from 'dev-vpc' to 10.58.5.0/24.
D.
1. Leave the network configuration of the VMs in scope unchanged. 2. Enable Cloud NAT for 'dev-vpc' and restrict the target range in Cloud NAT to 10.58.5.0/24.
Suggested answer: B

Explanation:

This approach allows you to control network traffic at the folder level. By attaching external IP addresses to the VMs in scope, you can ensure that the VMs have a unique, routable IP address for outbound connections. Then, by defining and applying a hierarchical firewall policy at the folder level, you can enforce that egress connections are limited to the specified IP range and only from the specified VPC network.
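The two rules in option B can be sketched as hierarchical firewall policy rule bodies. Field names follow the Compute Engine firewallPolicies API; the project name and network URL are placeholders.

```python
# Sketch of the folder-level hierarchical firewall policy rules:
# allow egress to 10.58.5.0/24 from dev-vpc only, deny everything else.
allow_rule = {
    "priority": 1000,
    "direction": "EGRESS",
    "action": "allow",
    "match": {
        "destIpRanges": ["10.58.5.0/24"],
        "layer4Configs": [{"ipProtocol": "all"}],
    },
    # Scope the allow rule to VMs in dev-vpc only (placeholder URL).
    "targetResources": [
        "https://www.googleapis.com/compute/v1/projects/prj-dev/global/networks/dev-vpc"
    ],
}

deny_rule = {
    # Higher number = lower priority, so the allow rule is evaluated first.
    "priority": 2000,
    "direction": "EGRESS",
    "action": "deny",
    "match": {
        "destIpRanges": ["0.0.0.0/0"],
        "layer4Configs": [{"ipProtocol": "all"}],
    },
}
```

Because the policy is attached at the folder, every current and future project under it inherits both rules, which is what keeps the maintenance effort low.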

Your Google Cloud environment has one organization node, one folder named 'Apps', and several projects within that folder. The organization node enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the terramearth.com organization. The 'Apps' folder enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the flowlogistic.com organization. It also has the inheritFromParent: false property.

You attempt to grant access to a project in the Apps folder to the user [email protected].

What is the result of your action and why?

A.
The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy must be defined on the current project to deactivate the constraint temporarily.
B.
The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed.
C.
The action succeeds because members from both organizations, terramearth.com and flowlogistic.com, are allowed on projects in the 'Apps' folder.
D.
The action succeeds and the new member is successfully added to the project's Identity and Access Management (IAM) policy because all policies are inherited by underlying folders and projects.
Suggested answer: B

Explanation:

The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed. The inheritFromParent: false property on the 'Apps' folder means that it does not inherit the organization policy from the organization node. Therefore, only the policy set at the folder level applies, which allows only members from the flowlogistic.com organization. As a result, the attempt to grant access to the user [email protected] fails because this user is not a member of the flowlogistic.com organization.
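The inheritance behavior can be sketched as a toy model of how a list-constraint policy is evaluated. This is an illustration of the merge-versus-replace semantics, not the real Resource Manager evaluation code.

```python
# Toy model of inheritFromParent for a list constraint such as
# constraints/iam.allowedPolicyMemberDomains.
def effective_allowed_domains(parent_allowed, own_allowed, inherit_from_parent):
    if inherit_from_parent:
        # Inherited list policies merge parent and local values.
        return sorted(set(parent_allowed) | set(own_allowed))
    # inheritFromParent: false replaces the parent policy entirely.
    return sorted(own_allowed)

org_policy = ["terramearth.com"]          # set on the organization node
apps_folder_policy = ["flowlogistic.com"]  # set on the 'Apps' folder

allowed = effective_allowed_domains(org_policy, apps_folder_policy,
                                    inherit_from_parent=False)
print(allowed)                       # prints ['flowlogistic.com']
print("terramearth.com" in allowed)  # prints False -> the grant is rejected
```

Had inheritFromParent been true, both domains would have been in the effective policy and the grant to [email protected] would have succeeded.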

You manage a fleet of virtual machines (VMs) in your organization. You have encountered issues with a lack of patching in many VMs. You need to automate regular patching of your VMs and view the patch management data across multiple projects.

What should you do?

Choose 2 answers

A.
Deploy patches with VM Manager by using OS patch management.
B.
View patch management data in VM Manager by using OS patch management.
C.
Deploy patches with Security Command Center by using Rapid Vulnerability Detection.
D.
View patch management data in a Security Command Center dashboard.
E.
View patch management data in Artifact Registry.
Suggested answer: A, B

Explanation:

VM Manager's OS patch management service lets you run one-off patch jobs and schedule recurring patch deployments across your VM fleet, and its patch dashboard reports patch compliance data, including across multiple projects.

https://cloud.google.com/compute/docs/os-patch-management
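A recurring patch deployment can be sketched as an OS Config patchDeployments request body. Field names follow the osconfig v1 REST API; the label filter and schedule values are illustrative.

```python
# Sketch of a weekly OS Config patch deployment targeting VMs by label.
patch_deployment = {
    "instanceFilter": {
        # Target VMs by label rather than listing them individually.
        "groupLabels": [{"labels": {"env": "prod"}}]
    },
    "recurringSchedule": {
        "timeZone": {"id": "Etc/UTC"},
        "timeOfDay": {"hours": 3},          # run at 03:00 UTC
        "frequency": "WEEKLY",
        "weekly": {"dayOfWeek": "SUNDAY"},
    },
    "patchConfig": {
        # Let the agent decide whether a reboot is needed.
        "rebootConfig": "DEFAULT"
    },
}
```

Once such deployments exist, the VM Manager patch dashboard is where you review compliance state, which covers the "view patch management data" half of the question.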

Employees at your company use their personal computers to access your organization's Google Cloud console. You need to ensure that users can only access the Google Cloud console from their corporate-issued devices and verify that they have a valid enterprise certificate.

What should you do?

A.
Implement an Identity and Access Management (IAM) conditional policy to verify the device certificate.
B.
Implement a VPC firewall policy. Activate packet inspection and create an allow rule to validate and verify the device certificate.
C.
Implement an organization policy to verify the certificate from the access context.
D.
Implement an access policy in BeyondCorp Enterprise to verify the device certificate. Create an access binding with the access policy just created.
Suggested answer: D

Explanation:

BeyondCorp Enterprise provides context-aware access: you define an access level that checks device attributes, such as the presence of a valid enterprise certificate, and then create an access binding that enforces that access policy for your users when they access Google Cloud.

https://cloud.google.com/beyondcorp
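The device check in option D can be sketched as an Access Context Manager access level with a device policy. Field names follow the Access Context Manager API; the policy number and title are placeholders, and the corp-owned/screenlock conditions stand in for the full certificate-based access configuration.

```python
# Sketch of a BeyondCorp Enterprise access level restricting console
# access to trusted corporate devices (illustrative values).
access_level = {
    "name": "accessPolicies/123456789/accessLevels/corp_devices",
    "title": "corp_devices",
    "basic": {
        "conditions": [
            {
                "devicePolicy": {
                    # Only corporate-issued (company-owned) devices qualify.
                    "requireCorpOwned": True,
                    "requireScreenlock": True,
                }
            }
        ]
    },
}
```

An access binding then ties this access level to a group of users, so console sessions from personal machines are rejected even with valid credentials.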

You are developing a new application that exclusively uses Compute Engine VMs. Once a day, this application will execute five different batch jobs. Each of the batch jobs requires a dedicated set of permissions on Google Cloud resources outside of your application. You need to design a secure access concept for the batch jobs that adheres to the least-privilege principle.

What should you do?

A.
1. Create a general service account 'g-sa' to execute the batch jobs. 2. Grant the permissions required to execute the batch jobs to g-sa. 3. Execute the batch jobs with the permissions granted to g-sa.
B.
1. Create a general service account 'g-sa' to orchestrate the batch jobs. 2. Create one service account per batch job 'b-sa-[1-5]', and grant only the permissions required to run the individual batch jobs to the service accounts. 3. Grant the Service Account Token Creator role to g-sa. Use g-sa to obtain short-lived access tokens for b-sa-[1-5] and to execute the batch jobs with the permissions of b-sa-[1-5].
C.
1. Create a workload identity pool and configure workload identity pool providers for each batch job. 2. Assign the Workload Identity User role to each of the identities configured in the providers. 3. Create one service account per batch job 'b-sa-[1-5]', and grant only the permissions required to run the individual batch jobs to the service accounts. 4. Generate credential configuration files for each of the providers. Use these files to execute the batch jobs with the permissions of b-sa-[1-5].
D.
1. Create a general service account 'g-sa' to orchestrate the batch jobs. 2. Create one service account per batch job 'b-sa-[1-5]'. Grant only the permissions required to run the individual batch jobs to the service accounts and generate service account keys for each of these service accounts. 3. Store the service account keys in Secret Manager. Grant g-sa access to Secret Manager and run the batch jobs with the permissions of b-sa-[1-5].
Suggested answer: B

Explanation:

Creating one narrowly scoped service account per batch job and granting g-sa the Service Account Token Creator role lets the orchestrator impersonate each job's service account and obtain short-lived access tokens for it. Each batch job then runs with only its own permissions, and no long-lived service account keys need to be created, stored, or rotated.
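The impersonation setup in option B can be sketched as the IAM binding that makes it possible: g-sa needs roles/iam.serviceAccountTokenCreator on each per-job service account. The project and account names below are placeholders.

```python
# Sketch of the token-creator binding from option B. The binding is
# attached to each b-sa-[1-5] service account's own IAM policy (e.g.
# with `gcloud iam service-accounts add-iam-policy-binding`), so g-sa
# can mint short-lived tokens for that account and nothing else.
PROJECT = "my-project"

job_accounts = [
    f"b-sa-{i}@{PROJECT}.iam.gserviceaccount.com" for i in range(1, 6)
]

def token_creator_binding(orchestrator_email: str) -> dict:
    return {
        "role": "roles/iam.serviceAccountTokenCreator",
        "members": [f"serviceAccount:{orchestrator_email}"],
    }

binding = token_creator_binding(f"g-sa@{PROJECT}.iam.gserviceaccount.com")
print(len(job_accounts))  # prints 5
```

Compare this with option D: storing service account keys in Secret Manager leaves long-lived credentials in circulation, which impersonation with short-lived tokens avoids entirely.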