Linux Foundation CKS Practice Test - Questions Answers, Page 4
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa
Context:
A pod fails to run because of an incorrectly specified ServiceAccount.
Task:
Create a new ServiceAccount named backend-qa in the existing namespace qa; it must not have access to any Secret.
Edit the frontend Pod YAML to use the backend-qa ServiceAccount.
Note: You can find the frontend pod yaml at /home/cert_masters/frontend-pod.yaml
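A minimal answer sketch, assuming RBAC is enabled (a freshly created ServiceAccount has no access to Secrets unless a Role explicitly grants it, so no extra RBAC objects are added here) and that the Pod manifest only needs a serviceAccountName field:
[desk@cli] $ kubectl -n qa create serviceaccount backend-qa
Then edit /home/cert_masters/frontend-pod.yaml and set, under spec:
    serviceAccountName: backend-qa
[desk@cli] $ kubectl apply -f /home/cert_masters/frontend-pod.yaml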
You must complete this task on the following cluster/nodes:
Cluster: trace
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context trace
Given: You may use Sysdig or Falco documentation.
Task:
Use detection tools to detect anomalies, such as frequently spawned and executed suspicious processes, in the single container belonging to the Pod tomcat.
Two tools are available to use:
1. falco
2. sysdig
Tools are pre-installed on the worker1 node only.
Analyse the container’s behaviour for at least 40 seconds, using filters that detect newly spawning and executing processes.
Store an incident file at /home/cert_masters/report, in the following format:
[timestamp],[uid],[processName]
Note: Make sure to store incident file on the cluster's worker node, don't move it to master node.
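One possible approach using sysdig on worker1 (where the tools are installed); the container name tomcat and the 45-second capture window are assumptions, adjust them to the actual container and required duration:
[desk@cli] $ ssh worker1
$ sudo sysdig -M 45 -p '%evt.time,%user.uid,%proc.name' \
    "container.name=tomcat and evt.type=execve" > /home/cert_masters/report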
Cluster: dev
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Task:
Retrieve the content of the existing secret named adam in the safe namespace.
Store the username field in a file named /home/cert-masters/username.txt, and the password field in a file named /home/cert-masters/password.txt.
1. You must create both files; they don't exist yet.
2. Do not use/modify the created files in the following steps, create new temporary files if needed.
Create a new secret named newsecret in the safe namespace, with the following content:
Username: dbadmin
Password: moresecurepas
Finally, create a new Pod that has access to the secret newsecret via a volume:
Namespace: safe
Pod name: mysecret-pod
Container name: db-container
Image: redis
Volume name: secret-vol
Mount path: /etc/mysecret
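A minimal sketch; the data keys inside the adam Secret are assumed to be username and password:
[desk@cli] $ kubectl -n safe get secret adam -o jsonpath='{.data.username}' | base64 -d > /home/cert-masters/username.txt
[desk@cli] $ kubectl -n safe get secret adam -o jsonpath='{.data.password}' | base64 -d > /home/cert-masters/password.txt
[desk@cli] $ kubectl -n safe create secret generic newsecret \
    --from-literal=username=dbadmin --from-literal=password=moresecurepas
Pod manifest (for example mysecret-pod.yaml), mounting newsecret as a read-only volume:
apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret
[desk@cli] $ kubectl apply -f mysecret-pod.yaml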
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given:
You may use Trivy's documentation.
Task:
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.
Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; run it from there.
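One possible workflow from the master node (controlplane); listing the images first and scanning each one is an illustration, not the only way:
[desk@cli] $ ssh controlplane
$ kubectl -n nato get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'
$ trivy image --severity HIGH,CRITICAL <image>          # repeat for each listed image
$ kubectl -n nato delete pod <pod-using-vulnerable-image>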
Context:
Cluster: gvisor
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context gvisor
Context: This cluster has been prepared to support the runsc runtime handler in addition to the traditional one.
Task:
Create a RuntimeClass named not-trusted using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on the new runtime class.
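A sketch of the expected objects; note that runtimeClassName is immutable on a running Pod, so the Pods in the server namespace have to be recreated with the field set:
RuntimeClass manifest (not-trusted.yaml):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: not-trusted
handler: runsc
[desk@cli] $ kubectl apply -f not-trusted.yaml
[desk@cli] $ kubectl -n server get pod <pod> -o yaml > pod.yaml
Add runtimeClassName: not-trusted under spec in pod.yaml, then:
[desk@cli] $ kubectl -n server delete pod <pod> && kubectl apply -f pod.yaml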
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod-account
Context:
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task:
Given an existing Pod named web-pod running in the namespace database.
1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.
2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type statefulsets.
3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.
Note: Don't delete the existing RoleBinding.
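A sketch of the commands; the name of the existing Role is not given and is found through the RoleBinding attached to test-sa:
[desk@cli] $ kubectl -n database get rolebindings -o wide       # find the Role bound to test-sa
[desk@cli] $ kubectl -n database edit role <existing-role>      # keep only verbs: [get], resources: [pods]
[desk@cli] $ kubectl -n database create role test-role-2 --verb=update --resource=statefulsets
[desk@cli] $ kubectl -n database create rolebinding test-role-2-bind \
    --role=test-role-2 --serviceaccount=database:test-sa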
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Context:
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed.
Task:
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 authorization-mode argument is not set to AlwaysAllow FAIL
1.2.8 authorization-mode argument includes Node FAIL
1.2.9 authorization-mode argument includes RBAC FAIL
Fix all of the following violations that were found against the Kubelet:
4.2.1 Ensure that the anonymous-auth argument is set to false FAIL
4.2.2 Ensure that the authorization-mode argument is not set to AlwaysAllow FAIL (use Webhook authn/authz where possible)
Fix all of the following violations that were found against etcd:
2.2 Ensure that the client-cert-auth argument is set to true
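The corresponding configuration changes, sketched for the usual kubeadm layout (static Pod manifests under /etc/kubernetes/manifests, kubelet config at /var/lib/kubelet/config.yaml); static Pods restart automatically when their manifest changes, the kubelet needs an explicit service restart:
In /etc/kubernetes/manifests/kube-apiserver.yaml:
    - --authorization-mode=Node,RBAC
In /var/lib/kubelet/config.yaml:
    authentication:
      anonymous:
        enabled: false
    authorization:
      mode: Webhook
In /etc/kubernetes/manifests/etcd.yaml:
    - --client-cert-auth=true
On the master node:
$ systemctl daemon-reload && systemctl restart kubelet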
Context
This cluster uses containerd as CRI runtime.
Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task
Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on gVisor.
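The same pattern as above, sketched for the sandboxed handler; one quick way to confirm a recreated Pod really runs under gVisor is to read dmesg inside it (assuming the image ships dmesg):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc
Recreate each Pod in the server namespace with runtimeClassName: sandboxed under spec, then:
[desk@cli] $ kubectl -n server exec <pod> -- dmesg | grep -i gvisor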
Context
Your organization’s security policy includes:
ServiceAccounts must not automount API credentials
ServiceAccount names must end in "-sa"
The Pod specified in the manifest file /home/candidate/KSCH00301/pod-manifest.yaml fails to schedule because of an incorrectly specified ServiceAccount.
Complete the following tasks:
Task
1. Create a new ServiceAccount named frontend-sa in the existing namespace qa. Ensure the ServiceAccount does not automount API credentials.
2. Using the manifest file at /home/candidate/KSCH00301/pod-manifest.yaml, create the Pod.
3. Finally, clean up any unused ServiceAccounts in namespace qa.
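A sketch; the ServiceAccount manifest disables token automount, and cleanup deletes any ServiceAccount in qa that no Pod references:
frontend-sa.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
automountServiceAccountToken: false
[desk@cli] $ kubectl apply -f frontend-sa.yaml
Set spec.serviceAccountName to frontend-sa in /home/candidate/KSCH00301/pod-manifest.yaml, then:
[desk@cli] $ kubectl apply -f /home/candidate/KSCH00301/pod-manifest.yaml
[desk@cli] $ kubectl -n qa get serviceaccounts
[desk@cli] $ kubectl -n qa delete serviceaccount <unused-sa>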
Context
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.
Task
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
Fix all of the following violations that were found against the Kubelet:
Fix all of the following violations that were found against etcd: