Linux Foundation CKS Practice Test - Questions Answers, Page 4

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context qa

Context:

A Pod fails to run because of an incorrectly specified ServiceAccount.

Task:

Create a new service account named backend-qa in an existing namespace qa, which must not have access to any secret.

Edit the frontend Pod YAML to use the backend-qa ServiceAccount.

Note: You can find the frontend pod yaml at /home/cert_masters/frontend-pod.yaml

A.
See the explanation
Answers
Suggested answer: A

Explanation:


[desk@cli] $ k create sa backend-qa -n qa

serviceaccount/backend-qa created

[desk@cli] $ k get role,rolebinding -n qa

No resources found in qa namespace.

[desk@cli] $ k create role backend -n qa --resource pods,namespaces,configmaps --verb list   # no access to Secrets

role.rbac.authorization.k8s.io/backend created

[desk@cli] $ k create rolebinding backend -n qa --role backend --serviceaccount qa:backend-qa

rolebinding.rbac.authorization.k8s.io/backend created

[desk@cli] $ vim /home/cert_masters/frontend-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: backend-qa   # Add this
  containers:
  - name: frontend
    image: nginx

[desk@cli] $ k apply -f /home/cert_masters/frontend-pod.yaml

pod/frontend created
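As a quick check (not required by the task), you can confirm that the new ServiceAccount has no access to Secrets:

[desk@cli] $ k auth can-i get secrets -n qa --as=system:serviceaccount:qa:backend-qa
no   # expected, since no Role bound to backend-qa grants access to Secrets

[desk@cli] $ k auth can-i list secrets -n qa --as=system:serviceaccount:qa:backend-qa
no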

https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

You must complete this task on the following cluster/nodes:

Cluster: trace

Master node: master

Worker node: worker1

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context trace

Given: You may use Sysdig or Falco documentation.

Task:

Use detection tools to detect anomalies, such as processes frequently spawning and executing unexpected binaries, in the single container belonging to the Pod tomcat.

Two tools are available to use:

1. falco

2. sysdig

Tools are pre-installed on the worker1 node only.

Analyse the container’s behaviour for at least 40 seconds, using filters that detect newly spawning and executing processes.

Store an incident file at /home/cert_masters/report, in the following format:

[timestamp],[uid],[processName]

Note: Make sure to store the incident file on the cluster's worker node; do not move it to the master node.

A.
See the explanation
Answers
Suggested answer: A

Explanation:


[desk@cli] $ ssh worker1

[worker1@cli] $ vim /etc/falco/falco_rules.yaml

Search for the "Container Drift Detected" rule and copy it into falco_rules.local.yaml.

[worker1@cli] $ vim /etc/falco/falco_rules.local.yaml

- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name   # Add this / refer to the Falco supported-fields documentation
  priority: ERROR

[worker1@cli] $ vim /etc/falco/falco.yaml
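To write the rule's output to the required report file, one option is to enable file output in falco.yaml (a minimal sketch using Falco's standard file_output keys):

file_output:
  enabled: true
  keep_alive: false
  filename: /home/cert_masters/report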

Send a HUP signal to the falco process so it re-reads the configuration:

[worker1@cli] $ kill -1 <PID of falco>
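Alternatively, sysdig can capture the same events directly in the required [timestamp],[uid],[processName] format; the filter below is only a sketch and assumes the tomcat Pod's container can be matched by container.name (adjust the filter to the actual container):

[worker1@cli] $ sysdig -M 45 -p "%evt.time,%user.uid,%proc.name" "evt.type=execve and container.name contains tomcat" > /home/cert_masters/report   # capture for at least 40 seconds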

Reference:

https://falco.org/docs/alerts/

https://falco.org/docs/rules/supported-fields/

Cluster: dev

Master node: master1

Worker node: worker1

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context dev

Task:

Retrieve the content of the existing secret named adam in the safe namespace.

Store the username field in a file named /home/cert-masters/username.txt, and the password field in a file named /home/cert-masters/password.txt.

1. You must create both files; they don't exist yet.

2. Do not use/modify the created files in the following steps, create new temporary files if needed.

Create a new secret named newsecret in the safe namespace, with the following content:

Username: dbadmin

Password: moresecurepas

Finally, create a new Pod that has access to the secret newsecret via a volume:

Namespace: safe

Pod name: mysecret-pod

Container name: db-container

Image: redis

Volume name: secret-vol

Mount path: /etc/mysecret

A.
See the explanation
Answers
Suggested answer: A

Explanation:
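A possible walkthrough (a sketch; it assumes the secret's data keys are named username and password, so check with kubectl describe secret adam -n safe and adjust if they differ):

[desk@cli] $ kubectl config use-context dev

[desk@cli] $ k get secret adam -n safe -o jsonpath='{.data.username}' | base64 -d > /home/cert-masters/username.txt

[desk@cli] $ k get secret adam -n safe -o jsonpath='{.data.password}' | base64 -d > /home/cert-masters/password.txt

[desk@cli] $ k create secret generic newsecret -n safe --from-literal=username=dbadmin --from-literal=password=moresecurepas

[desk@cli] $ vim mysecret-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret

[desk@cli] $ k apply -f mysecret-pod.yaml
pod/mysecret-pod created

Reference: https://kubernetes.io/docs/concepts/configuration/secret/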

Cluster: scanner

Master node: controlplane

Worker node: worker1

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context scanner

Given:

You may use Trivy's documentation.

Task:

Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.

Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.

Trivy is pre-installed on the cluster's master node; run Trivy from the master node.

A.
See the explanation
Answers
Suggested answer: A

Explanation:
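A possible approach (a sketch; <image> and <pod> are placeholders for the names returned by the commands, not values given in the task):

[desk@cli] $ kubectl config use-context scanner

[desk@cli] $ ssh controlplane

controlplane $ k get pods -n nato -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'

controlplane $ trivy image --severity HIGH,CRITICAL <image>   # repeat for every image used in the namespace

controlplane $ k delete pod <pod> -n nato   # delete each Pod whose image reports HIGH or CRITICAL vulnerabilities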

Reference: https://github.com/aquasecurity/trivy

Context:

Cluster: gvisor

Master node: master1

Worker node: worker1

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context gvisor

Context: This cluster has been prepared to support an additional runtime handler, runsc, as well as the traditional one.

Task:

Create a RuntimeClass named not-trusted using the prepared runtime handler named runsc.

Update all Pods in the namespace server to run on the new runtime.

A.
See the explanation
Answers
Suggested answer: A

Explanation:


[desk@cli] $ vim runtime.yaml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: not-trusted
handler: runsc

[desk@cli] $ k apply -f runtime.yaml

[desk@cli] $ k get pods

NAME READY STATUS RESTARTS AGE

nginx-6798fc88e8-chp6r 1/1 Running 0 11m

nginx-6798fc88e8-fs53n 1/1 Running 0 11m

nginx-6798fc88e8-ndved 1/1 Running 0 11m

[desk@cli] $ k get deploy

NAME READY UP-TO-DATE AVAILABLE AGE

nginx 3/3 11 3 5m

[desk@cli] $ k edit deploy nginx
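In the editor, add runtimeClassName under the Pod template spec so the Deployment's Pods are recreated on the runsc handler (only the relevant part of the spec is shown):

spec:
  template:
    spec:
      runtimeClassName: not-trusted   # Add this
      containers:
      - name: nginx
        image: nginx

Any standalone Pods in the namespace server would need to be recreated with the same field set.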

Reference: https://kubernetes.io/docs/concepts/containers/runtime-class/

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context prod-account

Context:

A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.

Task:

Given an existing Pod named web-pod running in the namespace database.

1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.

2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type statefulsets.

3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.

Note: Don't delete the existing RoleBinding.

A.
See the explanation
Answers
Suggested answer: A

Explanation:
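A possible walkthrough (a sketch; <existing-role> stands for whatever Role the current RoleBinding for test-sa references):

[desk@cli] $ kubectl config use-context prod-account

[desk@cli] $ k get rolebinding -n database -o wide   # find the Role bound to the ServiceAccount test-sa

[desk@cli] $ k edit role <existing-role> -n database

rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]

[desk@cli] $ k create role test-role-2 -n database --verb update --resource statefulsets

[desk@cli] $ k create rolebinding test-role-2-bind -n database --role test-role-2 --serviceaccount database:test-sa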

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context dev

Context:

A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed.

Task:

Fix all issues via configuration and restart the affected components to ensure the new settings take effect.

Fix all of the following violations that were found against the API server:

1.2.7 Ensure that the authorization-mode argument is not set to AlwaysAllow FAIL

1.2.8 Ensure that the authorization-mode argument includes Node FAIL

1.2.9 Ensure that the authorization-mode argument includes RBAC FAIL

Fix all of the following violations that were found against the Kubelet:

4.2.1 Ensure that the anonymous-auth argument is set to false FAIL

4.2.2 Ensure that the authorization-mode argument is not set to AlwaysAllow FAIL (use Webhook authn/authz where possible)

Fix all of the following violations that were found against etcd:

2.2 Ensure that the client-cert-auth argument is set to true

A.
See the explanation
Answers
Suggested answer: A

Explanation:


ssh to worker1

worker1 $ vim /var/lib/kubelet/config.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true    #Delete this
    enabled: false   #Replace by this
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: AlwaysAllow  #Delete this
  mode: Webhook      #Replace by this
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

worker1 $ systemctl restart kubelet   # reload the kubelet config

ssh to master1

master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml

    - --authorization-mode=Node,RBAC   # must include Node and RBAC, must not be AlwaysAllow

master1 $ vim /etc/kubernetes/manifests/etcd.yaml

    - --client-cert-auth=true          # enable client certificate authentication

Reference:

kubelet parameters: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

kube-apiserver parameters: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/

etcd parameters: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/

Context

This cluster uses containerd as CRI runtime.

Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).

Task

Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc.

Update all Pods in the namespace server to run on gVisor.

A.
See the explanation
Answers
Suggested answer: A

Explanation:
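The steps mirror the not-trusted RuntimeClass above; the nginx Deployment is only an example of a workload that might exist in the server namespace:

[desk@cli] $ vim runtime.yaml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc

[desk@cli] $ k apply -f runtime.yaml

[desk@cli] $ k get deploy,pods -n server

[desk@cli] $ k edit deploy nginx -n server   # add runtimeClassName: sandboxed under spec.template.spec

Reference: https://kubernetes.io/docs/concepts/containers/runtime-class/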

Context

Your organization’s security policy includes:

ServiceAccounts must not automount API credentials

ServiceAccount names must end in "-sa"

The Pod specified in the manifest file /home/candidate/KSCH00301/pod-manifest.yaml fails to schedule because of an incorrectly specified ServiceAccount.

Complete the following tasks:

Task

1. Create a new ServiceAccount named frontend-sa in the existing namespace qa. Ensure the ServiceAccount does not automount API credentials.

2. Using the manifest file at /home/candidate/KSCH00301/pod-manifest.yaml, create the Pod.

3. Finally, clean up any unused ServiceAccounts in namespace qa.

A.
See the explanation
Answers
Suggested answer: A

Explanation:
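A possible walkthrough (a sketch; which ServiceAccounts count as unused must be determined from the live qa namespace):

[desk@cli] $ vim frontend-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
automountServiceAccountToken: false   # do not automount API credentials

[desk@cli] $ k apply -f frontend-sa.yaml

[desk@cli] $ vim /home/candidate/KSCH00301/pod-manifest.yaml   # set spec.serviceAccountName: frontend-sa

[desk@cli] $ k apply -f /home/candidate/KSCH00301/pod-manifest.yaml

[desk@cli] $ k get sa -n qa

[desk@cli] $ k delete sa <unused-sa> -n qa   # delete every ServiceAccount in qa that no Pod references

Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/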

Context

A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.

Task

Fix all issues via configuration and restart the affected components to ensure the new settings take effect.

Fix all of the following violations that were found against the API server:

Fix all of the following violations that were found against the Kubelet:

Fix all of the following violations that were found against etcd:

A.
See the explanation
Answers
Suggested answer: A

Explanation:

Total 44 questions