CKS: Certified Kubernetes Security Specialist
Linux Foundation
The CKS exam, known as the Certified Kubernetes Security Specialist, is crucial for IT professionals aiming to validate their skills. Practicing with real exam questions shared by those who have succeeded can significantly boost your chances of passing. In this guide, we'll provide you with practice test questions and answers offering insights directly from candidates who have already passed the exam.
Exam Details:
- Exam Number: CKS
- Exam Name: Certified Kubernetes Security Specialist
- Length of Test: 120 minutes
- Exam Format: Performance-based, hands-on tasks solved in a command-line environment
- Exam Language: English
- Number of Questions in the Actual Exam: Approximately 15-20 tasks
- Passing Score: 66%
Why Use CKS Practice Tests?
- Real Exam Experience: Our practice tests accurately replicate the format and difficulty of the actual CKS exam, providing you with a realistic preparation experience.
- Identify Knowledge Gaps: Practicing with these tests helps you identify areas where you need more study, allowing you to focus your efforts effectively.
- Boost Confidence: Regular practice with exam-like questions builds your confidence and reduces test anxiety.
- Track Your Progress: Monitor your performance over time to see your improvement and adjust your study plan accordingly.
Key Features of CKS Practice Tests:
- Up-to-Date Content: Our community ensures that the questions are regularly updated to reflect the latest exam objectives and technology trends.
- Detailed Explanations: Each question comes with detailed explanations, helping you understand the correct answers and learn from any mistakes.
- Comprehensive Coverage: The practice tests cover all key topics of the CKS exam, including securing container images, cluster hardening, and network policies.
- Customizable Practice: Create your own practice sessions based on specific topics or difficulty levels to tailor your study experience to your needs.
Use the member-shared CKS Practice Tests to ensure you're fully prepared for your certification exam. Start practicing today and take a significant step towards achieving your certification goals!
Related questions
Context
AppArmor is enabled on the cluster's worker node. An AppArmor profile is prepared, but not enforced yet.
Task
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor.
Edit the prepared manifest file located at /home/candidate/KSSH00401/nginx-pod.yaml to apply the AppArmor profile.
Finally, apply the manifest file and create the Pod specified in it.
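A possible solution is sketched below. The container name (nginx) and the profile name declared inside /etc/apparmor.d/nginx_apparmor (shown here as nginx-profile-1) are assumptions; use whatever names the prepared manifest and profile file actually define.
# On the worker node, load the profile in enforce mode and confirm it
sudo apparmor_parser -q /etc/apparmor.d/nginx_apparmor
sudo aa-status | grep nginx
# In /home/candidate/KSSH00401/nginx-pod.yaml, reference the profile via the AppArmor annotation
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/nginx-profile-1
spec:
  containers:
  - name: nginx
    image: nginx
# Apply the manifest and check the Pod
kubectl apply -f /home/candidate/KSSH00401/nginx-pod.yaml
kubectl get pod nginx-pod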
Create a new ServiceAccount named backend-sa in the existing namespace default that has permission to list the Pods in the namespace default.
Create a new Pod named backend-pod in the namespace default, attach the newly created ServiceAccount backend-sa to the Pod, and verify that the Pod is able to list Pods.
Ensure that the Pod is running.
Explanation:
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
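For the task itself, one possible solution is sketched below. The Role name pod-lister and the container image nginx are illustrative choices, not given by the task.
kubectl create serviceaccount backend-sa -n default
kubectl create role pod-lister --verb=list --resource=pods -n default
kubectl create rolebinding backend-sa-pod-lister --role=pod-lister --serviceaccount=default:backend-sa -n default
# Pod manifest attaching the ServiceAccount
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend
    image: nginx
# Verify the permissions and that the Pod is running
kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa
kubectl get pod backend-pod -n default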
Enable audit logs in the cluster. To do so, enable the log backend and ensure that:
1. Logs are stored at /var/log/kubernetes-logs.txt.
2. Log files are retained for 12 days.
3. At most 8 old audit log files are retained.
4. The maximum size of a log file before it is rotated is 200 MB.
Edit and extend the basic policy to log:
1. Namespace changes at the RequestResponse level.
2. The request body of Secret changes in the namespace kube-system.
3. All other resources in the core and extensions API groups at the Request level.
4. "pods/portforward" and "services/proxy" at the Metadata level.
5. Omit the RequestReceived stage.
All other requests are logged at the Metadata level.
Explanation:
Kubernetes auditing provides a security-relevant, chronological set of records about a cluster. The kube-apiserver performs auditing: each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what is recorded and the backends persist the records.
You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security) Kubernetes Benchmark controls.
The log backend writes audit events to a file in JSON Lines format. You can configure the log audit backend using the following kube-apiserver flags:
--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend; a value of - means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.
--audit-log-maxbackup defines the maximum number of audit log files to retain.
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.
If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the hostPath to the location of the policy file and log file, so that audit records are persisted. For example:
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-path=/var/log/audit.log
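Putting the requirements of this task together, one possible configuration is sketched below. The policy file path /etc/kubernetes/audit-policy.yaml is an assumption; the flag values follow the numbers given in the task, and rule order matters because the first matching rule wins.
# kube-apiserver flags (e.g. in /etc/kubernetes/manifests/kube-apiserver.yaml)
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes-logs.txt
--audit-log-maxage=12
--audit-log-maxbackup=8
--audit-log-maxsize=200
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["namespaces"]
  - level: Request
    namespaces: ["kube-system"]
    resources:
    - group: ""
      resources: ["secrets"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/portforward", "services/proxy"]
  - level: Request
    resources:
    - group: ""
    - group: "extensions"
  - level: Metadata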
Context
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.
Task
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
Fix all of the following violations that were found against the Kubelet:
Fix all of the following violations that were found against etcd:
Create a Pod named nginx-pod inside the namespace testing. Create a Service for nginx-pod named nginx-svc. Using the ingress controller of your choice, expose the Service through an Ingress that serves TLS on the secure port.
Explanation:
$ kubectl get ing -n <namespace-of-ingress-resource>
NAME           HOSTS      ADDRESS     PORTS   AGE
cafe-ingress   cafe.com   10.0.2.15   80      25s

$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>
Name:             cafe-ingress
Namespace:        default
Address:          10.0.2.15
Default backend:  default-http-backend:80 (172.17.0.5:8080)
Rules:
  Host      Path     Backends
  ----      ----     --------
  cafe.com
            /tea     tea-svc:80 (<none>)
            /coffee  coffee-svc:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default","selfLink":"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress"},"spec":{"rules":[{"host":"cafe.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}]},"status":{"loadBalancer":{"ingress":[{"ip":"169.48.142.110"}]}}}
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  1m   ingress-nginx-controller  Ingress default/cafe-ingress
  Normal  UPDATE  58s  ingress-nginx-controller  Ingress default/cafe-ingress

$ kubectl get pods -n <namespace-of-ingress-controller>
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-67956bf89d-fv58j   1/1     Running   0          1m

$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.14.0
  Build:      git-734361d
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
....
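The output above only shows how to inspect an existing Ingress; for the task itself one possible sequence is sketched below. The secret name nginx-tls, the certificate files, and the host name are assumptions, and the Ingress assumes an ingress-nginx controller is installed in the cluster.
kubectl create namespace testing
kubectl run nginx-pod --image=nginx -n testing
kubectl expose pod nginx-pod --name=nginx-svc --port=80 -n testing
kubectl create secret tls nginx-tls --cert=tls.crt --key=tls.key -n testing
# Ingress serving TLS on the secure port
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: testing
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    secretName: nginx-tls
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80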
Create a user named john, create a CSR request, and fetch the user's certificate after approving it.
Create a Role named john-role that allows listing Secrets and Pods in the namespace john.
Finally, create a RoleBinding named john-role-binding to attach the newly created Role john-role to the user john in the namespace john.
To Verify: Use the kubectl auth CLI command to verify the permissions.
Explanation:
Use kubectl to create a CSR and approve it.
Get the list of CSRs:
kubectl get csr
Approve the CSR:
kubectl certificate approve myuser
Get the certificate
Retrieve the certificate from the CSR:
kubectl get csr/myuser -o yaml
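The creation step mentioned above is not shown; a minimal sketch using openssl and the same myuser name as in the example is:
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -subj "/CN=myuser" -out myuser.csr
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  request: $(base64 -w 0 < myuser.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
# After approval, extract and decode the issued certificate
kubectl get csr myuser -o jsonpath='{.status.certificate}' | base64 -d > myuser.crt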
Here are the ClusterRole and RoleBinding that give john permission to create NEW_CRD resources:
kubectl apply -f roleBindingJohn.yaml --as=john
rolebinding.rbac.authorization.k8s.io/john_external-resource-rb created
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: john_crd
  namespace: development-john
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crd-creation
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crd-creation
rules:
- apiGroups: ["kubernetes-client.io/v1"]
  resources: ["NEW_CRD"]
  verbs: ["create"]
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given:
You may use Trivy's documentation.
Task:
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.
Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; use the master node to run Trivy.
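One possible approach is sketched below; <image> and <pod-name> are placeholders for the values returned by the first command.
kubectl config use-context scanner
ssh controlplane
# List the Pods in namespace nato together with the images they use
kubectl get pods -n nato -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'
# Scan each image for HIGH and CRITICAL vulnerabilities
trivy image --severity HIGH,CRITICAL <image>
# Delete every Pod whose image reported HIGH or CRITICAL findings
kubectl delete pod <pod-name> -n nato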
You must complete this task on the following cluster/nodes: Cluster: immutable-cluster
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context immutable-cluster
Context: It is best practice to design containers to be stateless and immutable.
Task:
Inspect Pods running in namespace prod and delete any Pod that is either not stateless or not immutable.
Use the following strict interpretation of stateless and immutable:
1. Pods being able to store data inside containers must be treated as not stateless.
Note: You do not have to worry about whether data is actually being stored inside containers already.
2. Pods being configured to be privileged in any way must be treated as potentially not stateless or not immutable.
Explanation:
Reference: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
https://cloud.google.com/architecture/best-practices-for-operating-containers
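One possible way to inspect the Pods in prod against these two criteria (the jsonpath expressions are just one way of surfacing the relevant fields):
kubectl config use-context immutable-cluster
# Flag privileged containers
kubectl get pods -n prod -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].securityContext.privileged}{"\n"}{end}'
# Flag volumes and volumeMounts that would let a container store data
kubectl get pod <pod-name> -n prod -o yaml | grep -A5 -E 'volumeMounts|volumes:'
# Delete any Pod that is privileged or able to store data inside its containers
kubectl delete pod <pod-name> -n prod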
Use the kubesec Docker image to scan the given YAML manifest, edit and apply the advised changes, and pass with a score of 4 points.
kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
Hint: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
Explanation:
kubesec scan k8s-deployment.yaml
cat <<EOF > kubesec-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubesec scan kubesec-test.yaml
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
kubesec http 8080 &
[1] 12345
{"severity":"info","timestamp":"2019-05-
12T11:58:34.662+0100","caller":"server/server.go:69","message":"Starting HTTP server on port
8080"}
curl -sSX POST --data-binary @test/asset/score-0-cap-sys-admin.yml http://localhost:8080/scan
[
  {
    "object": "Pod/security-context-demo.default",
    "valid": true,
    "message": "Failed with a score of -30 points",
    "score": -30,
    "scoring": {
      "critical": [
        {
          "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",
          "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided"
        },
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege"
        },
        // ...
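To reach the required score, the manifest can be extended along the lines kubesec advises; the fields below (runAsNonRoot, a high runAsUser, dropped capabilities, resource limits) are one combination of checks kubesec rewards, not the only possible one, so re-scan to confirm the score.
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 10001
      capabilities:
        drop: ["ALL"]
# Re-scan and apply once the score meets the target
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
kubectl apply -f kubesec-test.yaml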
Context
A container image scanner is set up on the cluster, but it is not yet fully integrated into the cluster's configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
Task
Given an incomplete configuration in directory /etc/kubernetes/epconfig and a functional container image scanner with HTTPS endpoint https://wakanda.local:8081/image_policy:
1. Enable the necessary plugins to create an image policy
2. Validate the control configuration and change it to an implicit deny
3. Edit the configuration to point to the provided HTTPS endpoint correctly.
Finally, test whether the configuration is working by trying to deploy the vulnerable resource /root/KSSC00202/vulnerable-resource.yml.
Explanation:
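A sketch of what the completed configuration typically looks like; the file names admission_configuration.yaml and kubeconfig.yaml and the CA path are assumptions based on the /etc/kubernetes/epconfig directory given in the task.
# /etc/kubernetes/epconfig/admission_configuration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/epconfig/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false        # implicit deny
# /etc/kubernetes/epconfig/kubeconfig.yaml: point the cluster entry at the scanner
clusters:
- name: image-checker
  cluster:
    certificate-authority: /etc/kubernetes/epconfig/webhook-ca.crt
    server: https://wakanda.local:8081/image_policy
# kube-apiserver flags in /etc/kubernetes/manifests/kube-apiserver.yaml
--enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
--admission-control-config-file=/etc/kubernetes/epconfig/admission_configuration.yaml
# Test: the vulnerable resource should now be rejected
kubectl apply -f /root/KSSC00202/vulnerable-resource.yml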