Linux Foundation CKAD Practice Test - Questions Answers, Page 2

Context

As a Kubernetes application developer you will often find yourself needing to update a running application.

Task

Please complete the following:

• Update the app deployment in the kdpd00202 namespace with a maxSurge of 5% and a maxUnavailable of 2%

• Perform a rolling update of the web1 deployment, changing the lfccncf/nginx image version to 1.13

• Roll back the app deployment to the previous version

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
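A minimal sketch of commands that would satisfy the three bullet points (the web1 container is assumed to also be named web1; confirm with kubectl -n kdpd00202 describe deployment web1 before substituting):

# Set the rolling-update parameters on the app deployment
kubectl -n kdpd00202 patch deployment app -p \
  '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":"5%","maxUnavailable":"2%"}}}}'

# Roll out the new image on the web1 deployment
kubectl -n kdpd00202 set image deployment/web1 web1=lfccncf/nginx:1.13

# Roll the app deployment back to the previous revision
kubectl -n kdpd00202 rollout undo deployment/app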

Context

You have been tasked with scaling an existing deployment for availability, and creating a service to expose the deployment within your infrastructure.

Task

Start with the deployment named kdsn00101-deployment, which has already been deployed to the namespace kdsn00101. Edit it to:

• Add the func=webFrontEnd key/value label to the pod template metadata to identify the pod for the service definition

• Have 4 replicas

Next, create and deploy in namespace kdsn00101 a service that accomplishes the following:

• Exposes the service on TCP port 8080

• Is mapped to the pods defined by the specification of kdsn00101-deployment

• Is of type NodePort

• Has a name of cherry

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
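A sketch of one way to complete this (a service created with kubectl expose inherits the deployment's selector, so it will target the kdsn00101-deployment pods):

# Add the label to the pod template and set the replica count
kubectl -n kdsn00101 edit deployment kdsn00101-deployment
#   spec:
#     replicas: 4
#     template:
#       metadata:
#         labels:
#           func: webFrontEnd

# Create the NodePort service named cherry on TCP port 8080
kubectl -n kdsn00101 expose deployment kdsn00101-deployment \
  --name=cherry --type=NodePort --port=8080 --protocol=TCP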

Context

A container within the poller pod is hard-coded to connect to the nginxsvc service on port 90. As this port changes to 5050, an additional container needs to be added to the poller pod to adapt the connection to the new port. This should be realized as an ambassador container within the pod.

Task

• Update the nginxsvc service to serve on port 5050.

• Add an HAproxy container named haproxy bound to port 90 to the poller pod and deploy the enhanced pod. Use the image haproxy and inject the configuration located at

/opt/KDMC00101/haproxy.cfg, with a ConfigMap named haproxy-config, mounted into the container so that haproxy.cfg is available at /usr/local/etc/haproxy/haproxy.cfg. Ensure that you update the args of the poller container to connect to localhost instead of nginxsvc so that the connection is correctly proxied to the new service endpoint. You must not modify the port of the endpoint in poller's args. The spec file used to create the initial poller pod is available in

/opt/KDMC00101/poller.yaml

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 90

This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:

kubectl apply -f ./run-my-nginx.yaml

kubectl get pods -l run=my-nginx -o wide

NAME READY STATUS RESTARTS AGE IP NODE

my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m

my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd

Check your pods' IPs:

kubectl get pods -l run=my-nginx -o yaml | grep podIP

podIP: 10.244.3.4

podIP: 10.244.2.5
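
The YAML and output above are the generic nginx example; a sketch of the ambassador setup for the actual task follows (the poller image and args shown are placeholders, since the original spec in /opt/KDMC00101/poller.yaml is not reproduced here):

# 1. Change the service port from 90 to 5050
kubectl edit service nginxsvc        # set spec.ports[0].port: 5050

# 2. Create the ConfigMap from the provided configuration
kubectl create configmap haproxy-config --from-file=/opt/KDMC00101/haproxy.cfg

# 3. Add the ambassador container and the ConfigMap volume to poller.yaml,
#    point the poller args at localhost (keeping the original port), then re-create the pod
apiVersion: v1
kind: Pod
metadata:
  name: poller
spec:
  volumes:
  - name: haproxy-config
    configMap:
      name: haproxy-config
  containers:
  - name: poller
    image: poller                      # placeholder; keep the original image
    args: ["localhost:90"]             # placeholder; original args with nginxsvc replaced by localhost
  - name: haproxy
    image: haproxy
    ports:
    - containerPort: 90
    volumeMounts:
    - name: haproxy-config
      mountPath: /usr/local/etc/haproxy   # the haproxy.cfg key is served at /usr/local/etc/haproxy/haproxy.cfg

kubectl delete pod poller
kubectl apply -f /opt/KDMC00101/poller.yaml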

Context

Developers occasionally need to submit pods that run periodically.

Task

Follow the steps below to create a pod that will start at a predetermined time and which runs to completion only once each time it is started:

• Create a YAML formatted Kubernetes manifest /opt/KDPD00301/periodic.yaml that runs the following shell command: date in a single busybox container. The command should run every minute and must complete within 22 seconds or be terminated by Kubernetes. The CronJob name and container name should both be hello

• Create the resource in the above manifest and verify that the job executes successfully at least once

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
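A minimal manifest matching the description (apiVersion batch/v1 assumes a recent cluster; older clusters use batch/v1beta1):

# /opt/KDPD00301/periodic.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"          # run every minute
  jobTemplate:
    spec:
      activeDeadlineSeconds: 22    # terminate the job if it exceeds 22 seconds
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date"]
          restartPolicy: Never

Create it and verify at least one successful run:

kubectl create -f /opt/KDPD00301/periodic.yaml
kubectl get jobs --watch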

Context

Task

You have rolled out a new pod to your infrastructure and now you need to allow it to communicate with the web and storage pods but nothing else. Given the running pod kdsn00201-newpod, edit it to use a network policy that will allow it to send and receive traffic only to and from the web and storage pods.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
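
The policy above is a generic example (mysql/payroll); a sketch adapted to the task, assuming the web and storage pods carry labels such as name: web and name: storage and that kdsn00201-newpod is selected by a matching label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kdsn00201-newpod-policy
spec:
  podSelector:
    matchLabels:
      name: kdsn00201-newpod        # assumed label on the new pod
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: web
    - podSelector:
        matchLabels:
          name: storage
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: web
    - podSelector:
        matchLabels:
          name: storage

If the cluster already defines policies that select pods by label, the expected edit may instead be to add the matching labels to kdsn00201-newpod with kubectl label pod.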


Context

A user has reported an application is unreachable due to a failing livenessProbe.

Task

Perform the following tasks:

• Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in the format:

The output file has already been created

• Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command

• Fix the issue.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:

Create the Pod:

kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml

Within 30 seconds, view the Pod events:

kubectl describe pod liveness-exec

The output indicates that no liveness probes have failed yet:

FirstSeen LastSeen Count From SubobjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned livenessexec

to worker0

23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image

"gcr.io/google_containers/busybox"

23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"

23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]

23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e

After 35 seconds, view the Pod events again:

kubectl describe pod liveness-exec

At the bottom of the output, there are messages indicating that the liveness probes have failed, and

the containers have been killed and recreated.

FirstSeen LastSeen Count From SubobjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned livenessexec

to worker0

36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image

"gcr.io/google_containers/busybox"

36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully

pulled image "gcr.io/google_containers/busybox"

36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]

36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e

2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

Wait another 30 seconds, and verify that the Container has been restarted:

kubectl get pod liveness-exec

The output shows that RESTARTS has been incremented:

NAME READY STATUS RESTARTS AGE

liveness-exec 1/1 Running 1 1m
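
For the file-output steps of the original task, commands along these lines would apply (pod name and namespace depend on what is actually broken in the cluster):

# Locate the failing pod
kubectl get pods --all-namespaces

# Record its name and namespace in the required format
echo '<pod-name> <namespace>' > /opt/KDOB00401/broken.txt

# Capture the associated error events
kubectl get events -n <namespace> -o wide | grep <pod-name> > /opt/KDOB00401/error.txt

# Probe fields cannot be edited on a running pod: export the spec, fix the probe, re-create
kubectl -n <namespace> get pod <pod-name> -o yaml > broken-pod.yaml
# ...correct the livenessProbe in broken-pod.yaml...
kubectl -n <namespace> delete pod <pod-name>
kubectl -n <namespace> create -f broken-pod.yaml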

Context

A project that you are working on has a requirement for persistent data to be available.

Task

To facilitate this, perform the following tasks:

• Create a file on node sk8s-node-0 at /opt/KDSP00101/data/index.html with the content Acct=Finance

• Create a PersistentVolume named task-pv-volume using hostPath and allocate 1Gi to it, specifying that the volume is at /opt/KDSP00101/data on the cluster's node. The configuration should specify the access mode of ReadWriteOnce. It should define the StorageClass name exam for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.

• Create a PersistentVolumeClaim named task-pv-claim that requests a volume of at least 100Mi and specifies an access mode of ReadWriteOnce

• Create a pod that uses the PersistentVolumeClaim as a volume, with a label app: my-storage-app, mounting the resulting volume to a mountPath /usr/share/nginx/html inside the pod

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
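A sketch of the manifests involved (the nginx image is an assumption suggested by the mountPath; file names are illustrative):

# On sk8s-node-0
mkdir -p /opt/KDSP00101/data
echo 'Acct=Finance' > /opt/KDSP00101/data/index.html

# PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: exam
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /opt/KDSP00101/data

# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: exam
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

# Pod
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
  labels:
    app: my-storage-app
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    volumeMounts:
    - name: task-pv-storage
      mountPath: /usr/share/nginx/html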

Context

Given a container that writes a log file in format A and a container that converts log files from format A to format B, create a deployment that runs both containers such that the log files from the first container are converted by the second container, emitting logs in format B.

Task:

• Create a deployment named deployment-xyz in the default namespace, that:

• Includes a primary lfccncf/busybox:1 container, named logger-dev

• Includes a sidecar lfccncf/fluentd:v0.12 container, named adapter-zen

• Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted

• Instructs the logger-dev container to run the command

which should output logs to /tmp/log/input.log in plain text format, with example values:

• The adapter-zen sidecar container should read /tmp/log/input.log and output the data to

/tmp/log/output.* in Fluentd JSON format. Note that no knowledge of Fluentd is required to complete this task: all you will need to achieve this is to create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adapter-zen sidecar container

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
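The logger command is not reproduced above, so the following is only a sketch of the deployment's shape (the command value and the ConfigMap name are placeholders; the real name comes from /opt/KDMC00102/fluentd-configmap.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-xyz
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-xyz
  template:
    metadata:
      labels:
        app: deployment-xyz
    spec:
      volumes:
      - name: log-volume
        emptyDir: {}                    # shared, not persisted after pod deletion
      - name: fluentd-config
        configMap:
          name: fluentd-config          # placeholder; use the name from the provided spec
      containers:
      - name: logger-dev
        image: lfccncf/busybox:1
        command: ["/bin/sh", "-c", "..."]   # the command given in the task; not reproduced here
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
      - name: adapter-zen
        image: lfccncf/fluentd:v0.12
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
        - name: fluentd-config
          mountPath: /fluentd/etc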

Context

Task

A Deployment named backend-deployment in namespace staging runs a web application on port 8081.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:

Context

Task:

Update the Deployment app-1 in the frontend namespace to use the existing ServiceAccount app.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

Solution:
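A sketch of one way to do this; kubectl set serviceaccount updates spec.template.spec.serviceAccountName and triggers a new rollout:

kubectl -n frontend set serviceaccount deployment app-1 app

# Equivalent manual edit:
kubectl -n frontend edit deployment app-1
#   spec:
#     template:
#       spec:
#         serviceAccountName: app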
