
Linux Foundation CKA Practice Test - Questions Answers, Page 3


Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a pod containing a single container of image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.

You can ssh to the appropriate node using:

[student@node-1] $ ssh wk8s-node-1

You can assume elevated privileges on the node with the following command:

[student@wk8s-node-1] $ sudo -i

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

solution
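A rough sketch of one way to complete this task (the manifest filename webtool.yaml is an arbitrary choice, and kubeadm-provisioned kubelets normally already watch /etc/kubernetes/manifests as their staticPodPath):

ssh wk8s-node-1
sudo -i
# Write a static-pod manifest into the directory the kubelet watches.
cat <<EOF > /etc/kubernetes/manifests/webtool.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webtool
spec:
  containers:
  - name: webtool
    image: httpd
EOF
# If the kubelet config (typically /var/lib/kubelet/config.yaml) does not already set
# staticPodPath: /etc/kubernetes/manifests, add that setting and restart the service:
systemctl restart kubelet
# Return to the base node when done.
exit
exit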

For this item, you will have to ssh to the nodes ik8s-master-0 and ik8s-node-0 and complete all tasks on these nodes. Ensure that you return to the base node (hostname: node-1) when you have completed this item.

Context

As an administrator of a small development team, you have been asked to set up a Kubernetes cluster to test the viability of a new application.

Task

You must use kubeadm to perform this task. Any kubeadm invocations will require the use of the --ignore-preflight-errors=all option.

Configure the node ik8s-master-0 as a master node.

Join the node ik8s-node-0 to the cluster.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

solution

You must use the kubeadm configuration file located at /etc/kubeadm.conf when initializing your cluster.

You may use any CNI plugin to complete this task, but if you don't have your favourite CNI plugin's manifest URL at hand, Calico is one popular option:

https://docs.projectcalico.org/v3.14/manifests/calico.yaml

Docker is already installed on both nodes and apt has been configured so that you can install the required tools.
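A minimal command sketch under those assumptions (the master IP, token, and CA-cert hash below are placeholders for the values printed by kubeadm init):

# On ik8s-master-0, as root:
kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all

# As the regular user on the master, make kubectl usable and install a CNI plugin (Calico shown here):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

# On ik8s-node-0, as root, run the join command printed by kubeadm init:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all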

Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster.

Determine the node, the failing service, and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently.

You can ssh to the relevant nodes (bk8s-master-0 or bk8s-node-0) using:

[student@node-1] $ ssh <nodename>

You can assume elevated privileges on any node in the cluster with the following command:

[student@nodename] $ sudo -i

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

solution
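The actual failure must be diagnosed on the cluster itself, but a common scenario for this item is a stopped (and disabled) kubelet on one of the nodes; a hedged diagnostic sketch:

kubectl get nodes                        # from the base node: look for a NotReady node
ssh bk8s-node-0                          # or bk8s-master-0, whichever is unhealthy
sudo -i
systemctl status kubelet                 # confirm which service is failing
journalctl -u kubelet --no-pager | tail -n 20
systemctl enable --now kubelet           # start it and make the change permanent
exit
exit
kubectl get nodes                        # the node should return to Ready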

Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadWriteMany.

The type of volume is hostPath and its location is /srv/app-data.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

solution

Persistent Volume

A persistent volume is a piece of storage in a Kubernetes cluster. PersistentVolumes are a cluster-level resource, like nodes, and do not belong to any namespace. They are provisioned by the administrator with a defined capacity. This way, a developer deploying an application on Kubernetes does not need to know the underlying infrastructure: when the application needs a certain amount of persistent storage, the developer simply claims it from a PersistentVolume the administrator has already provisioned.

Creating Persistent Volume

kind: PersistentVolume
apiVersion: v1
metadata:
  name: app-data
spec:
  capacity:                 # defines the capacity of the PV we are creating
    storage: 2Gi            # the amount of storage being provisioned
  accessModes:              # defines the access rights of the volume we are creating
  - ReadWriteMany
  hostPath:
    path: "/srv/app-data"   # host path backing the volume

Challenge

Create a Persistent Volume named app-data, with access mode ReadWriteMany, storageClassName shared, 2Gi of storage capacity and the host path /srv/app-data.

2. Save the file and create the persistent volume.

3. View the persistent volume.

The persistent volume's status is Available, meaning it has not yet been mounted by a claim. This status will change once we bind it to a PersistentVolumeClaim; the commands for steps 2 and 3 are sketched below.
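A command sketch for steps 2 and 3 (the manifest filename app-data-pv.yaml is an arbitrary choice):

kubectl create -f app-data-pv.yaml
kubectl get pv app-data    # STATUS should read Available until the volume is claimed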

PersistentVolumeClaim

In a real environment, a system administrator creates the PersistentVolume, and a developer then creates a PersistentVolumeClaim that is referenced in a pod. A PersistentVolumeClaim is created by specifying the minimum size and the access mode required from the PersistentVolume.

Challenge

Create a Persistent Volume Claim that requests the Persistent Volume we had created above. The claim should request 2Gi. Ensure that the Persistent Volume Claim has the same storageClassName as the persistentVolume you had previously created.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: shared

2. Save and create the pvc

njerry191@cloudshell:~ (extreme-clone-2654111)$ kubectl create -f app-data.yaml

persistentvolumeclaim/app-data created

3. View the pvc

4. Let's see what has changed in the PV we created earlier.

Its status has now changed from Available to Bound; a short verification is sketched below.
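A verification sketch for steps 3 and 4:

kubectl get pvc app-data    # the claim should show STATUS Bound
kubectl get pv app-data     # the PV should now show STATUS Bound and reference the claim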

5. Create a new pod named myapp with image nginx that will be used to mount the Persistent Volume Claim with the path /var/app/config.

Mounting a Claim

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: myapp
spec:
  volumes:
  - name: configpvc
    persistentVolumeClaim:
      claimName: app-data
  containers:
  - image: nginx
    name: app
    volumeMounts:
    - mountPath: "/var/app/config"
      name: configpvc
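To finish step 5, save the manifest (the filename myapp-pod.yaml is arbitrary), create the pod, and confirm the claim is mounted:

kubectl create -f myapp-pod.yaml
kubectl describe pod myapp | grep -A 2 Mounts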

Create a namespace called 'development' and a pod with image nginx called nginx in this namespace.

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl create namespace development

kubectl run nginx --image=nginx --restart=Never -n development
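A quick check that both objects exist:

kubectl get ns development
kubectl get pod nginx -n development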

Create an nginx pod with the label env=test in the engineering namespace

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl run nginx --image=nginx --restart=Never --labels=env=test --namespace=engineering --dry-run=client -o yaml > nginx-pod.yaml

kubectl run nginx --image=nginx --restart=Never --labels=env=test --namespace=engineering --dry-run=client -o yaml | kubectl create -n engineering -f -

YAML File:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: engineering
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  restartPolicy: Never

kubectl create -f nginx-pod.yaml
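To confirm the pod carries the required label:

kubectl get pods -n engineering --show-labels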

Get a list of all pods in all namespaces and write it to the file "/opt/pods-list.yaml"

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl get po --all-namespaces > /opt/pods-list.yaml
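Since the target file has a .yaml extension, full YAML output may be what is expected; if so, the same command with an explicit output format would be:

kubectl get po --all-namespaces -o yaml > /opt/pods-list.yaml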

Create a pod with image nginx called nginx and allow traffic on port 80

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl run nginx --image=nginx --restart=Never --port=80
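The --port flag only records containerPort: 80 on the container spec; a dry run of the same command shows the manifest it generates without creating anything:

kubectl run nginx --image=nginx --restart=Never --port=80 --dry-run=client -o yaml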

Create a busybox pod that runs the command "env" and save the output to a file named "envpod"

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl run busybox --image=busybox --restart=Never --rm -it -- env > envpod
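If the interactive one-liner exits before its output is captured, an equivalent two-step approach (a sketch) is to run the pod non-interactively and then read its logs:

kubectl run busybox --image=busybox --restart=Never -- env
kubectl logs busybox > envpod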

List the logs of the pod named "frontend", search for the pattern "started", and write the output to a file "/opt/errorlogs"

A.
See the solution below.
Answers
Suggested answer: A

Explanation:

kubectl logs frontend | grep -i "started" > /opt/error-logs
