Linux Foundation CKA Practice Test - Questions Answers, Page 6

Score: 7%

Task

Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.20.1.

Be sure to drain the master node before upgrading it and uncordon it after the upgrade.

You are also expected to upgrade kubelet and kubectl on the master node.

A.
See the solution below.
Suggested answer: A

Explanation:

SOLUTION:

[student@node-1] > ssh ek8s

kubectl cordon k8s-master

kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force

# on apt-based hosts; the yum equivalent would add --disableexcludes=kubernetes
apt-get update && apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00

kubeadm upgrade apply 1.20.1 --etcd-upgrade=false

systemctl daemon-reload

systemctl restart kubelet

kubectl uncordon k8s-master
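
As a quick, optional check, the master should now report VERSION v1.20.1 and no longer show SchedulingDisabled:

kubectl get nodes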

Score: 7%

Task

First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

#backup

ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db

#restore

ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
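
Note: snapshot restore works on the local file and does not need the endpoint or certificate flags. In many environments the restore must be written to a fresh data directory that the etcd static-pod manifest is then pointed at; the directory name below is only an example, not given by the task:

ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-from-backup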

Score: 7%

Task

Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace echo.

Ensure that the new NetworkPolicy allows Pods in namespace my-app to connect to port 9000 of

Pods in namespace echo.

Further ensure that the new NetworkPolicy:

• does not allow access to Pods, which don't listen on port 9000

• does not allow access from Pods, which are not in namespace my-app

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

#network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: echo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # assumes the automatic kubernetes.io/metadata.name namespace label;
    # on clusters without it, label the my-app namespace and match that label instead
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app
    ports:
    - protocol: TCP
      port: 9000

kubectl create -f network.yaml
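
To double-check the selectors and port after creation:

kubectl describe networkpolicy allow-port-from-namespace -n echo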

Score: 7%

Task

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

kubectl get deploy front-end

kubectl edit deploy front-end -o yaml

#port specification named http
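
In the editor, add a named port to the existing nginx container, roughly as follows (only the added fields are shown; the rest of the deployment spec stays as found):

    ports:
    - name: http
      containerPort: 80
      protocol: TCP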

#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP

kubectl create -f service.yaml
kubectl get svc front-end-svc

# alternative, imperative approach:
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort

Score: 7%

Task

Create a new nginx Ingress resource as follows:

• Name: ping

• Namespace: ing-internal

• Exposing service hi on path /hi using service port 5678

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

vi ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678

kubectl create -f ingress.yaml
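
A quick way to confirm the Ingress landed in the right namespace:

kubectl get ingress ping -n ing-internal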

Score: 4%

Task

Scale the deployment presentation to 6 pods.

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

kubectl get deployment

kubectl scale deployment.apps/presentation --replicas=6
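
To confirm the deployment now reports 6/6 replicas:

kubectl get deployment presentation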

Score: 4%

Task

Schedule a pod as follows:

• Name: nginx-kusc00401

• Image: nginx

• Node selector: disk=ssd

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

#node-select.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd

kubectl create -f node-select.yaml
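
To see which node the pod was scheduled to (it should be a node labelled disk=ssd):

kubectl get pod nginx-kusc00401 -o wide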

Score: 4%

Task

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

# count nodes that report a Ready status
kubectl get nodes | grep -i -w ready | wc -l
# count nodes carrying a NoSchedule taint
kubectl describe nodes | grep -i taints | grep -i noschedule | wc -l
# write the difference between the two counts to the file, e.g.:
echo 2 > /opt/KUSC00402/kusc00402.txt
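
A small sketch that derives the number instead of typing it by hand (it assumes every NoSchedule-tainted node also shows up as Ready; adjust if that is not the case on the exam cluster):

ready=$(kubectl get nodes --no-headers | grep -cw Ready)
tainted=$(kubectl describe nodes | grep -i taints | grep -ci noschedule)
echo $((ready - tainted)) > /opt/KUSC00402/kusc00402.txt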

Score: 4%

Task

Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached.

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

kubectl run kucc8 --image=nginx --dry-run=client -o yaml > kucc8.yaml

# vi kucc8.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached

kubectl create -f kucc8.yaml
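
To confirm the pod starts with all three containers:

kubectl get pod kucc8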

Score: 4%

Task

Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadOnlyMany.

The type of volume is hostPath and its location is /srv/app-data.

A.
See the solution below.
Suggested answer: A

Explanation:

Solution:

#vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-data

kubectl create -f pv.yaml
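
To confirm the volume is registered with the requested capacity and access mode:

kubectl get pv app-data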
