
Docker DCA Practice Test - Questions Answers, Page 17

Which 'docker run' flag lifts cgroup limitations?

A.

'docker run --privileged'

B.

'docker run --cpu-period'

C.

'docker run --isolation'

D.

'docker run --cap-drop'
Suggested answer: A

Explanation:

The --privileged flag lifts all the cgroup limitations for a container, as well as other security restrictions imposed by the Docker daemon [1]. This gives the container full access to the host's devices, resources, and capabilities, as if it were running directly on the host [2]. This can be useful for certain use cases that require elevated privileges, such as running Docker-in-Docker or debugging system issues [3]. However, the --privileged flag also poses a security risk, as it exposes the host to potential attack or damage from the container [4]. Therefore, it should not be used unless absolutely necessary, and only with trusted images and containers.

The other options are not correct because they do not lift all the cgroup limitations for a container, but only affect specific aspects of the container's resource allocation or isolation:

* The --cpu-period flag sets the CPU CFS (Completely Fair Scheduler) period for a container, which is the length of a CPU cycle in microseconds. This flag can be used in conjunction with the --cpu-quota flag to limit the CPU time allocated to a container. However, this flag does not affect other cgroup limitations, such as memory, disk, or network.

* The --isolation flag sets the isolation technology for a container, which is the mechanism that separates the container from the host or other containers. It is only meaningful for Windows containers, where it can be set to default, process, or hyperv. However, this flag does not affect the cgroup limitations for a container, only the level of isolation from the host or other containers.

* The --cap-drop flag drops one or more Linux capabilities for a container, which are the privileges that a process can use to perform certain actions on the system. This flag can be used to reduce the attack surface of a container by removing unnecessary or dangerous capabilities. However, this flag does not affect the cgroup limitations for a container, but only the capabilities granted to the container by the Docker daemon.
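As a sketch, the contrast between lifting all limits and tuning a single limit is visible in the flags themselves. The `docker run` lines below are shown as comments because they need a live Docker daemon (the image name is arbitrary); the executable part works out the CPU fraction implied by the CFS flags:

```shell
# Illustrative invocations (require a Docker daemon; 'alpine' is an arbitrary image):
#   docker run --privileged alpine                           # lifts cgroup limits and most security restrictions
#   docker run --cpu-period=100000 --cpu-quota=50000 alpine  # caps CPU time only
#   docker run --cap-drop=NET_RAW alpine                     # drops a single capability only
# --cpu-quota / --cpu-period determines the fraction of one CPU the container may use:
period=100000   # length of one CFS scheduling cycle, in microseconds
quota=50000     # CPU time the container may consume per cycle, in microseconds
cpu_percent=$(( quota * 100 / period ))
echo "${cpu_percent}% of one CPU"
```

With these example values the container is throttled to half a CPU, which illustrates why --cpu-period touches only one cgroup control rather than lifting them all.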

* Runtime privilege and Linux capabilities

* Docker Security: Using Containers Safely in Production

* Docker run reference

* Docker Security: Are Your Containers Tightly Secured to the Ship? SlideShare

* Secure Engine

* Configure a Pod to Use a Limited Amount of CPU

* Limit a container's resources

* Managing Container Resources

* Isolation modes

* Windows Container Isolation Modes

* Windows Container Version Compatibility

* Docker and Linux Containers

* Docker Security Cheat Sheet

* Docker Security: Using Containers Safely in Production

A Kubernetes node is allocated a /26 CIDR block (64 unique IPs) for its address space.

If every pod on this node has exactly two containers in it, how many pods can this address space support on this node?

A.

-995

B.

64

C.

32 in every Kubernetes namespace

D.

64 for every service routing to pods on this node

E.

32
Suggested answer: E

Explanation:

A Kubernetes node is allocated a /26 CIDR block (64 unique IPs) for its address space. This means that the node can assign up to 64 IP addresses to its resources, such as pods and containers. If every pod on this node has exactly two containers in it, then each pod will need two IP addresses, one for each container. Therefore, the node can support up to 32 pods, since 64 / 2 = 32. The other options are incorrect because they either exceed the available IP addresses or do not account for the number of containers per pod.
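The arithmetic above can be checked directly, following the question's premise of two addresses per pod:

```shell
# A /26 leaves 32 - 26 = 6 host bits, i.e. 2^6 = 64 addresses
addresses=$(( 2 ** (32 - 26) ))
# Two addresses per pod, per the question's premise of one address per container
pods=$(( addresses / 2 ))
echo "$addresses addresses -> $pods pods"
```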

Reference:

* CIDR Blocks and Container Engine for Kubernetes - Oracle

* How kubernetes assigns podCIDR for nodes? - Stack Overflow

You are running only Kubernetes workloads on a worker node that requires maintenance, such as installing patches or an OS upgrade.

Which command must be run on the node to gracefully terminate all pods on the node, while marking the node as unschedulable?

A.

'docker swarm leave'

B.

'docker node update --availability drain <node name>'

C.

'kubectl drain <node name>'

D.

'kubectl cordon <node name>'
Suggested answer: C

Explanation:

The command kubectl drain <node name> is the correct one to run to gracefully terminate all pods on the node while marking the node as unschedulable. It safely evicts all the pods from the node before you perform maintenance, such as installing patches or an OS upgrade [1]. It respects any PodDisruptionBudgets you have specified and allows the pods' containers to terminate gracefully [1]. It also marks the node as unschedulable, so that no new pods can be scheduled on it until it is uncordoned [1].

The other commands are not correct because:

* docker swarm leave makes the node leave the swarm cluster, but does not affect the Kubernetes workloads on the node [2].

* docker node update --availability drain <node name> changes the availability of the node to drain, so that no new Swarm tasks are assigned to it, but it does not terminate the existing Kubernetes pods on the node [3].

* kubectl cordon <node name> marks the node as unschedulable, but does not evict the pods already running on it [4].
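A typical maintenance sequence, as a sketch (assumes kubectl is pointed at the cluster; the node name is a placeholder):

```shell
kubectl drain worker-1 --ignore-daemonsets   # cordon + gracefully evict pods
# ... install patches / upgrade the OS ...
kubectl uncordon worker-1                    # make the node schedulable again
```

The --ignore-daemonsets flag is commonly needed because DaemonSet pods cannot be evicted in the usual way and would otherwise block the drain.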

* Safely Drain a Node | Kubernetes

* docker swarm leave | Docker Docs

* docker node update | Docker Docs

* kubectl cordon | Kubernetes Docs

Which networking drivers allow you to enable multi-host network connectivity between containers?

A.

macvlan, ipvlan, and overlay

B.

bridge, user-defined, host

C.

host, macvlan, overlay, user-defined

D.

bridge, macvlan, ipvlan, overlay
Suggested answer: D

Explanation:

The networking drivers that allow you to enable multi-host network connectivity between containers are bridge, macvlan, ipvlan, and overlay. These drivers can create networks that span multiple Docker hosts, enabling containers on different hosts to communicate with each other. The other drivers, such as host, user-defined, and none, create networks that are either isolated or limited to a single host. Here is a brief overview of each driver and how it supports multi-host networking:

* bridge: The bridge driver creates a network that connects containers on the same host using a Linux bridge. However, it can also be used to create a network that connects containers across multiple hosts using an external key-value store, such as Consul, Etcd, or ZooKeeper. This feature is deprecated and not recommended, as it requires manual configuration and has some limitations. The preferred driver for multi-host networking is overlay [1].

* macvlan: The macvlan driver creates a network that assigns a MAC address to each container, making it appear as a physical device on the network. This allows the containers to communicate with other devices on the same network, regardless of the host they are running on. The macvlan driver can also use 802.1q trunking to create sub-interfaces and isolate traffic between different networks [2].

* ipvlan: The ipvlan driver creates a network that assigns an IP address to each container, making it appear as a logical device on the network. This allows the containers to communicate with other devices on the same network, regardless of the host they are running on. The ipvlan driver can also use different modes, such as l2, l3, or l3s, to control the routing and isolation of traffic between different networks [3].

* overlay: The overlay driver creates a network that connects multiple Docker daemons together using VXLAN tunnels. This allows the containers to communicate across different hosts, even if they are on different networks. The overlay driver also supports encryption, load balancing, and service discovery. The overlay driver is the default and recommended driver for multi-host networking, especially for Swarm services [4].
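For example, creating the two most common multi-host networks is a one-liner each (a sketch; network names, subnet, and parent interface are placeholders, and a running daemon — a Swarm manager, in the overlay case — is assumed):

```shell
docker network create -d overlay --attachable my-overlay   # spans all Swarm hosts via VXLAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan                                # containers appear directly on the LAN
```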

* Use bridge networks

* Use macvlan networks

* Use ipvlan networks

* Use overlay networks

You want to mount external storage to a particular filesystem path in a container in a Kubernetes pod.

What is the correct set of objects to use for this?

A.

a persistentVolume in the pod specification, populated with a persistentVolumeClaim which is bound to a volume defined by a storageClass

B.

a storageClass in the pod's specification, populated with a volume which is bound to a provisioner defined by a persistentVolume

C.

a volume in the pod specification, populated with a storageClass which is bound to a provisioner defined by a persistentVolume

D.

a volume in the pod specification, populated with a persistentVolumeClaim bound to a persistentVolume defined by a storageClass
Suggested answer: D

Explanation:

In Kubernetes, to mount external storage to a filesystem path in a container within a pod, you use a volume in the pod specification. This volume is populated with a persistentVolumeClaim that is bound to a persistentVolume. The persistentVolume is provisioned, dynamically or statically, according to a storageClass, which determines what type of storage is provided [1].
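A minimal sketch of that object chain (names, image, size, and the storage class are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical claim name
spec:
  storageClassName: standard  # storageClass that provisions the persistentVolume
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data              # mounts the volume at a filesystem path
          mountPath: /var/lib/data
  volumes:
    - name: data                  # volume in the pod specification ...
      persistentVolumeClaim:
        claimName: data-claim     # ... populated with the persistentVolumeClaim
```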

Reference:

* Dynamic Volume Provisioning | Kubernetes

Is this a supported user authentication method for Universal Control Plane?

Solution: Docker ID

A.

Yes

B.

No
Suggested answer: B

Explanation:

Docker Universal Control Plane (UCP) has its own built-in authentication mechanism and integrates with LDAP services. It also has role-based access control (RBAC), so that you can control who can access and make changes to your cluster and applications. However, Docker ID is not listed as a supported user authentication method for UCP.

A company's security policy specifies that development and production containers must run on separate nodes in a given Swarm cluster.

Can this be used to schedule containers to meet the security policy requirements?

Solution: node affinities

A.

Yes

B.

No
Suggested answer: A

Explanation:

Node affinities provide granular control over where containers are scheduled, based on the labels of the nodes. In the context of Docker Swarm, this means you can use node affinities (expressed as placement constraints against node labels) to ensure that development and production containers are scheduled on separate nodes, meeting the company's security policy requirements.

You created a new service named 'http' and discover it is not registering as healthy. Will this command enable you to view the list of historical tasks for this service?

Solution: 'docker ps http'

A.

Yes

B.

No
Suggested answer: B

Explanation:

The command 'docker ps http' is not the correct command to view the list of historical tasks for a service in Docker. The 'docker ps' command is used to list containers. To view the list of historical tasks for a service, use the 'docker service ps' command, which lists the tasks that are running as part of the specified services and also shows the task history. Therefore, to view the list of historical tasks for the 'http' service, you should use 'docker service ps http'.

The Kubernetes yaml shown below describes a clusterIP service.

Is this a correct statement about how this service routes requests?

Solution: Traffic sent to the IP of this service on port 80 will be routed to port 8080 in a random pod with the label app: nginx.

A.

Yes

B.

No
Suggested answer: B

Explanation:

The statement is not entirely correct. In Kubernetes, a service of type ClusterIP routes traffic sent to its IP address to the pods selected by its label selector. However, the port to which the traffic is routed in the pod is determined by the targetPort specified in the service definition. If targetPort is not specified, it defaults to the same value as the port field. In the YAML snippet provided, no targetPort is specified for port 80, so we cannot confirm that traffic will be routed to port 8080 in the pod. Therefore, without additional information about the pod configuration, the statement cannot be verified.
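For reference, a ClusterIP service in which the two ports differ must say so explicitly (a sketch; if targetPort were omitted here, traffic to port 80 would go to port 80 in the pod):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx          # route to pods carrying this label
  ports:
    - port: 80          # port on the service's cluster IP
      targetPort: 8080  # port in the pod; defaults to 'port' when omitted
```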

Is this a way to configure the Docker engine to use a registry without a trusted TLS certificate?

Solution: List insecure registries in the 'daemon.json' configuration file under the 'insecure-registries' key.

A.

Yes

B.

No
Suggested answer: A

Explanation:

Docker allows the use of insecure registries through a specific configuration of the Docker daemon. By listing the insecure registries in the 'daemon.json' configuration file under the 'insecure-registries' key, Docker can interact with these registries even without a trusted TLS certificate. This is particularly useful when setting up a private Docker registry. However, this configuration bypasses the security provided by TLS and should be used with caution.
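A minimal '/etc/docker/daemon.json' sketch (the registry host and port are placeholders, and the daemon must be restarted for the change to take effect):

```json
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
```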
