Docker DCA Practice Test - Questions Answers, Page 13

You created a new service named 'http' and discover it is not registering as healthy. Will this command enable you to view the list of historical tasks for this service?

Solution. 'docker inspect http'

A. Yes

B. No

Suggested answer: B

Explanation:

The command docker inspect http will not enable you to view the list of historical tasks for the service. The docker inspect command returns low-level information on Docker objects, such as containers, images, networks, or volumes [1]. A service is a higher-level object that defines the desired state of a set of tasks [2]; inspecting it shows the service definition, not the tasks it has run. To view the list of historical tasks for a service, you need to use the docker service ps command, which shows the current and previous states of each task, as well as the node, error, and ports [3].

Reference:

docker inspect | Docker Docs

Services | Docker Docs

docker service ps | Docker Docs
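As a quick illustration of the docker service ps approach described above (assuming the service is named http, as in the question):

docker service ps http

docker service ps --no-trunc http

The first command lists the current and historical tasks of the service; the --no-trunc flag shows the full error messages for tasks that failed their health checks.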

A company's security policy specifies that development and production containers must run on separate nodes in a given Swarm cluster. Can this be used to schedule containers to meet the security policy requirements?

Solution. label constraints

A. Yes

B. No

Suggested answer: A

Explanation:

Label constraints can be used to schedule containers to meet the security policy requirements. Label constraints are a way to specify which nodes a service can run on based on the labels assigned to the nodes. Labels are key-value pairs that can be attached to any node in the swarm. For example, you can label nodes as development or production depending on their intended use. Then, you can use the --constraint option when creating or updating a service to filter the nodes based on their labels. For example, to run a service only on development nodes, you can use:

docker service create --constraint 'node.labels.environment == development' ...

To run a service only on production nodes, you can use:

docker service create --constraint 'node.labels.environment == production' ...
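For these constraints to have any effect, the nodes must first carry the corresponding label; a minimal sketch, assuming hypothetical node names worker-1 and worker-2:

docker node update --label-add environment=development worker-1

docker node update --label-add environment=production worker-2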

This way, you can ensure that development and production containers run on separate nodes in the swarm, as required by the security policy.

Reference:

Using placement constraints with Docker Swarm

Multiple label placement constraints in docker swarm

Machine constraints in Docker swarm

How can set service constraint to multiple value

A company's security policy specifies that development and production containers must run on separate nodes in a given Swarm cluster. Can this be used to schedule containers to meet the security policy requirements?

Solution. environment variables

A. Yes

B. No

Suggested answer: B

Explanation:

Environment variables cannot be used to schedule containers to meet the security policy requirements. Environment variables are used to pass configuration data to the containers, not to control where they run [1]. To schedule containers to run on separate nodes in a Swarm cluster, you need to use node labels and service constraints [2][3]. Node labels are key-value pairs that you can assign to nodes to organize them into groups [4]. Service constraints are expressions that you can use to limit the nodes where a service can run based on the node labels. For example, you can label some nodes as env=dev and others as env=prod, and then use the constraint --constraint node.labels.env==dev or --constraint node.labels.env==prod when creating a service to ensure that it runs only on the nodes with the matching label.

Reference:

1: Environment variables in Compose | Docker Docs

2: Deploy services to a swarm | Docker Docs

3: How to use Docker Swarm labels to deploy containers on specific nodes

4: Manage nodes in a swarm | Docker Docs

5: Swarm mode routing mesh | Docker Docs

6: Docker Swarm - How to set environment variables for tasks on various nodes
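To illustrate the difference described above, an environment variable only configures the process inside the container, while a placement constraint controls which nodes may run the task (the service names and label values below are illustrative):

docker service create --name web-dev -e APP_ENV=development nginx

docker service create --name web-prod --constraint 'node.labels.env==prod' nginx

The first service can be scheduled on any node regardless of the variable's value; only the second is restricted to nodes labeled env=prod.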

Will a DTR security scan detect this?

Solution. private keys copied to the image

A. Yes

B. No

Suggested answer: A

Explanation:

A DTR security scan will detect private keys copied to the image. DTR security scan is a feature of Docker Trusted Registry (DTR) that scans images to detect any security vulnerability [1]. DTR security scan uses the open source tool SecretScanner [2] to find unprotected secrets in container images or file systems. SecretScanner can match the contents of images against a database of approximately 140 secret types, including private keys [3]. Therefore, if an image contains private keys, DTR security scan will report them as potential secrets and alert the user to remove them from the image.

Reference:

Scan images for vulnerabilities | Docker Docs

GitHub - deepfence/SecretScanner: :unlock: Find secrets and passwords ...

SecretScanner/deepfence_secret_scanner.py at main · deepfence/SecretScanner
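As an illustration of what such a scan is meant to catch, a Dockerfile that copies a private key into an image layer (the file name here is hypothetical) embeds the secret in the image where a scanner can find it:

FROM alpine:3.19

# The copied key becomes part of an image layer, where a secret scan can detect it
COPY id_rsa /root/.ssh/id_rsa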

Will a DTR security scan detect this?

Solution. image configuration poor practices, such as exposed ports or inclusion of compilers in production images

A. Yes

B. No

Suggested answer: B

Explanation:

A DTR security scan will not detect image configuration poor practices, such as exposed ports or inclusion of compilers in production images. A DTR security scan is designed to discover vulnerabilities in the images based on the MITRE CVE or NIST NVD databases [1]. It does not check the image configuration or best practices. To check the image configuration and best practices, you can use other tools, such as a Dockerfile linter or Docker Bench for Security. Reference: Vulnerability scanning must be enabled for all repositories in the Docker Trusted Registry (DTR) component of Docker Enterprise, Dockerfile Linter, Docker Bench for Security
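For example, a Dockerfile like the following (hypothetical) exposes extra ports and installs a compiler in a production image; a vulnerability scan would not flag these decisions, whereas a Dockerfile linter or Docker Bench for Security can:

FROM ubuntu:22.04

# Shipping a compiler in a production image is a best-practice issue, not a CVE
RUN apt-get update && apt-get install -y gcc

# Exposing unnecessary ports is likewise a configuration concern, not a vulnerability
EXPOSE 22 80 443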

You configure a local Docker engine to enforce content trust by setting the environment variable DOCKER_CONTENT_TRUST=1. If myorg/myimage:1.0 is unsigned, does Docker block this command?

Solution. docker image build, from a Dockerfile that begins FROM myorg/myimage:1.0

A. Yes

B. No

Suggested answer: A

Explanation:

Docker will block this command if you configure the local Docker engine to enforce content trust by setting the environment variable DOCKER_CONTENT_TRUST=1. This means that you can only pull, run, or build with trusted images that have been signed using Docker Content Trust (DCT) [1]. DCT is a feature that allows you to use digital signatures to verify the integrity and the publisher of specific image tags [2]. If myorg/myimage:1.0 is unsigned, it means that it does not have a valid signature from the image publisher or a trusted delegate. Therefore, Docker will not allow you to build an image from a Dockerfile that begins with FROM myorg/myimage:1.0, as it cannot verify the source or the content of the base image. You will get an error message like this:

No valid trust data for 1.0

To avoid this error, you need to either disable DCT by setting DOCKER_CONTENT_TRUST=0, or use a signed image tag as the base image in your Dockerfile [3].

Reference:

Content trust in Docker | Docker Docs

Docker Content Trust: What It Is and How It Secures Container Images

Automation with content trust | Docker Docs
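A minimal sketch of the behaviour described above, using the image name from the question (the tag myapp is arbitrary, and the exact error text may differ between Docker versions):

export DOCKER_CONTENT_TRUST=1

docker image build -t myapp .

With a Dockerfile beginning FROM myorg/myimage:1.0 and no trust data for that tag, the build aborts with an error similar to 'No valid trust data for 1.0'.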

Can this set of commands identify the published port(s) for a container?

Solution. 'docker container inspect', 'docker port'

A. Yes

B. No

Suggested answer: B

Explanation:

The set of commands docker container inspect and docker port cannot identify the published port(s) for a container. The docker container inspect command returns low-level information on a container, such as its ID, name, state, network settings, mounts, etc. [1]. However, its default output does not present the port mappings between the container and the host in a concise, directly usable form [2]. The docker port command lists the port mappings or a specific mapping for a container, but it requires the container name or ID as an argument [3]. Therefore, to identify the published port(s) for a container, you need to combine the commands, for example docker port $(docker container inspect -f '{{.Name}}' CONTAINER) [4].

Reference:

docker container inspect | Docker Docs

How to inspect a running Docker container - Stack Overflow

docker port | Docker Docs

List port for Docker container using command line - Stack Overflow

Can this set of commands identify the published port(s) for a container?

Solution. 'docker port inspect', 'docker container inspect'

A. Yes

B. No

Suggested answer: B

Explanation:

The set of commands docker port inspect and docker container inspect will not identify the published port(s) for a container. The reason is that there is no such command as docker port inspect. The correct command to inspect the port mappings of a container is docker port [1]. The command docker container inspect can also show the port mappings of a container, but it will display a lot of other information as well, so it is not as concise as docker port [2]. To identify the published port(s) for a container, you can use either of these commands:

docker port CONTAINER will list all the port mappings of the container [1].

docker port CONTAINER PRIVATE_PORT will list only the port mapping of the specified private port of the container [1].

docker container inspect --format='{{.NetworkSettings.Ports}}' CONTAINER will list only the port mappings of the container in a structured format [2][3].

For example, if you have a container named web that publishes port 80 to port 8080 on the host, you can use any of these commands to identify the published port:

$ docker port web

80/tcp -> 0.0.0.0:8080

$ docker port web 80

0.0.0.0:8080

$ docker container inspect --format='{{.NetworkSettings.Ports}}' web

map[80/tcp:[map[HostIp:0.0.0.0 HostPort:8080]]]

Reference:

docker port

docker container inspect

How can I grab exposed port from inspecting docker container?

You want to create a container that is reachable from its host's network.

Does this action accomplish this?

Solution. Use either EXPOSE or --publish to access the container on the bridge network.

A. Yes

B. No

Suggested answer: B

Explanation:

Using either EXPOSE or --publish to access the container on the bridge network will not accomplish the goal of creating a container that is reachable from its host's network. EXPOSE is a way of documenting which ports a container listens on, but it does not open any ports to the host [1]. --publish (or -p) is a way of mapping a host port to a container port, but it does not change the network mode of the container [2]. By default, Docker containers use the bridge network, which isolates them from the host network [3]. To create a container that is reachable from its host's network, you need to use the --network host option when running the container [4]. This will make the container share the host's network stack and have the same IP address as the host [4].

Reference:

1: Difference Between "expose" and "publish" in Docker | Baeldung on Ops

2: Deploy services to a swarm | Docker Docs

3: Bridge network | Docker Docs

4: Host network | Docker Docs
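A minimal illustration of the host-network approach described above, using nginx as an example image:

docker run -d --network host nginx

With host networking, nginx is reachable directly on the host's own IP address and port 80, with no EXPOSE or --publish mapping involved.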

Your organization has a centralized logging solution, such as Splunk.

Will this configure a Docker container to export container logs to the logging solution?

Solution. docker system events --filter splunk

A. Yes

B. No

Suggested answer: B

Explanation:

The command docker system events --filter splunk will not configure a Docker container to export container logs to the logging solution. The command docker system events will display real-time events from the Docker daemon, such as container creation, start, stop, etc. The --filter option will filter the events by various criteria, such as type, label, name, etc. However, there is no filter for splunk, and even if there was, it would only show the events related to Splunk, not the container logs. To configure a Docker container to export container logs to Splunk, you need to use the Splunk logging driver, which is a plugin that sends container logs to HTTP Event Collector in Splunk Enterprise and Splunk Cloud. You can use the --log-driver and --log-opt options when creating or running a container to specify the Splunk logging driver and its options, such as the Splunk token, URL, source, sourcetype, index, etc. Alternatively, you can configure the Splunk logging driver as the default logging driver for the Docker daemon by setting the log-driver and log-opts keys in the daemon.json file and restarting Docker.

Reference:

docker system events

Splunk logging driver

How to send Docker containers logs to Splunk?
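A minimal sketch of the per-container approach described above; the token and URL values are placeholders for your own Splunk HTTP Event Collector settings:

docker run -d --log-driver=splunk --log-opt splunk-token=<HEC-token> --log-opt splunk-url=https://splunk.example.com:8088 nginx

The same log-driver and log-opts settings can instead be placed in /etc/docker/daemon.json to make Splunk the default logging driver for all containers.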
