
Microsoft AZ-204 Practice Test - Questions Answers, Page 14

HOTSPOT

You develop a containerized application. You plan to deploy the application to a new Azure Container instance by using a third-party continuous integration and continuous delivery (CI/CD) utility.

The deployment must be unattended and include all application assets. The third-party utility must only be able to push and pull images from the registry. The authentication must be managed by Azure Active Directory (Azure AD). The solution must use the principle of least privilege.

You need to ensure that the third-party utility can access the registry.

Which authentication options should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 131
Correct answer: (hotspot answer image; see explanation)

Explanation:

Box 1: Service principal

Applications and container orchestrators can perform unattended, or "headless," authentication by using an Azure Active Directory (Azure AD) service principal.

Incorrect Answers:

An individual Azure AD identity requires interactive sign-in, so it does not support unattended push/pull.

A repository-scoped access token is not integrated with an Azure AD identity.

A managed identity for Azure resources is used to authenticate to an Azure container registry from another Azure resource, so it does not apply to a third-party utility running outside Azure.

Box 2: AcrPush

AcrPush provides pull/push permissions only and meets the principle of least privilege.

Incorrect Answers:

AcrPull allows only pull permissions; it does not allow push.

Owner and Contributor allow pull/push permissions but do not meet the principle of least privilege, because they also grant broader management rights.

Reference:

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication?tabs=azure-cli

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-roles?tabs=azure-cli
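In practice, a third-party CI/CD utility typically signs in with docker login, supplying the service principal's application ID as the user name and its secret as the password. The following is only a sketch, assuming the Azure.Identity and Azure.Containers.ContainerRegistry .NET packages, of the same headless service-principal authentication from code; the registry URL, tenant ID, client ID, and secret are placeholders.

using System;
using Azure.Identity;
using Azure.Containers.ContainerRegistry;

class AcrServicePrincipalDemo
{
    static void Main()
    {
        // Azure AD service principal credentials (placeholders).
        var credential = new ClientSecretCredential(
            tenantId: "<tenant-id>",
            clientId: "<sp-application-id>",
            clientSecret: "<sp-secret>");

        // Unattended ("headless") authentication to the registry; no user interaction required.
        var client = new ContainerRegistryClient(
            new Uri("https://contosoregistry.azurecr.io"), credential);

        // A simple data-plane call to confirm the service principal can reach the registry.
        // The operations actually allowed depend on the role assigned (AcrPush in this scenario).
        foreach (string repository in client.GetRepositoryNames())
        {
            Console.WriteLine(repository);
        }
    }
}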

You deploy an Azure App Service web app. You create an app registration for the app in Azure Active Directory (Azure AD) and Twitter.

The app must authenticate users and must use SSL for all communications. The app must use Twitter as the identity provider.

You need to validate the Azure AD request in the app code.

What should you validate?

A. ID token header

B. ID token signature

C. HTTP response code

D. Tenant ID
Suggested answer: A

Explanation:

Reference:

https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app?tabs=dotnet

A development team is creating a new REST API. The API will store data in Azure Blob storage. You plan to deploy the API to Azure App Service.

Developers must access the Azure Blob storage account to develop the API for the next two months. The Azure Blob storage account must not be accessible by the developers after the two-month time period.

You need to grant developers access to the Azure Blob storage account.

What should you do?

A. Generate a shared access signature (SAS) for the Azure Blob storage account and provide the SAS to all developers.

B. Create and apply a new lifecycle management policy to include a last accessed date value. Apply the policy to the Azure Blob storage account.

C. Provide all developers with the access key for the Azure Blob storage account. Update the API to include the Coordinated Universal Time (UTC) timestamp for the request header.

D. Grant all developers access to the Azure Blob storage account by assigning role-based access control (RBAC) roles.
Suggested answer: A

Explanation:

Reference:

https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview
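The explanation is limited to the reference above, so the following is only a rough sketch, assuming the Azure.Storage .NET SDK (Azure.Storage.Sas namespace), of how an account SAS with a two-month expiry could be generated; the account name and key are placeholders.

using System;
using Azure.Storage;
using Azure.Storage.Sas;

class SasDemo
{
    static void Main()
    {
        // Placeholder storage account name and key.
        var credential = new StorageSharedKeyCredential("contosostorage", "<account-key>");

        var sasBuilder = new AccountSasBuilder
        {
            Services = AccountSasServices.Blobs,            // Blob service only
            ResourceTypes = AccountSasResourceTypes.All,    // service, container, and object operations
            ExpiresOn = DateTimeOffset.UtcNow.AddMonths(2)  // access expires after the two-month period
        };
        sasBuilder.SetPermissions(AccountSasPermissions.Read |
                                  AccountSasPermissions.Write |
                                  AccountSasPermissions.List);

        // Append the token to the account URL and hand the result to the developers.
        string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
        Console.WriteLine($"https://contosostorage.blob.core.windows.net/?{sasToken}");
    }
}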

DRAG DROP

An organization plans to deploy Azure storage services.

You need to configure shared access signature (SAS) for granting access to Azure Storage.

Which SAS types should you use? To answer, drag the appropriate SAS types to the correct requirements. Each SAS type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


Question 134
Correct answer: (drag-and-drop answer image; see reference)

Explanation:

Reference:

https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview

You are developing an ASP.NET Core Web API web service. The web service uses Azure Application Insights for all telemetry and dependency tracking. The web service reads and writes data to a database other than Microsoft SQL Server.

You need to ensure that dependency tracking works for calls to the third-party database.

Which two dependency telemetry properties should you use? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Telemetry.Context.Cloud.RoleInstance

B. Telemetry.Id

C. Telemetry.Name

D. Telemetry.Context.Operation.Id

E. Telemetry.Context.Session.Id
Suggested answer: B, D

Explanation:

Example (the closing send and StopOperation calls are added to complete the snippet):

public async Task Enqueue(string payload)
{
    // StartOperation is a helper method that initializes the telemetry item
    // and allows correlation of this operation with its parent and children.
    var operation = telemetryClient.StartOperation<DependencyTelemetry>("enqueue " + queueName);
    operation.Telemetry.Type = "Azure Service Bus";
    operation.Telemetry.Data = "Enqueue " + queueName;

    var message = new BrokeredMessage(payload);

    // A Service Bus queue allows a property bag to travel along with the message.
    // Use it to pass the correlation identifiers (and other context) to the consumer.
    message.Properties.Add("ParentId", operation.Telemetry.Id);
    message.Properties.Add("RootId", operation.Telemetry.Context.Operation.Id);

    // Send the message and mark the dependency operation as complete.
    await queue.SendAsync(message);
    telemetryClient.StopOperation(operation);
}

Reference:

https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-operations-tracking
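The snippet above covers the producer side only. As a complementary sketch (not part of the original answer), the consumer can read the two identifiers back out of the property bag and pass them to StartOperation, which correlates the processing request with the enqueue dependency; telemetryClient and queueName are class fields as in the snippet above, and ProcessPayloadAsync is a hypothetical handler.

public async Task Process(BrokeredMessage message)
{
    // Recover the correlation identifiers stored by the producer.
    var rootId = message.Properties["RootId"] as string;
    var parentId = message.Properties["ParentId"] as string;

    // StartOperation(name, operationId, parentOperationId) stitches this request
    // telemetry into the same end-to-end operation as the enqueue dependency.
    var operation = telemetryClient.StartOperation<RequestTelemetry>("process " + queueName, rootId, parentId);
    try
    {
        await ProcessPayloadAsync(message); // hypothetical business logic
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}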

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You are developing and deploying several ASP.NET web applications to Azure App Service. You plan to save session state information and HTML output.

You must use a storage mechanism with the following requirements:

Share session state across all ASP.NET web applications.

Support controlled, concurrent access to the same session state data for multiple readers and a single writer.

Save full HTTP responses for concurrent requests.

You need to store the information.

Proposed Solution: Enable Application Request Routing (ARR).

Does the solution meet the goal?

A. Yes

B. No
Suggested answer: B

Explanation:

Instead, deploy and configure Azure Cache for Redis and update the web applications.

Reference:

https://docs.microsoft.com/en-us/azure/architecture/best-practices/caching#managing-concurrency-in-a-cache

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You are developing and deploying several ASP.NET web applications to Azure App Service. You plan to save session state information and HTML output.

You must use a storage mechanism with the following requirements:

Share session state across all ASP.NET web applications.

Support controlled, concurrent access to the same session state data for multiple readers and a single writer.

Save full HTTP responses for concurrent requests.

You need to store the information.

Proposed Solution: Deploy and configure an Azure Database for PostgreSQL. Update the web applications.

Does the solution meet the goal?

A. Yes

B. No
Suggested answer: B

Explanation:

Instead, deploy and configure Azure Cache for Redis and update the web applications.

Reference:

https://docs.microsoft.com/en-us/azure/architecture/best-practices/caching#managing-concurrency-in-a-cache

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You are developing and deploying several ASP.NET web applications to Azure App Service. You plan to save session state information and HTML output.

You must use a storage mechanism with the following requirements:

Share session state across all ASP.NET web applications.

Support controlled, concurrent access to the same session state data for multiple readers and a single writer.

Save full HTTP responses for concurrent requests.

You need to store the information.

Proposed Solution: Deploy and configure Azure Cache for Redis. Update the web applications.

Does the solution meet the goal?

A. Yes

B. No
Suggested answer: A

Explanation:

The session state provider for Azure Cache for Redis enables you to share session information between different instances of an ASP.NET web application. The same connection can be used by multiple concurrent threads.

Redis supports both read and write operations.

The output cache provider for Azure Cache for Redis enables you to save the HTTP responses generated by an ASP.NET web application.

Note: Using the Azure portal, you can also configure the eviction policy of the cache, and control access to the cache by adding users to the roles provided. These roles, which define the operations that members can perform, include Owner, Contributor, and Reader. For example, members of the Owner role have complete control over the cache (including security) and its contents, members of the Contributor role can read and write information in the cache, and members of the Reader role can only retrieve data from the cache.

Reference:

https://docs.microsoft.com/en-us/azure/architecture/best-practices/caching
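The exam scenario describes classic ASP.NET apps, which would use the RedisSessionStateProvider and the Redis output cache provider configured in web.config. Purely as a hedged sketch of the same idea in ASP.NET Core (assuming the Microsoft.Extensions.Caching.StackExchangeRedis package and a placeholder connection string), the shared Redis-backed cache and session can be wired up like this:

// Program.cs (minimal API style)
var builder = WebApplication.CreateBuilder(args);

// Back the distributed cache with Azure Cache for Redis so every
// web app and instance shares the same session store.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False";
    options.InstanceName = "SharedSession";
});

// Session state is persisted to the shared distributed cache.
builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(20);
    options.Cookie.HttpOnly = true;
});

var app = builder.Build();
app.UseSession();

app.MapGet("/", async context =>
{
    // Example read/write against the shared session.
    context.Session.SetString("lastVisit", DateTime.UtcNow.ToString("O"));
    await context.Response.WriteAsync($"Last visit: {context.Session.GetString("lastVisit")}");
});

app.Run();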

HOTSPOT

You are using Azure Front Door Service.

You are expecting inbound files to be compressed by using Brotli compression. You discover that inbound XML files are not compressed. The files are 9 megabytes (MB) in size.

You need to determine the root cause for the issue.

To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 139
Correct answer: (hotspot answer image; see explanation)

Explanation:

Box 1: No

Front Door can dynamically compress content on the edge, resulting in a smaller and faster response to your clients. However, a file is compressed only if its MIME type is on the list of compression-eligible types and its size is between 1 KB and 8 MB; the 9 MB XML files exceed that limit, so they are not compressed.

Box 2: No

Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve updated assets, for example after an update to your web application or to quickly replace assets that contain incorrect information.

Box 3: Yes

These profiles support the following compression encodings: Gzip (GNU zip) and Brotli.

Reference:

https://docs.microsoft.com/en-us/azure/frontdoor/front-door-caching

HOTSPOT

You are developing an Azure App Service hosted ASP.NET Core web app to deliver video-on-demand streaming media. You enable an Azure Content Delivery Network (CDN) Standard for the web endpoint. Customer videos are downloaded from the web app by using the following example URL: http://www.contoso.com/content.mp4?quality=1

All media content must expire from the cache after one hour. Customer videos with varying quality must be delivered to the closest regional point of presence (POP) node.

You need to configure Azure CDN caching rules.

Which options should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 140
Correct answer: (hotspot answer image; see explanation)

Explanation:

Box 1: Override

Override: Ignore origin-provided cache duration; use the provided cache duration instead. This will not override cache-control: no-cache.

Set if missing: Honor origin-provided cache-directive headers, if they exist; otherwise, use the provided cache duration.

Incorrect:

Bypass cache: Do not cache and ignore origin-provided cache-directive headers.

Box 2: 1 hour

All media content must expire from the cache after one hour.

Box 3: Cache every unique URL

Cache every unique URL: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin server for a request for example.ashx?q=test1 is cached at the POP node and returned for subsequent requests with the same query string. A request for example.ashx?q=test2 is cached as a separate asset with its own time-to-live setting.

Incorrect Answers:

Bypass caching for query strings: In this mode, requests with query strings are not cached at the CDN POP node. The POP node retrieves the asset directly from the origin server and passes it to the requestor with each request.

Ignore query strings: Default mode. In this mode, the CDN point-of-presence (POP) node passes the query strings from the requestor to the origin server on the first request and caches the asset. All subsequent requests for the asset that are served from the POP ignore the query strings until the cached asset expires.

Reference:

https://docs.microsoft.com/en-us/azure/cdn/cdn-query-string
