MuleSoft MCPA - Level 1 Practice Test - Questions Answers, Page 3

When using CloudHub with the Shared Load Balancer, what is managed EXCLUSIVELY by the API implementation (the Mule application) and NOT by Anypoint Platform?

A. The assignment of each HTTP request to a particular CloudHub worker
B. The logging configuration that enables log entries to be visible in Runtime Manager
C. The SSL certificates used by the API implementation to expose HTTPS endpoints
D. The number of DNS entries allocated to the API implementation
Suggested answer: C

Explanation:

Answer: The SSL certificates used by the API implementation to expose HTTPS endpoints

>> The assignment of each HTTP request to a particular CloudHub worker is taken care of by Anypoint Platform itself. We need not manage it explicitly in the API implementation and, in fact, we CANNOT manage it in the API implementation.

>> The logging configuration that enables log entries to be visible in Runtime Manager is ALWAYS managed in the API implementation, not just when using the SLB. So this is not something done EXCLUSIVELY when using the SLB.

>> We DO NOT manage the number of DNS entries allocated to the API implementation inside the code. Anypoint Platform takes care of this.

It is the SSL certificates used by the API implementation to expose HTTPS endpoints that must be managed EXCLUSIVELY by the API implementation. Anypoint Platform does NOT manage these when using SLBs.

Refer to the exhibit.

What is the best way to decompose one end-to-end business process into a collaboration of Experience, Process, and System APIs?

A. Handle customizations for the end-user application at the Process API level rather than the Experience API level
B. Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs
C. Always use a tiered approach by creating exactly one API for each of the 3 layers (Experience, Process and System APIs)
D. Use a Process API to orchestrate calls to multiple System APIs, but NOT to other Process APIs
Suggested answer: B

Explanation:

Answer: Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs.


>> All customizations for the end-user application should be handled in the Experience API only, not in the Process API.

>> We should use a tiered approach, but NOT always by creating exactly one API for each of the 3 layers. There might be a single Experience API, but there are often multiple Process APIs and System APIs.

There will almost always be more than one System API, as System APIs are the smallest modular APIs built in front of the end systems.

>> Process APIs can call System APIs as well as other Process APIs. There is no anti-pattern in API-led connectivity that says Process APIs should not call other Process APIs.

So, the right answer among the given options, per API-led connectivity principles, is to allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs. This way, future Process APIs can make use of that data from the System APIs without the System layer having to be modified again and again.

What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?

A. The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance
B. The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
C. The API policy is defined in API Manager and then automatically applied to ALL API instances
D. The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment
Suggested answer: B

Explanation:

Answer: The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.


>> Once our API specifications are ready and published to Exchange, we need to visit API Manager and register an API instance for each API.

>> API Manager is where API aspects are managed, such as addressing NFRs by enforcing policies.

>> We can create multiple instances of the same API and manage them differently for different purposes.

>> One instance can have one set of API policies applied, and another instance of the same API can have a different set of policies applied for some other purpose.

>> These APIs and their instances are defined per environment, so they need to be managed separately in each environment.

>> A platform feature can promote the same API instance configuration (SLAs, policies, etc.) to higher environments, but this is optional; the configuration can still be changed per environment if needed.

>> Runtime Manager is the place to manage API implementations and their Mule runtimes, but NOT the APIs themselves. Although API policies are executed in Mule runtimes, we CANNOT enforce API policies in Runtime Manager. That must be done via API Manager, for a specific API instance in an environment.

So, based on these facts, the right statement among the given choices is: "The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance."

Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept

An API implementation is deployed to CloudHub.

What conditions can be alerted on using the default Anypoint Platform functionality, where the alert conditions depend on the end-to-end request processing of the API implementation?

A. When the API is invoked by an unrecognized API client
B. When a particular API client invokes the API too often within a given time period
C. When the response time of API invocations exceeds a threshold
D. When the API receives a very high number of API invocations
Suggested answer: C

Explanation:

Answer: When the response time of API invocations exceeds a threshold

>> Alerts can be set up for all the given options using the default Anypoint Platform functionality.

>> However, the question asks for an alert whose conditions depend on the end-to-end request processing of the API implementation.

>> The "response time" alert is the only one that requires end-to-end request processing by the API implementation in order to determine whether the threshold is exceeded.

Reference: https://docs.mulesoft.com/api-manager/2.x/using-api-alerts

A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC.

To what TCP port do API invocations to that Mule application need to be sent?

A. 443
B. 8081
C. 8091
D. 8082
Suggested answer: D

Explanation:

Answer: 8082


>> Ports 8091 and 8092 are to be used when keeping your HTTP and HTTPS app private to the local VPC, respectively.

>> These two ports are not for the shared AWS VPC / Shared Worker Cloud.

>> 8081 is to be used when exposing your HTTP endpoint app to the internet through the Shared Load Balancer.

>> 8082 is to be used when exposing your HTTPS endpoint app to the internet through the Shared Load Balancer. So, API invocations should be sent to port 8082 when calling this HTTPS-based app.
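For illustration only (not part of the original question), here is a minimal Python sketch of an HTTPS call sent directly to the application's worker on port 8082, following the direct worker-to-worker pattern described in the linked help articles. The application name, region, and resource path below are hypothetical.

import requests  # third-party HTTP client, assumed to be installed

# Hypothetical application name, region, and path. The "mule-worker-" host
# targets the CloudHub worker directly instead of going through the Shared
# Load Balancer, and the application's HTTPS listener is reached on port 8082.
url = "https://mule-worker-my-api-impl.us-e1.cloudhub.io:8082/api/status"

response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.status_code, response.text)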

References:

https://docs.mulesoft.com/runtime-manager/cloudhub-networking-guide

https://help.mulesoft.com/s/article/Configure-Cloudhub-Application-to-Send-a-HTTPS-Request-Directly-to-Another-Cloudhub-Application

https://help.mulesoft.com/s/question/0D52T00004mXXULSA4/multiple-http-listerners-oncloudhub-one-with-port-9090

What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?

A. Single sign-on is required to sign in to Anypoint Platform
B. The application network must include System APIs that interact with the Identity Provider
C. To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
D. APIs managed by Anypoint Platform must be protected by SAML 2.0 policies
Suggested answer: C

Explanation:

Answer: To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider

>> It is NOT necessary that single sign-on is required to sign in to Anypoint Platform; platform sign-in is an identity management concern, whereas here the external Identity Provider is used for client management.

>> It is NOT necessary that all APIs managed by Anypoint Platform be protected by SAML 2.0 policies just because an external Identity Provider is used for client management.

>> It is NOT true that the application network must include System APIs that interact with the Identity Provider just because an external Identity Provider is used for client management.

The only TRUE statement among the given options is: "To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider."

References:

https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html

https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy

https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
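As a minimal sketch of the correct option (all endpoints and credentials below are placeholders, and a client-credentials grant is assumed), the API client first obtains an access token from the external Identity Provider and then submits it when invoking the OAuth 2.0-protected API:

import requests  # third-party HTTP client, assumed to be installed

# Placeholder values; the token endpoint belongs to the SAME external
# Identity Provider that Anypoint Platform is configured to use for
# client management.
TOKEN_URL = "https://idp.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/customers"

# Step 1: obtain an access token (client-credentials grant assumed).
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),
    timeout=10,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: invoke the OAuth 2.0-protected API with the token.
api_resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api_resp.status_code)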

The responses to some HTTP requests can be cached depending on the HTTP verb used in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?

A. PUT, POST, DELETE
B. GET, HEAD, POST
C. GET, PUT, OPTIONS
D. GET, OPTIONS, HEAD
Suggested answer: D

Explanation:

Answer: GET, OPTIONS, HEAD

http://restcookbook.com/HTTP%20Methods/idempotency/

What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?

A. Redis distributed cache
B. java.util.WeakHashMap
C. Persistent Object Store
D. File-based storage
Suggested answer: C

Explanation:

Answer: Persistent Object Store


>> Redis distributed cache is performant but is NOT an out-of-the-box solution in Anypoint Platform.

>> File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.

>> java.util.WeakHashMap requires a completely custom cache implementation from scratch in Java code and is limited to the JVM where it is running, which means the cached state is not shared across workers when running on multiple workers. This type of cache is local to each worker. So, it is neither out-of-the-box nor worker-aware across multiple workers on CloudHub.

https://www.baeldung.com/java-weakhashmap

>> Persistent Object Store is an out-of-the-box solution provided by Anypoint Platform which is performant as well as worker aware among multiple workers running on CloudHub.

https://docs.mulesoft.com/object-store/

So, Persistent Object Store is the right answer.
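To make the worker-awareness argument concrete, here is a small conceptual Python sketch (it uses SQLite purely as a stand-in for a shared persistent store and is NOT Anypoint Object Store code): state kept in an in-process map is visible only to the worker that wrote it, while state written to a shared persistent store can be read by any worker.

import sqlite3

# In-process map: analogous to java.util.WeakHashMap -- local to one
# JVM/worker, invisible to other CloudHub workers and lost on restart.
local_state = {}
local_state["txn-123"] = "IN_PROGRESS"

# Shared persistent store: stand-in for a worker-aware Persistent Object
# Store. Any worker that can reach the store sees the same state.
conn = sqlite3.connect("shared_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS txn_state (id TEXT PRIMARY KEY, state TEXT)")
conn.execute(
    "INSERT OR REPLACE INTO txn_state (id, state) VALUES (?, ?)",
    ("txn-123", "IN_PROGRESS"),
)
conn.commit()

# A different worker (or the same one after a restart) can read it back.
row = conn.execute("SELECT state FROM txn_state WHERE id = ?", ("txn-123",)).fetchone()
print(row[0])  # IN_PROGRESS
conn.close()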

How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?

A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remaining Requests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform rate-limit enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
Suggested answer: D

Explanation:

Answer: By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example


References:

https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling#response-headers

https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-basedpolicies#response-headers
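For illustration, a client calling a rate-limited API would see the x-ratelimit-* response headers that the RAML response definitions should document. The header names below are the ones described in the MuleSoft documentation linked above; the API URL is a placeholder.

import requests  # third-party HTTP client, assumed to be installed

resp = requests.get("https://api.example.com/v1/orders", timeout=10)

# Headers added by the rate limiting policy and documented in the RAML
# response definitions (names per the MuleSoft docs referenced above).
print(resp.headers.get("x-ratelimit-limit"))      # requests allowed in the current window
print(resp.headers.get("x-ratelimit-remaining"))  # requests left in the current window
print(resp.headers.get("x-ratelimit-reset"))      # time until the window resets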

An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications.

The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations.

What out-of-the-box Anypoint Platform policy can address exposure to this threat?

A. Shut out bad actors by using HTTPS mutual authentication for all API invocations
B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors
C. Apply a Header injection and removal policy that detects the malicious data before it is used
D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors
Suggested answer: D

Explanation:

Answer: Apply a JSON threat protection policy to all APIs to detect potential threat vectors

>> Usually, if APIs are designed and developed for specific, known consumers/customers, we would whitelist their IPs to ensure that traffic only comes from them.

>> However, as this scenario states that the APIs are publicly available and used by many mobile and web applications, it is NOT possible to identify and blacklist all possible bad actors.

>> So, a JSON threat protection policy is the best option to prevent malicious JSON payloads from such bad actors.
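For illustration only, the Python sketch below constructs the kind of structurally abusive JSON payloads (excessive container depth, oversized arrays) that a JSON threat protection policy is designed to reject by enforcing limits such as maximum container depth and maximum array element count. The specific limits are policy parameters chosen by the API owner, not values from this question.

import json

# Excessive nesting: a "maximum container depth" limit would reject this
# before the API implementation attempts to parse it.
deeply_nested = {"a": None}
node = deeply_nested
for _ in range(5000):
    node["a"] = {"a": None}
    node = node["a"]

# Oversized array: a "maximum array element count" limit guards against this.
huge_array = {"items": ["x"] * 100000}
payload = json.dumps(huge_array)
print(len(payload))  # the size a naive JSON parser would have to materialize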
