MuleSoft MCPA - Level 1 Practice Test - Questions Answers, Page 2

In an organization, the InfoSec team is investigating Anypoint Platform related data traffic.

From where does most of the data available to Anypoint Platform for monitoring and alerting originate?

A. From the Mule runtime or the API implementation, depending on the deployment model

B. From various components of Anypoint Platform, such as the Shared Load Balancer, VPC, and Mule runtimes

C. From the Mule runtime or the API Manager, depending on the type of data

D. From the Mule runtime irrespective of the deployment model
Suggested answer: D

Explanation:

Answer: From the Mule runtime irrespective of the deployment model

*****************************************

>> Monitoring and alerting metrics always originate from Mule runtimes, irrespective of the deployment model.

>> It may seem that some metrics (Runtime Manager) originate from the Mule runtime and others (API invocations / API analytics) from API Manager. However, this is NOT TRUE. API Manager is only a management tool for API instances; every policy applied to an API is ultimately executed on a Mule runtime (either embedded or as an API proxy).

>> Similarly, all API implementations also run on Mule runtimes.

So, most of the data required for monitoring and alerting originates from Mule runtimes only, irrespective of whether the deployment model is MuleSoft-hosted, customer-hosted, or hybrid.

When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because that downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API.

Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?

A. An SLA for the upstream API CANNOT be provided

B. The invocation of the downstream API will run to completion without timing out

C. A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes

D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes
Suggested answer: A

Explanation:

Answer: An SLA for the upstream API CANNOT be provided.

*****************************************

>> First things first: the default HTTP response timeout for the HTTP connector is 10000 ms (10 seconds), NOT 500 ms.

>> The Mule runtime does NOT apply any such "load-dependent" timeouts; there is no such behavior in Mule.

>> Because the HTTP connector has a default 10000 ms response timeout, we CANNOT always guarantee that the invocation of the downstream API will run to completion without timing out, given its unreliable SLA. If the response time exceeds 10 seconds, the request may time out (see the configuration sketch below).

The main impact of this advice is that a proper SLA for the upstream API CANNOT be provided.

Reference: https://docs.mulesoft.com/http-connector/1.5/http-documentation#parameters-3
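
For illustration only, the sketch below shows a minimal Mule 4 HTTP requester configuration in which the responseTimeout attribute overrides the 10000 ms default; the host, path, and flow name are hypothetical placeholders, not part of the original question.

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">

  <http:request-config name="downstreamApiConfig">
    <http:request-connection host="downstream.example.com" port="443" protocol="HTTPS"/>
  </http:request-config>

  <flow name="upstream-api-main">
    <!-- Without responseTimeout, the HTTP connector default of 10000 ms applies -->
    <http:request config-ref="downstreamApiConfig" method="GET" path="/orders"
                  responseTimeout="30000"/>
  </flow>
</mule>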

What best explains the use of auto-discovery in API implementations?

A. It makes API Manager aware of API implementations and hence enables it to enforce policies

B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform

C. It enables Anypoint Exchange to discover assets and makes them available for reuse

D. It enables Anypoint Analytics to gain insight into the usage of APIs
Suggested answer: A

Explanation:

Answer: It makes API Manager aware of API implementations and hence enables it to enforce policies.

*****************************************

>> API Autodiscovery is a mechanism that manages an API from API Manager by pairing the deployed application to an API created on the platform.

>> API Management includes tracking, enforcing policies if you apply any, and reporting API analytics.

>> Critical to the Autodiscovery process is identifying the API by providing the API name and version, as sketched below.
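
For illustration only, a minimal, hypothetical Mule 4 autodiscovery configuration follows; the apiId property, flow name, and listener settings are placeholders. It pairs the deployed application's main flow with the API instance created in API Manager so that policies and analytics can be applied.

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:api-gateway="http://www.mulesoft.org/schema/mule/api-gateway">

  <http:listener-config name="httpListenerConfig">
    <http:listener-connection host="0.0.0.0" port="8081"/>
  </http:listener-config>

  <!-- apiId references the API instance created in API Manager (placeholder property) -->
  <api-gateway:autodiscovery apiId="${api.id}" flowRef="promotions-api-main"/>

  <flow name="promotions-api-main">
    <http:listener config-ref="httpListenerConfig" path="/api/*"/>
    <!-- API implementation logic goes here -->
  </flow>
</mule>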

References:

https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept

https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery


What should be ensured before sharing an API through a public Anypoint Exchange portal?

A. The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility

B. The users needing access to the API should be added to the appropriate role in Anypoint Platform

C. The API should be functional with at least an initial implementation deployed and accessible for users to interact with

D. The API should be secured using one of the supported authentication/authorization mechanisms to ensure that data is not compromised
Suggested answer: A

Explanation:

Answer: The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility.

*****************************************

Reference: https://docs.mulesoft.com/exchange/to-share-api-asset-to-portal


Refer to the exhibit.

A RAML definition has been proposed for a new Promotions Process API, and has been published to Anypoint Exchange.

The Marketing Department, who will be an important consumer of the Promotions API, has important requirements and expectations that must be met.

What is the most effective way to use Anypoint Platform features to involve the Marketing Department in this early API design phase?

A. Ask the Marketing Department to interact with a mocking implementation of the API using the automatically generated API Console

B. Organize a design workshop with the DBAs of the Marketing Department in which the database schema of the Marketing IT systems is translated into RAML

C. Use Anypoint Studio to implement the API as a Mule application, then deploy that API implementation to CloudHub and ask the Marketing Department to interact with it

D. Export an integration test suite from API designer and have the Marketing Department execute the tests in that suite to ensure they pass
Suggested answer: A

Explanation:

Answer: Ask the Marketing Department to interact with a mocking implementation of the API using the automatically generated API Console.

*****************************************

As per MuleSoft's IT Operating Model:

>> API consumers need NOT wait until the full API implementation is ready.

>> NO technical test suites need to be shared with end users for them to interact with APIs.

>> Anypoint Platform offers a mocking capability for every API specification published to Anypoint Exchange, along with rich documentation covering the API's functionality.

>> There is no need to arrange days of workshops with end users to gather feedback.

API consumers can use Anypoint Exchange and interact with the API through its mocking feature, and feedback can be shared quickly so that any changes are incorporated.

Refer to the exhibit.

What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?

A. Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications

B. The MuleSoft-hosted Shared Load Balancer can be used to load balance API invocations to the Mule runtimes

C. API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane

D. Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure
Suggested answer: C

Explanation:

Answer: API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.

*****************************************

>> We CANNOT use the Shared Load Balancer to load balance APIs on customer-hosted runtimes.

>> For hybrid deployment models, the on-premises runtimes are first connected to Runtime Manager using the Runtime Manager agent. So, the connection is initiated from on-premises to Runtime Manager, and all control is then performed from Runtime Manager.

>> Anypoint Runtime Manager CANNOT ensure automatic HA. Clusters, server groups, etc. must be configured beforehand.

The only TRUE statement among the given choices is that API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane. There are several references below that justify this statement.

References:

https://docs.mulesoft.com/runtime-manager/deployment-strategies#hybrid-deployments

https://help.mulesoft.com/s/article/On-Premise-Runtimes-Disconnected-From-US-Control-Plane-June-18th-2018

https://help.mulesoft.com/s/article/Runtime-Manager-cannot-manage-On-Prem-Applications-and-Servers-from-US-Control-Plane-June-25th-2019

https://help.mulesoft.com/s/article/On-premise-Runtimes-Appear-Disconnected-in-Runtime-Manager-May-29th-2018


A System API is designed to retrieve data from a backend system that has scalability challenges.

What API policy can best safeguard the backend system?

A. IP whitelist

B. SLA-based rate limiting

C. OAuth 2.0 token enforcement

D. Client ID enforcement
Suggested answer: B

Explanation:

Answer: SLA-based rate limiting

*****************************************

>> The Client ID enforcement policy is a "compliance"-related NFR and does not help maintain the Quality of Service (QoS). It CANNOT protect, and is NOT meant to protect, backend systems from scalability challenges.

>> IP whitelisting and OAuth 2.0 token enforcement are "security"-related NFRs and, again, do not help maintain QoS. They likewise CANNOT protect, and are NOT meant to protect, backend systems from scalability challenges.

Rate Limiting, SLA-based Rate Limiting, Throttling, and Spike Control are "Quality of Service (QoS)"-related policies and are meant to protect backend systems from being overloaded (for example, by rejecting requests that exceed a client's SLA tier limit).

https://dzone.com/articles/how-to-secure-apis

Refer to the exhibit.

What is a valid API in the sense of API-led connectivity and application networks?

A. Java RMI over TCP

B. Java RMI over TCP

C. CORBA over IIOP

D. XML over HTTP
Suggested answer: D

Explanation:

Answer: XML over HTTP

*****************************************

>> API-led connectivity and application networks call for APIs built on HTTP-based protocols, which make for the most effective APIs and networks on top of them.

>> HTTP-based APIs allow the platform to apply a wide variety of policies to address many NFRs.

>> HTTP-based APIs also allow many standard and effective implementation patterns that adhere to HTTP-based W3C rules (a minimal flow sketch follows).
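
For illustration only, a minimal, hypothetical Mule 4 flow exposing an HTTP-based API is sketched below; the port, path, and payload are placeholders.

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">

  <http:listener-config name="apiListenerConfig">
    <http:listener-connection host="0.0.0.0" port="8081"/>
  </http:listener-config>

  <!-- A simple HTTP-based API endpoint returning an XML payload -->
  <flow name="promotions-http-api">
    <http:listener config-ref="apiListenerConfig" path="/api/promotions"/>
    <set-payload value="&lt;promotions/&gt;" mimeType="application/xml"/>
  </flow>
</mule>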

Refer to the exhibit.

Three business processes need to be implemented, and the implementations need to communicate with several different SaaS applications.

These processes are owned by separate (siloed) LOBs and are mainly independent of each other, but do share a few business entities. Each LOB has one development team and its own budget. In this organizational context, what is the most effective approach to choose the API data models for the APIs that will implement these business processes with minimal redundancy of the data models?

A. Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities

B. Build distinct data models for each API to follow established micro-services and Agile API-centric practices

C. Build all API data models using XML schema to drive consistency and reuse across the organization

D. Build one centralized Canonical Data Model (Enterprise Data Model) that unifies all the data types from all three business processes, ensuring the data model is consistent and non-redundant
Suggested answer: A

Explanation:

Answer: Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.

*****************************************

>> The options about building API data models using XML schema or following Agile API-centric practices are irrelevant to the scenario given in the question, so those two are INVALID.

>> Building an EDM (Enterprise Data Model) is not feasible, nor the right fit, for this scenario because the teams and LOBs work in silos and each has its own initiatives, budget, etc. Building an EDM requires intensive coordination among all the teams, which is evidently not possible in this scenario.

So, the right fit for this scenario is to build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.

What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?

A. A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design

B. The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region

C. The FQDNs are determined by the application name, but can be modified by an administrator after deployment

D. The FQDNs are determined by both the application name and the Anypoint Platform organization
Suggested answer: B

Explanation:

Answer: The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region

*****************************************

>> When deploying applications to the Shared Worker Cloud, the FQDNs are always determined by the application name chosen.

>> It does NOT matter what region the app is being deployed to.

>> Although it is true that the generated FQDN includes the region (e.g., exp-salesorder-api.au-s1.cloudhub.io), this does NOT mean that the same application name can be reused when deploying to another CloudHub region.

>> The application name must be universally unique, irrespective of region and organization, and it alone determines the FQDN on the Shared Load Balancer.
