MuleSoft MCPA - Level 1 Practice Test - Questions Answers, Page 4


An API experiences a high rate of client requests (TPS) with small message payloads. How can usage limits be imposed on the API based on the type of client application?

A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type
B. Use a spike control policy that limits the number of requests for each client application type
C. Use a cross-origin resource sharing (CORS) policy to limit resource sharing between client applications, configured by the client application type
D. Use a rate limiting policy and a client ID enforcement policy, each configured by the client application type
Suggested answer: A

Explanation:

Answer: Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type.

*****************************************

>> SLA tiers come into play whenever limits are to be imposed on APIs based on the type of client application.

Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-basedpolicies
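To illustrate the idea only (this is NOT the actual Anypoint policy configuration, which is applied declaratively in API Manager), here is a minimal Python sketch of SLA-tier based rate limiting: each client application type maps to a tier with its own requests-per-second limit. The tier names and limits are invented for the example.

    import time

    # Hypothetical SLA tiers: client application type -> max requests per second.
    SLA_TIERS = {
        "mobile": 10,
        "partner": 50,
        "internal": 200,
    }

    # Fixed-window counters keyed by client ID: (window_start, request_count).
    windows = {}

    def allow_request(client_id, client_type, now=None):
        """Return True if the request fits within the client's SLA tier limit."""
        now = time.time() if now is None else now
        limit = SLA_TIERS.get(client_type, 1)      # unknown types get a minimal tier
        window_start, count = windows.get(client_id, (now, 0))
        if now - window_start >= 1.0:              # start a new 1-second window
            window_start, count = now, 0
        if count >= limit:
            return False                           # over the tier limit -> reject (HTTP 429)
        windows[client_id] = (window_start, count + 1)
        return True

    # Example: a "mobile" client is capped at 10 TPS, an "internal" one at 200 TPS.
    print(allow_request("app-123", "mobile"))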

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios.

What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?

A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry
B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers
C. Create API Notebooks and include them in the relevant Anypoint Exchange entries
D. Make relevant APIs discoverable via an Anypoint Exchange entry
Suggested answer: C

Explanation:

Answer: Create API Notebooks and include them in the relevant Anypoint Exchange entries.

*****************************************

>> API Notebooks are the Anypoint Platform feature that enables code-centric API documentation: executable, scenario-driven snippets that API consumers can inspect, edit, and run against the APIs.

Reference: https://docs.mulesoft.com/exchange/to-use-api-notebook
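Real API Notebooks embed JavaScript cells that run in the browser against live APIs; purely to convey the kind of runnable, scenario-driven snippet such a notebook holds, here is a rough Python equivalent. The host, paths, and fields are placeholders, not real endpoints.

    import requests  # third-party; pip install requests

    # Hypothetical scenario: "look up a customer, then fetch their open orders".
    # In a real API Notebook this would be a JavaScript cell the reader can edit and run.
    BASE_URL = "https://api.example.com"  # placeholder host

    customer = requests.get(f"{BASE_URL}/customers/42", timeout=5).json()
    orders = requests.get(f"{BASE_URL}/customers/42/orders",
                          params={"status": "open"}, timeout=5).json()

    print(customer["name"], "has", len(orders), "open orders")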

Refer to the exhibit. An organization is running a Mule standalone runtime and has configured Active Directory as the Anypoint Platform external Identity Provider. The organization does not have budget for other system components.

What policy should be applied to all instances of APIs in the organization to most effectively restrict access to a specific group of internal users?

A. Apply a basic authentication - LDAP policy; the internal Active Directory will be configured as the LDAP source for authenticating users
B. Apply a client ID enforcement policy; the specific group of users will configure their client applications to use their specific client credentials
C. Apply an IP whitelist policy; only the specific users' workstations will be in the whitelist
D. Apply an OAuth 2.0 access token enforcement policy; the internal Active Directory will be configured as the OAuth server
Suggested answer: A

Explanation:

Answer: Apply a basic authentication - LDAP policy; the internal Active Directory will be configured as the LDAP source for authenticating users.

*****************************************

>> IP whitelisting does NOT fit this purpose. Moreover, the users' workstations may not necessarily have static IPs in the network.

>> OAuth 2.0 enforcement requires a client provider, which is not among the organization's system components.

>> It is not an effective approach to let every user create separate client credentials and configure those in their client applications.

The effective way is to apply a basic authentication - LDAP policy, with the internal Active Directory configured as the LDAP source for authenticating users.

Reference: https://docs.mulesoft.com/api-manager/2.x/basic-authentication-ldap-concept
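Conceptually, the policy validates the Basic Authentication credentials on each request by binding to the directory. The gateway does this for you; as a rough Python sketch of the underlying check using the ldap3 library, where the server address, DN layout, and group name are assumptions for illustration:

    from ldap3 import Server, Connection, ALL  # pip install ldap3

    AD_URL = "ldaps://ad.example.internal"                 # assumed Active Directory host
    BASE_DN = "dc=example,dc=internal"                     # assumed base DN
    REQUIRED_GROUP = "cn=api-users,ou=groups," + BASE_DN   # assumed group of allowed users

    def authenticate(username, password):
        """Bind as the user (validates the password), then check group membership."""
        server = Server(AD_URL, get_info=ALL)
        user_dn = f"cn={username},ou=users,{BASE_DN}"
        conn = Connection(server, user=user_dn, password=password)
        if not conn.bind():                                # wrong credentials -> bind fails
            return False
        conn.search(user_dn, f"(memberOf={REQUIRED_GROUP})", attributes=[])
        allowed = len(conn.entries) > 0
        conn.unbind()
        return allowed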

What is a best practice when building System APIs?

A. Document the API using an easily consumable asset like a RAML definition
B. Model all API resources and methods to closely mimic the operations of the backend system
C. Build an Enterprise Data Model (Canonical Data Model) for each backend system and apply it to System APIs
D. Expose to API clients all technical details of the API implementation's interaction with the backend system
Suggested answer: B

Explanation:

Answer: Model all API resources and methods to closely mimic the operations of the backend system.

*****************************************

>> There are NO fixed, universal best practices for choosing data models for APIs. The choice is completely contextual and depends on a number of factors. Based on those factors, an enterprise can decide whether to go with an Enterprise (Canonical) Data Model, Bounded Context Models, etc.

>> One should NEVER expose the technical details of an API implementation to its API clients. Only the API interface/RAML is exposed to API clients.

>> It is true that the RAML definitions of APIs should be as detailed as possible and should carry much of the documentation. However, that alone is NOT enough to call an API well documented; there should be additional documentation on Anypoint Exchange, such as API Notebooks, to create a developer-friendly API and repository.

>> The best practice when creating System APIs is to design their API interfaces by modeling their resources and methods to closely reflect the operations and functionality of the backend system.

What CANNOT be effectively enforced using an API policy in Anypoint Platform?

A. Guarding against Denial of Service attacks
B. Maintaining tamper-proof credentials between APIs
C. Logging HTTP requests and responses
D. Backend system overloading
Suggested answer: A

Explanation:

Answer: Guarding against Denial of Service attacks

*****************************************

>> Backend system overloading can be handled by enforcing "Spike Control Policy"

>> Logging HTTP requests and responses can be done by enforcing "Message Logging Policy"

>> Credentials can be tamper-proofed using "Security" and "Compliance" policies. However, unfortunately, there is currently no proper way on Anypoint Platform to guard against DoS attacks.

Reference: https://help.mulesoft.com/s/article/DDos-Dos-at
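To see why a Spike Control policy protects a backend from overload (unlike plain rate limiting, it can queue short bursts rather than rejecting them immediately), here is a rough Python sketch of the concept; the concurrency cap and wait window are arbitrary values, and the real policy is configured declaratively in API Manager.

    import threading

    MAX_CONCURRENT = 5        # arbitrary cap on in-flight backend calls
    QUEUE_WAIT_SECONDS = 0.5  # how long a burst request may wait before being rejected

    _slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def call_backend_with_spike_control(do_call):
        """Run do_call() only if a slot frees up within the wait window; else reject."""
        if not _slots.acquire(timeout=QUEUE_WAIT_SECONDS):
            raise RuntimeError("429 Too Many Requests: spike rejected to protect the backend")
        try:
            return do_call()
        finally:
            _slots.release()

    # Example usage: bursts above 5 concurrent calls wait briefly, then get rejected.
    result = call_backend_with_spike_control(lambda: "backend response")
    print(result)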

An organization makes a strategic decision to move towards an IT operating model that emphasizes consumption of reusable IT assets using modern APIs (as defined by MuleSoft).

What best describes each modern API in relation to this new IT operating model?

A. Each modern API has its own software development lifecycle, which reduces the need for documentation and automation
B. Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
C. Each modern API must be easy to consume, so should avoid complex authentication mechanisms such as SAML or JWT
D. Each modern API must be REST and HTTP based
Suggested answer: B

Explanation:

Correct Answers:

1. Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)

*****************************************

What API policy would be LEAST LIKELY used when designing an Experience API that is intended to work with a consumer mobile phone or tablet application?

A. OAuth 2.0 access token enforcement
B. Client ID enforcement
C. JSON threat protection
D. IP whitelist
Suggested answer: D

Explanation:

Answer: IP whitelist

*****************************************

>> OAuth 2.0 access token and client ID enforcement policies are VERY commonly applied to Experience APIs, as API consumers need to register and access the APIs using one of these mechanisms.

>> JSON threat protection is also a VERY common policy on Experience APIs, to prevent bad or suspicious payloads from hitting the API implementations.

>> An IP whitelist policy is common on Process and System APIs, to whitelist only the IP range inside the local VPC; it is also applied occasionally on some Experience APIs whose end users/API consumers are FIXED.

>> When we know upfront which API consumers will access certain Experience APIs, we can request static IPs from those consumers and whitelist them to prevent anyone else from hitting the API.

However, the Experience API in this scenario is intended to work with a consumer mobile phone or tablet application. That means there is no way to know all possible IPs to whitelist, as mobile phones and tablets are huge in number and can be any device in the city, state, country, or globe.

So, it is LEAST LIKELY that IP whitelisting would be applied to such Experience APIs, whose consumers are typically mobile phones or tablets.
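For contrast, here is what an IP whitelist check boils down to, sketched in Python with the standard ipaddress module; the CIDR ranges are made up. It only works when consumers call from fixed, known addresses, which is exactly what a public mobile/tablet audience cannot guarantee.

    import ipaddress

    # Hypothetical allowlist: only traffic from inside the corporate VPC ranges.
    ALLOWED_NETWORKS = [
        ipaddress.ip_network("10.20.0.0/16"),
        ipaddress.ip_network("192.168.100.0/24"),
    ]

    def is_allowed(client_ip):
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    print(is_allowed("10.20.5.7"))    # True  - fixed internal consumer
    print(is_allowed("203.0.113.9"))  # False - a mobile device on a carrier network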

A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms maximum (99th percentile) response time. The corresponding API implementation needs to sequentially invoke 3 downstream APIs of very similar complexity.

The first of these downstream APIs offers the following SLA for its response time: median: 100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms.

If possible, how can a timeout be set in the upstream API for the invocation of the first downstream API to meet the new upstream API's desired SLA?

A. Set a timeout of 50 ms; this times out more invocations of that API but gives additional room for retries
B. Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
C. No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or invoke an alternative API
D. Do not set a timeout; the invocation of this API is mandatory and so we must wait until it responds
Suggested answer: B

Explanation:

Answer: Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete.

*****************************************

Key details to take from the given scenario:

>> The upstream API's designed SLA is 500 ms (median). Let's ignore the maximum SLA response times.

>> This API calls 3 downstream APIs sequentially and all these are of similar complexity.

>> The first downstream API offers a median SLA of 100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms.

Based on the above details:

>> We can rule out the option suggesting a 50 ms timeout. If the offered median SLA itself is 100 ms, then most calls would time out, time would be wasted retrying them, and the retries would eventually be exhausted. Even if some retries succeeded, the remaining time would not leave enough room for the 2nd and 3rd downstream APIs to respond in time.

>> The option suggesting NOT to set a timeout, because the invocation of this API is mandatory and we must wait until it responds, is unwise. Not setting a timeout goes against good implementation practice; moreover, if the first API does not respond within its offered median SLA of 100 ms, it will most probably respond in 500 ms (80th percentile) or 1000 ms (95th percentile). In BOTH cases, getting a successful response from the 1st downstream API does NO GOOD, because by then the upstream API's SLA of 500 ms is already breached and there is no time left to call the 2nd and 3rd downstream APIs.

>> It is NOT true that no timeout can meet the upstream API's desired SLA.

As the 1st downstream API offers a median SLA of 100 ms, MOST of the time we will get responses within that time. So setting a timeout of 100 ms is ideal for MOST calls, as it leaves enough room (400 ms) for the remaining 2 downstream API calls.
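A sketch of this timeout budgeting in Python using the requests library; the URLs are placeholders, and splitting the leftover ~400 ms between the 2nd and 3rd calls is just one reasonable way to spend the 500 ms median budget.

    import time
    import requests  # pip install requests

    UPSTREAM_BUDGET_S = 0.500  # 500 ms median SLA of the new upstream API

    def invoke_downstreams():
        remaining = UPSTREAM_BUDGET_S

        # Downstream 1: its median SLA is 100 ms, so time it out at 100 ms.
        start = time.monotonic()
        r1 = requests.get("https://downstream-1.example.internal/api", timeout=0.100)
        remaining -= time.monotonic() - start  # roughly 400 ms left in the median case

        # Downstreams 2 and 3 are of similar complexity; split the leftover budget.
        start = time.monotonic()
        r2 = requests.get("https://downstream-2.example.internal/api", timeout=remaining / 2)
        remaining -= time.monotonic() - start

        r3 = requests.get("https://downstream-3.example.internal/api", timeout=remaining)
        return r1, r2, r3  # a requests.exceptions.Timeout means the SLA budget was breached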

What is true about automating interactions with Anypoint Platform using tools such as the Anypoint Platform REST APIs, the Anypoint CLI, or the Mule Maven plugin?

A. Access to the Anypoint Platform APIs and the Anypoint CLI can be controlled separately through the roles and permissions in Anypoint Platform, so that specific users can get access to the Anypoint CLI while others get access to the platform APIs
B. Anypoint Platform APIs can ONLY automate interactions with CloudHub, while the Mule Maven plugin is required for deployment to customer-hosted Mule runtimes
C. By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications
D. API policies can be applied to the Anypoint Platform APIs so that ONLY certain LOBs have access to specific functions
Suggested answer: C

Explanation:

Answer: By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so are NOT available to be used by deployed Mule applications.

*****************************************

>> We CANNOT apply API policies to the Anypoint Platform APIs the way we can on our own API instances. So the option suggesting this is FALSE.

>> Anypoint Platform APIs can be used to automate interactions with both CloudHub and customer-hosted Mule runtimes, not JUST CloudHub. So the option claiming otherwise is FALSE.

>> The Mule Maven plugin is NOT mandatory for deployment to customer-hosted Mule runtimes; it just gives your CI/CD smoother automation, but it is not a compulsory requirement to deploy. So the option claiming otherwise is FALSE.

>> There are NO special roles and permissions on the platform that separately control whether some users get the Anypoint CLI and others get the Anypoint Platform APIs. With the proper general roles/permissions (API Owner, CloudHub Admin, etc.), one can use either option (Anypoint CLI or platform APIs). So the option suggesting this is FALSE.

The only TRUE statement among the choices is that the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications.

Maven is bundled with Anypoint Studio, or you can use another Maven installation for development.

The CLI is a convenience only; it is one of many ways to deploy an app to the runtime.

These are definitely NOT part of anything except your deployment or automation process.
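As an example of such automation, here is a Python sketch that authenticates against the Anypoint Platform REST APIs and lists CloudHub applications. The endpoint paths, header name, and response fields are assumptions from memory; verify them against the current Anypoint Platform API documentation before relying on this.

    import requests  # pip install requests

    ANYPOINT = "https://anypoint.mulesoft.com"
    USERNAME, PASSWORD = "ci-user", "********"  # use a dedicated automation identity
    ENV_ID = "<environment-id>"                 # placeholder environment ID

    # Assumed login endpoint returning a bearer token (verify against current docs).
    token = requests.post(
        f"{ANYPOINT}/accounts/login",
        json={"username": USERNAME, "password": PASSWORD},
        timeout=30,
    ).json()["access_token"]

    # Assumed CloudHub API for listing deployed applications in one environment.
    apps = requests.get(
        f"{ANYPOINT}/cloudhub/api/v2/applications",
        headers={"Authorization": f"Bearer {token}", "X-ANYPNT-ENV-ID": ENV_ID},
        timeout=30,
    ).json()

    for app in apps:
        print(app.get("domain"), app.get("status"))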

What Mule application deployment scenario requires using Anypoint Platform Private Cloud Edition or Anypoint Platform for Pivotal Cloud Foundry?

A. When it is required to make ALL applications highly available across multiple data centers
B. When it is required that ALL APIs are private and NOT exposed to the public cloud
C. When regulatory requirements mandate on-premises processing of EVERY data item, including meta-data
D. When ALL backend systems in the application network are deployed in the organization's intranet
Suggested answer: C

Explanation:

Answer: When regulatory requirements mandate on-premises processing of EVERY data item, including meta-data.

*****************************************

We do NOT need Anypoint Platform PCE or PCF for the following, so these options are OUT.

>> We can make ALL applications highly available across multiple data centers using CloudHub too.

>> We can use Anypoint VPN and tunneling from CloudHub to connect to ALL backend systems in the application network that are deployed in the organization's intranet.

>> We can use Anypoint VPC and Firewall Rules to make ALL APIs private and NOT exposed to the public cloud.

The only valid reason among the given options that requires Anypoint Platform PCE/PCF is: when regulatory requirements mandate on-premises processing of EVERY data item, including meta-data.
