Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 2

A set of tests must be performed prior to deploying API implementations to a staging environment. Due to data security and access restrictions, untested APIs cannot be granted access to the backend systems, so instead mocked data must be used for these tests. The amount of available mocked data and its contents is sufficient to entirely test the API implementations with no active connections to the backend systems. What type of tests should be used to incorporate this mocked data?

A. Integration tests
B. Performance tests
C. Functional tests (Blackbox)
D. Unit tests (Whitebox)

Suggested answer: D

Explanation:

Unit tests (Whitebox). Per general IT testing practice and MuleSoft's recommended practice, integration and performance tests should be run against a full end-to-end setup for a valid evaluation, meaning all end systems must be connected during the tests. Those two options are therefore out, leaving unit tests and functional tests.

Per MuleSoft's reference documentation:
- Unit tests are limited to code that can realistically be exercised without running it inside Mule itself. Good candidates are small pieces of modular code: subflows, custom transformers, custom components, custom expression evaluators, and so on.
- Functional tests are those that most extensively exercise your application configuration. In these tests you have the freedom and tools to simulate happy and unhappy paths, including creating stubs for target services and making them succeed or fail.

Since the scenario requires the API implementations to be tested before deployment to staging, and clearly states that there is sufficient mocked data to test the implementations' components with no active connections to the backend systems, unit tests are the ones to use with this mocked data.
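The distinction above can be sketched with a plain unit test that swaps the backend call for a mock returning canned data. This is a minimal Python illustration (not MUnit); `enrich_order` and its field names are hypothetical:

```python
from unittest import mock

# Hypothetical piece of API implementation logic: enriches an order by
# looking up the customer in a backend system (all names illustrative).
def enrich_order(order, backend_lookup):
    customer = backend_lookup(order["customer_id"])
    return {**order, "customer_name": customer["name"]}

# Unit test: the backend is replaced by a mock returning canned data,
# so the logic is exercised with no active backend connection.
def test_enrich_order_with_mocked_backend():
    backend = mock.Mock(return_value={"name": "Ada Lovelace"})
    result = enrich_order({"customer_id": 42, "total": 10.0}, backend)
    assert result["customer_name"] == "Ada Lovelace"
    backend.assert_called_once_with(42)
    return result

result = test_enrich_order_with_mocked_backend()
```

The same pattern (a mock standing in for the live connector) is what MUnit mocking provides inside a Mule application.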

Which of the below, when used together, makes the IT Operational Model effective?

A. Create reusable assets, do marketing on the created assets across the organization, and arrange periodic LOB reviews to check whether assets are being consumed
B. Create reusable assets, make them discoverable so that LOB teams can self-serve and browse the APIs, and get active feedback and usage metrics
C. Create reusable assets and make them discoverable so that LOB teams can self-serve and browse the APIs
Suggested answer: B

Explanation:

Create reusable assets, make them discoverable so that LOB teams can self-serve and browse the APIs, and get active feedback and usage metrics.

Which of the following sequences is correct?

A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API
B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API
D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation

Suggested answer: B

Explanation:

API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation.

- The API consumer does not implement any logic to invoke APIs; it is just a role. The option stating 'API Consumer implements logic to call an API' is therefore invalid.
- The API implementation does not route any requests; it is the final piece of logic where the functionality of the target systems is exposed, so requests must be routed to it by some other entity. The options stating 'API Implementation routes the request to >> API' are therefore invalid.
- One option contains correct statements in the wrong order: 'API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation'. The statements are valid but the sequence is wrong.
- In the correct sequence, the API consumer first requests access to the API on Anypoint Exchange and obtains client credentials. The API client then implements logic to call the API using those client credentials, and the request is routed to the API implementation via the API, which is managed by API Manager.
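The correct sequence can be sketched as a toy simulation of the three roles. All names here (the credential registry, gateway, and implementation functions) are illustrative, not Anypoint Platform APIs:

```python
# 1. The API consumer requests access and receives client credentials.
# 2. The API client uses those credentials to call the API.
# 3. The API (gateway) routes the request to the API implementation.

ISSUED_CREDENTIALS = set()  # stands in for Anypoint Exchange's registry

def consumer_requests_access(app_name):
    """Role: API consumer. Obtains client credentials for an app."""
    client_id = f"client-for-{app_name}"
    ISSUED_CREDENTIALS.add(client_id)
    return client_id

def api_implementation(resource):
    """Final piece of logic exposing the target system's functionality."""
    return {"status": 200, "resource": resource}

def api_gateway(client_id, resource):
    """The API: validates credentials and routes to the implementation."""
    if client_id not in ISSUED_CREDENTIALS:
        return {"status": 401}
    return api_implementation(resource)

def api_client_call(client_id, resource):
    """Role: API client. Implements the logic that calls the API."""
    return api_gateway(client_id, resource)

creds = consumer_requests_access("mobile-app")
response = api_client_call(creds, "/orders/1")
```

Note that routing happens at the gateway, never inside the implementation, matching the reasoning above.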

An organization has created an API-led architecture that uses various API layers to integrate mobile clients with a backend system. The backend system consists of a number of specialized components and can be accessed via a REST API. The process and experience APIs share the same bounded-context model that is different from the backend data model. What additional canonical models, bounded-context models, or anti-corruption layers are best added to this architecture to help process data consumed from the backend system?

A. Create a bounded-context model for every layer and overlap them when the boundary contexts overlap, letting API developers know about the differences between upstream and downstream data models
B. Create a canonical model that combines the backend and API-led models to simplify and unify data models, and minimize data transformations
C. Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers
D. Create an anti-corruption layer for every API to perform transformation for every data model to match each other, and let data simply travel between APIs to avoid the complexity and overhead of building canonical models

Suggested answer: C

Explanation:

Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers.

- A canonical model is not an option here, because the organization has already invested in bounded-context models for the experience and process APIs.
- Anti-corruption layers for all APIs are unnecessary, because the experience and process APIs share the same bounded-context model; only the system-layer APIs still need to choose an approach.
- An anti-corruption layer just between the process and system layers therefore works well, and to speed things up, the system APIs can closely mirror the backend system's data model.
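The anti-corruption layer between the system and process layers can be sketched as a simple translation function. The backend field names (`CUST_NO`, `STATUS_CD`, etc.) and the process-layer model are assumptions for illustration only:

```python
# Sketch of an anti-corruption layer: translates records from the
# backend data model (system layer) into the bounded-context model
# shared by the process and experience layers. Field names on both
# sides are hypothetical.

def backend_to_process(backend_record):
    """Shield upstream APIs from the backend's naming and structure."""
    return {
        "customerId": backend_record["CUST_NO"],
        "fullName": f'{backend_record["FIRST_NM"]} {backend_record["LAST_NM"]}',
        "active": backend_record["STATUS_CD"] == "A",
    }

record = {"CUST_NO": 7, "FIRST_NM": "Grace", "LAST_NM": "Hopper", "STATUS_CD": "A"}
translated = backend_to_process(record)
```

Only this one translation point needs to change if the backend model evolves; the process and experience layers keep their shared model untouched.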

An API client calls one method from an existing API implementation. The API implementation is later updated. What change to the API implementation would require the API client's invocation logic to also be updated?

A. When the data type of the response is changed for the method called by the API client
B. When a new method is added to the resource used by the API client
C. When a new required field is added to the method called by the API client
D. When a child method is added to the method called by the API client

Suggested answer: C

Explanation:

When a new required field is added to the method called by the API client.

- Generally, the logic on API clients needs to be updated when the API contract breaks.
- When a new method or a child method is added to an API, the API client does not break; it can continue to use its existing method. Those two options are out.
- That leaves 'the data type of the response is changed' and 'a new required field is added'. Changing the data type of the response does break the API contract, but the question asks about invocation logic, not response-handling logic: the client can still invoke the API successfully and receive the response, just with a different data type for some field.
- Adding a new required field breaks the API's invocation contract: it violates the RAML/API specification agreement between the API client/consumer and the API provider, so the client's invocation logic must also be updated.
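A minimal sketch of why the new required field breaks invocation, with hypothetical field names (`orderId`, `currency`): the old client's payload, which was valid before, is now rejected at invocation time.

```python
# Hypothetical v2 of a method that adds a new required field,
# breaking v1 clients at invocation time.
def call_api_v2(payload):
    required = {"orderId", "currency"}  # "currency" is new and required
    missing = required - payload.keys()
    if missing:
        return {"status": 400, "missing": sorted(missing)}
    return {"status": 200}

v1_request = {"orderId": 1}                       # old client's payload
v2_request = {"orderId": 1, "currency": "USD"}    # updated client's payload
old_client_result = call_api_v2(v1_request)
updated_client_result = call_api_v2(v2_request)
```

By contrast, a changed response data type would still return 200 here; only the client's response handling, not its invocation, would be affected.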

Traffic is routed through an API proxy to an API implementation. The API proxy is managed by API Manager and the API implementation is deployed to a CloudHub VPC using Runtime Manager. API policies have been applied to this API. In this deployment scenario, at what point are the API policies enforced on incoming API client requests?

A. At the API proxy
B. At the API implementation
C. At both the API proxy and the API implementation
D. At a MuleSoft-hosted load balancer

Suggested answer: A

Explanation:

At the API proxy.

API policies can be enforced at two places on the Anypoint Platform: as embedded policy enforcement in the same Mule runtime where the API implementation is running, or on an API proxy sitting in front of that runtime. Because the deployment scenario in the question involves an API proxy, the policies are enforced at the API proxy.
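The proxy-based enforcement point can be sketched as follows; the rate-limit policy and all names are illustrative, not API Manager internals:

```python
# Sketch: with a proxy deployment, policies run at the proxy, before
# the request ever reaches the implementation. A toy rate-limit policy
# of 2 requests is shown; all names are hypothetical.

request_count = {"calls": 0}
RATE_LIMIT = 2

def api_implementation(req):
    # Contains only business logic; no policy code lives here.
    return {"status": 200, "body": "ok"}

def api_proxy(req):
    # Policy enforcement happens here, at the proxy.
    request_count["calls"] += 1
    if request_count["calls"] > RATE_LIMIT:
        return {"status": 429, "body": "rate limit exceeded"}
    return api_implementation(req)

statuses = [api_proxy({})["status"] for _ in range(3)]
```

The third call is rejected at the proxy; the implementation never sees it, which is the point of proxy-side enforcement.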

Once an API implementation is ready and the API is registered on API Manager, who should request access to the API on Anypoint Exchange?

A. None
B. Both
C. API Client
D. API Consumer

Suggested answer: D

Explanation:

API Consumer.

- API clients are pieces of code or programs that use the client credentials of an API consumer, but they do not directly interact with Anypoint Exchange to obtain access.
- The API consumer is the one who registers and requests access to the API; the API client then uses those client credentials to call the API.

So the API consumer is the one who needs to request access to the API on Anypoint Exchange.

What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?

A. Single sign-on is required to sign in to Anypoint Platform
B. The application network must include System APIs that interact with the Identity Provider
C. To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
D. APIs managed by Anypoint Platform must be protected by SAML 2.0 policies

Suggested answer: C

Explanation:

To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider.

- Single sign-on is not required to sign in to Anypoint Platform just because an external Identity Provider is used for client management.
- It is not necessary that all APIs managed by Anypoint Platform be protected by SAML 2.0 policies.
- The application network does not have to include System APIs that interact with the Identity Provider.

The only true statement among the options is: 'To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider.'

References:
https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy
https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
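The token-validation rule can be sketched as a toy check: the token-validation step accepts only tokens whose issuer matches the configured external Identity Provider. The issuer URL and token shape here are assumptions, not the real policy's wire format:

```python
# Toy model of external-IdP token validation: only tokens issued by
# the configured Identity Provider are accepted. The issuer URL and
# the token dictionary's fields are illustrative.
TRUSTED_ISSUER = "https://idp.example.com"

def validate_token(token):
    """Accept a token only if it comes from the trusted IdP and is live."""
    return token.get("iss") == TRUSTED_ISSUER and not token.get("expired", False)

accepted = validate_token({"iss": "https://idp.example.com", "sub": "client-1"})
rejected = validate_token({"iss": "https://other-idp.example.org", "sub": "client-2"})
```

A token from any other issuer is rejected, which is why clients must obtain tokens from that same Identity Provider.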

The responses to some HTTP requests can be cached depending on the HTTP verb used in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?

A. PUT, POST, DELETE
B. GET, HEAD, POST
C. GET, PUT, OPTIONS
D. GET, OPTIONS, HEAD

Suggested answer: D

Explanation:

GET, OPTIONS, HEAD

http://restcookbook.com/HTTP%20Methods/idempotency/
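Per the suggested answer, this can be captured in a small helper that treats the safe verbs GET, HEAD, and OPTIONS as cache-eligible. The verb list is taken from the answer above; a real cache would also consult response headers such as Cache-Control:

```python
# Verbs whose responses are safe to cache per the suggested answer:
# safe verbs do not change server-side state, so a stale-but-valid
# cached response cannot cause a lost write.
SAFE_CACHEABLE_VERBS = {"GET", "HEAD", "OPTIONS"}

def is_response_cacheable(verb):
    return verb.upper() in SAFE_CACHEABLE_VERBS

cacheable = [v for v in ["GET", "POST", "HEAD", "PUT", "OPTIONS", "DELETE"]
             if is_response_cacheable(v)]
```

State-changing verbs (POST, PUT, DELETE) fall outside the set, matching options A through C being wrong.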

What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?

A. Redis distributed cache
B. java.util.WeakHashMap
C. Persistent Object Store
D. File-based storage

Suggested answer: C

Explanation:

Persistent Object Store.

- A Redis distributed cache is performant, but it is not an out-of-the-box solution in Anypoint Platform.
- File-based storage is neither performant nor out-of-the-box in Anypoint Platform.
- java.util.WeakHashMap requires a completely custom cache implementation written from scratch in Java, and it is limited to the JVM it runs in: its state is local to a single worker and not shared across multiple CloudHub workers. It is neither out-of-the-box nor worker-aware. (https://www.baeldung.com/java-weakhashmap)
- A Persistent Object Store is an out-of-the-box Anypoint Platform feature that is performant and worker-aware across multiple CloudHub workers. (https://docs.mulesoft.com/object-store/)

So Persistent Object Store is the right answer.
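To illustrate the idea of persisted transaction state, here is a toy file-backed key/value store standing in for the Persistent Object Store's store/retrieve interface. Unlike the real Object Store v2, this sketch is not shared across workers; all names are illustrative:

```python
import json
import os
import tempfile

class ToyObjectStore:
    """File-persisted key/value store mimicking store/retrieve semantics.
    The real Persistent Object Store is managed by the platform and is
    visible to every CloudHub worker; this toy persists to one file."""

    def __init__(self, path):
        self.path = path

    def _load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}

    def store(self, key, value):
        data = self._load()
        data[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)

    def retrieve(self, key, default=None):
        return self._load().get(key, default)

path = os.path.join(tempfile.mkdtemp(), "store.json")
ToyObjectStore(path).store("txn-123", "IN_PROGRESS")
# A fresh instance (standing in for another worker or a restart)
# still sees the persisted state.
state = ToyObjectStore(path).retrieve("txn-123")
```

The key property being modeled is that state survives the writer: a long-running asynchronous process can record transaction state and any worker can read it back later.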

Total 95 questions