
Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 2

List of questions

Question 11


A set of tests must be performed prior to deploying API implementations to a staging environment. Due to data security and access restrictions, untested APIs cannot be granted access to the backend systems, so instead mocked data must be used for these tests. The amount of available mocked data and its contents is sufficient to entirely test the API implementations with no active connections to the backend systems. What type of tests should be used to incorporate this mocked data?

A. Integration tests
B. Performance tests
C. Functional tests (Blackbox)
D. Unit tests (Whitebox)
Suggested answer: D

Explanation:

Unit tests (Whitebox). As per general IT testing practice and MuleSoft's recommended practice, integration and performance tests should be run against a full end-to-end setup, with all end systems connected, to give a meaningful evaluation. Those two options are therefore out, leaving unit tests and functional tests.

Per MuleSoft's reference documentation:

- Unit tests are limited to code that can realistically be exercised without running it inside Mule itself. Good candidates are small pieces of modular code, subflows, custom transformers, custom components, custom expression evaluators, and so on.
- Functional tests are the ones that most extensively exercise your application configuration. In these tests you have the freedom and tools to simulate happy and unhappy paths, including the ability to create stubs for target services and make them succeed or fail.

Because the scenario requires the API implementation to be tested before deployment to staging and clearly states that there is enough mocked data to test the various components of the implementation with no active connections to the backend systems, unit tests are the type of tests that should incorporate this mocked data.
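For illustration, such a test is typically written with MUnit, MuleSoft's testing framework, which can mock the connector call to the backend and return canned data instead. The sketch below is hypothetical: the flow name, the mocked processor's doc:name value, and the classpath location of the mock file are assumptions, and the MUnit namespaces must be declared on the Mule configuration root.

<!-- Minimal MUnit sketch: the backend HTTP call is mocked, so the test runs
     with no active connection to the backend system. -->
<munit:test name="get-customers-unit-test"
            description="Exercises the flow using only mocked backend data">
    <munit:behavior>
        <!-- Replace the real backend call with mocked data from the project's resources -->
        <munit-tools:mock-when processor="http:request">
            <munit-tools:with-attributes>
                <munit-tools:with-attribute attributeName="doc:name" whereValue="Call backend system"/>
            </munit-tools:with-attributes>
            <munit-tools:then-return>
                <munit-tools:payload value="#[readUrl('classpath://mock-data/customers.json', 'application/json')]"
                                     mediaType="application/json"/>
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <!-- Run the flow under test -->
        <flow-ref name="get-customers-flow"/>
    </munit:execution>
    <munit:validation>
        <!-- Assertions run against the mocked data, never the real backend -->
        <munit-tools:assert-that expression="#[payload]" is="#[MunitTools::notNullValue()]"/>
    </munit:validation>
</munit:test>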


Question 12


Which of the following, when used together, make the IT Operational Model effective?

Create reusable assets, market the created assets across the organization, and arrange periodic LOB reviews to check whether the assets are being consumed
Create reusable assets, make them discoverable so that LOB teams can self-serve and browse the APIs, and get active feedback and usage metrics
Create reusable assets and make them discoverable so that LOB teams can self-serve and browse the APIs
Suggested answer: C

Explanation:

The effective combination is to create reusable assets, make them discoverable so that LOB teams can self-serve and browse the APIs, and get active feedback and usage metrics.



Question 13


Which of the following sequences is correct?

A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API
B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API
D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation
Suggested answer: B

Explanation:

API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation.

- An API consumer does not implement any logic to invoke APIs; it is just a role. So the option stating 'API Consumer implements logic to call an API' is invalid.
- An API implementation does not route any requests; it is the final piece of logic where the functionality of the target systems is exposed, so requests must be routed to it by some other entity. The options stating 'API Implementation routes the request to >> API' are invalid.
- One option contains the correct statements but in the wrong sequence: 'API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation'.
- The correct sequence is the one where the API consumer first requests access to the API on Anypoint Exchange and obtains client credentials, the API client then implements the logic to call the API using those client credentials, and the requests are routed to the API implementation via the API, which is managed by API Manager.
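As a hedged illustration of the runtime part of this sequence (assuming the API is protected by a Client ID Enforcement policy; the configuration name, path, and property names are made up), the API client's call simply presents the client credentials that the API consumer obtained from Anypoint Exchange:

<!-- Hypothetical API client call: client_id/client_secret were issued when the
     API consumer requested access to the API on Anypoint Exchange. -->
<http:request method="GET" config-ref="Orders_API_HTTP_Config" path="/orders">
    <http:headers><![CDATA[#[{
        'client_id': p('secure::orders.api.client_id'),
        'client_secret': p('secure::orders.api.client_secret')
    }]]]></http:headers>
</http:request>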


Question 14


An organization has created an API-led architecture that uses various API layers to integrate mobile clients with a backend system. The backend system consists of a number of specialized components and can be accessed via a REST API. The process and experience APIs share the same bounded-context model that is different from the backend data model. What additional canonical models, bounded-context models, or anti-corruption layers are best added to this architecture to help process data consumed from the backend system?

A. Create a bounded-context model for every layer and overlap them when the boundary contexts overlap, letting API developers know about the differences between upstream and downstream data models
B. Create a canonical model that combines the backend and API-led models to simplify and unify data models, and minimize data transformations
C. Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers
D. Create an anti-corruption layer for every API to perform transformation for every data model to match each other, and let data simply travel between APIs to avoid the complexity and overhead of building canonical models
Suggested answer: C

Explanation:

Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers.

- A canonical model is not an option here because the organization has already invested in bounded-context models for the experience and process APIs.
- Anti-corruption layers for ALL APIs are unnecessary because the experience and process APIs share the same bounded-context model; it is only the system-layer APIs that still need to choose their approach.
- So an anti-corruption layer just between the process and system layers works well, and to keep the system layer simple, the system APIs can closely mimic the backend system's data model.
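In practice, the anti-corruption layer can be as small as a DataWeave transformation inside the system API that maps the backend's data model onto the bounded-context model shared by the process and experience APIs. The field names below are purely illustrative assumptions:

<!-- Illustrative anti-corruption mapping: backend data model -> shared bounded-context model -->
<ee:transform doc:name="Backend model to bounded-context model">
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
    // rename backend-specific keys and normalize values for the shared model
    customerId: payload.CUST_NO,
    name: payload.CUST_NM,
    status: if (payload.STAT_CD == "A") "ACTIVE" else "INACTIVE"
}]]></ee:set-payload>
    </ee:message>
</ee:transform>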


Question 15


An API client calls one method from an existing API implementation. The API implementation is later updated. What change to the API implementation would require the API client's invocation logic to also be updated?

A. When the data type of the response is changed for the method called by the API client
B. When a new method is added to the resource used by the API client
C. When a new required field is added to the method called by the API client
D. When a child method is added to the method called by the API client
Suggested answer: C

Explanation:

When a new required field is added to the method called by the API client.

- Generally, the logic in API clients needs to be updated when the API contract breaks.
- When a new method or a child method is added to an API, the API client does not break because it can continue to use its existing method, so those two options are out.
- That leaves 'the data type of the response is changed' and 'a new required field is added'. Changing the data type of the response does break the API contract, but the question asks specifically about the invocation logic, not the response-handling logic: the client can still invoke the API successfully and receive a response, only with a different data type for some field.
- Adding a new required field breaks the API's invocation contract, because it changes the RAML/API specification agreement between the API client/consumer and the API provider. This is the change that requires the API client's invocation logic to be updated as well.


Question 16


Traffic is routed through an API proxy to an API implementation. The API proxy is managed by API Manager and the API implementation is deployed to a CloudHub VPC using Runtime Manager. API policies have been applied to this API. In this deployment scenario, at what point are the API policies enforced on incoming API client requests?

A. At the API proxy
B. At the API implementation
C. At both the API proxy and the API implementation
D. At a MuleSoft-hosted load balancer
Suggested answer: A

Explanation:

At the API proxy.

- API policies can be enforced at two places on Anypoint Platform: embedded in the same Mule runtime where the API implementation is running (paired through API autodiscovery), or on an API proxy sitting in front of the Mule runtime where the API implementation is running.
- Because the deployment scenario in this question routes traffic through an API proxy managed by API Manager, the policies are enforced at the API proxy.
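For contrast, a minimal sketch of the embedded alternative is shown below (the API ID property and flow name are assumptions): API autodiscovery pairs the Mule application with its API instance in API Manager so that policies are enforced inside the same runtime. In the proxied scenario described in this question, that is not where enforcement happens; the proxy applies the policies before traffic ever reaches the implementation.

<!-- Embedded-enforcement sketch: links this Mule app to its API instance in API Manager.
     Not used when an API proxy fronts the implementation, as in this question. -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="api-main-flow"/>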


Question 17


Once an API Implementation is ready and the API is registered on API Manager, who should request access to the API on Anypoint Exchange?

A. None
B. Both
C. API Client
D. API Consumer
Suggested answer: D

Explanation:

API Consumer.

- API clients are pieces of code or programs that use the client credentials of an API consumer; they do not directly interact with Anypoint Exchange to obtain access.
- The API consumer is the one who registers and requests access to the API; the API client then uses those client credentials to invoke the API.

So, the API consumer is the one who needs to request access to the API on Anypoint Exchange.


Question 18


What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?

A. Single sign-on is required to sign in to Anypoint Platform
B. The application network must include System APIs that interact with the Identity Provider
C. To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
D. APIs managed by Anypoint Platform must be protected by SAML 2.0 policies
Suggested answer: C

Explanation:

To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider.

- Single sign-on is NOT required to sign in to Anypoint Platform just because an external Identity Provider is used for client management.
- It is NOT necessary that all APIs managed by Anypoint Platform be protected by SAML 2.0 policies.
- The application network does NOT need to include System APIs that interact with the Identity Provider.

The only true statement among the options is: 'To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider.'

References:
https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy
https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
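As a sketch of what this means for an API client (the variable holding the token and the HTTP request configuration name are assumptions), the client first obtains an access token from the external Identity Provider's token endpoint and then presents it on every call to the OAuth 2.0-protected API:

<!-- Hypothetical client call to an OAuth 2.0-protected API. The token in vars.accessToken
     must have been issued by the same external Identity Provider that Anypoint Platform
     uses for client management, or the policy will reject the request. -->
<http:request method="GET" config-ref="Accounts_API_HTTP_Config" path="/accounts">
    <http:headers><![CDATA[#[{
        'Authorization': 'Bearer ' ++ vars.accessToken
    }]]]></http:headers>
</http:request>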


Question 19


The responses to some HTTP requests can be cached depending on the HTTP verb used in the request. According to the HTTP specification, for what HTTP verbs is this safe to do?

A. PUT, POST, DELETE
B. GET, HEAD, POST
C. GET, PUT, OPTIONS
D. GET, OPTIONS, HEAD
Suggested answer: D

Explanation:

GET, OPTIONS, HEAD. Per the HTTP specification (see the reference below), these are safe, read-only methods, so their responses can be cached, whereas PUT, POST, and DELETE change server state and are not safe candidates for caching.

http://restcookbook.com/HTTP%20Methods/idempotency/


Question 20


What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?

A. Redis distributed cache
B. java.util.WeakHashMap
C. Persistent Object Store
D. File-based storage
Suggested answer: C

Explanation:

Persistent Object Store.

- A Redis distributed cache is performant but is NOT an out-of-the-box solution in Anypoint Platform.
- File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
- java.util.WeakHashMap requires a completely custom cache implementation written in Java code and is limited to the JVM where it runs, so its state is local to a single worker and is not shared when running on multiple CloudHub workers. It is neither out-of-the-box nor worker-aware. https://www.baeldung.com/java-weakhashmap
- A Persistent Object Store is an out-of-the-box Anypoint Platform feature that is both performant and worker-aware across multiple CloudHub workers. https://docs.mulesoft.com/object-store/

So, Persistent Object Store is the right answer.
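A minimal sketch of tracking transaction state with a persistent Object Store follows; the store name, key expression, and TTL are assumptions. Because the store is persistent and managed by the platform, every CloudHub worker running the application reads and writes the same state.

<!-- Persistent, platform-managed store shared by all workers of the application -->
<os:object-store name="transactionStateStore" persistent="true"
                 entryTtl="1" entryTtlUnit="DAYS"/>

<!-- Inside the processing flow: record the current state of the long-running transaction -->
<os:store key="#[vars.transactionId]" objectStore="transactionStateStore">
    <os:value>#[payload.status]</os:value>
</os:store>

<!-- Later, possibly on a different worker: look the state back up -->
<os:retrieve key="#[vars.transactionId]" objectStore="transactionStateStore"/>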
