Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers

What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?

A. The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance
B. The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
C. The API policy is defined in API Manager and then automatically applied to ALL API instances
D. The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment
Suggested answer: B

Explanation:

The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.

>> Once our API specifications are ready and published to Exchange, we need to visit API Manager and register an API instance for each API.
>> API Manager is the place where management of API aspects takes place, such as addressing NFRs by enforcing policies on them.
>> We can create multiple instances of the same API and manage them differently for different purposes.
>> One instance can have one set of API policies applied, while another instance of the same API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined on a per-environment basis, so they need to be managed separately in each environment.
>> A platform feature can ensure that the same configuration of API instances (SLAs, policies, etc.) gets promoted when promoting to higher environments, but this is optional; the configuration can still be changed per environment if required.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes, but NOT the APIs themselves. Although API policies are executed in Mule runtimes, API policies CANNOT be enforced in Runtime Manager; that must be done via API Manager for a specific instance in an environment.

So, based on these facts, the correct statement among the given choices is: 'The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.'

An API implementation is deployed to CloudHub.

What conditions can be alerted on using the default Anypoint Platform functionality, where the alert conditions depend on the end-to-end request processing of the API implementation?

A. When the API is invoked by an unrecognized API client
B. When a particular API client invokes the API too often within a given time period
C. When the response time of API invocations exceeds a threshold
D. When the API receives a very high number of API invocations
Suggested answer: C

Explanation:

When the response time of API invocations exceeds a threshold.

>> Alerts can be set up for all the given options using the default Anypoint Platform functionality.
>> However, the question asks for an alert whose conditions depend on the end-to-end request processing of the API implementation.
>> An alert on 'Response Time' is the only one that requires the end-to-end request processing of the API implementation in order to determine whether the threshold is exceeded.

A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC.

To what TCP port do API invocations to that Mule application need to be sent?

A. 443
B. 8081
C. 8091
D. 8082
Suggested answer: D

Explanation:

8082.

>> Ports 8091 and 8092 are to be used when keeping your HTTP and HTTPS app private to the LOCAL VPC, respectively.
>> Those two ports are not for the shared AWS VPC / Shared Worker Cloud.
>> Port 8081 is to be used when exposing your HTTP endpoint app to the internet through the shared load balancer.
>> Port 8082 is to be used when exposing your HTTPS endpoint app to the internet through the shared load balancer.

So, API invocations should be sent to port 8082 when calling this HTTPS-based app.

https://docs.mulesoft.com/runtime-manager/cloudhub-networking-guide
https://help.mulesoft.com/s/article/Configure-Cloudhub-Application-to-Send-a-HTTPS-Request-Directly-to-Another-Cloudhub-Application
https://help.mulesoft.com/s/question/0D52T00004mXXULSA4/multiple-http-listerners-on-cloudhub-one-with-port-9090
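
For illustration, a minimal sketch of calling such an app on port 8082 is shown below. It assumes the direct-worker DNS convention described in the linked help article; the hostname, region, and path are placeholders, not values from the question.

```python
import requests  # third-party HTTP client, used here purely for illustration

# Hypothetical worker hostname and resource path; substitute your own app's DNS name.
# Calling the worker directly on the HTTPS port (8082) avoids routing the request
# back out through the public shared load balancer.
WORKER_URL = "https://mule-worker-my-api.us-e2.cloudhub.io:8082/api/orders"

response = requests.get(WORKER_URL, timeout=10)
response.raise_for_status()
print(response.status_code, response.json())
```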

What is the main change to the IT operating model that MuleSoft recommends to organizations to improve innovation and clock speed?

A. Drive consumption as much as production of assets; this enables developers to discover and reuse assets from other projects and encourages standardization
B. Expose assets using a Master Data Management (MDM) system; this standardizes projects and enables developers to quickly discover and reuse assets from other projects
C. Implement SOA for reusable APIs to focus on production over consumption; this standardizes on XML and WSDL formats to speed up decision making
D. Create a lean and agile organization that makes many small decisions every day; this speeds up decision making and enables each line of business to take ownership of its projects
Suggested answer: A

Explanation:

Drive consumption as much as production of assets; this enables developers to discover and reuse assets from other projects and encourages standardization.

>> The main motto of the IT operating model that MuleSoft recommends and popularized is to change the way assets are delivered: from a production model to a production + consumption model, which is done through an API strategy called API-led connectivity.
>> The assets built should also be discoverable and self-serviceable for reusability across lines of business and the organization.
>> MuleSoft's IT operating model does not prescribe an SDLC model (Agile, Lean, etc.) or MDM at all, so options suggesting these are not valid.

https://blogs.mulesoft.com/biz/connectivity/what-is-a-center-for-enablement-c4e/
https://www.mulesoft.com/resources/api/secret-to-managing-it-projects

Version 3.0.1 of a REST API implementation represents time values in PST time using ISO 8601 hh:mm:ss format. The API implementation needs to be changed to instead represent time values in CEST time using ISO 8601 hh:mm:ss format. When following the semver.org semantic versioning specification, what version should be assigned to the updated API implementation?

A. 3.0.2
B. 4.0.0
C. 3.1.0
D. 3.0.1
Suggested answer: B

Explanation:

4.0.0.

As per the semver.org semantic versioning specification, given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes.
- MINOR version when you add functionality in a backwards-compatible manner.
- PATCH version when you make backwards-compatible bug fixes.

As per the scenario given in the question, the API implementation is completely changing its behavior. Although the time format is still hh:mm:ss and there is no schema change with respect to format, the API will behave differently after this change because the returned times will be completely different.

Example: before the change, a time is returned as 09:00:00, representing Pacific time. After the change, the same time is returned as 18:00:00, because Central European Summer Time is 9 hours ahead of Pacific time.

>> This may lead to unexpected behavior in API clients depending on how they handle the times in the API response. All API clients need to be informed that the API behavior is going to change and that times will be returned in CEST. So, this is considered a MAJOR change, and the version assigned to the updated API implementation would be 4.0.0.
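
To see why this breaks existing clients, here is a minimal sketch (assuming Python's standard zoneinfo module, an arbitrary summer date where Pacific time is 9 hours behind CEST as in the example above, and the IANA zones America/Los_Angeles and Europe/Berlin as stand-ins for Pacific and CEST): the same instant is serialized as two different hh:mm:ss strings.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Illustrative only: one instant rendered in the two zones the question mentions.
instant = datetime(2021, 7, 1, 9, 0, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

old_repr = instant.strftime("%H:%M:%S")                                  # what v3.0.1 returned
new_repr = instant.astimezone(ZoneInfo("Europe/Berlin")).strftime("%H:%M:%S")  # after the change

print(old_repr)  # 09:00:00 (Pacific)
print(new_repr)  # 18:00:00 (CEST) -- same instant, different value on the wire
```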

In which layer of API-led connectivity does the business logic orchestration reside?

A. System Layer
B. Experience Layer
C. Process Layer
Suggested answer: C

Explanation:

Process Layer.

>> The Experience layer is dedicated to enriching the end-user experience; this layer meets the needs of different API clients/consumers.
>> The System layer is dedicated to APIs that are modular in nature and implement/expose the individual functionalities of backend systems.
>> The Process layer is where simple or complex business orchestration logic is written by invoking one or many System layer modular APIs.

So, Process Layer is the right answer.

A system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. A process API is a client to the system API and is being rate limited by the system API, with different limits in each of the environments. The system API's DR environment provides only 20% of the rate limiting offered by the primary environment. What is the best API fault-tolerant invocation strategy to reduce overall errors in the process API, given these conditions and constraints?

A. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment
B. Invoke the system API deployed to the primary environment; add retry logic to the process API to handle intermittent failures by invoking the system API deployed to the DR environment
C. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment; add timeout and retry logic to the process API to avoid intermittent failures; add logic to the process API to combine the results
D. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API deployed to the DR environment
Suggested answer: A

Explanation:

Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment.

There is one important consideration to note in the question: the system API in the DR environment provides only 20% of the rate limiting offered by the primary environment. So, comparatively, far fewer calls are allowed into the DR environment's API than into the primary environment's. With this in mind, let us analyse which fault-tolerant invocation strategy is right and best.

1. Invoking both system APIs in parallel is definitely NOT a feasible approach because of the 20% limitation on the DR environment. Calling in parallel every time would quickly exhaust the rate limit on the DR environment and might leave no headroom for genuine intermittent error scenarios when fallback is actually needed.
2. Another option suggests adding timeout and retry logic to the process API while invoking the primary environment's system API, which is good so far. However, when all retries fail, this option suggests invoking a copy of the process API in the DR environment, which is not right or recommended. Only the system API should be considered for fallback, not the whole process API. Process APIs usually contain heavy orchestration calling many other APIs, which we do not want to repeat by calling the DR environment's process API. So this option is NOT right.
3. One more option suggests adding retry (but no timeout) logic to the process API and handling intermittent failures by invoking the DR environment's system API directly, instead of retrying the primary environment's system API first. This is not a proper fallback: a proper fallback should occur only after all retries against the primary environment are performed and exhausted. Here, the option suggests falling back on the first failure without retrying the main API. So this option is NOT right either.

This leaves one option which is right and the best fit:
- Invoke the system API deployed to the primary environment.
- Add timeout and retry logic to it in the process API.
- If it still fails after all retries, invoke the system API deployed to the DR environment.
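
A minimal sketch of this strategy in plain Python follows, assuming the requests library and hypothetical placeholder URLs for the primary and DR system APIs. In a Mule application the equivalent behavior would typically be configured with timeout/retry and error-handling components rather than hand-written code; this is only to make the call order concrete.

```python
import time
import requests  # third-party HTTP client; both URLs below are hypothetical

PRIMARY_URL = "https://system-api.primary.example.com/customers"
DR_URL = "https://system-api.dr.example.com/customers"

def invoke_with_fallback(retries=3, timeout_s=5, backoff_s=2):
    """Try the primary system API with a timeout and a few retries;
    only after all retries are exhausted fall back to the DR system API."""
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.get(PRIMARY_URL, timeout=timeout_s)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err
            time.sleep(backoff_s * (attempt + 1))  # simple linear backoff between retries

    # Fallback: the DR environment has only ~20% of the primary's rate limit,
    # so it is invoked only when the primary has definitively failed.
    resp = requests.get(DR_URL, timeout=timeout_s)
    resp.raise_for_status()
    return resp.json()
```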

A company uses a hybrid Anypoint Platform deployment model that combines the EU control plane with customer-hosted Mule runtimes. After successfully testing a Mule API implementation in the Staging environment, the Mule API implementation is set with environment-specific properties and must be promoted to the Production environment. What is a way that MuleSoft recommends to configure the Mule API implementation and automate its promotion to the Production environment?

A. Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs
B. Modify the Mule API implementation's properties in the API Manager Properties tab, then promote the Mule API implementation to the Production environment using API Manager
C. Modify the Mule API implementation's properties in Anypoint Exchange, then promote the Mule API implementation to the Production environment using Runtime Manager
D. Use an API policy to change properties in the Mule API implementation deployed to the Staging environment and another API policy to deploy the Mule API implementation to the Production environment
Suggested answer: A

Explanation:

Bundle properties files for each environment into the Mule API implementation's deployable archive, then promote the Mule API implementation to the Production environment using Anypoint CLI or the Anypoint Platform REST APIs.

>> Anypoint Exchange is for asset discovery and documentation. It has no provision for modifying the properties of Mule API implementations at all.
>> API Manager is for managing API instances, their contracts, policies, and SLAs. It also has no provision for modifying the properties of API implementations.
>> API policies address the non-functional requirements of APIs and, again, have no provision for modifying the properties of API implementations.

So, the right and recommended way to do this as a development practice is to bundle properties files for each environment into the Mule API implementation's deployable archive and reference the appropriate file per environment.
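
The idea of bundling per-environment properties and selecting one at deploy time is sketched below in Python, purely for illustration. In a Mule project this is typically done with per-environment properties/YAML files selected through a deployment property; the file names and the APP_ENV variable here are assumptions, not part of the question.

```python
import json
import os

# Hypothetical property files, one per environment, packaged inside the same artifact.
PROPERTY_FILES = {
    "staging": "config/staging.properties.json",
    "production": "config/production.properties.json",
}

def load_properties():
    """Pick the properties file for the current environment. The same artifact is
    promoted unchanged; only the environment selector differs per deployment."""
    env = os.environ.get("APP_ENV", "staging")  # APP_ENV is an illustrative name
    with open(PROPERTY_FILES[env]) as fh:
        return json.load(fh)
```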

An organization wants to make sure only known partners can invoke the organization's APIs. To achieve this security goal, the organization wants to enforce a Client ID Enforcement policy in API Manager so that only registered partner applications can invoke the organization's APIs. In what type of API implementation does MuleSoft recommend adding an API proxy to enforce the Client ID Enforcement policy, rather than embedding the policy directly in the application's JVM?

A. A Mule 3 application using APIkit
B. A Mule 3 or Mule 4 application modified with custom Java code
C. A Mule 4 application with an API specification
D. A non-Mule application
Suggested answer: D

Explanation:

A non-Mule application.

>> All types of Mule applications (Mule 3, Mule 4, with APIkit, with custom Java code, etc.) running on Mule runtimes support embedded policy enforcement.
>> The only option that cannot have embedded policy enforcement, and therefore must have an API proxy, is a non-Mule application.

So, a non-Mule application is the right answer.

A company wants to move its Mule API implementations into production as quickly as possible. To protect access to all Mule application data and metadata, the company requires that all Mule applications be deployed to the company's customer-hosted infrastructure within the corporate firewall. What combination of runtime plane and control plane options meets these project lifecycle goals?

A. Manually provisioned customer-hosted runtime plane and customer-hosted control plane
B. MuleSoft-hosted runtime plane and customer-hosted control plane
C. Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
D. iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane
Suggested answer: A

Explanation:

Manually provisioned customer-hosted runtime plane and customer-hosted control plane.

There are two key factors to take into consideration from the scenario given in the question:
>> The company requires both data and metadata to reside within the corporate firewall.
>> The company would like to go with customer-hosted infrastructure.

Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud such as Azure or AWS) will have to share at least the metadata. Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted runtime plane, but with a MuleSoft-hosted/cloud-based control plane, at least some minimum level of metadata must be sent outside the corporate firewall.

As the customer requirement is clear that both data and metadata must stay within the corporate firewall, even though the customer wants to move to production as quickly as possible, the nature of their security requirements leaves no option other than a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.
