Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 8

A company has created a successful enterprise data model (EDM). The company is committed to building an application network by adopting modern APIs as a core enabler of the company's IT operating model. At what API tiers (experience, process, system) should the company require reusing the EDM when designing modern API data models?

A. At the experience and process tiers
B. At the experience and system tiers
C. At the process and system tiers
D. At the experience, process, and system tiers

Suggested answer: C

Explanation:

At the process and system tiers.

>> Experience-layer APIs are modeled and designed exclusively for the end user's experience, so their data models vary with the nature and type of the API consumer. For example, mobile consumers need lightweight data models that transfer easily over the wire, whereas web-based consumers need richer data models to render most of the information on web pages.

>> Enterprise data models work well as canonical models, but they are not a good fit for experience APIs.

>> That is why EDMs should be reused extensively in the process and system tiers, but NOT in the experience tier.
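
To make the distinction concrete, below is a minimal, hypothetical sketch (plain Python, with made-up type and field names) of how a canonical EDM type could be reused as-is by system- and process-tier APIs while an experience API exposes only a trimmed, channel-specific view:

```python
# Hypothetical illustration only: type and field names are invented, not from
# any real enterprise data model.
from dataclasses import dataclass


@dataclass
class EdmCustomer:
    """Canonical customer type from the enterprise data model (EDM).
    System and process APIs reuse this shape as-is."""
    customer_id: str
    legal_name: str
    tax_id: str
    billing_address: str
    shipping_address: str
    credit_limit: float


@dataclass
class MobileCustomerView:
    """Lightweight, consumer-specific model exposed by a mobile experience API."""
    customer_id: str
    display_name: str


def to_mobile_view(customer: EdmCustomer) -> MobileCustomerView:
    # The experience tier adapts the canonical model to what the channel needs,
    # dropping fields a mobile client does not render.
    return MobileCustomerView(customer_id=customer.customer_id,
                              display_name=customer.legal_name)
```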

The application network is recomposable: it is built for change because it 'bends but does not break'

A. TRUE
B. FALSE

Suggested answer: A

Explanation:

>> An application network is a recomposable architecture.

>> This means it can be altered without disturbing the entire architecture and its components.

>> It bends as requirements or designs change, but it does not break.

A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?

A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response
B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses
C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment
D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures

Suggested answer: A

Explanation:

In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response.

>> The requirement in the given scenario is to respond in the least possible time.

>> The option that suggests first trying the API in the primary environment and falling back to the API in the DR environment on failure would produce a successful response, but NOT in the least possible time. So it is NOT the right implementation for this requirement.

>> The option that suggests invoking ONLY the API in the primary environment with timeout and retry logic may also eventually succeed after retries, but again NOT in the least possible time. So it is also NOT the right implementation.

>> The option that suggests invoking both environments in parallel with Scatter-Gather would return the wrong API response, because Scatter-Gather merges the results. Moreover, although Scatter-Gather runs its routes in parallel, it completes only after ALL routes inside it have finished, so it does not return as soon as the first response arrives. So this is NOT the right implementation either.

>> The correct choice is to invoke the API in the primary environment and the API in the DR environment in parallel, and to use ONLY the first response received from either of them.
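
As a rough illustration of the winning pattern, here is a hypothetical Python sketch (URLs, timeout values, and function names are invented) that calls both environments in parallel and uses only the first response received; a real process API would implement the equivalent inside a Mule flow:

```python
# Hypothetical sketch: invoke primary and DR in parallel, take the first response.
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

PRIMARY_URL = "https://system-api.primary.example.com/orders/42"  # assumed endpoint
DR_URL = "https://system-api.dr.example.com/orders/42"            # assumed endpoint


def call_system_api(url: str) -> bytes:
    # Each environment guarantees ~100 ms per request; allow a little network slack.
    with urllib.request.urlopen(url, timeout=0.5) as response:
        return response.read()


def first_successful_response() -> bytes:
    """Fire both requests at once and return whichever completes first."""
    pool = ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(call_system_api, url) for url in (PRIMARY_URL, DR_URL)]
    errors = []
    try:
        for future in as_completed(futures):
            try:
                return future.result()   # first response wins
            except Exception as exc:     # that environment failed; wait for the other
                errors.append(exc)
        raise RuntimeError(f"Both environments failed: {errors}")
    finally:
        pool.shutdown(wait=False)        # do not wait for the slower call
```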

An organization uses various cloud-based SaaS systems and multiple on-premises systems. The on-premises systems are an important part of the organization's application network and can only be accessed from within the organization's intranet.

What is the best way to configure and use Anypoint Platform to support integrations with both the cloud-based SaaS systems and on-premises systems?

A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint Platform Private Cloud Edition control plane

B) Use CloudHub-deployed Mule runtimes in the shared worker cloud managed by the MuleSoft-hosted Anypoint Platform control plane

C) Use an on-premises installation of Mule runtimes that are completely isolated with NO external network access, managed by the Anypoint Platform Private Cloud Edition control plane

D) Use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Anypoint Platform control plane

A. Option A
B. Option B
C. Option C
D. Option D

Suggested answer: D

Explanation:

Use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Anypoint Platform control plane.

Key details from the given scenario:

>> The organization uses BOTH cloud-based SaaS systems and on-premises systems.

>> The on-premises systems can only be accessed from within the organization's intranet.

Evaluating the given choices against these key details:

>> CloudHub-deployed Mule runtimes can ONLY be managed by the MuleSoft-hosted control plane; the Anypoint Platform Private Cloud Edition control plane CANNOT manage CloudHub Mule runtimes. So the option suggesting this is INVALID.

>> Using ONLY CloudHub-deployed Mule runtimes in the shared worker cloud managed by the MuleSoft-hosted Anypoint Platform control plane does not satisfy the scenario, because those runtimes cannot reach the on-premises systems that are accessible only from the intranet. So this option is INVALID.

>> Using an on-premises installation of Mule runtimes that is completely isolated with NO external network access, managed by the Anypoint Platform Private Cloud Edition control plane, would work for the on-premises integrations. However, with NO external access, the cloud-based SaaS systems cannot be integrated, and CloudHub-hosted apps are the better fit for integrating with SaaS applications. So this option is INVALID as well.

>> The best way to configure and use Anypoint Platform to support these mixed/hybrid integrations is therefore to use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes, all managed by the MuleSoft-hosted Anypoint Platform control plane.

When must an API implementation be deployed to an Anypoint VPC?

A. When the API implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer-managed AWS instance
B. When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access
C. When the API implementation must be deployed to a production AWS VPC using the Mule Maven plugin
D. When the API implementation must write to a persistent Object Store

Suggested answer: B

What is true about API implementations when dealing with legal regulations that require all data processing to be performed within a certain jurisdiction (such as in the USA or the EU)?

A. They must avoid using the Object Store as it depends on services deployed ONLY to the US East region
B. They must use a jurisdiction-local external messaging system such as ActiveMQ rather than Anypoint MQ
C. They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction
D. They must ensure ALL data is encrypted both in transit and at rest

Suggested answer: C

Explanation:

They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction.

>> As per the legal regulations, all data processing must be performed within a certain jurisdiction: data in the USA must stay within the USA, and data in the EU must stay within the EU.

>> Merely encrypting the data in transit and at rest does not make the solution compliant; the data must also not leave the jurisdiction.

>> The data in question is not just the messages published to Anypoint MQ. It also includes the running applications, transaction state, application logs, events, metrics, and other metadata. So simply replacing Anypoint MQ with a locally hosted ActiveMQ does NOT help.

>> Likewise, the data is not just the key/value pairs stored in Object Store; it includes published messages, running applications, transaction state, application logs, events, metrics, and other metadata. So simply avoiding Object Store does NOT help.

>> The only remaining option, and the right one among the given choices, is to deploy the applications to runtime planes and control planes that are both within the jurisdiction.
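
The rule can be summarized as "both planes inside the jurisdiction." Here is a tiny, hypothetical sketch of that check (region names and the jurisdiction mapping are invented for illustration):

```python
# Hypothetical compliance check: both the control plane and the runtime plane
# must be hosted in the required jurisdiction. Region names are made up.
JURISDICTION_BY_REGION = {
    "us-east-1": "USA",
    "us-west-2": "USA",
    "eu-central-1": "EU",
    "eu-west-1": "EU",
}


def deployment_is_compliant(control_plane_region: str,
                            runtime_plane_region: str,
                            required_jurisdiction: str) -> bool:
    """True only if BOTH planes sit inside the required jurisdiction."""
    return (JURISDICTION_BY_REGION.get(control_plane_region) == required_jurisdiction
            and JURISDICTION_BY_REGION.get(runtime_plane_region) == required_jurisdiction)


assert deployment_is_compliant("eu-central-1", "eu-west-1", "EU")
# A control plane outside the EU would leak metadata out of the jurisdiction:
assert not deployment_is_compliant("us-east-1", "eu-west-1", "EU")
```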

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0 following accepted semantic versioning practices and the changes have been communicated via the API's public portal.

The API endpoint does NOT change in the new version.

How should the developer of an API client respond to this change?

A. The update should be identified as a project risk and full regression testing of the functionality that uses this API should be run
B. The API producer should be contacted to understand the change to existing functionality
C. The API producer should be requested to run the old version in parallel with the new one
D. The API client code ONLY needs to be changed if it needs to take advantage of new features

Suggested answer: D
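
The reasoning is plain semantic versioning: 3.1.1 to 3.2.0 is a MINOR bump, which must be backward compatible, so an existing client only changes if it wants the new features. A small, hypothetical Python sketch of that decision rule (the function is invented for illustration):

```python
# Hypothetical illustration of the semantic-versioning rule behind this answer.
def needs_client_change(current: str, published: str, wants_new_features: bool) -> bool:
    current_major = int(current.split(".")[0])
    published_major = int(published.split(".")[0])
    if published_major != current_major:
        return True              # MAJOR bump: breaking change, client must adapt
    return wants_new_features    # MINOR/PATCH bump: change only to adopt new features


assert needs_client_change("3.1.1", "3.2.0", wants_new_features=False) is False
assert needs_client_change("3.1.1", "3.2.0", wants_new_features=True) is True
assert needs_client_change("3.1.1", "4.0.0", wants_new_features=False) is True
```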

Mule applications that implement a number of REST APIs are deployed to their own subnet that is inaccessible from outside the organization.

External business-partners need to access these APIs, which are only allowed to be invoked from a separate subnet dedicated to partners - called Partner-subnet. This subnet is accessible from the public internet, which allows these external partners to reach it.

Anypoint Platform and Mule runtimes are already deployed in Partner-subnet. These Mule runtimes can already access the APIs.

What is the most resource-efficient solution to comply with these requirements, while having the least impact on other applications that are currently using the APIs?

A. Implement (or generate) an API proxy Mule application for each of the APIs, then deploy the API proxies to the Mule runtimes
B. Redeploy the API implementations to the same servers running the Mule runtimes
C. Add an additional endpoint to each API for partner-enablement consumption
D. Duplicate the APIs as Mule applications, then deploy them to the Mule runtimes

Suggested answer: A
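
Conceptually, an API proxy is a thin pass-through deployed where the partners can reach it; it forwards calls to the unchanged API implementations, so access can be controlled in Partner-subnet without touching the existing APIs or their current consumers. A hypothetical, minimal Python sketch of that pass-through behavior (host names and ports are invented; a real proxy generated from API Manager would also enforce the applied policies):

```python
# Hypothetical pass-through proxy: accept a request on the partner-facing side
# and forward it unchanged to the internal API implementation.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM_BASE = "http://internal-api.example.local:8081"  # assumed internal API address


class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the incoming path to the internal API and relay its response.
        with urllib.request.urlopen(UPSTREAM_BASE + self.path, timeout=5) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "application/json"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


if __name__ == "__main__":
    # Only this proxy listens in the partner-accessible subnet.
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```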

When could the API data model of a System API reasonably mimic the data model exposed by the corresponding backend system, with minimal improvements over the backend system's data model?

A. When there is an existing Enterprise Data Model widely used across the organization
B. When the System API can be assigned to a bounded context with a corresponding data model
C. When a pragmatic approach with only limited isolation from the backend system is deemed appropriate
D. When the corresponding backend system is expected to be replaced in the near future

Suggested answer: C

Explanation:

When a pragmatic approach with only limited isolation from the backend system is deemed appropriate.

General guidance for choosing API data models:

>> If an Enterprise Data Model is in use, then the API data model of System APIs should make use of data types from that Enterprise Data Model, and the corresponding API implementation should translate between these data types and the native data model of the backend system.

>> If no Enterprise Data Model is in use, then each System API should be assigned to a Bounded Context. The API data model of System APIs should make use of data types from the corresponding Bounded Context Data Model, and the corresponding API implementation should translate between these data types and the native data model of the backend system. In this scenario, the data types in the Bounded Context Data Model are defined purely in terms of their business characteristics and are typically not related to the native data model of the backend system; in other words, the translation effort may be significant.

>> If no Enterprise Data Model is in use and defining a clean Bounded Context Data Model is considered too much effort, then the API data model of System APIs should use data types that approximately mirror those of the backend system: same semantics and naming as the backend system, lightly sanitized, exposing all fields needed for the given System API's functionality but not significantly more, and making good use of REST conventions.

The latter approach, i.e., exposing in System APIs an API data model that basically mirrors that of the backend system, does not on its own provide satisfactory isolation from backend systems through the System API tier. In particular, it will typically not be possible to "swap out" a backend system without significantly changing all System APIs in front of that backend system, and therefore the API implementations of all Process APIs that depend on those System APIs. This is because it is not desirable to prolong the life of a previous backend system's data model in the form of the API data model of System APIs that now front a new backend system; the API data models of System APIs following this approach must therefore change when the backend system is replaced.

On the other hand, this approach:

>> Is very pragmatic and adds comparatively little overhead over accessing the backend system directly

>> Isolates API clients from intricacies of the backend system other than the data model (protocol, authentication, connection pooling, network address, ...)

>> Allows the usual API policies to be applied to System APIs

>> Makes the API data model for interacting with the backend system explicit and visible, by exposing it in the RAML definitions of the System APIs

>> Leaves further isolation from the backend system data model to the API implementations of the Process API tier
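
As an illustration of the "pragmatic" option, here is a hypothetical Python sketch of a System API mapping that lightly sanitizes a backend record while keeping its semantics and naming (the backend column names and helper are invented):

```python
# Hypothetical sketch: the System API data model approximately mirrors the
# backend system, with light sanitization and internal columns dropped.
def to_system_api_model(backend_record: dict) -> dict:
    """Map a raw backend row to the System API's backend-mirroring resource."""
    return {
        # Same semantics and near-identical naming as the backend system...
        "customerId": backend_record["CUST_ID"],
        "customerName": backend_record["CUST_NM"].strip().title(),
        "status": backend_record["STAT_CD"].lower(),
        # ...but purely technical columns (e.g. ROW_VERSION, AUDIT_USER) are not exposed.
    }


example_row = {"CUST_ID": "C-001", "CUST_NM": " ACME CORP ", "STAT_CD": "ACTIVE",
               "ROW_VERSION": 17, "AUDIT_USER": "batch"}
print(to_system_api_model(example_row))
# {'customerId': 'C-001', 'customerName': 'Acme Corp', 'status': 'active'}
```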

Refer to the exhibit.

An organization uses one specific CloudHub (AWS) region for all CloudHub deployments.

How are CloudHub workers assigned to availability zones (AZs) when the organization's Mule applications are deployed to CloudHub in that region?

A. Workers belonging to a given environment are assigned to the same AZ within that region
B. AZs are selected as part of the Mule application's deployment configuration
C. Workers are randomly distributed across available AZs within that region
D. An AZ is randomly selected for a Mule application, and all the Mule application's CloudHub workers are assigned to that one AZ

Suggested answer: C

Explanation:

Workers are randomly distributed across available AZs within that region.

>> Currently, we can only choose which AWS region to deploy to; there is no configuration or deployment option that controls which Availability Zone (AZ) is assigned to which worker.

>> There are NO fixed or implicit platform rules for assigning AZs to workers based on environment or application.

>> AZ assignment is completely random. However, CloudHub does ensure high availability by spreading an application's workers across more than one AZ, so that all workers of the same application are not placed in a single AZ.
