MuleSoft MCIA - Level 1 Practice Test - Questions Answers, Page 4


Mule application A receives an Anypoint MQ request message REQU with a payload containing a variable-length list of request objects. Application A uses the For Each scope to split the list into individual objects and sends each object as a message to an Anypoint MQ queue.

Service S listens on that queue, processes each message independently of all other messages, and sends a response message to a response queue.

Application A listens on that response queue and must in turn create and publish a response Anypoint MQ message RESP with a payload containing the list of responses sent by service S in the same order as the request objects originally sent in REQU.

Assume successful response messages are returned by service S for all request messages.

What is required so that application A can ensure that the length and order of the list of objects in RESP and REQU match, while at the same time maximizing message throughput?

A.
Use a Scatter-Gather within the For Each scope to ensure response message order. Configure the Scatter-Gather with a persistent object store.
B.
Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.
C.
Use an Async scope within the For Each scope and collect response messages in a second For Each scope in the order in which they arrive, then send RESP using this list of responses.
D.
Keep track of the list length and all object indices in REQU, both in the For Each scope and in all communication involving service S. Use persistent storage when creating RESP.
Suggested answer: B

Explanation:

The correct answer is: Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.

Using Anypoint MQ, you can create two types of queues. Standard queues do not guarantee a specific message order; they are the best fit for applications in which messages must be delivered quickly. FIFO (first in, first out) queues ensure that your messages arrive in order; they are the best fit for applications requiring strict message ordering and exactly-once delivery, but in which message delivery speed is of less importance. FIFO queues do not appear in any of the options, and they would also decrease throughput. Similarly, a persistent object store is not the preferred approach when maximizing message throughput, which rules out one of the options.

Scatter-Gather does not support an object store, which rules out another option. Because standard Anypoint MQ queues do not guarantee message order, collecting the responses in a second For Each scope in the order in which they arrive cannot satisfy the ordering requirement. Considering all of the above, the feasible approach is to perform all communication involving service S synchronously from within the For Each scope, so the objects in RESP are in the exact same order as the request objects in REQU.
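As a rough illustration, a synchronous request-reply inside For Each could look like the sketch below. All queue names, the configuration name, and the flow structure are hypothetical, and it assumes application A is the only consumer of the response queue, so each Consume call returns the reply to the object just published:

<!-- Sketch only: queue/config names are placeholders; assumes this app
     is the sole consumer of service-s-responses -->
<flow name="process-requ">
    <anypoint-mq:subscriber config-ref="AMQ_Config" destination="requ-queue"/>
    <set-variable variableName="responses" value="#[[]]"/>
    <foreach collection="#[payload]">
        <!-- Publish one request object to the queue service S listens on -->
        <anypoint-mq:publish config-ref="AMQ_Config" destination="service-s-requests"/>
        <!-- Block until the corresponding response arrives; this preserves REQU order -->
        <anypoint-mq:consume config-ref="AMQ_Config" destination="service-s-responses"/>
        <set-variable variableName="responses" value="#[vars.responses ++ [payload]]"/>
    </foreach>
    <!-- Publish RESP with the responses in the original REQU order -->
    <anypoint-mq:publish config-ref="AMQ_Config" destination="resp-queue">
        <anypoint-mq:body>#[vars.responses]</anypoint-mq:body>
    </anypoint-mq:publish>
</flow>

The trade-off this pattern accepts is that each iteration waits for its response before the next request is sent: order is guaranteed, while throughput is bounded by the one-at-a-time exchange with service S.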

Refer to the exhibit.

A Mule application is being designed to expose a SOAP web service to its clients.

What language is typically used inside the web service's interface definition to define the data structures that the web service is expected to exchange with its clients?

A.
WSDL
B.
XSD
C.
JSON Schema
D.
RAML
Suggested answer: B

Explanation:

Answer: XSD. The web service's interface definition is a WSDL, but the data structures it exchanges (the parameters and return types of its operations) are typically defined in an XML schema (XSD) embedded in or imported by that WSDL. In this approach to developing a web service, you begin with an XSD file that defines the XML data structures to be used as parameters and return types in the web service operations.

Reference:

https://www.w3schools.com/xml/schema_intro.asp
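For illustration, a minimal XSD of the kind referenced from a WSDL might look as follows (all element and type names are invented):

<!-- Illustrative schema: names and namespace are placeholders -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/orders"
           xmlns:tns="http://example.com/orders"
           elementFormDefault="qualified">
    <!-- Data structure the SOAP operations exchange with clients -->
    <xs:element name="OrderRequest" type="tns:OrderRequestType"/>
    <xs:complexType name="OrderRequestType">
        <xs:sequence>
            <xs:element name="orderId" type="xs:string"/>
            <xs:element name="quantity" type="xs:int"/>
        </xs:sequence>
    </xs:complexType>
</xs:schema>

The WSDL's types section imports or embeds such a schema, and the WSDL message parts then refer to these elements.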

An organization has various integrations implemented as Mule applications. Some of these Mule applications are deployed to customer-hosted Mule runtimes (on-premises) while others execute in the MuleSoft-hosted runtime plane (CloudHub). To perform the integration functionality, these Mule applications connect to various backend systems, with multiple applications typically needing to access the same backend systems.

How can the organization most effectively avoid creating duplicates in each Mule application of the credentials required to access the backend systems?

A.
Create a Mule domain project that maintains the credentials as Mule domain-shared resources. Deploy the Mule applications to the Mule domain, so the credentials are available to the Mule applications.
B.
Store the credentials in properties files in a shared folder within the organization's data center. Have the Mule applications load the properties files from this shared location at startup.
C.
Segregate the credentials for each backend system into environment-specific properties files. Package these properties files in each Mule application, from where they are loaded at startup.
D.
Configure or create a credentials service that returns the credentials for each backend system, and that is accessible from customer-hosted and MuleSoft-hosted Mule runtimes. Have the Mule applications load the properties at startup by invoking that credentials service.
Suggested answer: D

Explanation:

* "Create a Mule domain project that maintains the credentials as Mule domain-shared resources" is wrong as domain project is not supported in Cloudhub

* We should Avoid Creating duplicates in each Mule application but below two options cause duplication of credentials - Store the credentials in properties files in a shared folder within the organization's data center. Have the Mule applications load properties files from this shared location at startup - Segregate the credentials for each backend system into environment-specific properties files. Package these properties files in each Mule application, from where they are loaded at startup So these are also wrong choices

* Credentials service is the best approach in this scenario. Mule domain projects are not supported on CloudHub.

Also its is not recommended to have multiple copies of configuration values as this makes difficult to maintain Use the Mule Credentials Vault to encrypt data in a .properties file. (In the context of this document, we refer to the .properties file simply as the properties file.) The properties file in Mule stores data as key-value pairs which may contain information such as usernames, first and last names, and credit card numbers. A Mule application may access this data as it processes messages, for example, to acquire login credentials for an external Web service. However, though this sensitive, private data must be stored in a properties file for Mule to access, it must also be protected against unauthorized ñ and potentially malicious ñ use by anyone with access to the Mule application
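As a sketch of the encrypted-properties approach described above (the file name, property names, and connector config are assumptions, not taken from the question), using the Mule 4 Secure Configuration Properties module:

<!-- Sketch only: file, key, and property names are placeholders.
     The encryption key is supplied at runtime (e.g., as a system property),
     so it is never packaged with the application. -->
<secure-properties:config name="Secure_Props"
                          file="backend-credentials.yaml"
                          key="${mule.encryption.key}"/>

<!-- Connectors reference the encrypted values via the secure:: prefix -->
<db:config name="Backend_DB_Config">
    <db:my-sql-connection host="db.internal" port="3306"
                          user="${secure::backend.db.username}"
                          password="${secure::backend.db.password}"/>
</db:config>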

Refer to the exhibit.

A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. The Mule application has a flow that polls a database and another flow with an HTTP Listener.

HTTP clients send HTTP requests directly to individual cluster nodes.

What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has failed, but before that node is restarted?

A.
Database polling continues. Only HTTP requests sent to the remaining node continue to be accepted.
B.
Database polling stops. All HTTP requests continue to be accepted.
C.
Database polling continues. All HTTP requests continue to be accepted, but requests to the failed node incur increased latency.
D.
Database polling stops. All HTTP requests are rejected.
Suggested answer: A

Explanation:

The correct answer is: Database polling continues; only HTTP requests sent to the remaining node continue to be accepted. The architecture described in the question behaves as follows: when node 1 is down, DB polling continues via node 2, and requests arriving directly at node 2 are also accepted and processed as usual. The only thing that stops working are requests sent to node 1's HTTP Listener. The flaw in this architecture is that the HTTP clients send their requests directly to individual cluster nodes rather than through a load balancer. By default, clustering Mule runtime engines ensures high system availability: if a Mule runtime engine node becomes unavailable due to failure or planned downtime, another node in the cluster assumes the workload and continues to process existing events and messages.
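A sketch of the two flows in question (connector configs, frequency, and the SQL are hypothetical). In a cluster, a polling source such as a Scheduler runs only on the primary node; when that node fails, the surviving node becomes primary and polling resumes there, while each HTTP Listener only ever answers requests that reach its own node:

<!-- Sketch only: config names, frequency, and query are placeholders -->
<flow name="poll-database">
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="30" timeUnit="SECONDS"/>
        </scheduling-strategy>
    </scheduler>
    <!-- Fires on the primary node only; fails over with the primary role -->
    <db:select config-ref="Database_Config">
        <db:sql>SELECT * FROM orders WHERE processed = 0</db:sql>
    </db:select>
</flow>

<flow name="http-api">
    <!-- Bound to the local node; requests sent to the failed node go unanswered -->
    <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
</flow>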

A global organization operates datacenters in many countries. There are private network links between these datacenters because all business data (but NOT metadata) must be exchanged over these private network connections.

The organization does not currently use AWS in any way.

The strategic decision has just been made to rigorously minimize IT operations effort and investment going forward.

What combination of deployment options of the Anypoint Platform control plane and runtime plane(s) best serves this organization at the start of this strategic journey?

A.
MuleSoft-hosted Anypoint Platform control plane; CloudHub Shared Worker Cloud in multiple AWS regions
B.
Anypoint Platform - Private Cloud Edition; customer-hosted runtime plane in each datacenter
C.
MuleSoft-hosted Anypoint Platform control plane; customer-hosted runtime plane in multiple AWS regions
D.
MuleSoft-hosted Anypoint Platform control plane; customer-hosted runtime plane in each datacenter
Suggested answer: D

Explanation:

The correct answer is: MuleSoft-hosted Anypoint Platform control plane; customer-hosted runtime plane in each datacenter. Two details in the question point to this answer.

* Business data must be exchanged over the private network connections, which rules out the MuleSoft-provided CloudHub option for the runtime plane. That leaves either a customer-hosted runtime plane in an external cloud provider or one in the organization's own datacenters. Since the organization does not currently use AWS, it does not have the immediate option of a customer-hosted runtime plane in multiple AWS regions. Hence the most suitable runtime plane is a customer-hosted runtime plane in each datacenter.

* Metadata is not restricted to the organization's premises, so the MuleSoft-hosted Anypoint Platform can be used as the control plane, which also serves the strategic goal of minimizing IT operations effort.

This hybrid model (MuleSoft-hosted control plane, customer-hosted runtime plane) is the best choice to start with. Once the organization matures in its cloud migration, everything can move to MuleSoft-hosted.

Refer to the exhibit.

A Mule application is being designed to be deployed to several CloudHub workers. The Mule application's integration logic is to replicate changed Accounts from Salesforce to a backend system every 5 minutes.

A watermark will be used to only retrieve those Salesforce Accounts that have been modified since the last time the integration logic ran.

What is the most appropriate way to implement persistence for the watermark in order to support the required data replication integration logic?

A.
Persistent Anypoint MQ Queue
B.
Persistent Object Store
C.
Persistent Cache Scope
D.
Persistent VM Queue
Suggested answer: B

Explanation:

* An object store is a facility for storing objects in or across Mule applications. Mule uses object stores to persist data for eventual retrieval.

* Mule provides two types of object stores:

1) In-memory store - stores objects in local Mule runtime memory. Objects are lost on shutdown of the Mule runtime.

2) Persistent store - Mule persists data when an object store is explicitly configured to be persistent.

In a standalone Mule runtime, Mule creates a default persistent store in the file system. If you do not specify an object store, the default persistent object store is used.

MuleSoft Reference: https://docs.mulesoft.com/mule-runtime/3.9/mule-object-stores
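A minimal watermark sketch using a persistent Object Store follows (the store name, key, and timestamp format are assumptions); with Object Store v2 on CloudHub, the persisted value is also shared across the application's workers:

<!-- Sketch only: store name, key, and timestamp format are placeholders -->
<os:object-store name="watermarkStore" persistent="true"/>

<flow name="replicate-accounts">
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="5" timeUnit="MINUTES"/>
        </scheduling-strategy>
    </scheduler>
    <!-- Read the last watermark; fall back to an epoch timestamp on the first run -->
    <os:retrieve key="lastRunTime" objectStore="watermarkStore" target="watermark">
        <os:default-value>#["1970-01-01T00:00:00Z"]</os:default-value>
    </os:retrieve>
    <!-- ... query Salesforce for Accounts modified after vars.watermark
         and replicate them to the backend system ... -->
    <os:store key="lastRunTime" objectStore="watermarkStore">
        <os:value>#[now() as String {format: "yyyy-MM-dd'T'HH:mm:ss'Z'"}]</os:value>
    </os:store>
</flow>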

A new Mule application under development must implement extensive data transformation logic.

Some of the data transformation functionality is already available as external transformation services that are mature and widely used across the organization; the rest is highly specific to the new Mule application.

The organization follows a rigorous testing approach, where every service and application must be extensively acceptance tested before it is allowed to go into production.

What is the best way to implement the data transformation logic for this new Mule application while minimizing the overall testing effort?

A.
Implement and expose all transformation logic as microservices using DataWeave, so it can be reused by any application component that needs it, including the new Mule application
B.
Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services
C.
Extend the existing transformation services with new transformation logic and invoke them from the new Mule application
D.
Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible
Suggested answer: D

Explanation:

The correct answer is: Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible.

* The key here is minimal testing effort. "Extend the existing transformation services" is not feasible because the additional functionality is highly specific to the new Mule application and should not become part of commonly used functionality; changing the shared services would also force them through acceptance testing again. This option is ruled out.

* "Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services" duplicates code that is already mature and widely tested, and the replicated logic would have to be tested all over again. This option is ruled out.

* "Implement and expose all transformation logic as microservices using DataWeave, so it can be reused by any application component that needs it, including the new Mule application" is ruled out because the question specifies that part of the transformation is application-specific and will not be used outside the new application.

A Mule application uses the Database connector.

What condition can the Mule application automatically adjust to or recover from without needing to restart or redeploy the Mule application?

A.
One of the stored procedures being called by the Mule application has been renamed
B.
The database server was unavailable for four hours due to a major outage but is now fully operational again
C.
The credentials for accessing the database have been updated and the previous credentials are no longer valid
D.
The database server has been updated and hence the database driver library/JAR needs a minor version upgrade
Suggested answer: B

Explanation:

* Any change to the application itself requires a restart or redeploy; only issues external to the application can be recovered from automatically. For the following situations, you would need to make changes and redeploy the code:

-- One of the stored procedures being called by the Mule application has been renamed. In this case, the Mule application must be changed to use the new stored procedure name.

-- A required redesign of the Mule application to follow microservice architecture principles. As the code changes, a redeployment is a must.

-- The credentials for accessing the database have been updated and the previous credentials are no longer valid. In this situation you need to restart or redeploy, depending on how the credentials are configured in the Mule application.

* So the correct answer is: The database server was unavailable for four hours due to a major outage but is now fully operational again. This is the only condition external to the application, and the connector can recover from it on its own.
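As an illustration, a reconnection strategy on the connection (host, credentials, database, and retry frequency are placeholders) lets the connector ride out such an outage and recover without a restart or redeploy once the server is back:

<!-- Sketch only: connection details and frequency are placeholders -->
<db:config name="Database_Config">
    <db:my-sql-connection host="db.internal" port="3306"
                          user="${db.user}" password="${db.password}"
                          database="appdb">
        <!-- Keep retrying every 30 seconds until the database is reachable again -->
        <reconnection failsDeployment="false">
            <reconnect-forever frequency="30000"/>
        </reconnection>
    </db:my-sql-connection>
</db:config>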

Refer to the exhibit.

Anypoint Platform supports role-based access control (RBAC) to features of the platform. An organization has configured an external Identity Provider for identity management with Anypoint Platform.

What aspects of RBAC must ALWAYS be controlled from the Anypoint Platform control plane and CANNOT be controlled via the external Identity Provider?

A.
Controlling the business group within Anypoint Platform to which the user belongs
B.
Assigning Anypoint Platform permissions to a role
C.
Assigning Anypoint Platform role(s) to a user
D.
Removing a user's access to Anypoint Platform when they no longer work for the organization
Suggested answer: B

Explanation:

* By default, Anypoint Platform performs its own user management.

- For user management, one external IdP can be integrated with the Anypoint Platform organization (note: not at the business group level).

- Permissions and access control are still enforced inside Anypoint Platform and CANNOT be controlled via the external Identity Provider.

* As the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).

* You can map users in a federated organization's group to a role, which also gives the flexibility of controlling the business group within Anypoint Platform to which the user belongs. A user can likewise be removed from the external identity management system when they no longer work for the organization, after which they can no longer authenticate via SSO to log in to Anypoint Platform.

* Using an external identity provider, we cannot change the permissions of a particular role in MuleSoft Anypoint Platform.

* So the correct answer is: Assigning Anypoint Platform permissions to a role.

An organization uses Mule runtimes which are managed by Anypoint Platform - Private Cloud Edition. What MuleSoft component is responsible for feeding analytics data to non-MuleSoft analytics platforms?

A.
Anypoint Exchange
B.
The Mule runtimes
C.
Anypoint API Manager
D.
Anypoint Runtime Manager
Suggested answer: D

Explanation:

The correct answer is Anypoint Runtime Manager. MuleSoft Anypoint Runtime Manager (ARM) provides connectivity to Mule runtime engines deployed across your organization, providing centralized management, monitoring, and analytics reporting. However, most enterprise customers find it necessary for these on-premises runtimes to integrate with their existing non-MuleSoft analytics/monitoring systems, such as Splunk and ELK, to support a single-pane-of-glass view across the infrastructure.

* You can configure the Runtime Manager agent to export data to external analytics tools.

Using either the Runtime Manager cloud console or Anypoint Platform Private Cloud Edition, you can:

--> Send Mule event notifications, including flow executions and exceptions, to Splunk or ELK.

--> Send API Analytics to Splunk or ELK. Sending data to third-party tools is not supported for applications deployed on CloudHub.

You can use the CloudHub custom log appender to integrate with your logging system. Reference:

https://docs.mulesoft.com/runtime-manager/ https://docs.mulesoft.com/release-notes/runtimemanager-agent/runtime-manager-agent-release-notes

Additional Info:

This can be achieved in three steps:

1) Register an agent with Runtime Manager.

2) Configure a gateway to enable API analytics to be sent to the non-MuleSoft analytics platform (for example, Splunk).

3) Set up dashboards.
