
MuleSoft MCIA - Level 1 Practice Test - Questions Answers, Page 3

Refer to the exhibit.

A Mule application is deployed to a multi-node Mule runtime cluster. The Mule application uses the competing consumer pattern among its cluster replicas to receive JMS messages from a JMS queue.

To process each received JMS message, the following steps are performed in a flow:

Step 1: The JMS Correlation ID header is read from the received JMS message.

Step 2: The Mule application invokes an idempotent SOAP web service over HTTPS, passing the JMS Correlation ID as one parameter in the SOAP request.

Step 3: The response from the SOAP webservice also returns the same JMS Correlation ID.

Step 4: The JMS Correlation ID received from the SOAP webservice is validated to be identical to the JMS Correlation ID received in Step 1.

Step 5: The Mule application creates a response JMS message, setting the JMS Correlation ID message header to the validated JMS Correlation ID and publishes that message to a response JMS queue.

Where should the Mule application store the JMS Correlation ID values received in Step 1 and Step 3 so that the validation in Step 4 can be performed, while also making the overall Mule application highly available, fault-tolerant, performant, and maintainable?

A. Both Correlation ID values should be stored in a persistent object store
B. Both Correlation ID values should be stored in a non-persistent object store
C. The Correlation ID value in Step 1 should be stored in a persistent object store; the Correlation ID value in Step 3 should be stored as a Mule event variable/attribute
D. Both Correlation ID values should be stored as Mule event variables/attributes
Suggested answer: C

Explanation:

* If the Correlation ID value from Step 1 were stored as a Mule event variable/attribute, it would be lost on a runtime restart or node failure, and the requirement is fault tolerance. It should therefore be stored in a persistent object store.

* The Correlation ID value from Step 3 does not need to go into the persistent object store. It could be stored there, but the requirement also asks for a performant application, so the extra object store access should be avoided; an event variable is sufficient for the in-flight validation.

* Accessing a persistent object store is comparatively slow, because in a cluster the persistent store is by default backed by a shared file system.

* Because the SOAP service is idempotent, any failed call can safely be retried using the Correlation ID saved in Step 1, and the returned Correlation ID can then be validated.
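The persistent half of this answer can be sketched in Mule 4 XML. This is a minimal illustration, not a reference implementation; the object store name, queue name, and the exact JMS attribute paths are assumptions:

```xml
<!-- Sketch only: names and attribute paths are hypothetical -->
<os:object-store name="correlationStore" persistent="true"/>

<flow name="processJmsMessageFlow">
  <jms:listener config-ref="JMS_Config" destination="requestQueue"/>

  <!-- Step 1: read the Correlation ID and persist it, so validation
       survives a node failure or restart anywhere in the cluster -->
  <os:store objectStore="correlationStore"
            key="#[attributes.headers.correlationId]">
    <os:value>#[attributes.headers.correlationId]</os:value>
  </os:store>

  <!-- Step 3: keep the Correlation ID returned by the SOAP service as an
       event variable; no second object store access is needed, which keeps
       the flow performant -->
  <set-variable variableName="returnedCorrelationId"
                value="#[payload.correlationId]"/>
</flow>
```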

Additional Information:

* Competing Consumers are multiple consumers that are all created to receive messages from a single Point-to-Point Channel. When the channel delivers a message, any of the consumers could potentially receive it. The messaging system's implementation determines which consumer actually receives the message, but in effect the consumers compete with each other to be the receiver. Once a consumer receives a message, it can delegate to the rest of its application to help process the message.

* In case you are unaware of the term idempotent, here is more information:

An idempotent operation produces the same result no matter how many times it is invoked.

An integration Mule application is being designed to process orders by submitting them to a backend system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be submitted to a backend system. Orders that cannot be successfully submitted due to rejections from the backend system will need to be processed manually (outside the backend system).

The Mule application will be deployed to a customer-hosted runtime and is able to use an existing ActiveMQ broker if needed.

The backend system has a track record of unreliability both due to minor network connectivity issues and longer outages.

What idiomatic (used for their intended purposes) combination of Mule application components and ActiveMQ queues are required to ensure automatic submission of orders to the backend system, while minimizing manual order processing?

A. An On Error scope; a non-persistent VM queue; an ActiveMQ Dead Letter Queue for manual processing
B. An On Error scope; a MuleSoft Object Store; an ActiveMQ Dead Letter Queue for manual processing
C. An Until Successful component; a MuleSoft Object Store; ActiveMQ is NOT needed or used
D. An Until Successful component; an ActiveMQ long-retry queue; an ActiveMQ Dead Letter Queue for manual processing
Suggested answer: D

Explanation:

The correct combination is an Until Successful component, an ActiveMQ long-retry queue, and an ActiveMQ Dead Letter Queue for manual processing. Before seeing why, a few concepts are worth reviewing.

Until Successful scope: the Until Successful scope processes messages through its processors until the entire operation succeeds. It repeatedly retries to process a message that is attempting to complete an activity such as:

- Dispatching to outbound endpoints, for example, when calling a remote web service that may have availability issues.

- Executing a component method, for example, when executing on a Spring bean that may depend on unreliable resources.

- A sub-flow execution, to keep re-executing several actions until they all succeed.

- Any other message processor execution, to allow more complex scenarios.

How this helps the requirement: using the Until Successful scope, the application can retry sending the order to the backend system on error, avoiding manual processing later. The retry count and frequency are configurable on the scope.

Apache ActiveMQ: an open-source message broker written in Java, together with a full Java Message Service (JMS) client. ActiveMQ can deliver messages with delays thanks to its scheduler; this functionality is the basis for the broker redelivery plug-in. The redelivery plug-in can intercept dead-letter processing and reschedule failing messages for redelivery. Rather than being delivered to a DLQ, a failing message is scheduled to go to the tail of the original queue and be redelivered to a message consumer.

How this helps the requirement: if the backend application is down for a duration longer than the Until Successful scope can cover, the ActiveMQ long-retry queue takes over; the redelivery plug-in intercepts dead-letter processing and reschedules the failing messages for redelivery. Messages that still cannot be delivered land on the Dead Letter Queue for manual processing.

Mule reference:

https://docs.mulesoft.com/mule-runtime/4.3/migration-core-until-successful
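The combination above might be sketched like this in Mule 4 XML; the queue names, retry values, and config names are illustrative, not prescribed by the question:

```xml
<flow name="submitOrderFlow">
  <!-- Short-lived outages: retry the backend call in-process -->
  <until-successful maxRetries="5" millisBetweenRetries="60000">
    <http:request method="POST" config-ref="Backend_HTTP" path="/orders"/>
  </until-successful>

  <error-handler>
    <!-- Longer outages: once in-process retries are exhausted, hand the
         order to an ActiveMQ long-retry queue; the broker redelivery
         plug-in reschedules it, and messages that still fail end up on a
         Dead Letter Queue for manual processing -->
    <on-error-continue type="RETRY_EXHAUSTED">
      <jms:publish config-ref="ActiveMQ_Config" destination="orders.longRetry"/>
    </on-error-continue>
  </error-handler>
</flow>
```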

What comparison is true about a CloudHub Dedicated Load Balancer (DLB) vs. the CloudHub Shared Load Balancer (SLB)?

A. Only a DLB allows the configuration of a custom TLS server certificate
B. Only the SLB can forward HTTP traffic to the VPC-internal ports of the CloudHub workers
C. Both a DLB and the SLB allow the configuration of access control via IP whitelists
D. Both a DLB and the SLB implement load balancing by sending HTTP requests to workers with the lowest workloads
Suggested answer: A

Explanation:

* Shared load balancers don't allow you to configure custom SSL certificates or proxy rules.

* Dedicated load balancers are optional and must be purchased additionally if needed.

* TLS is a cryptographic protocol that provides communications security for your Mule app. TLS offers many different ways of exchanging keys for authentication, encrypting data, and guaranteeing message integrity.

* The CloudHub Shared Load Balancer terminates TLS connections and uses its own server-side certificate.

* Only a DLB allows the configuration of a custom TLS server certificate. A DLB enables you to define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.

* To use a DLB in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.

* MuleSoft reference: https://docs.mulesoft.com/runtime-manager/dedicated-load-balancer-tutorial

Additional nodes are being added to an existing customer-hosted Mule runtime cluster to improve performance. Mule applications deployed to this cluster are invoked by API clients through a load balancer.

What is also required to carry out this change?

A. A new load balancer must be provisioned to allow traffic to the new nodes in a round-robin fashion
B. External monitoring tools or log aggregators must be configured to recognize the new nodes
C. API implementations using an object store must be adjusted to recognize the new nodes and persist to them
D. New firewall rules must be configured to accommodate communication between API clients and the new nodes
Suggested answer: B

Explanation:

* Clustering is a group of servers or Mule runtimes that acts as a single unit.

* MuleSoft Enterprise Edition supports scalable clustering to provide high availability for Mule applications.

* In simple terms, a cluster is a virtual server composed of multiple nodes that communicate and share information through a distributed shared memory grid.

* By default, MuleSoft ensures high availability of applications when clustering is implemented.

* Consider the scenario where one of the nodes in the cluster crashes or goes down for maintenance. In such cases, MuleSoft ensures that requests are processed by the other nodes in the cluster. Mule clustering also ensures that requests are load-balanced between all the nodes in a cluster.

* Clustering is only supported by on-premises Mule runtimes; it is not supported in CloudHub.

The correct answer is: external monitoring tools or log aggregators must be configured to recognize the new nodes. The rest of the options are automatically taken care of when a new node is added to the cluster.

Reference: https://docs.mulesoft.com/runtime-manager/cluster-about
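On a customer-hosted cluster, each node identifies itself through a mule-cluster.properties file under $MULE_HOME/.mule; a new node joins with its own copy sharing the cluster ID. The values below are purely illustrative:

```
# $MULE_HOME/.mule/mule-cluster.properties (illustrative values)
mule.clusterId=orders-cluster
mule.clusterNodeId=3
```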

Refer to the exhibit.

An organization is designing a Mule application to receive data from one external business partner.

The two companies currently have no shared IT infrastructure and do not want to establish one.

Instead, all communication should be over the public internet (with no VPN).

What Anypoint Connector can be used in the organization's Mule application to securely receive data from this external business partner?

A. File connector
B. VM connector
C. SFTP connector
D. Object Store connector
Suggested answer: C

Explanation:

* The Object Store and VM connectors are used for sharing data within or between Mule applications in the same setup; they can't be used with an external business partner.

* The File connector is also not useful, as the two companies have no shared IT infrastructure; it is specific to local use.

* The correct answer is the SFTP connector. The SFTP connector implements a secure file transport channel so that your Mule application can exchange files with external resources. SFTP uses the SSH security protocol to transfer messages.

You can implement the SFTP endpoint as an inbound endpoint with a one-way exchange pattern, or as an outbound endpoint configured for either a one-way or request-response exchange pattern.
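As a sketch, a Mule 4 inbound SFTP endpoint for this scenario could look like the following; the host, credentials, directory, and polling frequency are placeholders:

```xml
<sftp:config name="Partner_SFTP">
  <!-- SFTP runs over SSH, so the transfer is encrypted end to end -->
  <sftp:connection host="sftp.partner.example.com" port="22"
                   username="${sftp.user}" password="${sftp.password}"/>
</sftp:config>

<flow name="receivePartnerFilesFlow">
  <!-- Poll the agreed drop directory over the public internet, no VPN needed -->
  <sftp:listener config-ref="Partner_SFTP" directory="/inbound" autoDelete="true">
    <scheduling-strategy>
      <fixed-frequency frequency="60" timeUnit="SECONDS"/>
    </scheduling-strategy>
  </sftp:listener>
  <!-- ... process the received file ... -->
</flow>
```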

An organization is creating a set of new services that are critical for their business. The project team prefers using REST for all services but is willing to use SOAP with common WS-* standards if a particular service requires it.

What requirement would drive the team to use SOAP/WS-* for a particular service?

A. Must use XML payloads for the service and ensure that it adheres to a specific schema
B. Must publish and share the service specification (including data formats) with the consumers of the service
C. Must support message acknowledgement and retry as part of the protocol
D. Must secure the service, requiring all consumers to submit a valid SAML token
Suggested answer: D

Explanation:

Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SP).

SAML transactions use Extensible Markup Language (XML) for standardized communications between the identity provider and service providers.

SAML is the link between the authentication of a user's identity and the authorization to use a service.

WS-Security is the key extension that supports many authentication models including: basic username/password credentials, SAML, OAuth and more.

A common way that SOAP APIs are authenticated is via SAML Single Sign-On (SSO). SAML works by facilitating the exchange of authentication and authorization credentials across applications.

However, there is no specification that describes how to add SAML to REST web services.

Reference : https://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf
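For illustration, a SOAP request secured this way carries the SAML assertion inside a WS-Security header, per the WS-Security SAML Token Profile; the assertion contents below are elided:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <!-- The consumer submits its SAML token here; the service validates
           it before processing the request -->
      <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                      Version="2.0" ID="..." IssueInstant="...">
        <!-- issuer, subject, conditions, signature ... -->
      </saml:Assertion>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>
    <!-- service payload -->
  </soapenv:Body>
</soapenv:Envelope>
```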

Refer to the exhibit.

A business process involves two APIs that interact with each other asynchronously over HTTP. Each API is implemented as a Mule application. API 1 receives the initial HTTP request and invokes API 2 (in a fire-and-forget fashion), while API 2, upon completion of the processing, calls back into API 1 to notify about completion of the asynchronous process.

Each API is deployed to multiple redundant Mule runtimes behind a separate load balancer, and each is deployed to a separate network zone.

In the network architecture, how must the firewall rules be configured to enable the above interaction between API 1 and API 2?

A. To authorize the certificate to be used by both APIs
B. To enable communication from each API's Mule runtimes and network zone to the load balancer of the other API
C. To open direct two-way communication between the Mule runtimes of both APIs
D. To allow communication between the load balancers used by each API
Suggested answer: B

Explanation:

* If your API implementation involves putting a load balancer in front of your APIkit application, configure the load balancer to redirect URLs that reference the baseUri of the application directly. If the load balancer does not redirect URLs, any calls that reach the load balancer looking for the application do not reach their destination.

* When you receive incoming traffic through the load balancer, the responses will go out the same way. However, traffic that is originating from your instance will not pass through the load balancer.

Instead, it is sent directly from the public IP address of your instance out to the Internet. The ELB is not involved in that scenario.

* The question says "each API is deployed to multiple redundant Mule runtimes", which hints at a self-hosted Mule runtime cluster. Inbound traffic must be allowed to each load balancer, and outbound traffic must be allowed from each runtime so it can make requests out.

* Hence the correct approach is to enable communication from each API's Mule runtimes and network zone to the load balancer of the other API. Because the communication is asynchronous, each callback enters through the other API's load balancer rather than reusing an existing connection.

Reference: https://docs.mulesoft.com/apikit/4.x/configure-load-balancer-task

An organization is designing the following two Mule applications that must share data via a common persistent object store instance:

- Mule application P will be deployed within their on-premises datacenter.

- Mule application C will run on CloudHub in an Anypoint VPC.

The object store implementation used by CloudHub is the Anypoint Object Store v2 (OSv2). What type of object store(s) should be used, and what design gives both Mule applications access to the same object store instance?

A. Application P uses the Object Store connector to access a persistent object store; Application C accesses this persistent object store via the Object Store REST API through an IPsec tunnel
B. Applications C and P both use the Object Store connector to access the Anypoint Object Store v2
C. Application C uses the Object Store connector to access a persistent object store; Application P accesses the persistent object store via the Object Store REST API
D. Applications C and P both use the Object Store connector to access a persistent object store
Suggested answer: C

Explanation:

The correct answer is: Application C uses the Object Store connector to access a persistent object store, and Application P accesses the persistent object store via the Object Store REST API.

* Object Store v2 lets CloudHub applications store data and states across batch processes, Mule components, and applications, from within an application or by using the Object Store REST API.

* On-premises Mule applications cannot use Object Store v2 through the connector.

* You can select Object Store v2 as the implementation for Mule 3 and Mule 4 in CloudHub by checking the Object Store V2 checkbox in Runtime Manager at deployment time.

* CloudHub Mule applications can use the Object Store connector to write to the object store.

* The only way on-premises Mule applications can access Object Store v2 is via the Object Store REST API.

* You can configure a Mule app to use the Object Store REST API to store and retrieve values from an object store in another Mule app.
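As a sketch of the on-premises side, Application P could reach the shared store with an HTTP Request against the Object Store v2 REST API. The host, resource path layout, and token handling below are assumptions to be checked against the OSv2 REST API documentation:

```xml
<http:request-config name="OSv2_REST">
  <http:request-connection protocol="HTTPS"
                           host="object-store-us-east-1.anypoint.mulesoft.com"
                           port="443"/>
</http:request-config>

<flow name="readSharedValueFlow">
  <!-- Hypothetical resource path: org, environment, store, and key IDs
       identify the same OSv2 instance that Application C writes to -->
  <http:request method="GET" config-ref="OSv2_REST"
                path="/api/v1/organizations/${orgId}/environments/${envId}/stores/${storeId}/keys/orderStatus">
    <http:headers>#[{'Authorization': 'Bearer ' ++ vars.accessToken}]</http:headers>
  </http:request>
</flow>
```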

What limits whether a particular Anypoint Platform user can discover an asset in Anypoint Exchange?

A. Design Center and RAML were both used to create the asset
B. The existence of a public Anypoint Exchange portal to which the asset has been published
C. The type of the asset in Anypoint Exchange
D. The business groups to which the user belongs
Suggested answer: D

Explanation:

* "The existence of a public Anypoint Exchange portal to which the asset has been published" is incorrect: the question does not mention anything about the public portal, and besides, the public portal is open to the internet, to anyone.

* If you cannot find an asset in the current business group scope, search in other scopes. In the left navigation bar, click All assets (assets provided by MuleSoft and your own master organization), Provided by MuleSoft, or a business group scope. A user belonging to one business group can see assets related to that group only.

The correct answer is: the business groups to which the user belongs.

Reference:
https://docs.mulesoft.com/exchange/to-find-info
https://docs.mulesoft.com/exchange/asset-details

When using Anypoint Platform across various lines of business with their own Anypoint Platform business groups, what configuration of Anypoint Platform is always performed at the organization level as opposed to at the business group level?

A. Environment setup
B. Identity management setup
C. Role and permission setup
D. Dedicated Load Balancer setup
Suggested answer: B

Explanation:

* Roles are business-group specific. Identity management is configured in the Anypoint Platform master organization: as the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).

* Roles and permissions can be set up at both the business group and organization level, but identity management setup is done only at the organization level.

* Business groups are self-contained resource groups that contain Anypoint Platform resources such as applications and APIs. Business groups provide a way to separate and control access to Anypoint Platform resources, because users have access only to the business groups they belong to.

Total 244 questions