
Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers

List of questions

Question 1


A Mule application is being designed to perform product orchestration. The Mule application needs to join together the responses from an Inventory API and a Product Sales History API with the least latency.

To minimize the overall latency, what is the most idiomatic (used for its intended purpose) design for calling each API request in the Mule application?

Call each API request in a separate lookup call from a DataWeave reduce operator
Call each API request in a separate route of a Scatter-Gather
Call each API request in a separate route of a Parallel For Each scope
Call each API request in a separate Async scope
Suggested answer: B

Explanation:

Scatter-Gather sends a request message to multiple targets concurrently. It collects the responses from all routes, and aggregates them into a single message.
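To make the latency argument concrete, here is a minimal, purely illustrative Java sketch (not Mule configuration; the URLs and product ID are made up) that fires both API calls concurrently and joins the results, which is conceptually what Scatter-Gather does with its routes: the overall latency approaches the slower of the two calls rather than their sum.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ParallelFanOut {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // Fire both requests concurrently, as Scatter-Gather does with its routes
        CompletableFuture<String> inventory = client
                .sendAsync(HttpRequest.newBuilder(URI.create("https://example.com/inventory/42")).build(),
                           HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);

        CompletableFuture<String> salesHistory = client
                .sendAsync(HttpRequest.newBuilder(URI.create("https://example.com/sales-history/42")).build(),
                           HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);

        // Join the two responses into a single aggregated result
        String aggregated = inventory.thenCombine(salesHistory,
                (inv, sales) -> "{ \"inventory\": " + inv + ", \"salesHistory\": " + sales + " }")
                .join();

        System.out.println(aggregated);
        // Overall latency is roughly max(latency of each call), not their sum
    }
}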


Question 2


An organization will deploy Mule applications to CloudHub. Business requirements mandate that all application logs be stored ONLY in an external Splunk consolidated logging service and NOT in CloudHub.

In order to most easily store Mule application logs ONLY in Splunk, how must Mule application logging be configured in Runtime Manager, and where should the log4j2 Splunk appender be defined?

Keep the default logging configuration in Runtime Manager. Define the Splunk appender in ONE global log4j.xml file that is uploaded once to Runtime Manager to support all Mule application deployments.
Disable CloudHub logging in Runtime Manager. Define the Splunk appender in EACH Mule application's log4j2.xml file.
Disable CloudHub logging in Runtime Manager. Define the Splunk appender in ONE global log4j.xml file that is uploaded once to Runtime Manager to support all Mule application deployments.
Keep the default logging configuration in Runtime Manager. Define the Splunk appender in EACH Mule application's log4j2.xml file.
Suggested answer: B

Explanation:

By default, CloudHub replaces a Mule application's log4j2.xml file with a CloudHub log4j2.xml file. In CloudHub, you can disable the CloudHub-provided Mule application log4j2 file. This allows integrating Mule application logs with custom or third-party log management systems.
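For illustration only, the sketch below uses Log4j's programmatic ConfigurationBuilder to express what such a Splunk appender entry does; in a real Mule application the equivalent appender would normally be declared in that application's log4j2.xml, and the Splunk HTTP Event Collector URL shown here is a placeholder (a real setup also needs the HEC authorization token).

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.core.config.builder.api.AppenderComponentBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;

public class SplunkLoggingSketch {
    public static void configure() {
        ConfigurationBuilder<BuiltConfiguration> builder =
                ConfigurationBuilderFactory.newConfigurationBuilder();

        // Log4j's built-in Http appender pointed at a placeholder Splunk HTTP Event Collector URL;
        // a real setup also needs the HEC authorization token configured on the appender.
        AppenderComponentBuilder splunk = builder.newAppender("Splunk", "Http")
                .addAttribute("url", "https://splunk.example.com:8088/services/collector/raw");
        splunk.add(builder.newLayout("PatternLayout")
                .addAttribute("pattern", "%d [%t] %-5p %c - %m%n"));
        builder.add(splunk);

        // Route all application logging through the Splunk appender only
        builder.add(builder.newRootLogger(Level.INFO)
                .add(builder.newAppenderRef("Splunk")));

        Configurator.initialize(builder.build());
    }
}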


Question 3


What aspect of logging is only possible for Mule applications deployed to customer-hosted Mule runtimes, but NOT for Mule applications deployed to CloudHub?

To send Mule application log entries to Splunk
To change log4j2 log levels in Anypoint Runtime Manager without having to restart the Mule application
To log certain messages to a custom log category
To directly reference one shared and customized log4j2.xml file from multiple Mule applications
Suggested answer: D

Explanation:

*The correct answer is 'To directly reference one shared and customized log4j2.xml file from multiple Mule applications'. The key word to note in the answer is 'directly'.

*By default, CloudHub replaces a Mule application's log4j2.xml file with a CloudHub log4j2.xml file. This specifies the CloudHub appender to write logs to the CloudHub logging service.

*You cannot modify the CloudHub log4j2.xml file to add any custom appender, but there is a process to achieve this: you need to raise a request on the support portal to disable the CloudHub-provided Mule application log4j2 file.


* Once this is done, the Mule application's own log4j2.xml file is used, and you can use it to send/export application logs to other log4j2 appenders, such as a custom logging system. MuleSoft does not take any responsibility for logging data lost due to misconfiguration of your own log4j appender, should that happen.


* One more difference between customer-hosted Mule runtimes and CloudHub-deployed Mule instances is that:

- CloudHub system log messages cannot be sent to an external log management system without installing a custom CloudHub logging configuration through support,

- whereas a customer-hosted runtime can send both system and application logs to an external log management system.


Reference:

https://docs.mulesoft.com/runtime-manager/viewing-log-data

https://docs.mulesoft.com/runtime-manager/custom-log-appender


Question 4


What is true about the network connections when a Mule application uses a JMS connector to interact with a JMS provider (message broker)?

To complete sending a JMS message, the JMS connector must establish a network connection with the JMS message recipient
To receive messages into the Mule application, the JMS provider initiates a network connection to the JMS connector and pushes messages along this connection
The JMS connector supports both sending and receiving of JMS messages over the protocol determined by the JMS provider
The AMQP protocol can be used by the JMS connector to portably establish connections to various types of JMS providers
Suggested answer: C

Explanation:

* A JMS client (such as the Mule JMS connector) establishes its network connection to the JMS broker; it never connects directly to the message recipient, and the JMS provider does not initiate connections back into the Mule application. JMS is also an API specification, not a wire protocol, so the JMS connector does not use AMQP to connect to JMS providers. This rules out options A, B, and D.

* The correct answer is: The JMS connector supports both sending and receiving of JMS messages over the protocol determined by the JMS provider.

* The JMS Connector enables sending and receiving messages to queues and topics for any message service that implements the JMS specification.

* JMS is a widely used API for message-oriented middleware.

* It enables the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.

Reference: https://docs.mulesoft.com/jms-connector/1.7/
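For readers less familiar with the JMS API itself, this minimal Java sketch (plain javax.jms, with an illustrative queue name; the ConnectionFactory would come from the provider) shows that one client-initiated connection to the broker is used both to send and to receive, and that the broker never connects back to the application:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class JmsRoundTrip {
    // connectionFactory would be obtained from the JMS provider (for example via JNDI); name is illustrative
    public static void sendAndReceive(ConnectionFactory connectionFactory) throws Exception {
        // One client-initiated network connection to the broker serves both directions
        try (Connection connection = connectionFactory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");

            // Sending: the message goes to the broker, never directly to the recipient
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("order payload"));

            // Receiving: the consumer pulls from the broker over the same client-initiated connection
            MessageConsumer consumer = session.createConsumer(queue);
            Message reply = consumer.receive(5000);
            if (reply instanceof TextMessage) {
                System.out.println(((TextMessage) reply).getText());
            }
        }
    }
}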



Question 5


Refer to the exhibit.

[Exhibit image]

A business process involves the receipt of a file from an external vendor over SFTP. The file needs to be parsed and its content processed, validated, and ultimately persisted to a database. The delivery mechanism is expected to change in the future as more vendors send similar files using other mechanisms such as file transfer or HTTP POST.

What is the most effective way to design for these requirements in order to minimize the impact of future change?

Use a MuleSoft Scatter-Gather and a MuleSoft Batch Job to handle the different files coming from different sources
Create a Process API to receive the file and process it using a MuleSoft Batch Job while delegating the data save process to a System API
Create an API that receives the file and invokes a Process API with the data contained in the file, then have the Process API process the data using a MuleSoft Batch Job and other System APIs as needed
Use a composite data source so files can be retrieved from various sources and delivered to a MuleSoft Batch Job for processing
Suggested answer: C

Explanation:

* Scatter-Gather is used for parallel processing to improve performance. In this scenario, input files arrive from different vendors, mostly at different times, and the goal is to minimize the impact of future change, so Scatter-Gather is not the correct choice.

* Options that have a single API both receive the files from every vendor and process them would require changing that API whenever a new vendor or delivery mechanism is added, so they are also ruled out.

* The correct answer is: Create an API that receives the file and invokes a Process API with the data contained in the file, then have the Process API process the data using a MuleSoft Batch Job and other System APIs as needed. The answer to this question lies in the API-led connectivity approach.

* API-led connectivity is a methodical way to connect data to applications through a series of reusable and purposeful modern APIs that each play a specific role: unlock data from systems, compose data into processes, or deliver an experience. System APIs provide consistent, managed, and secure access to backend systems. Process APIs take core assets and combine them with business logic to create a higher level of value. Experience APIs are designed specifically for consumption by a specific end-user app or device.

So for any future plans, the organization only needs to add an Experience API when a new vendor is added, reusing the already existing Process API. This keeps the impact minimal (see the Java sketch below).
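As a purely illustrative (non-Mule) Java sketch of that layering, the hypothetical code below shows two thin intake adapters, one for a file delivered over SFTP and one for an HTTP POST, both delegating to the same Process API endpoint, so adding a new delivery mechanism does not touch the processing layer. The class, method, and URL names are assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class VendorFileIntake {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    // Hypothetical Process API endpoint; in the scenario this would be the Mule Process API
    private static final URI PROCESS_API = URI.create("https://example.com/process/orders");

    // Intake adapter #1: file arriving over SFTP (already downloaded locally here for simplicity)
    static void onSftpFile(Path file) throws Exception {
        submitToProcessApi(Files.readString(file));
    }

    // Intake adapter #2: the same kind of payload arriving via HTTP POST from another vendor
    static void onHttpPost(String body) throws Exception {
        submitToProcessApi(body);
    }

    // The Process API (batch processing, validation, persistence via System APIs) is reused unchanged
    private static void submitToProcessApi(String payload) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(PROCESS_API)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Process API responded with status " + response.statusCode());
    }
}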



Question 6


Refer to the exhibit.

[Exhibit image]

A shopping cart checkout process consists of a web store backend sending a sequence of API invocations to an Experience API, which in turn invokes a Process API. All API invocations are over HTTPS POST. The Java web store backend executes in a Java EE application server, while all API implementations are Mule applications executing in a customer-hosted Mule runtime.

End-to-end correlation of all HTTP requests and responses belonging to each individual checkout instance is required. This is to be done through a common correlation ID, so that all log entries written by the web store backend, Experience API implementation, and Process API implementation include the same correlation ID for all requests and responses belonging to the same checkout instance.

What is the most efficient way (using the least amount of custom coding or configuration) for the web store backend and the implementations of the Experience API and Process API to participate in end-to-end correlation of the API invocations for each checkout instance?

A)

The web store backend, being a Java EE application, automatically makes use of the thread-local correlation ID generated by the Java EE application server and automatically transmits that to the Experience API using HTTP-standard headers

No special code or configuration is included in the web store backend, Experience API, and Process API implementations to generate and manage the correlation ID


B)

The web store backend generates a new correlation ID value at the start of checkout and sets it on the X-CORRELATION-ID HTTP request header in each API invocation belonging to that checkout

No special code or configuration is included in the Experience API and Process API implementations to generate and manage the correlation ID


C)

The Experience API implementation generates a correlation ID for each incoming HTTP request and passes it to the web store backend in the HTTP response, which includes it in all subsequent API invocations to the Experience API.

The Experience API implementation must be coded to also propagate the correlation ID to the Process API in a suitable HTTP request header


D)

The web store backend sends a correlation ID value in the HTTP request body in the way required by the Experience API

The Experience API and Process API implementations must be coded to receive the custom correlation ID in the HTTP requests and propagate it in suitable HTTP request headers


Option A
Option B
Option C
Option D
Suggested answer: B

Explanation:

The correct answer is: 'The web store backend generates a new correlation ID value at the start of checkout and sets it on the X-CORRELATION-ID HTTP request header in each API invocation belonging to that checkout. No special code or configuration is included in the Experience API and Process API implementations to generate and manage the correlation ID.'

By design, correlation IDs cannot be changed within a flow in Mule 4 applications and can be set only at the source. This ID is part of the Event Context and is generated as soon as the message is received by the application. When an HTTP request is received, the request is inspected for the 'X-Correlation-Id' header. If the 'X-Correlation-Id' header is present, the HTTP connector uses it as the correlation ID; if it is not present, a correlation ID is generated randomly.

For incoming HTTP requests: in order to set a custom correlation ID, the client invoking the HTTP request must set the 'X-Correlation-Id' header. This ensures that the Mule flow uses this correlation ID.

For outgoing HTTP requests: you can also propagate the existing correlation ID to downstream APIs. By default, all outgoing HTTP requests send the 'X-Correlation-Id' header. However, you can choose to set a different value for 'X-Correlation-Id' or set 'Send Correlation Id' to NEVER.


Reference: https://help.mulesoft.com/s/article/How-to-Set-Custom-Correlation-Id-for-Flows-with-HTTP-Endpoint-in-Mule-4
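A minimal, hypothetical Java sketch of the web store backend's side of this design is shown below: it generates one correlation ID per checkout instance and sends it on the X-Correlation-ID header of every API invocation (the endpoint URLs and payloads are placeholders), while the Mule APIs need no special code because the HTTP listener adopts the incoming header automatically.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class CheckoutClient {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void checkout(String cartJson, String paymentJson) throws Exception {
        // Generated once at the start of the checkout instance
        String correlationId = UUID.randomUUID().toString();

        // Every API invocation for this checkout carries the same X-Correlation-ID header;
        // the Mule HTTP listener picks it up and the HTTP requester propagates it downstream by default
        post("https://example.com/experience-api/cart", cartJson, correlationId);
        post("https://example.com/experience-api/payment", paymentJson, correlationId);
    }

    private static void post(String url, String body, String correlationId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("X-Correlation-ID", correlationId)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(correlationId + " -> " + response.statusCode());
    }
}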



Question 7


What operation can be performed through a JMX agent enabled in a Mule application?

View object store entries
Replay an unsuccessful message
Set a particular log4j2 log level to TRACE
Deploy a Mule application
Suggested answer: A

Explanation:

JMX Management: Java Management Extensions (JMX) is a simple and standard way to manage applications, devices, services, and other resources. JMX is dynamic, so you can use it to monitor and manage resources as they are created, installed, and implemented. You can also use JMX to monitor and manage the Java Virtual Machine (JVM).

Each resource is instrumented by one or more Managed Beans, or MBeans. All MBeans are registered in an MBean Server. The JMX server agent consists of an MBean Server and a set of services for handling MBeans. Several agents are provided with Mule for JMX support; the easiest way to configure JMX is to use the default JMX support agent.

Log4J Agent: the Log4J agent exposes the configuration of the Log4J instance used by Mule for JMX management. You enable the Log4J agent using the <jmx-log4j> element. It does not take any additional properties.

Reference: https://docs.mulesoft.com/mule-runtime/3.9/jmx-management
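As an illustration of how a JMX client interacts with such an agent, here is a small Java sketch that connects to an MBean server and lists the registered MBeans; the JMX service URL is an assumption and would depend on how the Mule runtime's JMX agent is exposed.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MuleJmxBrowser {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint exposed by the Mule runtime's JMX agent
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();

            // List every MBean the Mule agents (including the Log4J agent) have registered
            Set<ObjectName> names = server.queryNames(null, null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        }
    }
}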


Question 8


Refer to the exhibit.

[Exhibit image]

A Mule application is deployed to a multi-node Mule runtime cluster. The Mule application uses the competing consumer pattern among its cluster replicas to receive JMS messages from a JMS queue. To process each received JMS message, the following steps are performed in a flow:

Step 1: The JMS Correlation ID header is read from the received JMS message.

Step 2: The Mule application invokes an idempotent SOAP webservice over HTTPS, passing the JMS Correlation ID as one parameter in the SOAP request.

Step 3: The response from the SOAP webservice also returns the same JMS Correlation ID.

Step 4: The JMS Correlation ID received from the SOAP webservice is validated to be identical to the JMS Correlation ID received in Step 1.

Step 5: The Mule application creates a response JMS message, setting the JMS Correlation ID message header to the validated JMS Correlation ID and publishes that message to a response JMS queue.

Where should the Mule application store the JMS Correlation ID values received in Step 1 and Step 3 so that the validation in Step 4 can be performed, while also making the overall Mule application highly available, fault-tolerant, performant, and maintainable?

Both Correlation ID values should be stored in a persistent object store
Both Correlation ID values should be stored in a non-persistent object store
The Correlation ID value in Step 1 should be stored in a persistent object store; the Correlation ID value in Step 3 should be stored as a Mule event variable/attribute
Both Correlation ID values should be stored as Mule event variables/attributes
Suggested answer: C

Explanation:

*If we store the Correlation ID value from Step 1 as a Mule event variable/attribute, the value will be cleared after a server restart, and we want the system to be fault-tolerant.

*The Correlation ID value in Step 1 should therefore be stored in a persistent object store.

*We don't need to store the Correlation ID value from Step 3 in a persistent object store. We could, but since we also need to keep the application performant, we can avoid that extra access to the persistent object store.

*Accessing persistent object stores slows down performance, because persistent object stores are by default stored on shared file systems.

* Because the SOAP service is idempotent, in case of any failure we can use the Correlation ID saved in the first step to call the SOAP service again and validate the Correlation ID (see the sketch below).
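The following Java sketch is illustrative only: a ConcurrentHashMap stands in for Mule's persistent object store (which in a real cluster would be shared across nodes and survive restarts), and a local variable plays the role of the Mule event variable holding the value echoed back in Step 3.

import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class CorrelationIdValidation {
    // Stand-in for Mule's persistent object store; in a real cluster this would survive
    // restarts and be shared across nodes (e.g. Object Store v2), not an in-memory map.
    private final Map<String, String> persistentObjectStore = new ConcurrentHashMap<>();

    public void process(String jmsCorrelationId) {
        // Step 1: persist the Correlation ID received on the JMS message
        persistentObjectStore.put(jmsCorrelationId, jmsCorrelationId);

        // Steps 2-3: call the idempotent SOAP service; its response echoes the Correlation ID.
        // The echoed value only needs to live for the rest of this flow execution,
        // so a Mule event variable (here, a local variable) is sufficient.
        String echoedCorrelationId = callSoapService(jmsCorrelationId);

        // Step 4: validate the echoed value against the persisted value
        String stored = persistentObjectStore.get(jmsCorrelationId);
        if (!Objects.equals(stored, echoedCorrelationId)) {
            throw new IllegalStateException("Correlation ID mismatch");
        }

        // Step 5: publish the response message with the validated Correlation ID (omitted here)
    }

    private String callSoapService(String correlationId) {
        return correlationId; // placeholder for the idempotent SOAP invocation
    }
}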


Additional Information:

*Competing Consumers are multiple consumers that are all created to receive messages from a single Point-to-Point Channel. When the channel delivers a message, any of the consumers could potentially receive it. The messaging system's implementation determines which consumer actually receives the message, but in effect the consumers compete with each other to be the receiver. Once a consumer receives a message, it can delegate to the rest of its application to help process the message.


* In case you are unfamiliar with the term idempotent: idempotent operations always produce the same result, no matter how many times they are invoked.




Question 9


An integration Mule application is being designed to process orders by submitting them to a backend system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be submitted to a backend system. Orders that cannot be successfully submitted due to rejections from the backend system will need to be processed manually (outside the backend system).

The Mule application will be deployed to a customer-hosted runtime and is able to use an existing ActiveMQ broker if needed.

The backend system has a track record of unreliability both due to minor network connectivity issues and longer outages.

What idiomatic (used for their intended purposes) combination of Mule application components and ActiveMQ queues are required to ensure automatic submission of orders to the backend system, while minimizing manual order processing?

An On Error scope; non-persistent VM; ActiveMQ Dead Letter Queue for manual processing
An On Error scope; MuleSoft Object Store; ActiveMQ Dead Letter Queue for manual processing
Until Successful component; MuleSoft Object Store; ActiveMQ is NOT needed or used
Until Successful component; ActiveMQ long retry Queue; ActiveMQ Dead Letter Queue for manual processing
Suggested answer: D

Explanation:

The correct answer uses the following combination: Until Successful component, ActiveMQ long retry queue, and ActiveMQ Dead Letter Queue for manual processing. Before seeing why this is correct, let's review a few concepts.

Until Successful scope: the Until Successful scope processes messages through its processors until the entire operation succeeds. It repeatedly retries to process a message that is attempting to complete an activity such as:

- Dispatching to outbound endpoints, for example, when calling a remote web service that may have availability issues.
- Executing a component method, for example, when executing on a Spring bean that may depend on unreliable resources.
- A sub-flow execution, to keep re-executing several actions until they all succeed.
- Any other message processor execution, to allow more complex scenarios.

How this helps the requirement: using the Until Successful scope, we can retry sending the order to the backend system in case of error and avoid manual processing later. Retry values can be configured in the Until Successful scope.

Apache ActiveMQ: an open-source message broker written in Java together with a full Java Message Service client. ActiveMQ can deliver messages with delays thanks to its scheduler. This functionality is the basis for the broker redelivery plug-in. The redelivery plug-in can intercept dead letter processing and reschedule the failing messages for redelivery. Rather than being delivered to a DLQ, a failing message is scheduled to go to the tail of the original queue and be redelivered to a message consumer.

How this helps the requirement: if the backend application is down for a longer duration than the Until Successful scope can cover, we can make use of an ActiveMQ long retry queue; the redelivery plug-in intercepts dead letter processing and reschedules the failing messages for redelivery.

Reference: https://docs.mulesoft.com/mule-runtime/4.3/migration-core-until-successful
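To illustrate the ActiveMQ side of this design, here is a hedged Java sketch using the ActiveMQ client's RedeliveryPolicy: failed deliveries are retried with a delay and back-off (the "long retry" behaviour), and only after the retries are exhausted does the broker move the message to its dead letter queue for manual processing. The broker URL and retry values are assumptions.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class OrderQueueConfig {
    public static ActiveMQConnectionFactory retryingConnectionFactory() {
        // Broker URL is illustrative
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Retry failed deliveries with exponential back-off before giving up;
        // this plays the role of the "long retry" behaviour described above.
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(10);
        policy.setInitialRedeliveryDelay(5_000);   // 5 seconds
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);
        factory.setRedeliveryPolicy(policy);

        // Once redeliveries are exhausted, ActiveMQ moves the message to the dead letter
        // queue (ActiveMQ.DLQ by default), which is then handled manually.
        return factory;
    }
}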


Question 10


What comparison is true about a CloudHub Dedicated Load Balancer (DLB) vs. the CloudHub Shared Load Balancer (SLB)?

Only a DLB allows the configuration of a custom TLS server certificate
Only the SLB can forward HTTP traffic to the VPC-internal ports of the CloudHub workers
Both a DLB and the SLB allow the configuration of access control via IP whitelists
Both a DLB and the SLB implement load balancing by sending HTTP requests to workers with the lowest workloads
Suggested answer: A

Explanation:

* Shared load balancers don't allow you to configure custom SSL certificates or proxy rules

* Dedicated Load Balancers are optional; you need to purchase them additionally if needed.

* TLS is a cryptographic protocol that provides communications security for your Mule app. TLS offers many different ways of exchanging keys for authentication, encrypting data, and guaranteeing message integrity.

* The CloudHub Shared Load Balancer terminates TLS connections and uses its own server-side certificate.

* Only a DLB allows the configuration of a custom TLS server certificate

* DLB enables you to define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.

* To use a DLB in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.


Reference: https://docs.mulesoft.com/runtime-manager/dedicated-load-balancer-tutorial
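As an illustration of what a custom TLS setup on a DLB enables, the sketch below shows a plain Java client configured for two-way TLS: it trusts the custom server certificate and presents its own client certificate, which a DLB can be configured to require. The keystore file names, passwords, and URL are placeholders.

import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TwoWayTlsClient {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();   // illustrative only

        // Client certificate presented to the DLB when two-way SSL is enforced
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client-keystore.p12")) {
            keyStore.load(in, password);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Truststore containing the custom server certificate configured on the DLB
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("truststore.p12")) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpClient client = HttpClient.newBuilder().sslContext(sslContext).build();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create("https://api.example.com/status")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}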

Additional Info on SLB Vs DLB:

[Comparison image: Shared Load Balancer vs. Dedicated Load Balancer]
