ExamGecko

Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers, Page 13

List of questions

Question 121


An organization is designing a Mule application that connects to a legacy backend. It has been reported that the backend services are not highly available and often experience downtime. As an integration architect, which of the approaches below would you propose to achieve the high-reliability goals?

A. Alerts can be configured in the Mule runtime so that the backend team is notified when the services are down
B. An Until Successful scope can be implemented while calling the backend APIs
C. An On Error Continue scope can be used to call the backend again in case of an error
D. Create a batch job that sends all requests to the backend according to the availability of the backend APIs
Suggested answer: B

Explanation:

The correct answer is that an Until Successful scope can be implemented while calling the backend APIs. The Until Successful scope repeatedly triggers the scope's components (including flow references) until they all succeed or until a maximum number of retries is exceeded. The scope provides options to control the maximum number of retries and the interval between retries, and it can execute any sequence of processors that may fail for whatever reason and may succeed upon retry.
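As a rough sketch, an Until Successful scope wrapping a backend call might look like the following in Mule 4 XML (the flow name, HTTP request configuration, path, and retry values are hypothetical):

```xml
<flow name="call-legacy-backend">
  <!-- Retry the backend call up to 5 times, waiting 10 seconds between attempts -->
  <until-successful maxRetries="5" millisBetweenRetries="10000">
    <http:request method="GET" config-ref="Legacy_Backend_Config" path="/orders"/>
  </until-successful>
</flow>
```

If all retries are exhausted, the scope raises a MULE:RETRY_EXHAUSTED error that can then be handled in the flow's error handler.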

asked 23/09/2024
GLAUCIA C N SILVA

Question 122


A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is configured with a batch block size of 25.

A payload with 4,000 records is received by the Batch Job scope.

When there are no errors, how does the Batch Job scope process records within and between the Batch Step scopes?

A. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed in parallel. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
B. The Batch Job scope processes each record block sequentially, one at a time. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time. All 4,000 records must be completed before the blocks of records are available to the next Batch Step scope.
C. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
D. The Batch Job scope processes multiple record blocks in parallel. Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event. For each Batch Step scope, all 4,000 records are processed in parallel. Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope.
Suggested answer: C
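For reference, a Batch Job scope with the block size from the question might be configured like this hedged Mule 4 sketch (the job and step names are hypothetical):

```xml
<batch:job jobName="processRecordsJob" blockSize="25">
  <batch:process-records>
    <!-- Each block of 25 records moves through the steps as a unit,
         and blocks are processed in parallel across steps -->
    <batch:step name="validateStep">
      <!-- validation processors -->
    </batch:step>
    <batch:step name="transformStep">
      <!-- transformation processors -->
    </batch:step>
  </batch:process-records>
</batch:job>
```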
asked 23/09/2024
Brad Jarrett

Question 123


To implement predictive maintenance on its machinery equipment, ACME Tractors has installed thousands of IoT sensors that will send data for each machinery asset as sequences of JMS messages, in near real-time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application contains a JMS Listener operation configured to receive incoming messages from the JMS server's SENSOR_DATA queue. The Mule application persists each received JMS message, then sends a transformed version of the corresponding Mule event to the machinery equipment back-end systems.

The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster. Under normal conditions, each JMS message should be processed exactly once.

How should the JMS Listener be configured to maximize performance and concurrent message processing of the JMS queue?

A. Set numberOfConsumers = 1 and primaryNodeOnly = false
B. Set numberOfConsumers = 1 and primaryNodeOnly = true
C. Set numberOfConsumers to a value greater than one and primaryNodeOnly = true
D. Set numberOfConsumers to a value greater than one and primaryNodeOnly = false
Suggested answer: D
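A hedged sketch of the JMS Listener configuration this implies (the flow name, configuration reference, and consumer count are hypothetical):

```xml
<flow name="sensor-data-flow">
  <!-- Multiple consumers per node, active on every node of the cluster -->
  <jms:listener config-ref="JMS_Config" destination="SENSOR_DATA"
                numberOfConsumers="4" primaryNodeOnly="false"/>
  <!-- persist the message, then transform and forward it -->
</flow>
```

Because a JMS queue delivers each message to exactly one consumer, running consumers on every cluster node increases throughput without causing duplicate processing under normal conditions.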
asked 23/09/2024
Danilo Paolucci

Question 124


A Mule application is synchronizing customer data between two different database systems.

What is the main benefit of using eXtended Architecture (XA) transactions over local transactions to synchronize these two different database systems?

A. An XA transaction synchronizes the database systems with the least amount of Mule configuration or coding
B. An XA transaction handles the largest number of requests in the shortest time
C. An XA transaction automatically rolls back operations against both database systems if any operation fails
D. An XA transaction writes to both database systems as fast as possible
Suggested answer: C
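In Mule 4, an XA transaction spanning two Database connectors can be sketched with a Try scope, assuming both database connections are XA-capable and a transaction manager (for example, Bitronix) is configured; the configuration names and SQL are hypothetical:

```xml
<try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
  <db:insert config-ref="Customers_DB_A">
    <db:sql>INSERT INTO customers (id, name) VALUES (:id, :name)</db:sql>
  </db:insert>
  <db:insert config-ref="Customers_DB_B">
    <db:sql>INSERT INTO customers (id, name) VALUES (:id, :name)</db:sql>
  </db:insert>
  <!-- If either insert fails, both operations are rolled back together -->
</try>
```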
asked 23/09/2024
Kyle Roarick

Question 125


An organization has implemented a continuous integration (CI) lifecycle that promotes Mule applications through code, build, and test stages. To standardize the organization's CI journey, a new dependency control approach is being designed to store artifacts that include information such as dependencies, versioning, and build promotions.

To implement these process improvements, the organization will now require developers to maintain all dependencies related to Mule application code in a shared location.

What is the most idiomatic (used for its intended purpose) type of system the organization should use in a shared location to standardize all dependencies related to Mule application code?

A. A MuleSoft-managed repository at repository.mulesoft.org
B. A binary artifact repository
C. API Community Manager
D. The Anypoint Object Store service at cloudhub.io
Suggested answer: B
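As an illustration, a Mule project's pom.xml can publish build artifacts to a shared binary artifact repository such as Nexus or Artifactory (the repository id and URL below are hypothetical):

```xml
<distributionManagement>
  <repository>
    <id>corp-releases</id>
    <url>https://nexus.example.com/repository/releases/</url>
  </repository>
</distributionManagement>
```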
asked 23/09/2024
Md. Soyaeb Hossain

Question 126


An organization has deployed both Mule and non-Mule API implementations to integrate its customer and order management systems. All the APIs are available to REST clients on the public internet.

The organization wants to monitor these APIs by running health checks: for example, to determine if an API can properly accept and process requests. The organization does not have subscriptions to any external monitoring tools and also does not want to extend its IT footprint.

What Anypoint Platform feature provides the most idiomatic (used for its intended purpose) way to monitor the availability of both the Mule and the non-Mule API implementations?

A. API Functional Monitoring
B. Runtime Manager
C. API Manager
D. Anypoint Visualizer
Suggested answer: A
asked 23/09/2024
Batista Moreira

Question 127


The ABC company has an Anypoint Runtime Fabric on VMs/Bare Metal (RTF-VM) appliance installed on its own customer-hosted AWS infrastructure.

Mule applications are deployed to this RTF-VM appliance. As part of the company standards, the Mule application logs must be forwarded to an external log management tool (LMT).

Given the company's current setup and requirements, what is the most idiomatic (used for its intended purpose) way to send Mule application logs to the external LMT?

A. In RTF-VM, install and configure the external LMT's log-forwarding agent
B. In RTF-VM, edit the pod configuration to automatically install and configure an Anypoint Monitoring agent
C. In each Mule application, configure custom Log4j settings
D. In RTF-VM, configure the out-of-the-box external log forwarder
Suggested answer: D
asked 23/09/2024
Innos Phoku

Question 128


An organization is designing an integration Mule application to process orders by submitting them to a back-end system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be submitted to a back-end system. Orders that cannot be successfully submitted due to rejections from the back-end system will need to be processed manually (outside the back-end system).

The Mule application will be deployed to a customer-hosted runtime and is able to use an existing ActiveMQ broker if needed. The ActiveMQ broker is located inside the organization's firewall. The back-end system has a track record of unreliability due to both minor network connectivity issues and longer outages.

What idiomatic (used for their intended purposes) combination of Mule application components and ActiveMQ queues are required to ensure automatic submission of orders to the back-end system while supporting but minimizing manual order processing?

A. An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing
B. One or more On Error scopes to assist calling the back-end system; an Until Successful scope containing VM components for long retries; a persistent dead-letter VM queue configured in CloudHub
C. One or more On Error scopes to assist calling the back-end system; one or more ActiveMQ long-retry queues; a persistent dead-letter object store configured in the CloudHub Object Store service
D. A Batch Job scope to call the back-end system; an Until Successful scope containing Object Store components for long retries; a dead-letter object store configured in the Mule application
Suggested answer: A
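One hedged way to wire these pieces together in Mule 4 (the queue names, retry counts, and configuration references are hypothetical):

```xml
<flow name="submit-order-to-backend">
  <!-- Acknowledged orders are read from a durable ActiveMQ retry queue -->
  <jms:listener config-ref="ActiveMQ_Config" destination="orders.retry"/>
  <until-successful maxRetries="10" millisBetweenRetries="30000">
    <http:request method="POST" config-ref="Backend_Config" path="/orders"/>
  </until-successful>
  <error-handler>
    <on-error-continue type="MULE:RETRY_EXHAUSTED">
      <!-- Route exhausted orders to a dead-letter queue for manual processing -->
      <jms:publish config-ref="ActiveMQ_Config" destination="orders.dlq"/>
    </on-error-continue>
  </error-handler>
</flow>
```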
asked 23/09/2024
July Truong

Question 129


A Mule application is running on a customer-hosted Mule runtime in an organization's network. The Mule application acts as a producer of asynchronous Mule events. Each Mule event must be broadcast to all interested external consumers outside the Mule application. The Mule events should be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery in less frequent failure scenarios.

The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some external event consumers are within the organizational network, while others are located outside the firewall.

What Anypoint Platform service is most idiomatic (used for its intended purpose) for publishing these Mule events to all external consumers while addressing the desired reliability goals?

A. CloudHub VM queues
B. Anypoint MQ
C. Anypoint Exchange
D. CloudHub Shared Load Balancer
Suggested answer: B

Explanation:

Set the Anypoint MQ connector operation to publish or consume messages, or to accept (ACK) or not accept (NACK) a message.
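A minimal publish sketch with the Anypoint MQ connector (the destination and configuration names are hypothetical); publishing to an Anypoint MQ message exchange lets the service fan each event out to every queue bound to it, one per consumer:

```xml
<anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="mule-events-exchange"/>
```

Anypoint MQ is reached over HTTPS (port 443), which also satisfies the firewall constraint for consumers inside and outside the network.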

asked 23/09/2024
stefano nicoletti

Question 130


A Mule application uses APIkit for SOAP to implement a SOAP web service. The Mule application has been deployed to a CloudHub worker in a testing environment.

The integration testing team wants to use a SOAP client to perform integration testing. To carry out the integration tests, the team must obtain the interface definition for the SOAP web service.

What is the most idiomatic (used for its intended purpose) way for the integration testing team to obtain the interface definition for the deployed SOAP web service in order to perform integration testing with the SOAP client?

A. Retrieve the OpenAPI Specification file(s) from API Manager
B. Retrieve the WSDL file(s) from the deployed Mule application
C. Retrieve the RAML file(s) from the deployed Mule application
D. Retrieve the XML file(s) from Runtime Manager
Suggested answer: B
asked 23/09/2024
Tyler Andringa
Total 273 questions
