
MuleSoft MCIA Level 1 Maintenance Practice Test - Questions Answers, Page 4


A company is building an application network and has deployed four Mule APIs: one experience API, one process API, and two system APIs. The logs from all the APIs are aggregated in an external log aggregation tool. The company wants to trace messages that are exchanged between multiple API implementations. What is the most idiomatic (based on its intended use) identifier that should be used to implement Mule event tracing across the multiple API implementations?

A. Mule event ID
B. Mule correlation ID
C. Client's IP address
D. DataWeave UUID
Suggested answer: B

Explanation:

The correct answer is Mule correlation ID. By design, the correlation ID cannot be changed within a flow in a Mule 4 application; it can be set only at the message source. The ID is part of the event context and is generated as soon as the message is received by the application.

When an HTTP request is received, it is inspected for an "X-Correlation-Id" header. If the header is present, the HTTP connector uses its value as the correlation ID; if it is absent, a correlation ID is generated randomly.

For incoming HTTP requests: to set a custom correlation ID, the client invoking the request must send an "X-Correlation-Id" header. This ensures that the Mule flow uses that correlation ID.

For outgoing HTTP requests: the existing correlation ID can be propagated to downstream APIs. By default, all outgoing HTTP requests send the "X-Correlation-Id" header; however, you can choose to set a different value for the header, or set "Send Correlation Id" to NEVER.
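As an illustrative sketch of this propagation behavior (the config name `System_API_Config`, the path, and the variable are hypothetical; the attributes follow the Mule 4 HTTP connector):

```xml
<!-- Default: the outgoing request carries an X-Correlation-Id header
     populated with the current event's correlation ID -->
<http:request method="GET" config-ref="System_API_Config" path="/customers"/>

<!-- Suppress propagation for a specific call -->
<http:request method="GET" config-ref="System_API_Config" path="/customers"
              sendCorrelationId="NEVER"/>

<!-- Override the header value explicitly for a downstream system
     (vars.customCorrelationId is a hypothetical flow variable) -->
<http:request method="GET" config-ref="System_API_Config" path="/customers">
    <http:headers><![CDATA[#[{"X-Correlation-Id": vars.customCorrelationId}]]]></http:headers>
</http:request>
```

Because every API in the chain logs the same correlation ID, the external log aggregation tool can stitch together one end-to-end trace across the experience, process, and system APIs.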

A Mule application is deployed to a customer-hosted runtime. Asynchronous logging was implemented to improve the throughput of the system. However, it was observed over time that some important exception log messages, which were used to roll back transactions, were not being written as expected, causing significant losses to the organization. The organization wants to avoid these losses, but the application also has constraints that prevent it from compromising much on throughput. What is the possible option in this case?

A. Logging needs to be changed from asynchronous to synchronous
B. An external log appender needs to be used in this case
C. Persistent memory storage should be used in such scenarios
D. A mixed configuration of asynchronous and synchronous loggers should be used, logging exceptions synchronously
Suggested answer: D

Explanation:

The correct approach is to use a mixed configuration of asynchronous and synchronous loggers, logging exceptions synchronously. Asynchronous logging poses a performance-reliability trade-off: you may lose some messages if Mule crashes before the logging buffers flush to disk. In this case, consider a mixed configuration of asynchronous and synchronous loggers in your app. The best practice for a production application is to prefer asynchronous logging with a minimum logging level of WARN. In some cases, enable the INFO logging level when you need to confirm events such as a successful policy installation, or to perform troubleshooting. Configure your logging strategy by editing your application's src/main/resources/log4j2.xml file.
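A minimal log4j2.xml sketch of such a mixed configuration (the `com.acme` logger names are hypothetical and the appender settings illustrative; mixing `AsyncLogger`/`AsyncRoot` with a plain `Logger` is standard Log4j 2):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <RollingFile name="file"
                     fileName="${sys:mule.home}/logs/app.log"
                     filePattern="${sys:mule.home}/logs/app-%i.log">
            <PatternLayout pattern="%-5p %d [%t] [event: %X{correlationId}] %c: %m%n"/>
            <SizeBasedTriggeringPolicy size="10 MB"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- High-volume application logging stays asynchronous for throughput -->
        <AsyncLogger name="com.acme.app" level="INFO"/>
        <!-- Critical exception logging is synchronous, so no message is lost
             if the runtime crashes before the async buffers flush -->
        <Logger name="com.acme.app.transactions" level="ERROR">
            <AppenderRef ref="file"/>
        </Logger>
        <AsyncRoot level="WARN">
            <AppenderRef ref="file"/>
        </AsyncRoot>
    </Loggers>
</Configuration>
```

The synchronous logger is scoped to the small category that guards transaction rollbacks, so the throughput cost is confined to those few messages.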

As part of a business requirement, an old CRM system needs to be integrated using a Mule application.

The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration architect who follows the API-led approach, which of the below steps will you perform so that you can share a document with the CRM team?

A. Create RAML specification using Design Center
B. Create SOAP API specification using Design Center
C. Create WSDL specification using text editor
D. Create WSDL specification using Design Center
Suggested answer: C

Explanation:

The correct answer is Create WSDL specification using text editor. SOAP services are specified using WSDL. A client program connecting to a web service can read the WSDL to determine which operations are available on the server. A WSDL specification cannot be created in Design Center, so an external text editor must be used to create the WSDL.
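For illustration, a minimal WSDL 1.1 document of the kind that might be hand-authored and shared with the CRM team (all names, namespaces, and the endpoint URL are hypothetical):

```xml
<?xml version="1.0"?>
<definitions name="CustomerLookup"
             targetNamespace="http://example.com/crm"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:tns="http://example.com/crm">
  <message name="GetCustomerRequest">
    <part name="customerId" type="xsd:string"/>
  </message>
  <message name="GetCustomerResponse">
    <part name="customerName" type="xsd:string"/>
  </message>
  <portType name="CustomerPortType">
    <operation name="getCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <binding name="CustomerBinding" type="tns:CustomerPortType">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="getCustomer">
      <soap:operation soapAction="getCustomer"/>
      <input><soap:body use="literal" namespace="http://example.com/crm"/></input>
      <output><soap:body use="literal" namespace="http://example.com/crm"/></output>
    </operation>
  </binding>
  <service name="CustomerService">
    <port name="CustomerPort" binding="tns:CustomerBinding">
      <soap:address location="http://crm.example.com/soap"/>
    </port>
  </service>
</definitions>
```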

An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime plane. As part of the requirements, the application should be scalable and highly available. There is also a regulatory requirement that demands logs be retained for at least 2 years. As an integration architect, which step will you recommend in order to achieve this?

A. It is not possible to store logs for 2 years in CloudHub deployment. External log management system is required.
B. When deploying an application to CloudHub, logs retention period should be selected as 2 years
C. When deploying an application to CloudHub, worker size should be sufficient to store 2 years data
D. Logging strategy should be configured accordingly in log4j file deployed with the application.
Suggested answer: A

Explanation:

The correct answer is: It is not possible to store logs for 2 years in a CloudHub deployment; an external log management system is required. CloudHub has a specific log retention policy, as described in the documentation: the platform stores up to 100 MB of logs per app and per worker, or up to 30 days of logs, whichever limit is hit first. Once this limit has been reached, the oldest log information is deleted in chunks and is irretrievably lost. The recommended approach is to persist your logs to an external logging system of your choice (such as Splunk, for instance) using a log appender. Note that with this solution the logs are no longer stored on the platform, so any support cases you open will require you to provide the appropriate logs for review and case resolution.
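One hedged sketch of such a log appender, using Log4j 2's generic Http appender to post log events to a Splunk HTTP Event Collector endpoint (the URL and token property are placeholders; dedicated Splunk appenders are another option):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Appenders>
        <!-- Generic Log4j 2 Http appender; the endpoint and token below are
             placeholders for your own Splunk HEC (or similar) setup -->
        <Http name="splunk" url="https://splunk.example.com:8088/services/collector/raw">
            <Property name="Authorization" value="Splunk ${sys:splunk.hec.token}"/>
            <PatternLayout pattern="%-5p %d [%t] %c: %m%n"/>
        </Http>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="splunk"/>
        </Root>
    </Loggers>
</Configuration>
```

The external system then owns retention, so the 2-year regulatory requirement can be met there regardless of CloudHub's 100 MB / 30-day limit.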

An organization is designing a Mule application that connects to a legacy backend. It has been reported that the backend services are not highly available and experience downtime quite often. As an integration architect, which of the below approaches would you propose to achieve high-reliability goals?

A. Alerts can be configured in Mule runtime so that the backend team can be notified when services are down
B. Until Successful scope can be implemented while calling backend APIs
C. On Error Continue scope to be used to call again in case of error
D. Create a batch job that sends all requests to the backend as per the availability of the backend APIs
Suggested answer: B

Explanation:

The correct answer is: the Until Successful scope can be implemented while calling the backend APIs. The Until Successful scope repeatedly triggers the scope's components (including flow references) until they all succeed or until a maximum number of retries is exceeded. The scope provides options to control the maximum number of retries and the interval between retries, and it can execute any sequence of processors that may fail for whatever reason and may succeed upon retry.
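A minimal sketch of the scope (the flow name, HTTP config name, path, and retry values are illustrative):

```xml
<flow name="callBackendFlow">
    <!-- Retry the backend call up to 5 times, waiting 3 seconds between
         attempts; maxRetries and millisBetweenRetries control the behavior -->
    <until-successful maxRetries="5" millisBetweenRetries="3000">
        <http:request method="GET" config-ref="Backend_HTTP_Config" path="/orders"/>
    </until-successful>
</flow>
```

If all retries are exhausted, the scope raises an error that can then be handled by the flow's error handler.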

A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is configured with a batch block size of 25.

A payload with 4,000 records is received by the Batch Job scope.

When there are no errors, how does the Batch Job scope process records within and between the Batch Step scopes?

A. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead of an earlier block to the next Batch Step scope. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed in parallel. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
B. The Batch Job scope processes each record block sequentially, one at a time. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time. All 4,000 records must be completed before the blocks of records are available to the next Batch Step scope.
C. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead of an earlier block to the next Batch Step scope. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
D. The Batch Job scope processes multiple record blocks in parallel. Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event. For each Batch Step scope, all 4,000 records are processed in parallel. Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope.
Suggested answer: C

Explanation:

Reference: https://docs.mulesoft.com/mule-runtime/4.4/batch-processing-concept
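The configuration in the question can be pictured with a sketch of the Batch Job scope (the job and step names are hypothetical; `blockSize="25"` matches the question):

```xml
<batch:job jobName="sensorDataJob" blockSize="25">
    <batch:process-records>
        <!-- Each step is invoked with one record in the payload; a worker
             thread walks its block of 25 records through the step -->
        <batch:step name="transformStep">
            <logger level="INFO" message="#[payload]"/>
        </batch:step>
        <batch:step name="loadStep">
            <logger level="INFO" message="#[payload]"/>
        </batch:step>
    </batch:process-records>
</batch:job>
```

With 4,000 records and a block size of 25, the runtime dispatches 160 blocks across its batch thread pool, which is why blocks can overtake one another between steps.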

To implement predictive maintenance on its machinery equipment, ACME Tractors has installed thousands of IoT sensors that will send data for each machinery asset as sequences of JMS messages, in near real-time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application contains a JMS Listener operation configured to receive incoming messages from the JMS server's SENSOR_DATA queue. The Mule application persists each received JMS message, then sends a transformed version of the corresponding Mule event to the machinery equipment back-end systems.

The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster.

Under normal conditions, each JMS message should be processed exactly once.

How should the JMS Listener be configured to maximize performance and concurrent message processing of the JMS queue?

A. Set numberOfConsumers = 1. Set primaryNodeOnly = false
B. Set numberOfConsumers = 1. Set primaryNodeOnly = true
C. Set numberOfConsumers to a value greater than one. Set primaryNodeOnly = true
D. Set numberOfConsumers to a value greater than one. Set primaryNodeOnly = false
Suggested answer: D

Explanation:

Reference: https://docs.mulesoft.com/jms-connector/1.8/jms-performance
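A hedged sketch of the listener configuration (the config name and the consumer count of 4 are illustrative; the queue name comes from the question):

```xml
<!-- Multiple consumers on every cluster node (not just the primary)
     maximize concurrent processing of the SENSOR_DATA queue; because it
     is a queue, the broker still delivers each message to only one
     consumer, preserving exactly-once processing under normal conditions -->
<jms:listener config-ref="JMS_Config"
              destination="SENSOR_DATA"
              numberOfConsumers="4"
              primaryNodeOnly="false"/>
```

primaryNodeOnly = true is needed for topics (to avoid duplicate delivery across nodes), but for queues it would waste the capacity of the other cluster nodes.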

A Mule application is synchronizing customer data between two different database systems.

What is the main benefit of using eXtended Architecture (XA) transactions over local transactions to synchronize these two different database systems?

A. An XA transaction synchronizes the database systems with the least amount of Mule configuration or coding
B. An XA transaction handles the largest number of requests in the shortest time
C. An XA transaction automatically rolls back operations against both database systems if any operation fails
D. An XA transaction writes to both database systems as fast as possible
Suggested answer: C

Explanation:

Reference: https://docs.oracle.com/middleware/1213/wls/PERFM/llrtune.htm#PERFM997
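As an illustrative sketch, two database operations can be enlisted in a single XA transaction inside a Mule 4 Try scope (the config names and SQL are hypothetical; XA also requires XA-capable data sources and a transaction manager available to the runtime):

```xml
<!-- Both updates join one XA (two-phase commit) transaction; if either
     operation fails, the transaction manager rolls back both databases -->
<try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
    <db:update config-ref="Customers_DB_A">
        <db:sql>UPDATE customer SET status = 'SYNCED' WHERE id = :id</db:sql>
        <db:input-parameters>#[{id: payload.id}]</db:input-parameters>
    </db:update>
    <db:update config-ref="Customers_DB_B">
        <db:sql>UPDATE customer SET status = 'SYNCED' WHERE id = :id</db:sql>
        <db:input-parameters>#[{id: payload.id}]</db:input-parameters>
    </db:update>
</try>
```

A local transaction can only span a single resource, which is why XA is needed to keep two different database systems consistent.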

An organization has implemented a continuous integration (CI) lifecycle that promotes Mule applications through code, build, and test stages. To standardize the organization's CI journey, a new dependency control approach is being designed to store artifacts that include information such as dependencies, versioning, and build promotions.

To implement these process improvements, the organization will now require developers to maintain all dependencies related to Mule application code in a shared location.

What is the most idiomatic (used for its intended purpose) type of system the organization should use in a shared location to standardize all dependencies related to Mule application code?

A. A MuleSoft-managed repository at repository.mulesoft.org
B. A binary artifact repository
C. API Community Manager
D. The Anypoint Object Store service at cloudhub.io
Suggested answer: B
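As an illustrative sketch, a Mule project's Maven pom.xml can publish build artifacts to such a shared binary artifact repository (the repository ids and URLs below are placeholders, assuming a Nexus- or Artifactory-style repository manager):

```xml
<!-- pom.xml fragment: mvn deploy pushes versioned artifacts, with their
     dependency metadata, to the organization's shared binary repository -->
<distributionManagement>
    <repository>
        <id>corp-releases</id>
        <url>https://nexus.example.com/repository/maven-releases</url>
    </repository>
    <snapshotRepository>
        <id>corp-snapshots</id>
        <url>https://nexus.example.com/repository/maven-snapshots</url>
    </snapshotRepository>
</distributionManagement>
```

The CI pipeline can then promote the same immutable, versioned artifact through build and test stages instead of rebuilding it at each stage.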

An organization has deployed both Mule and non-Mule API implementations to integrate its customer and order management systems. All the APIs are available to REST clients on the public internet.

The organization wants to monitor these APIs by running health checks: for example, to determine if an API can properly accept and process requests. The organization does not have subscriptions to any external monitoring tools and also does not want to extend its IT footprint.

What Anypoint Platform feature provides the most idiomatic (used for its intended purpose) way to monitor the availability of both the Mule and the non-Mule API implementations?

A. API Functional Monitoring
B. Runtime Manager
C. API Manager
D. Anypoint Visualizer
Suggested answer: A

Explanation:

Reference: https://docs.mulesoft.com/visualizer/

Total 116 questions