Question 78 - MCIA - Level 1 discussion

A Mule application is being designed to receive, nightly, a CSV file containing millions of records from an external vendor over SFTP. The records from the file need to be validated, transformed, and then written to a database. Records can be inserted into the database in any order.

In this use case, what combination of Mule components provides the most effective and performant way to write these records to the database?

A. Use a Parallel For Each scope to insert records one by one into the database
B. Use a Scatter-Gather to bulk insert records into the database
C. Use a Batch Job scope to bulk insert records into the database
D. Use a DataWeave map operation and an Async scope to insert records one by one into the database
Suggested answer: C

Explanation:

The correct answer is to use a Batch Job scope to bulk insert records into the database: a Batch Job is the most efficient way to manage millions of records.
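A minimal configuration sketch of this approach is shown below. It is only an assumption-laden fragment: the flow name, SFTP directory, connector configuration names, table, and column names are hypothetical placeholders and are not given in the question.

<!-- Hypothetical sketch: SFTP_Config, Database_Config, /inbound, and the orders table/columns are placeholders -->
<flow name="nightly-csv-to-db-flow">
    <!-- Pick up the vendor's nightly CSV over SFTP; outputMimeType lets DataWeave read it as CSV -->
    <sftp:listener config-ref="SFTP_Config" directory="/inbound" outputMimeType="application/csv">
        <scheduling-strategy>
            <fixed-frequency frequency="1" timeUnit="DAYS"/>
        </scheduling-strategy>
    </sftp:listener>

    <!-- Convert the streamed CSV into a list of records so the batch job can split it -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/java
---
payload]]></ee:set-payload>
        </ee:message>
    </ee:transform>

    <!-- The batch job processes records in parallel blocks backed by persistent queues -->
    <batch:job jobName="csv-to-db-batch-job">
        <batch:process-records>
            <batch:step name="validate-and-transform">
                <!-- per-record validation and DataWeave transformation would go here -->
                <logger level="DEBUG" message="#['Processing record: ' ++ write(payload, 'application/json')]"/>
            </batch:step>
            <batch:step name="write-to-db">
                <!-- Aggregate records into groups of 100 and bulk insert each group -->
                <batch:aggregator size="100">
                    <db:bulk-insert config-ref="Database_Config">
                        <db:sql><![CDATA[INSERT INTO orders (id, amount) VALUES (:id, :amount)]]></db:sql>
                    </db:bulk-insert>
                </batch:aggregator>
            </batch:step>
        </batch:process-records>
    </batch:job>
</flow>

Inside the aggregator the payload is the list of aggregated record payloads, which is what the Database connector's bulk insert expects: a collection of maps keyed by the input parameter names.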

A few points to note here:

Reliability: If processing must be reliable, i.e. survive a runtime crash or other failure scenario and, on restart, resume with the remaining records, choose a Batch Job, because it is backed by persistent queues.

Error handling: In a Parallel For Each scope, an error in a given route stops processing of the remaining records in that route, so you would have to handle it with On Error Continue. A Batch Job does not stop on such errors; instead you can add a step dedicated to failed records and handle them there, as sketched after this explanation.

Memory footprint: Since the question states there are millions of records to process, Parallel For Each would aggregate all of the processed records at the end and could cause an Out Of Memory error.

A Batch Job instead provides a BatchJobResult in the On Complete phase, from which you can read the counts of failed and successful records. For processing huge files where order is not a concern, the Batch Job is definitely the way to go.
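To illustrate the error-handling and On Complete points above, the same hypothetical job could add a step that accepts only failed records, plus an On Complete phase that reads the counts from the BatchJobResult. As before, this is a sketch: step names and log messages are placeholders.

<batch:job jobName="csv-to-db-batch-job" maxFailedRecords="-1">
    <batch:process-records>
        <!-- ... validate-and-transform and write-to-db steps as in the earlier sketch ... -->

        <!-- This step receives only records that failed in earlier steps, so the job
             keeps running while failed records get dedicated handling -->
        <batch:step name="handle-failures" acceptPolicy="ONLY_FAILURES">
            <logger level="WARN" message="#['Record failed: ' ++ write(payload, 'application/json')]"/>
        </batch:step>
    </batch:process-records>

    <!-- On Complete receives a BatchJobResult with the success and failure counts -->
    <batch:on-complete>
        <logger level="INFO"
                message="#['Total: $(payload.totalRecords), successful: $(payload.successfulRecords), failed: $(payload.failedRecords)']"/>
    </batch:on-complete>
</batch:job>

Setting maxFailedRecords to -1 removes the default cap on failed records, so a handful of bad rows in the nightly file does not abort the whole job.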

Asked 18/09/2024 by Rashid Hashim