ExamGecko

MuleSoft MCIA - Level 1 Practice Test - Questions Answers, Page 15


Question 141


One of the backend systems invoked by the API implementation enforces rate limits on the number of requests a particular client can make.

Both the back-end system and API implementation are deployed to several non-production environments including the staging environment and to a particular production environment. Rate limiting of the back-end system applies to all non-production environments.

The production environment however does not have any rate limiting.

What is the most cost-effective approach to conduct a performance test of the API implementation in the non-production staging environment?

A. Include logic within the API implementation that bypasses invocations of the back-end system in the staging environment and instead invokes a mocking service that replicates typical back-end system responses. Then conduct performance tests using this API implementation.
B. Use MUnit to simulate standard responses from the back-end system. Then conduct performance tests to identify other bottlenecks in the system.
C. Create a mocking service that replicates the back-end system's production performance characteristics. Then configure the API implementation to use the mocking service and conduct the performance test.
D. Conduct scaled-down performance tests in the staging environment against the rate-limited back-end system. Then upscale the performance results to full production scale.
Suggested answer: C
asked 18/09/2024
Jacek Rutkowski
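The mocking approach in answer C can be illustrated outside of Mule with a minimal sketch (Python stands in for a real mocking service; all names and latency figures are hypothetical):

```python
import random
import time

def mock_backend(account_id, latency_ms=(80, 120)):
    """Stand-in for the rate-limited back-end: replicates a typical
    production response shape and latency without enforcing rate limits."""
    time.sleep(random.uniform(*latency_ms) / 1000.0)
    return {"accountId": account_id, "status": "OK"}

def run_perf_test(n_requests=10, latency_ms=(1, 2)):
    """Drive the mock at a request volume the real staging back-end
    would throttle, and measure elapsed time for throughput figures."""
    start = time.time()
    results = [mock_backend(i, latency_ms) for i in range(n_requests)]
    elapsed = time.time() - start
    return results, elapsed

results, elapsed = run_perf_test()
```

Because the mock mimics production performance characteristics rather than staging's rate limits, the measured throughput reflects what the production back-end would allow.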

Question 142


A system API, EmployeeSAPI, is used to fetch employee data from an underlying SQL database.

The architect must design a caching strategy that queries the database only when there is an update to the employees table, and otherwise returns a cached response, in order to minimize the number of redundant transactions handled by the database.

What must the architect do to achieve the caching objective?

A. Use an On Table Row trigger on the employees table and call invalidate cache. Use an object store caching strategy with the expiration interval set to empty.
B. Use a Scheduler with a fixed frequency of every hour triggering an invalidate cache flow. Use an object store caching strategy with the expiration interval set to empty.
C. Use a Scheduler with a fixed frequency of every hour triggering an invalidate cache flow. Use an object store caching strategy and set the expiration interval to 1 hour.
D. Use an On Table Row trigger on the employees table, call invalidate cache, and add the new employee data to the cache. Use an object store caching strategy and set the expiration interval to 1 hour.
Suggested answer: A
asked 18/09/2024
Victor vila
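The invalidate-on-update pattern from answer A can be sketched generically (a hypothetical in-memory database and cache stand in for the SQL database and Mule's object store):

```python
class FakeEmployeeDb:
    """Hypothetical stand-in for the SQL database behind EmployeeSAPI."""
    def __init__(self):
        self.rows = [{"id": 1, "name": "Ana"}]
        self.queries = 0

    def fetch_all(self):
        self.queries += 1
        return list(self.rows)

class EmployeeCache:
    """Answer A's pattern: no expiration interval; cached entries live
    until a table-row update explicitly invalidates them."""
    def __init__(self, db):
        self.db = db
        self._cache = None

    def get_employees(self):
        # Query the database only when the cache has been invalidated
        if self._cache is None:
            self._cache = self.db.fetch_all()
        return self._cache

    def on_table_row(self, new_rows):
        # Analogue of the On Table Row trigger calling invalidate cache
        self.db.rows.extend(new_rows)
        self._cache = None

db = FakeEmployeeDb()
cache = EmployeeCache(db)
first = cache.get_employees()
second = cache.get_employees()      # served from cache, no extra query
cache.on_table_row([{"id": 2, "name": "Ben"}])
third = cache.get_employees()       # re-queried only after the update
```

With an empty expiration interval the database is hit exactly once per update, which is the stated objective; a fixed 1-hour expiration (options B-D) would either serve stale data or trigger redundant queries.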

Question 143


A leading bank is implementing a new Mule API.

The purpose of the API is to fetch customer account balances from the backend application and display them on the online banking platform. The online banking platform will send an array of accounts to the Mule API to get the account balances.

As part of the processing, the Mule API needs to insert the data into a database for auditing purposes, and this process should not have any performance-related implications on the account balance retrieval flow. How should this requirement be implemented to achieve better throughput?

A. Implement the Async scope to fetch the data from the backend application and to insert records into the Audit database.
B. Implement a For Each scope to fetch the data from the back-end application and to insert records into the Audit database.
C. Implement a Try-Catch scope to fetch the data from the back-end application and use the Async scope to insert records into the Audit database.
D. Implement a Parallel For Each scope to fetch the data from the backend application and use the Async scope to insert the records into the Audit database.
Suggested answer: D
asked 18/09/2024
Jeffrey Holt Jr
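The combination in answer D — concurrent balance retrieval plus a fire-and-forget audit insert — can be sketched with standard-library threading (hypothetical names; a background thread plays the role of Mule's Async scope and a thread pool plays Parallel For Each):

```python
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

audit_log = []
audit_queue = queue.Queue()

def audit_worker():
    # Async-scope analogue: audit inserts happen off the request path
    while True:
        record = audit_queue.get()
        time.sleep(0.01)            # simulated slow database insert
        audit_log.append(record)
        audit_queue.task_done()

threading.Thread(target=audit_worker, daemon=True).start()

def fetch_balance(account):
    """Hypothetical back-end call for one account."""
    return {"account": account, "balance": 100.0}

def get_balances(accounts):
    # Parallel For Each analogue: fetch all balances concurrently
    with ThreadPoolExecutor() as pool:
        balances = list(pool.map(fetch_balance, accounts))
    audit_queue.put({"accounts": list(accounts)})  # fire-and-forget audit
    return balances

balances = get_balances(["A1", "A2", "A3"])
```

The caller gets its balances back without waiting for the slow audit insert, which is exactly why the Async scope keeps auditing off the retrieval path.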

Question 144


A Mule application is built to support a local transaction for a series of operations on a single database. The Mule application has a Scatter-Gather scope that participates in the local transaction.

What is the behavior of the Scatter-Gather when running within this local transaction?

A. Execution of all routes within the Scatter-Gather occurs in parallel. Any error that occurs inside the Scatter-Gather will result in a rollback of all the database operations.
B. Execution of all routes within the Scatter-Gather occurs sequentially. Any error that occurs inside the Scatter-Gather will be handled by the error handler and will not result in a rollback.
C. Execution of all routes within the Scatter-Gather occurs sequentially. Any error that occurs inside the Scatter-Gather will result in a rollback of all the database operations.
D. Execution of all routes within the Scatter-Gather occurs in parallel. Any error that occurs inside the Scatter-Gather will be handled by the error handler and will not result in a rollback.
Suggested answer: A
asked 18/09/2024
Edward Eric
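Answer A's behavior — parallel routes whose failure rolls back the enclosing transaction — can be sketched with a thread pool and a fake transaction object (all names hypothetical; this illustrates the semantics, not Mule's internals):

```python
from concurrent.futures import ThreadPoolExecutor

class FakeTransaction:
    """Stand-in for the local database transaction."""
    def __init__(self):
        self.operations = []
        self.rolled_back = False

    def execute(self, op):
        self.operations.append(op)

    def rollback(self):
        self.rolled_back = True

def scatter_gather(tx, routes):
    # Routes run in parallel; any route's error rolls back the whole transaction
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(route, tx) for route in routes]
    errors = [f.exception() for f in futures if f.exception() is not None]
    if errors:
        tx.rollback()
        raise errors[0]

def insert_route(tx):
    tx.execute("insert A")

def failing_route(tx):
    raise RuntimeError("route failed")

tx = FakeTransaction()
error = None
try:
    scatter_gather(tx, [insert_route, failing_route])
except RuntimeError as exc:
    error = exc
```

Note that the successful route's operation was executed before the failure, so the rollback is what keeps the database consistent.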

Question 145


How does the timeout attribute help inform design decisions when using a JMS connector listening for incoming messages in an XA (extended architecture) transaction?

A. After the timeout is exceeded, stale JMS consumer threads are destroyed and new threads are created.
B. The timeout specifies the time allowed to pass between receiving JMS messages on the same JMS connection; after the timeout, a new JMS connection is established.
C. The timeout specifies the time allowed to pass between committing the transaction and the completion of the Mule flow; after the timeout, flow processing triggers an error.
D. The timeout defines the time that is allowed to pass without the transaction ending explicitly; after the timeout expires, the transaction rolls back.
Suggested answer: D
asked 18/09/2024
Raymond LaFrance

Question 146


An automobile company wants to share inventory updates with dealers D1 and D2 asynchronously and concurrently via queues Q1 and Q2. Dealer D1 must consume messages from queue Q1 and dealer D2 must consume messages from queue Q2.

Dealer D1 has implemented a retry mechanism to reprocess the transaction in case of any errors while processing the inventory updates. Dealer D2 has not implemented any retry mechanism.

How should the dealers acknowledge the message to avoid message loss and minimize impact on the current implementation?

A. Dealer D1 must use AUTO acknowledgement and dealer D2 can use MANUAL acknowledgement, acknowledging the message after successful processing.
B. Dealer D1 can use AUTO acknowledgement and dealer D2 can use IMMEDIATE acknowledgement, acknowledging the message after successful processing.
C. Dealer D1 and dealer D2 must use AUTO acknowledgement and acknowledge the message after successful processing.
D. Dealer D1 can use AUTO acknowledgement and dealer D2 must use MANUAL acknowledgement, acknowledging the message after successful processing.
Suggested answer: D
asked 18/09/2024
Jumar Antonia
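Why manual acknowledgement protects a consumer without retries (answer D) can be shown with a toy broker (hypothetical names; a list stands in for the queue's pending messages):

```python
class FakeQueue:
    """Broker stand-in: a message stays pending until explicitly acknowledged."""
    def __init__(self, messages):
        self.pending = list(messages)
        self.acked = []

    def receive(self):
        return self.pending[0] if self.pending else None

    def ack(self, msg):
        self.pending.remove(msg)
        self.acked.append(msg)

def consume_with_manual_ack(q, process):
    """Dealer D2's pattern: acknowledge only after successful processing,
    so a failure leaves the message on the queue for redelivery."""
    msg = q.receive()
    try:
        process(msg)
    except Exception:
        return False                # no ack -> broker redelivers
    q.ack(msg)
    return True

def failing_processor(msg):
    raise RuntimeError("processing error")

q2 = FakeQueue(["inventory-update"])
first_attempt = consume_with_manual_ack(q2, failing_processor)   # fails, no ack
second_attempt = consume_with_manual_ack(q2, lambda msg: None)   # redelivery succeeds
```

With AUTO acknowledgement the first failed attempt would already have removed the message, and D2, having no retry mechanism, would lose it.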

Question 147


A company is using MuleSoft to develop APIs and deploy them to CloudHub and on-premises targets.

Recently it decided to enable the Runtime Fabric deployment option as well, and the infrastructure has been set up for this option.

What can be used to deploy to Runtime Fabric?

A. Anypoint CLI
B. Anypoint Platform REST APIs
C. Directly uploading a JAR file from Runtime Manager
D. Mule Maven plugin
Suggested answer: D
asked 18/09/2024
OLUSEGUN IJAOLA

Question 148


As an enterprise architect, what are two reasons for which you would use a canonical data model in a new integration project using the MuleSoft Anypoint Platform? (Choose two answers.)

A. To have a consistent data structure aligned across processes
B. To isolate areas within a bounded context
C. To incorporate industry-standard data formats
D. There are multiple canonical definitions of each data type
E. Because the model isolates the backend systems and supporting Mule applications from change
Suggested answer: A, B
asked 18/09/2024
Renaldo Williams

Question 149


A company is planning to migrate its deployment environment from an on-premises cluster to a Runtime Fabric (RTF) cluster. It also has a requirement to enable Mule applications deployed to a Mule runtime instance to store and share data across application replicas and restarts.

How can these requirements be met?

A. Use Anypoint Object Store v2 to share data between replicas in the RTF cluster
B. Install the object store pod on one of the cluster nodes
C. Configure Persistence Gateway in any of the servers using Mule Object Store
D. Configure Persistence Gateway at the RTF
Suggested answer: D
asked 18/09/2024
Catarina Machado
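The idea behind Persistence Gateway (answer D) — replicas sharing object-store data through external storage that survives restarts — can be sketched generically (all names hypothetical; a local JSON file stands in for the gateway's backing database):

```python
import json
import os
import tempfile

class PersistentObjectStore:
    """Sketch of the Persistence Gateway idea: replicas share state through
    external storage, so object-store data survives replica restarts."""
    def __init__(self, path):
        self.path = path

    def store(self, key, value):
        data = self._load()
        data[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)

    def retrieve(self, key):
        return self._load().get(key)

    def _load(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "object-store.json")
replica_1 = PersistentObjectStore(path)
replica_1.store("lastSync", "2024-09-18")
replica_2 = PersistentObjectStore(path)   # a restarted or second replica
value = replica_2.retrieve("lastSync")
```

Because the state lives outside any single replica, a new or restarted replica reads the same data, which is what options A-C fail to guarantee within an RTF cluster.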

Question 150


An organization is designing a hybrid, load-balanced, single-cluster production environment. Due to performance service level agreement (SLA) goals, it is looking into running its Mule applications in an active-active multi-node cluster configuration.

What should be considered when running its Mule applications in this type of environment?

A. All event sources, regardless of type, can be configured as the target source by the primary node in the cluster.
B. An external load balancer is required to distribute incoming requests throughout the cluster nodes.
C. A Mule application deployed to multiple nodes runs in isolation from the other nodes in the cluster.
D. Although the cluster environment is fully installed, configured, and running, it will not process any requests until an outage condition is detected by the primary node in the cluster.
Suggested answer: B
asked 18/09/2024
Vincent Chung
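Answer B's point — the cluster does not distribute incoming requests by itself — can be illustrated with a minimal round-robin balancer (hypothetical node names; this is the role an external load balancer plays in front of an active-active cluster):

```python
import itertools

class ExternalLoadBalancer:
    """Round-robin sketch: an external balancer spreads incoming requests
    across cluster nodes, since the nodes themselves do not do this."""
    def __init__(self, nodes):
        self._ring = itertools.cycle(nodes)

    def route(self, request):
        # Each request goes to the next node in the rotation
        return next(self._ring)

lb = ExternalLoadBalancer(["node-1", "node-2", "node-3"])
targets = [lb.route(f"req-{i}") for i in range(6)]
```

Without such a front end, all clients would have to target one node directly, defeating the active-active configuration's throughput goal.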
Total 244 questions