
MuleSoft MCIA - Level 1 Practice Test - Questions Answers, Page 11

When designing an upstream API and its implementation, the development team has been advised not to set timeouts when invoking the downstream API, because the downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?

A. The invocation of the downstream API will run to completion without timing out.
B. An SLA for the upstream API CANNOT be provided.
C. A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes.
D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes.
Suggested answer: B

Explanation:

An SLA for the upstream API CANNOT be provided. Because no timeout is set on the upstream API's only downstream dependency, the time the upstream API may spend waiting for the downstream response is unbounded, so no response-time SLA can be committed for the upstream API.
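For context, a minimal sketch of how an explicit timeout would normally be declared on the outbound call, assuming the upstream API invokes the downstream API with the Mule 4 HTTP connector (flow name, path, and configuration name are hypothetical); the advice above amounts to leaving this bound undefined:

<flow name="upstreamApiFlow">
  <!-- responseTimeout bounds how long this requester waits for the downstream API to respond -->
  <http:request method="GET"
                path="/downstream/resource"
                config-ref="Downstream_HTTP_Config"
                responseTimeout="5000"/>
</flow>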

What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?

A. Compile, package, unit test, validate unit test coverage, deploy
B. Compile, package, unit test, deploy, integration test
C. Compile, package, unit test, deploy, create associated API instances in API Manager
D. Import from API designer, compile, package, unit test, deploy, publish to Anypoint Exchange
Suggested answer: A

Explanation:

The correct answer is "Compile, package, unit test, validate unit test coverage, deploy". Anypoint Platform supports continuous integration and continuous delivery using industry-standard tools.

Mule Maven plugin: the Mule Maven plugin can automate building, packaging, and deployment of Mule applications from source projects. Using the plugin, you can automate deployment of your Mule application to CloudHub, Anypoint Runtime Fabric, or on-premises runtimes, using any of the following deployment strategies: CloudHub deployment, Runtime Fabric deployment, Runtime Manager REST API deployment, or Runtime Manager agent deployment.

MUnit Maven plugin: the MUnit Maven plugin can automate test execution and ties in with the Mule Maven plugin. It is fully integrated with Maven and Surefire for integration with your continuous deployment environment. Since MUnit 2.x, the coverage-report goal is integrated with the Maven reporting section, and coverage reports are generated during Maven's site lifecycle. One of the features of MUnit coverage is the ability to fail the build if a configured coverage level is not reached.

MUnit is not used for integration testing. Likewise, publishing to Anypoint Exchange and creating associated API instances in API Manager are not parts of a CI/CD pipeline that can be automated using the MuleSoft-provided Maven plugins.
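A minimal pom.xml sketch showing the two MuleSoft-provided plugins side by side; the version numbers, application name, environment, and coverage threshold are illustrative only:

<build>
  <plugins>
    <!-- Mule Maven plugin: builds, packages, and deploys the Mule application -->
    <plugin>
      <groupId>org.mule.tools.maven</groupId>
      <artifactId>mule-maven-plugin</artifactId>
      <version>3.8.2</version>
      <extensions>true</extensions>
      <configuration>
        <cloudHubDeployment>
          <uri>https://anypoint.mulesoft.com</uri>
          <muleVersion>4.4.0</muleVersion>
          <applicationName>my-api</applicationName>
          <environment>Sandbox</environment>
        </cloudHubDeployment>
      </configuration>
    </plugin>
    <!-- MUnit Maven plugin: runs unit tests and enforces a coverage level -->
    <plugin>
      <groupId>com.mulesoft.munit.tools</groupId>
      <artifactId>munit-maven-plugin</artifactId>
      <version>2.3.15</version>
      <executions>
        <execution>
          <goals>
            <goal>test</goal>
            <goal>coverage-report</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <coverage>
          <runCoverage>true</runCoverage>
          <failBuild>true</failBuild>
          <requiredApplicationCoverage>80</requiredApplicationCoverage>
        </coverage>
      </configuration>
    </plugin>
  </plugins>
</build>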

* Mule Object Stores: an object store is a facility for storing objects in or across Mule applications. Mule uses object stores to persist data for eventual retrieval. Mule provides two types of object stores:

1) In-memory store – stores objects in local Mule runtime memory. Objects are lost on shutdown of the Mule runtime, so an in-memory store cannot be used when a watermark must be shared across all CloudHub workers.

2) Persistent store – Mule persists data when an object store is explicitly configured to be persistent, so the watermark remains available even if any of the workers goes down.
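A minimal sketch of declaring a persistent object store with the Mule 4 ObjectStore connector; the store name is illustrative:

<!-- a named object store configured as persistent, so entries such as a watermark survive worker restarts -->
<os:object-store name="watermarkStore" persistent="true"/>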

What condition requires using a CloudHub Dedicated Load Balancer?

A. When cross-region load balancing is required between separate deployments of the same Mule application
B. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C. When API invocations across multiple CloudHub workers must be load balanced
D. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients
Suggested answer: D

Explanation:

The correct answer is "When server-side load-balanced TLS mutual authentication is required between API implementations and API clients". CloudHub dedicated load balancers (DLBs) are an optional component of Anypoint Platform that enable you to route external HTTP and HTTPS traffic to multiple Mule applications deployed to CloudHub workers in a Virtual Private Cloud (VPC). Dedicated load balancers enable you to:

* Handle load balancing among the different CloudHub workers that run your application.

* Define SSL configurations to provide custom certificates and, optionally, enforce two-way SSL client authentication.

* Configure proxy rules that map your applications to custom domains, which lets you host your applications under a single domain.

A company is building an application network and has deployed four Mule APIs: one experience API, one process API, and two system APIs. The logs from all the APIs are aggregated in an external log aggregation tool. The company wants to trace messages that are exchanged between multiple API implementations. What is the most idiomatic (based on its intended use) identifier that should be used to implement Mule event tracing across the multiple API implementations?

A. Mule event ID
B. Mule correlation ID
C. Client's IP address
D. DataWeave UUID
Suggested answer: B

Explanation:

The correct answer is Mule correlation ID. By design, correlation IDs cannot be changed within a flow in Mule 4 applications and can be set only at the source. The correlation ID is part of the event context and is generated as soon as the message is received by the application. When an HTTP request is received, the request is inspected for the "X-Correlation-Id" header: if the header is present, the HTTP connector uses it as the correlation ID; if it is not present, a correlation ID is randomly generated.

For incoming HTTP requests: to set a custom correlation ID, the client invoking the HTTP request must set the "X-Correlation-Id" header. This ensures that the Mule flow uses that correlation ID.

For outgoing HTTP requests: you can also propagate the existing correlation ID to downstream APIs. By default, all outgoing HTTP requests send the "X-Correlation-Id" header; however, you can choose to set a different value for the header or set "Send Correlation Id" to NEVER.
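As an illustration only (the HTTP requester propagates the header automatically by default, as noted above), a sketch of logging the correlation ID and explicitly forwarding it on an outbound Mule 4 HTTP request; the flow, path, and configuration names are hypothetical:

<flow name="experienceApiFlow">
  <!-- correlationId is available on every Mule event and can be written to the aggregated logs -->
  <logger level="INFO" message="#['correlationId=' ++ correlationId]"/>
  <http:request method="GET" path="/orders" config-ref="Process_API_Config">
    <!-- forward the same ID so the downstream API logs the matching value -->
    <http:headers>#[{'X-Correlation-Id': correlationId}]</http:headers>
  </http:request>
</flow>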

A Mule application is deployed to a customer-hosted runtime. Asynchronous logging was implemented to improve the throughput of the system. However, it was observed over time that some of the important exception log messages, which were used to roll back transactions, were not working as expected, causing huge losses to the organization. The organization wants to avoid these losses, but the application also has constraints that mean it cannot compromise much on throughput.

What is the possible option in this case?

A. Logging needs to be changed from asynchronous to synchronous
B. External log appender needs to be used in this case
C. Persistent memory storage should be used in such scenarios
D. Mixed configuration of asynchronous or synchronous loggers should be used to log exceptions via synchronous way
Suggested answer: D

Explanation:

The correct approach is a mixed configuration of asynchronous and synchronous loggers, so that exceptions are logged synchronously. Asynchronous logging poses a performance-reliability trade-off: you may lose some messages if Mule crashes before the logging buffers are flushed to disk. In that case, consider a mixed configuration of asynchronous and synchronous loggers in your application. Best practice for a production application is asynchronous logging with a minimum logging level of WARN; in some cases, enable the INFO logging level when you need to confirm events such as successful policy installation or to perform troubleshooting. Configure your logging strategy by editing your application's src/main/resources/log4j2.xml file.
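A minimal src/main/resources/log4j2.xml sketch of such a mixed configuration, with illustrative package and appender names: application classes keep logging asynchronously, while the package that emits the transaction-critical exception messages uses a synchronous logger:

<Configuration>
  <Appenders>
    <Console name="Console">
      <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- synchronous logger: these messages are written before the flow continues -->
    <Logger name="com.acme.transactions" level="ERROR" additivity="false">
      <AppenderRef ref="Console"/>
    </Logger>
    <!-- everything else remains asynchronous to preserve throughput -->
    <AsyncRoot level="WARN">
      <AppenderRef ref="Console"/>
    </AsyncRoot>
  </Loggers>
</Configuration>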

As a part of a business requirement, an old CRM system needs to be integrated using a Mule application.

The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration architect who follows an API-led approach, which of the steps below will you perform so that you can share the document with the CRM team?

A. Create RAML specification using Design Center
B. Create SOAP API specification using Design Center
C. Create WSDL specification using text editor
D. Create WSDL specification using Design Center
Suggested answer: C

Explanation:

The correct answer is "Create WSDL specification using text editor". SOAP services are specified using WSDL. A client program connecting to a web service can read the WSDL to determine what operations are available on the server. A WSDL specification cannot be created in Design Center, so an external text editor must be used to create the WSDL.
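A minimal WSDL 1.1 skeleton of the kind that could be authored in a text editor and shared with the CRM team; the namespace, service name, and operation are purely illustrative:

<definitions name="CustomerService"
             targetNamespace="http://example.com/crm"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/crm"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <message name="GetCustomerRequest">
    <part name="customerId" type="xsd:string"/>
  </message>
  <message name="GetCustomerResponse">
    <part name="customerName" type="xsd:string"/>
  </message>
  <portType name="CustomerPortType">
    <operation name="getCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <binding name="CustomerBinding" type="tns:CustomerPortType">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="getCustomer">
      <soap:operation soapAction="getCustomer"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
  <service name="CustomerService">
    <port name="CustomerPort" binding="tns:CustomerBinding">
      <soap:address location="http://example.com/crm/customer"/>
    </port>
  </service>
</definitions>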

An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime plane. As part of the requirements, the application should be scalable and highly available. There is also a regulatory requirement that demands logs be retained for at least 2 years. As an Integration Architect, what step will you recommend in order to achieve this?

A. It is not possible to store logs for 2 years in CloudHub deployment. External log management system is required.
B. When deploying an application to CloudHub, logs retention period should be selected as 2 years
C. When deploying an application to CloudHub, worker size should be sufficient to store 2 years data
D. Logging strategy should be configured accordingly in log4j file deployed with the application.
Suggested answer: A

Explanation:

The correct answer is "It is not possible to store logs for 2 years in CloudHub deployment. External log management system is required." CloudHub has a specific log retention policy, as described in the documentation: the platform stores up to 100 MB of logs per app and per worker, or up to 30 days of logs, whichever limit is hit first. Once this limit has been reached, the oldest log information is deleted in chunks and is irretrievably lost. The recommended approach is to persist your logs to an external logging system of your choice (such as Splunk) using a log appender. Note that with this solution the logs are no longer stored on the platform, so any support cases you lodge will require you to provide the appropriate logs for review and case resolution.
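As a sketch only, a log4j2.xml fragment that forwards log events to an external log management endpoint using Log4j's generic HTTP appender; in practice you would normally use the appender supplied by your log management vendor (Splunk, for example), and the endpoint URL here is hypothetical:

<Configuration>
  <Appenders>
    <!-- ships each log event to the external log management system -->
    <Http name="ExternalLogs" url="https://logs.example.com/ingest">
      <JsonLayout compact="true" eventEol="true"/>
    </Http>
  </Appenders>
  <Loggers>
    <AsyncRoot level="INFO">
      <AppenderRef ref="ExternalLogs"/>
    </AsyncRoot>
  </Loggers>
</Configuration>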

An organization is designing a Mule application that connects to a legacy backend. It has been reported that the backend services are not highly available and experience downtime quite often. As an integration architect, which of the approaches below would you propose to achieve the high-reliability goals?

A. Alerts can be configured in Mule runtime so that backend team can be communicated when services are down
B. Until Successful scope can be implemented while calling backend API's
C. On Error Continue scope to be used to call in case of error again
D. Create a batch job with all requests being sent to backend using that job as per the availability of backend API's
Suggested answer: B

Explanation:

The correct answer is "Until Successful scope can be implemented while calling backend API's". The Until Successful scope repeatedly triggers the scope's components (including flow references) until they all succeed or until a maximum number of retries is exceeded. The scope provides options to control the maximum number of retries and the interval between retries, and it can execute any sequence of processors that may fail for whatever reason and may succeed upon retry.
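A minimal Mule 4 sketch of wrapping the backend call in an Until Successful scope; the retry count, interval, flow name, and configuration name are illustrative:

<flow name="callLegacyBackendFlow">
  <!-- retries the enclosed processors up to 5 times, waiting 3 seconds between attempts -->
  <until-successful maxRetries="5" millisBetweenRetries="3000">
    <http:request method="POST" path="/orders" config-ref="Legacy_Backend_Config"/>
  </until-successful>
</flow>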

A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is configured with a batch block size of 25.

A payload with 4,000 records is received by the Batch Job scope.

When there are no errors, how does the Batch Job scope process records within and between the Batch Step scopes?

A. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed in parallel. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
B. The Batch Job scope processes each record block sequentially, one at a time. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time. All 4000 records must be completed before the blocks of records are available to the next Batch Step scope.
C. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope.
D. The Batch Job scope processes multiple record blocks in parallel. Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event. For each Batch Step scope, all 4000 records are processed in parallel. Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope.
Suggested answer: A

Explanation:

Reference: https://docs.mulesoft.com/mule-runtime/4.4/batch-processing-concept
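A minimal Mule 4 sketch of the Batch Job configuration described in the question; the job and step names are illustrative:

<batch:job jobName="processRecordsBatchJob" blockSize="25">
  <batch:process-records>
    <batch:step name="transformStep">
      <!-- processors here receive one record at a time as the payload -->
    </batch:step>
    <batch:step name="loadStep">
      <!-- a block of 25 records reaches this step only after all its records complete the previous step -->
    </batch:step>
  </batch:process-records>
</batch:job>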

To implement predictive maintenance on its machinery equipment, ACME Tractors has installed thousands of IoT sensors that will send data for each machinery asset as sequences of JMS messages, in near real-time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application contains a JMS Listener operation configured to receive incoming messages from the JMS server's SENSOR_DATA queue. The Mule application persists each received

JMS message, then sends a transformed version of the corresponding Mule event to the machinery equipment back-end systems.

The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster.

Under normal conditions, each JMS message should be processed exactly once.

How should the JMS Listener be configured to maximize performance and concurrent message processing of the JMS queue?

A. Set numberOfConsumers = 1. Set primaryNodeOnly = false.
B. Set numberOfConsumers = 1. Set primaryNodeOnly = true.
C. Set numberOfConsumers to a value greater than one. Set primaryNodeOnly = true.
D. Set numberOfConsumers to a value greater than one. Set primaryNodeOnly = false.
Suggested answer: D

Explanation:

Reference: https://docs.mulesoft.com/jms-connector/1.8/jms-performance
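A minimal Mule 4 JMS connector sketch matching the suggested answer; the connection configuration name and consumer count are illustrative, and primaryNodeOnly="false" lets every node in the cluster consume from the queue while the JMS broker still delivers each message to only one consumer:

<flow name="sensorDataListenerFlow">
  <!-- several consumers per node, on all cluster nodes, to maximize concurrent processing -->
  <jms:listener config-ref="JMS_Config"
                destination="SENSOR_DATA"
                numberOfConsumers="4"
                primaryNodeOnly="false"/>
  <!-- persist the message, then transform and send it to the back-end systems -->
</flow>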
