ExamGecko

MuleSoft MCIA - Level 1 Practice Test - Questions Answers, Page 8


A Mule application is deployed to a single CloudHub worker, and its public URL appears in Runtime Manager as the App URL.

Requests are sent by external web clients over the public internet to the Mule application's App URL.

Each of these requests is routed to the HTTPS Listener event source of the running Mule application.

Later, the DevOps team edits some properties of this running Mule application in Runtime Manager.

Immediately after the new property values are applied in Runtime Manager, how is the current Mule application deployment affected, and how will future web client requests to the Mule application be handled?

A. CloudHub will redeploy the Mule application to the OLD CloudHub worker. New web client requests will RETURN AN ERROR until the Mule application is redeployed to the OLD CloudHub worker.
B. CloudHub will redeploy the Mule application to a NEW CloudHub worker. New web client requests will RETURN AN ERROR until the NEW CloudHub worker is available.
C. CloudHub will redeploy the Mule application to a NEW CloudHub worker. New web client requests are ROUTED to the OLD CloudHub worker until the NEW CloudHub worker is available.
D. CloudHub will redeploy the Mule application to the OLD CloudHub worker. New web client requests are ROUTED to the OLD CloudHub worker BOTH before and after the Mule application is redeployed.
Suggested answer: C

Explanation:

CloudHub supports updating your applications at runtime so end users of your HTTP APIs experience zero downtime. While your application update is deploying, CloudHub keeps the old version of your application running. Your domain points to the old version of your application until the newly uploaded version is fully started. This allows you to keep servicing requests from your old application while the new version of your application is starting.

An external REST client periodically sends an array of records in a single POST request to a Mule application API endpoint.

The Mule application must validate each record of the request against a JSON schema before sending it to a downstream system in the same order that it was received in the array. Record processing will take place inside a router or scope that calls a child flow. The child flow has its own error handling defined. Any validation or communication failures should not prevent further processing of the remaining records.

To best address these requirements, what is the most idiomatic (used for its intended purpose) router or scope to use in the parent flow, and what type of error handler should be used in the child flow?

A. First Successful router in the parent flow; On Error Continue error handler in the child flow
B. For Each scope in the parent flow; On Error Continue error handler in the child flow
C. Parallel For Each scope in the parent flow; On Error Propagate error handler in the child flow
D. Until Successful router in the parent flow; On Error Propagate error handler in the child flow
Suggested answer: B

Explanation:

The correct answer is: For Each scope in the parent flow, On Error Continue error handler in the child flow.

Two requirements can be extracted from the question:
a) Records should be sent to the downstream system in the same order that they were received in the array.
b) Any validation or communication failures should not prevent further processing of the remaining records.
The first requirement is met by a For Each scope in the parent flow, and the second by an On Error Continue handler in the child flow, so that errors are suppressed and iteration continues with the next record.
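A minimal sketch of this pattern, assuming flow names, config references, and the schema path that are illustrative rather than taken from the question:

```xml
<!-- Parent flow: For Each iterates the array sequentially, preserving order.
     All names and paths below are illustrative. -->
<flow name="parent-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/records"/>
  <foreach collection="#[payload]">
    <flow-ref name="process-record-flow"/>
  </foreach>
</flow>

<!-- Child flow: its own error handler uses On Error Continue, so a failed
     record is logged and the parent For Each moves on to the next one. -->
<flow name="process-record-flow">
  <json:validate-schema schema="schemas/record-schema.json"/>
  <http:request method="POST" config-ref="Downstream_config" path="/records"/>
  <error-handler>
    <on-error-continue>
      <logger level="WARN" message="#['Record failed: ' ++ error.description]"/>
    </on-error-continue>
  </error-handler>
</flow>
```

Because On Error Continue treats the child flow as having completed successfully, the For Each scope never sees the error and continues with the remaining records.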

An organization has decided on a CloudHub migration strategy that aims to minimize the organization's own IT resources. Currently, the organization has all of its Mule applications running on premises and uses an on-premises load balancer that exposes all APIs under the base URL https://api.acme.com.

As part of the migration strategy, the organization plans to migrate all of its Mule applications and the load balancer to CloudHub. What is the most straightforward and cost-effective approach to the Mule application deployment and load balancing that preserves the public URLs?

A. Deploy the Mule applications to CloudHub. Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of a CloudHub dedicated load balancer (DLB). Apply mapping rules in the DLB to map URLs to their corresponding Mule applications.
B. For each migrated Mule application, deploy an API proxy Mule application to CloudHub, with all applications under the control of a dedicated load balancer (DLB). Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of a CloudHub dedicated load balancer (DLB). Apply mapping rules in the DLB to map each API proxy application to its corresponding Mule application.
C. Deploy the Mule applications to CloudHub. Create a CNAME record for api.acme.com in the CloudHub shared load balancer (SLB), pointing to the A record of the on-premises load balancer. Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
D. Deploy the Mule applications to CloudHub. Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of the CloudHub shared load balancer (SLB). Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
Suggested answer: A

Explanation:

https://help.mulesoft.com/s/feed/0D52T000055pzgsSAA

An organization is designing a Mule application to support an all-or-nothing transaction between several database operations and some other connectors, so that they all roll back if there is a problem with any of the connectors. Besides the Database connector, what other connector can be used in the transaction?

A. VM
B. Anypoint MQ
C. SFTP
D. ObjectStore
Suggested answer: A

Explanation:

The correct answer is VM. The VM connector supports transactions: when an exception occurs, the transaction rolls back to its original state for reprocessing. This is not supported by the other listed connectors.

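A sketch of a transactional flow that groups a database insert and a VM publish so both roll back together. Config names, queue name, and SQL are illustrative; note that a true all-or-nothing commit spanning two separate resources additionally requires an XA transaction (Mule EE):

```xml
<!-- The Try scope starts the transaction; both operations join it. -->
<flow name="transactional-flow">
  <try transactionalAction="ALWAYS_BEGIN">
    <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
      <db:sql>INSERT INTO orders (id, total) VALUES (:id, :total)</db:sql>
      <db:input-parameters>#[{id: payload.id, total: payload.total}]</db:input-parameters>
    </db:insert>
    <vm:publish config-ref="VM_Config" queueName="ordersQueue"
                transactionalAction="ALWAYS_JOIN"/>
    <error-handler>
      <!-- Propagating the error rolls the whole transaction back -->
      <on-error-propagate/>
    </error-handler>
  </try>
</flow>
```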

A Mule application uses an HTTP Request operation to invoke an external API.

The external API follows the HTTP specification for proper status code usage.

What is a possible cause when a 3xx status code is returned to the HTTP Request operation from the external API?

A. The request was NOT ACCEPTED by the external API
B. The request was REDIRECTED to a different URL by the external API
C. The request was NOT RECEIVED by the external API
D. The request was ACCEPTED by the external API
Suggested answer: B

Explanation:

3xx HTTP status codes indicate a redirection: the user agent (a web browser or a crawler) needs to take further action to reach the requested resource.

Reference: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
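By default, the Mule HTTP Request operation follows redirects transparently; setting followRedirects to false lets the flow see the 3xx status and Location header itself. A minimal sketch (the config name and path are illustrative):

```xml
<!-- With followRedirects="false", a 3xx response is returned to the flow
     as-is instead of being followed automatically. -->
<http:request method="GET" config-ref="External_API_config"
              path="/resource" followRedirects="false"/>
```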

An organization is migrating all its Mule applications to Runtime Fabric (RTF). None of the Mule applications use Mule domain projects.

Currently, all the Mule applications have been manually deployed to a server group among several customer-hosted Mule runtimes.

Port conflicts between these Mule application deployments are currently managed by the DevOps team, who carefully manage Mule application properties files.

When the Mule applications are migrated from the current customer-hosted server group to Runtime Fabric (RTF), do the Mule applications need to be rewritten, and what DevOps port configuration responsibilities change or stay the same?

A. YES, the Mule applications must be rewritten. DevOps NO LONGER needs to manage port conflicts between the Mule applications.
B. YES, the Mule applications must be rewritten. DevOps must STILL manage port conflicts.
C. NO, the Mule applications do NOT need to be rewritten. DevOps must STILL manage port conflicts.
D. NO, the Mule applications do NOT need to be rewritten. DevOps NO LONGER needs to manage port conflicts between the Mule applications.
Suggested answer: C

Explanation:

* Anypoint Runtime Fabric is a container service that automates the deployment and orchestration of your Mule applications and gateways.

* Runtime Fabric runs on customer-managed infrastructure on AWS, Azure, virtual machines (VMs), or bare-metal servers.

* As none of the Mule applications use Mule domain projects, the applications do not need to be rewritten. Also, when applications are deployed on RTF, by default ingress is allowed only on port 8081.

* Hence port conflicts do not need to be managed by the DevOps team.
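Because each application runs in its own container, an HTTP Listener typically binds to the platform-provided port property rather than a hand-managed one. A minimal sketch (the config name is illustrative):

```xml
<!-- ${http.port} is injected by the platform; since each application's
     container has its own network namespace, the same port value never
     conflicts across applications. -->
<http:listener-config name="HTTP_Listener_config">
  <http:listener-connection host="0.0.0.0" port="${http.port}"/>
</http:listener-config>
```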

An organization is evaluating using the CloudHub shared load balancer (SLB) vs creating a CloudHub dedicated load balancer (DLB). They are evaluating how this choice affects the various types of certificates used by CloudHub-deployed Mule applications, including MuleSoft-provided, customer-provided, or Mule application-provided certificates.

What restrictions exist on the types of certificates that can be exposed by the CloudHub shared load balancer (SLB) to external web clients over the public internet?

A. Only MuleSoft-provided certificates are exposed.
B. Only customer-provided wildcard certificates are exposed.
C. Only customer-provided self-signed certificates are exposed.
D. Only underlying Mule application certificates are exposed (pass-through).
Suggested answer: A

Explanation:

https://docs.mulesoft.com/runtime-manager/dedicated-load-balancer-tutorial

A Mule application is being designed to receive, nightly, a CSV file containing millions of records from an external vendor over SFTP. The records from the file need to be validated, transformed, and then written to a database. Records can be inserted into the database in any order.

In this use case, what combination of Mule components provides the most effective and performant way to write these records to the database?

A. Use a Parallel For Each scope to insert records one by one into the database
B. Use a Scatter-Gather to bulk insert records into the database
C. Use a Batch Job scope to bulk insert records into the database
D. Use a DataWeave map operation and an Async scope to insert records one by one into the database
Suggested answer: C

Explanation:

The correct answer is: use a Batch Job scope to bulk insert records into the database. A Batch Job is the most efficient way to process millions of records.

A few points to note here are as follows:

Reliability: if processing must survive a runtime crash or other failure and, on restart, resume with the remaining records, choose Batch, as it uses persistent queues.

Error handling: in a Parallel For Each, an error in a particular route stops processing of the remaining records in that route unless handled with On Error Continue. A Batch Job does not stop on such errors; instead, you can add a step for failed records and give it dedicated handling.

Memory footprint: since the question states there are millions of records to process, a Parallel For Each will aggregate all the processed records at the end and can cause an Out of Memory error.

A Batch Job instead provides a BatchJobResult in the On Complete phase, from which you can get the counts of failures and successes. For huge file processing where order is not a concern, definitely go with a Batch Job.
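A sketch of the Batch Job approach; the config names, polling frequency, table, columns, and aggregator size are all illustrative assumptions:

```xml
<!-- Poll the SFTP directory for the nightly CSV, then stream its records
     through a Batch Job that bulk inserts them in groups of 200. -->
<flow name="nightly-csv-flow">
  <sftp:listener config-ref="SFTP_Config" directory="/incoming">
    <scheduling-strategy>
      <fixed-frequency frequency="1" timeUnit="HOURS"/>
    </scheduling-strategy>
  </sftp:listener>
  <batch:job jobName="recordsBatchJob">
    <batch:process-records>
      <batch:step name="validateAndTransform">
        <!-- per-record validation and transformation go here -->
      </batch:step>
      <batch:step name="bulkInsert">
        <!-- the aggregator collects records so they are inserted in bulk -->
        <batch:aggregator size="200">
          <db:bulk-insert config-ref="Database_Config">
            <db:sql>INSERT INTO records (id, name) VALUES (:id, :name)</db:sql>
          </db:bulk-insert>
        </batch:aggregator>
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <!-- payload here is the BatchJobResult -->
      <logger level="INFO"
              message='#["Successful: $(payload.successfulRecords), failed: $(payload.failedRecords)"]'/>
    </batch:on-complete>
  </batch:job>
</flow>
```

Failed records do not halt the job; they are counted in the BatchJobResult, and insertion order is not preserved, which the question explicitly allows.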

An automation engineer needs to write scripts to automate the steps of the API lifecycle, including steps to create, publish, deploy and manage APIs and their implementations in Anypoint Platform.

What Anypoint Platform feature can be used to automate the execution of all these actions in scripts in the easiest way without needing to directly invoke the Anypoint Platform REST APIs?

A. Automated Policies in API Manager
B. Runtime Manager agent
C. The Mule Maven Plugin
D. Anypoint CLI
Suggested answer: D

Explanation:

Anypoint Platform provides a scripting and command-line tool for both Anypoint Platform and Anypoint Platform Private Cloud Edition (Anypoint Platform PCE). The command-line interface (CLI) supports both interactive shell and standard CLI modes and works with Anypoint Exchange, Access Management, and Anypoint Runtime Manager.

A company wants its users to log in to Anypoint Platform using the company's own internal user credentials. To achieve this, the company needs to integrate an external identity provider (IdP) with the company's Anypoint Platform master organization, but SAML 2.0 CANNOT be used. Besides SAML 2.0, what single-sign-on standard can the company use to integrate the IdP with their Anypoint Platform master organization?

A. SAML 1.0
B. OAuth 2.0
C. Basic Authentication
D. OpenID Connect
Suggested answer: D

Explanation:

As the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).

Configure identity management using one of the following single sign-on standards:

1) OpenID Connect: end-user identity verification by an authorization server, including SSO
2) SAML 2.0: web-based authorization, including cross-domain SSO

Total 244 questions