ExamGecko

MCIA Level 1 Maintenance: MuleSoft Certified Integration Architect - Level 1 MAINTENANCE

Vendor:

MuleSoft

MuleSoft Certified Integration Architect - Level 1 MAINTENANCE Exam Questions: 116
2,370 Learners

This study guide should help you understand what to expect on the exam and includes a summary of the topics the exam might cover and links to additional resources. The information and materials in this document should help you focus your studies as you prepare for the exam.

Related questions

An organization is evaluating using the CloudHub Shared Load Balancer (SLB) vs creating a CloudHub Dedicated Load Balancer (DLB). They are evaluating how this choice affects the various types of certificates used by CloudHub-deployed Mule applications, including MuleSoft-provided, customer-provided, or Mule application-provided certificates. What type of restrictions exist on the types of certificates for the service that can be exposed by the CloudHub Shared Load Balancer (SLB) to external web clients over the public internet?

A. Underlying Mule applications need to implement their own certificates
B. Only MuleSoft-provided certificates can be used for the server-side certificate
C. Only self-signed certificates can be used
D. All certificates used with the shared load balancer must be approved by raising a support ticket

Suggested answer: B

Explanation:

The correct answer is: Only MuleSoft-provided certificates can be used for the server-side certificate.

* The CloudHub Shared Load Balancer terminates TLS connections and uses its own MuleSoft-provided server-side certificate.

* To provide custom certificates, and optionally enforce two-way SSL client authentication, you would need a dedicated load balancer, which lets you define your own SSL configurations.

* To use a dedicated load balancer in your environment, you must first create an Anypoint VPC.

Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.

Additional Info on SLB Vs DLB:
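By contrast, an application-provided certificate lives in the Mule application itself, configured on its HTTPS listener. A minimal sketch, assuming the Mule 4 HTTP and TLS modules (the config name, keystore file, and property placeholders below are hypothetical), of an app presenting its own certificate rather than relying on the SLB's MuleSoft-provided one:

```xml
<!-- Hypothetical HTTPS listener config: the Mule app presents its own
     (application-provided) certificate from a keystore bundled with the app.
     Keystore path and password properties are placeholders. -->
<http:listener-config name="httpsListenerConfig">
    <http:listener-connection host="0.0.0.0" port="8082" protocol="HTTPS">
        <tls:context>
            <tls:key-store type="jks" path="keystore.jks"
                           keyPassword="${key.password}" password="${store.password}"/>
        </tls:context>
    </http:listener-connection>
</http:listener-config>
```

A certificate configured this way is only presented to clients that reach the worker directly (or via a passthrough setup); clients going through the SLB still see the MuleSoft certificate because the SLB terminates TLS first.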

asked 18/09/2024
Fahim Thanawala
43 questions

An application deployed to a Runtime Fabric environment with two cluster replicas is designed to periodically trigger a flow for processing a high-volume set of records from the source system and synchronize with the SaaS system using the Batch Job scope. After processing 1,000 records in a periodic synchronization of 100,000 records, the replica on which the batch job instance was started went down due to an unexpected failure in the Runtime Fabric environment. What is the consequence of losing the replica that runs the batch job instance?

A. The remaining 99,000 records will be lost and left unprocessed
B. The second replica will take over processing the remaining 99,000 records
C. A new replacement replica will be available and will process all 100,000 records from scratch, leading to duplicate record processing
D. A new replacement replica will be available and will take over processing the remaining 99,000 records

Suggested answer: B
asked 18/09/2024
Eric De La Vega
41 questions

A company is modernizing its legacy systems to accelerate access to applications and data while supporting the adoption of new technologies. The key to achieving this business goal is unlocking the company's key systems and data, including microservices running under Docker and Kubernetes containers, using APIs.

Considering the current aggressive backlog and project delivery requirements, the company wants to take a strategic approach in the first phase of its transformation projects by quickly deploying APIs on Mule runtimes that are able to scale, connect to on-premises systems, and migrate as needed.

Which runtime deployment option supports the company's goals?


An ABC Farms project team is planning to build a new API that is required to work with data from different domains across the organization.

The organization has a policy that all project teams should leverage existing investments by reusing existing APIs and related resources and documentation that other project teams have already developed and deployed.

To support reuse, where on Anypoint Platform should the project team go to discover and read existing APIs, discover related resources and documentation, and interact with mocked versions of those APIs?

A. Design Center
B. API Manager
C. Runtime Manager
D. Anypoint Exchange

Suggested answer: D

Explanation:

The mocking service is a feature of Anypoint Platform and runs continuously. You can run the mocking service from the text editor, the visual editor, and from Anypoint Exchange. You can simulate calls to the API in API Designer before publishing the API specification to Exchange or in Exchange after publishing the API specification.

Reference: https://docs.mulesoft.com/design-center/design-mocking-service

asked 18/09/2024
Samuel rodriguez
30 questions

A Mule application is running on a customer-hosted Mule runtime in an organization's network. The Mule application acts as a producer of asynchronous Mule events. Each Mule event must be broadcast to all interested external consumers outside the Mule application. The Mule events should be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery in less frequent failure scenarios.

The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some external event consumers are within the organizational network, while others are located outside the firewall.

What Anypoint Platform service is most idiomatic (used for its intended purpose) for publishing these Mule events to all external consumers while addressing the desired reliability goals?

A. CloudHub VM queues
B. Anypoint MQ
C. Anypoint Exchange
D. CloudHub Shared Load Balancer

Suggested answer: B

Explanation:

Anypoint MQ is MuleSoft's multi-tenant cloud messaging service and is reached over HTTPS (port 443), so consumers both inside and outside the organizational firewall can reach it through the allowed outbound ports. Set the Anypoint MQ connector operation to publish or consume messages, or to accept (ACK) or not accept (NACK) a message.

Reference: https://docs.mulesoft.com/mq/
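A minimal sketch of the producer side, assuming the Anypoint MQ connector (the config name, destination name, and property placeholders below are hypothetical; for broadcast to multiple consumers, the destination would be an Anypoint MQ message exchange rather than a queue):

```xml
<!-- Hypothetical Anypoint MQ connection; client credentials come from
     the MQ client app registered in Anypoint Platform. -->
<anypoint-mq:config name="Anypoint_MQ_Config">
    <anypoint-mq:connection url="https://mq-us-east-1.anypoint.mulesoft.com/api/v1"
                            clientId="${mq.client.id}" clientSecret="${mq.client.secret}"/>
</anypoint-mq:config>

<flow name="publishMuleEventFlow">
    <!-- ... the Mule event payload is produced earlier in the flow ... -->
    <!-- Publish to a message exchange so every bound queue (one per
         interested consumer) receives a copy of the event. -->
    <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="mule-events"/>
</flow>
```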

asked 18/09/2024
Sterling White
47 questions

An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store connector to persist the cache's state?

A. When there is one deployment of the API implementation to CloudHub and another one to a customer-hosted Mule runtime that must share the cache state.
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state.
C. When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.
D. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state.

Suggested answer: C

Explanation:

The Object Store connector is a Mule component that allows for simple key-value storage. Although it can serve a wide variety of use cases, it is mainly designed for:

- Storing synchronization information, such as watermarks
- Storing temporal information, such as access tokens
- Storing user information

Additionally, the Mule runtime uses object stores to support some of its own components. For example:

- The Cache module uses an object store to maintain all of the cached data
- The OAuth module (and every OAuth-enabled connector) uses object stores to store the access and refresh tokens

Object Store data resides in the same region as the worker where the app is initially deployed. For example, if you deploy to the Singapore region, the object store persists in the Singapore region. (MuleSoft reference: https://docs.mulesoft.com/object-store-connector/1.1/) Data can be shared between different instances of the same Mule application, but this is not recommended for inter-app communication.

Coming to the question: an object store cannot be used to share cached data across separate Mule applications, deployments in separate business groups, or separate regions. Hence the correct answer is: When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.
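For reference, a minimal sketch of the key-value operations involved, assuming the Mule 4 Object Store connector (the store name and key below are hypothetical):

```xml
<!-- Hypothetical persistent object store; on CloudHub, persistent stores
     are backed by the CloudHub Object Store service, so all workers of
     the same application deployment see the same data. -->
<os:object-store name="quoteCache" persistent="true"/>

<!-- Store the current payload under a fixed key -->
<os:store key="quoteOfTheDay" objectStore="quoteCache">
    <os:value>#[payload]</os:value>
</os:store>

<!-- Retrieve it later (possibly on a different worker), with a default
     if the key has not been stored yet -->
<os:retrieve key="quoteOfTheDay" objectStore="quoteCache">
    <os:default-value>#[null]</os:default-value>
</os:retrieve>
```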

asked 18/09/2024
Ben Spiers
34 questions

A global, high-volume shopping Mule application is being built and will be deployed to CloudHub. To improve performance, the Mule application uses a Cache scope that maintains cache state in a CloudHub object store. Web clients will access the Mule application over HTTP from all around the world, with peak volume coinciding with business hours in the web client's geographic location. To achieve optimal performance, what Anypoint Platform region should be chosen for the CloudHub object store?

A. Choose the same region as where the Mule application is deployed
B. Choose the US-West region, the only supported region for CloudHub object stores
C. Choose the geographically closest available region for each web client

Suggested answer: A

Explanation:

CloudHub object store should be in same region where the Mule application is deployed. This will give optimal performance.

Before learning about Cache scope and object store in Mule 4 we understand what is in general Caching is and other related things.

WHAT DOES “CACHING” MEAN?

Caching is the process of storing frequently used data in memory, file system or database which saves processing time and load if it would have to be accessed from original source location every time.

In computing, a cache is a high-speed data storage layer which stores a subset of data, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.

How does Caching work?

The data in a cache is generally stored in fast access hardware such as RAM (Random-access memory) and may also be used in correlation with a software component. A cache’s primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.

Caching in MULE 4

In Mule 4 caching can be achieved in mule using cache scope and/or object-store. Cache scope internally uses Object Store to store the data.

What is Object Store

Object Store lets applications store data and states across batch processes, Mule components, and schedulers, from within an application. On CloudHub, a persistent object store is shared between the workers of the same application deployment.

Cache scope is used in the below-mentioned cases:

- Need to store the whole response from the outbound processor
- Data returned from the outbound processor does not change very frequently
- As Cache scope internally handles the cache hit and cache miss scenarios, it is more readable

Object Store is used in the below-mentioned cases:

- Need to store custom/intermediary data
- To store watermarks
- Sharing the data/state across applications, schedulers, and batch jobs

If CloudHub object store is in same region where the Mule application is deployed it will aid in fast access of data and give optimal performance.
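The Cache scope backed by an object store can be sketched as follows, assuming Mule 4 EE (the strategy name, flow name, config refs, and key expression below are hypothetical):

```xml
<!-- Hypothetical caching strategy: a persistent private object store keeps
     the cached response; the key expression identifies the cache entry. -->
<ee:object-store-caching-strategy name="quoteCachingStrategy"
        keyGenerationExpression="#['quoteOfTheDay']">
    <os:private-object-store persistent="true"/>
</ee:object-store-caching-strategy>

<flow name="quoteFlow">
    <http:listener config-ref="httpListenerConfig" path="/quote"/>
    <!-- On a cache miss the inner processors run and their result is stored;
         on a cache hit the stored response is returned without calling out. -->
    <ee:cache cachingStrategy-ref="quoteCachingStrategy">
        <http:request method="GET" config-ref="quoteServiceConfig" path="/today"/>
    </ee:cache>
</flow>
```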

asked 18/09/2024
István Balla
37 questions

An insurance company is using a CloudHub runtime plane. As part of the requirements, an email alert should be sent to the internal operations team every time a policy applied to an API instance is deleted. As an integration architect, how would you suggest this requirement be met?


An organization is designing an integration Mule application to process orders by submitting them to a back-end system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be submitted to a back-end system. Orders that cannot be successfully submitted due to rejections from the back-end system will need to be processed manually (outside the back-end system).

The Mule application will be deployed to a customer-hosted runtime and is able to use an existing ActiveMQ broker if needed. The ActiveMQ broker is located inside the organization’s firewall. The back-end system has a track record of unreliability due to both minor network connectivity issues and longer outages.

What idiomatic (used for their intended purposes) combination of Mule application components and ActiveMQ queues are required to ensure automatic submission of orders to the back-end system while supporting but minimizing manual order processing?

A. An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing
B. One or more On Error scopes to assist calling the back-end system; an Until Successful scope containing VM components for long retries; a persistent dead-letter VM queue configured in CloudHub
C. One or more On Error scopes to assist calling the back-end system; one or more ActiveMQ long-retry queues; a persistent dead-letter object store configured in the CloudHub Object Store service
D. A Batch Job scope to call the back-end system; an Until Successful scope containing Object Store components for long retries; a dead-letter object store configured in the Mule application

Suggested answer: A
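The pattern in option A can be sketched as follows, assuming the Mule 4 JMS connector pointed at the existing ActiveMQ broker (the flow name, config refs, queue names, and retry settings below are hypothetical):

```xml
<flow name="submitOrderFlow">
    <!-- Consume orders from a long-retry queue on the organization's ActiveMQ broker -->
    <jms:listener config-ref="ActiveMQ_Config" destination="orders.retry"/>

    <!-- Retry submission automatically to ride out minor connectivity issues -->
    <until-successful maxRetries="5" millisBetweenRetries="60000">
        <http:request method="POST" config-ref="backendConfig" path="/orders"/>
    </until-successful>

    <error-handler>
        <!-- Once retries are exhausted (e.g. during a longer outage), park the
             order on a dead-letter queue for manual processing -->
        <on-error-continue>
            <jms:publish config-ref="ActiveMQ_Config" destination="orders.dlq"/>
        </on-error-continue>
    </error-handler>
</flow>
```

Because the broker persists the message until the flow succeeds, orders are not lost during back-end outages, and only orders the back-end actually rejects end up on the dead-letter queue, minimizing manual processing.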
asked 18/09/2024
VEDA VIKASH Matam Shashidhar
37 questions

The ABC company has an Anypoint Runtime Fabric on VMs/Bare Metal (RTF-VM) appliance installed on its own customer-hosted AWS infrastructure.

Mule applications are deployed to this RTF-VM appliance. As part of the company standards, the Mule application logs must be forwarded to an external log management tool (LMT).

Given the company's current setup and requirements, what is the most idiomatic (used for its intended purpose) way to send Mule application logs to the external LMT?

A. In RTF-VM, install and configure the external LMT's log-forwarding agent
B. In RTF-VM, edit the pod configuration to automatically install and configure an Anypoint Monitoring agent
C. In each Mule application, configure custom Log4j settings
D. In RTF-VM, configure the out-of-the-box external log forwarder

Suggested answer: A

Explanation:

Reference: https://help.mulesoft.com/s/article/Enable-external-log-forwarding-for-Muleapplications-deployed-in-RTF

asked 18/09/2024
Juan Bueno
40 questions