ExamGecko


Salesforce Certified MuleSoft Integration Architect I
Vendor: Salesforce
Exam Questions: 273
Learners: 2,370

The Certified MuleSoft Integration Architect I exam is a crucial step for anyone looking to excel in MuleSoft integration architecture. To increase your chances of success, practicing with real exam questions shared by those who have already passed can be incredibly helpful. In this guide, we’ll provide practice test questions and answers, offering insights directly from successful candidates.

Why Use Certified MuleSoft Integration Architect I Practice Test?

  • Real Exam Experience: Our practice tests accurately mirror the format and difficulty of the actual Certified MuleSoft Integration Architect I exam, providing you with a realistic preparation experience.
  • Identify Knowledge Gaps: Practicing with these tests helps you pinpoint areas that need more focus, allowing you to study more effectively.
  • Boost Confidence: Regular practice builds confidence and reduces test anxiety.
  • Track Your Progress: Monitor your performance to see improvements and adjust your study plan accordingly.

Key Features of Certified MuleSoft Integration Architect I Practice Test

  • Up-to-Date Content: Our community regularly updates the questions to reflect the latest exam objectives and technology trends.
  • Detailed Explanations: Each question comes with detailed explanations, helping you understand the correct answers and learn from any mistakes.
  • Comprehensive Coverage: The practice tests cover all key topics of the Certified MuleSoft Integration Architect I exam, including API-led connectivity, integration patterns, and best practices for MuleSoft.
  • Customizable Practice: Tailor your study experience by creating practice sessions based on specific topics or difficulty levels.

Exam Details

  • Exam Number: MuleSoft Integration Architect I
  • Exam Name: Certified MuleSoft Integration Architect I Exam
  • Length of Test: 120 minutes
  • Exam Format: Multiple-choice and scenario-based questions
  • Exam Language: English
  • Number of Questions in the Actual Exam: 60 questions
  • Passing Score: 70%

Use the member-shared Certified MuleSoft Integration Architect I Practice Tests to ensure you're fully prepared for your certification exam. Start practicing today and take a significant step towards achieving your certification goals!

Related questions

Refer to the exhibit.

A Mule 4 application has a parent flow that breaks up a JSON array payload into 200 separate items, then sends each item one at a time inside an Async scope to a VM queue.

A second flow to process orders has a VM Listener on the same VM queue. The rest of this flow processes each received item by writing the item to a database.

This Mule application is deployed to four CloudHub workers with persistent queues enabled.

What message processing guarantees are provided by the VM queue and the CloudHub workers, and how are VM messages routed among the CloudHub workers for each invocation of the parent flow under normal operating conditions where all the CloudHub workers remain online?

A.
EACH item VM message is processed AT MOST ONCE by ONE CloudHub worker, with workers chosen in a deterministic round-robin fashion. Each of the four CloudHub workers can be expected to process 1/4 of the item VM messages (about 50 items)
B.
EACH item VM message is processed AT LEAST ONCE by ONE ARBITRARY CloudHub worker. Each of the four CloudHub workers can be expected to process some item VM messages
C.
ALL item VM messages are processed AT LEAST ONCE by the SAME CloudHub worker where the parent flow was invoked. This one CloudHub worker processes ALL 200 item VM messages
D.
ALL item VM messages are processed AT MOST ONCE by ONE ARBITRARY CloudHub worker. This one CloudHub worker processes ALL 200 item VM messages
Suggested answer: B

Explanation:

The correct answer is: EACH item VM message is processed AT LEAST ONCE by ONE ARBITRARY CloudHub worker, and each of the four CloudHub workers can be expected to process some item VM messages.

In CloudHub, each persistent VM queue is listened on by every CloudHub worker, but each message is read and processed at least once by only one CloudHub worker, and duplicate processing is possible. If a CloudHub worker fails, the message can be read by another worker to prevent loss of messages, which can lead to duplicate processing. By default, every CloudHub worker's VM Listener receives different messages from the VM queue.

Reference: https://dzone.com/articles/deploying-mulesoft-application-on-1-worker-vs-mult
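For illustration, a minimal sketch of the two flows described in the question, assuming hypothetical flow, queue, and config names (queue persistence on CloudHub also requires "Persistent queues" enabled on the deployment):

<vm:config name="vmConfig">
  <vm:queues>
    <vm:queue queueName="itemsQueue" queueType="PERSISTENT"/>
  </vm:queues>
</vm:config>

<flow name="parentFlow">
  <!-- Break the JSON array payload into individual items -->
  <foreach collection="#[payload]">
    <!-- Publish each item without blocking the parent flow -->
    <async>
      <vm:publish config-ref="vmConfig" queueName="itemsQueue"/>
    </async>
  </foreach>
</flow>

<flow name="processItemFlow">
  <!-- Every worker runs this listener; each message is consumed by exactly one
       worker, but redelivery after a worker failure can cause duplicates -->
  <vm:listener config-ref="vmConfig" queueName="itemsQueue"/>
  <db:insert config-ref="dbConfig">
    <db:sql>INSERT INTO items (body) VALUES (:body)</db:sql>
    <db:input-parameters>#[{ body: write(payload, "application/json") }]</db:input-parameters>
  </db:insert>
</flow>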


An ABC Farms project team is planning to build a new API that is required to work with data from different domains across the organization.

The organization has a policy that all project teams should leverage existing investments by reusing existing APIs and related resources and documentation that other project teams have already developed and deployed.

To support reuse, where on Anypoint Platform should the project team go to discover and read existing APIs, discover related resources and documentation, and interact with mocked versions of those APIs?

A.
Design Center
B.
API Manager
C.
Runtime Manager
D.
Anypoint Exchange
Suggested answer: D

Explanation:

Anypoint Exchange is the catalog where project teams discover and consume existing APIs, connectors, templates, examples, and their related documentation. The mocking service is a feature of Anypoint Platform and runs continuously. You can run the mocking service from the text editor, the visual editor, and from Anypoint Exchange. You can simulate calls to the API in API Designer before publishing the API specification to Exchange, or in Exchange after publishing the API specification.


A Mule application uses the Database connector.

What condition can the Mule application automatically adjust to or recover from without needing to restart or redeploy the Mule application?

A.
One of the stored procedures being called by the Mule application has been renamed
B.
The database server was unavailable for four hours due to a major outage but is now fully operational again
C.
The credentials for accessing the database have been updated and the previous credentials are no longer valid
D.
The database server has been updated and hence the database driver library/JAR needs a minor version upgrade
Suggested answer: B

Explanation:

* Any change to the application itself requires a restart or redeployment; the application can only recover automatically from issues external to it. For the following situations, you would need to make changes and redeploy:

-- One of the stored procedures being called by the Mule application has been renamed. In this case, the Mule application must be changed to use the new stored procedure name.

-- A required redesign of the Mule application to follow microservice architecture principles. As the code changes, redeployment is a must.

-- The credentials for accessing the database have been updated and the previous credentials are no longer valid. In this situation you need to restart or redeploy, depending on how the credentials are configured in the Mule application.

-- The database driver library/JAR needs a version upgrade. The new driver must be packaged with the application and the application redeployed.

* So the correct answer is: The database server was unavailable for four hours due to a major outage but is now fully operational again, as this is the only issue external to the application; the connector's reconnection strategy can re-establish the connection automatically.
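For illustration, a minimal sketch of a Database connector configuration with a reconnection strategy, assuming hypothetical connection details; this is what lets the application recover automatically once the database server is back online:

<db:config name="dbConfig">
  <db:my-sql-connection host="db.internal.example" port="3306"
      user="${db.user}" password="${db.password}" database="masterdata">
    <!-- Keep retrying every 5 seconds until the database is reachable again -->
    <reconnection>
      <reconnect-forever frequency="5000"/>
    </reconnection>
  </db:my-sql-connection>
</db:config>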


A new Mule application under development must implement extensive data transformation logic. Some of the data transformation functionality is already available as external transformation services that are mature and widely used across the organization; the rest is highly specific to the new Mule application.

The organization follows a rigorous testing approach, where every service and application must be extensively acceptance tested before it is allowed to go into production.

What is the best way to implement the data transformation logic for this new Mule application while minimizing the overall testing effort?

A.
Implement and expose all transformation logic as microservices using DataWeave, so it can be reused by any application component that needs it, including the new Mule application
B.
Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services
C.
Extend the existing transformation services with new transformation logic and invoke them from the new Mule application
D.
Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible
Suggested answer: D

Explanation:

The correct answer is: Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible.

The key here is minimal testing effort:

* 'Extend the existing transformation services with new transformation logic' is not feasible because the additional functionality is highly specific to the new Mule application, so it should not become part of the commonly used services. This option is ruled out.

* 'Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services' would duplicate code, and the replicated logic would have to be acceptance tested all over again. This option is ruled out.

* 'Implement and expose all transformation logic as microservices using DataWeave' is ruled out because the question specifies that part of the transformation is highly specific to the new Mule application and will not be used outside it.

* Invoking the existing, already-tested services where possible and implementing only the application-specific logic in DataWeave minimizes both new code and new acceptance testing.
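For illustration, a minimal sketch of the suggested approach, assuming hypothetical field names, config names, and service path: the application-specific mapping is implemented locally in DataWeave, and the mature external transformation service is invoked for what it already provides:

<flow name="transformOrderFlow">
  <!-- Application-specific transformation implemented locally in DataWeave -->
  <ee:transform>
    <ee:message>
      <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
  orderId: payload.id,
  total: payload.items reduce ((item, acc = 0) -> acc + item.price)
}]]></ee:set-payload>
    </ee:message>
  </ee:transform>

  <!-- Reuse the existing, already acceptance-tested transformation service -->
  <http:request method="POST" config-ref="httpConfig" path="/transform/canonical-order"/>
</flow>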


A Mule application is being designed to receive, nightly, a CSV file containing millions of records from an external vendor over SFTP. The records from the file need to be validated, transformed, and then written to a database. Records can be inserted into the database in any order.

In this use case, what combination of Mule components provides the most effective and performant way to write these records to the database?

A.
Use a Parallel For Each scope to insert records one by one into the database
B.
Use a Scatter-Gather to bulk insert records into the database
C.
Use a Batch Job scope to bulk insert records into the database
D.
Use a DataWeave map operation and an Async scope to insert records one by one into the database
Suggested answer: C

Explanation:

The correct answer is: Use a Batch Job scope to bulk insert records into the database.

* A Batch Job is the most efficient way to manage millions of records. A few points to note:

Reliability: If you want reliability while processing the records, i.e. the processing should survive a runtime crash or other unhappy scenarios and, when restarted, process all the remaining records, then go for batch, as it uses persistent queues.

Error handling: In a Parallel For Each, an error in a particular route stops processing of the remaining records in that route, and you would need to handle it using on-error-continue. A batch job does not stop on such errors; instead you can have a step for failures with dedicated handling in it.

Memory footprint: Since the question says there are millions of records to process, Parallel For Each aggregates all the processed records at the end and can cause an out-of-memory error. A batch job instead provides a BatchJobResult in the On Complete phase, where you can get the counts of failures and successes.

For huge file processing, if order is not a concern, definitely go ahead with a Batch Job.
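For illustration, a minimal sketch of such a flow, assuming hypothetical config names, table, and columns; the Batch Aggregator groups records so the Database connector performs bulk inserts instead of row-by-row inserts:

<flow name="nightlyLoadFlow">
  <!-- Poll the vendor's SFTP directory; read the CSV as a streamed collection of rows -->
  <sftp:listener config-ref="sftpConfig" directory="inbound" outputMimeType="application/csv">
    <scheduling-strategy>
      <fixed-frequency frequency="1" timeUnit="HOURS"/>
    </scheduling-strategy>
  </sftp:listener>

  <batch:job jobName="recordLoadJob">
    <batch:process-records>
      <batch:step name="loadStep">
        <!-- Per-record validation/transformation would go in this step -->
        <batch:aggregator size="1000">
          <!-- Write each group of 1000 records in a single bulk insert -->
          <db:bulk-insert config-ref="dbConfig">
            <db:sql>INSERT INTO orders (id, amount) VALUES (:id, :amount)</db:sql>
            <db:bulk-input-parameters>#[payload map { id: $.id, amount: $.amount }]</db:bulk-input-parameters>
          </db:bulk-insert>
        </batch:aggregator>
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <logger message="#['Loaded $(payload.successfulRecords) records, $(payload.failedRecords) failures']"/>
    </batch:on-complete>
  </batch:job>
</flow>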


A Mule application is deployed to a cluster of two (2) customer-hosted Mule runtimes. Currently, the node named Alice is the primary node and the node named Bob is the secondary node. The Mule application has a flow that polls a directory on a file system for new files.

The primary node Alice fails, is down for an hour, and is then restarted.

After the Alice node completely restarts, from which node are the files polled, and which node is now the primary node for the cluster?

A.
Files are polled from the Alice node. Alice is now the primary node.
B.
Files are polled from the Bob node. Alice is now the primary node.
C.
Files are polled from the Alice node. Bob is now the primary node.
D.
Files are polled from the Bob node. Bob is now the primary node.
Suggested answer: D

Explanation:

* Mule High Availability Clustering provides basic failover capability for Mule.

* When the primary Mule runtime becomes unavailable, for example because of a fatal JVM or hardware failure, or because it is taken offline for maintenance, a backup Mule runtime immediately becomes the primary node and resumes processing where the failed instance left off.

* After a system administrator recovers a failed Mule runtime server and puts it back online, that server automatically becomes the backup node. In this case Alice, once back up, becomes the backup node.

* In a cluster, polling sources such as a file listener run only on the current primary node, so after the failover the files are polled from Bob.

Reference: https://docs.mulesoft.com/mule-runtime/4.3/hadr-guide

So the correct choice is: Files are polled from the Bob node. Bob is now the primary node.
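For illustration, a minimal sketch of such a polling flow, assuming a hypothetical directory; in a cluster, only the primary node polls:

<file:config name="fileConfig">
  <file:connection workingDir="/data/inbound"/>
</file:config>

<flow name="pollNewFilesFlow">
  <!-- In a cluster, file listeners poll on the primary node only
       (primaryNodeOnly defaults to true for this source when clustered) -->
  <file:listener config-ref="fileConfig" directory="orders" primaryNodeOnly="true">
    <scheduling-strategy>
      <fixed-frequency frequency="10" timeUnit="SECONDS"/>
    </scheduling-strategy>
  </file:listener>
  <logger message="#['Picked up: ' ++ attributes.path]"/>
</flow>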


A banking company is developing a new set of APIs for its online business. One of the critical APIs is a master lookup API, which is a system API. This master lookup API uses a persistent object store. This API will be used by all other APIs to provide master lookup data.

The master lookup API is deployed on two CloudHub workers of 0.1 vCore each because there is a lot of master data to be cached. Master lookup data is stored as key-value pairs. The cache gets refreshed if the key is not found in the cache.

During performance testing, it was observed that the master lookup API has a higher response time due to the execution of database queries to fetch the master lookup data.

Due to this performance issue, go-live of the online business is on hold, which could cause a potential financial loss to the bank.

As an integration architect, which of the options below would you suggest to resolve the performance issue?

A.
Implement HTTP caching policy for all GET endpoints for the master lookup API and implement locking to synchronize access to the object store
B.
Upgrade vCore size from 0.1 vCore to 0.2 vCore
C.
Implement HTTP caching policy for all GET endpoints for the master lookup API
D.
Add an additional CloudHub worker to provide additional capacity
Suggested answer: A

Explanation:

An HTTP caching policy on the GET endpoints serves repeated lookups from the cache instead of executing database queries, which directly addresses the high response time. Because the API runs on two workers sharing a persistent object store, access to the store should be synchronized with locking so that concurrent cache refreshes do not conflict. Simply adding vCores or workers does not eliminate the repeated database queries that cause the latency.

An organization uses a set of customer-hosted Mule runtimes that are managed using the MuleSoft-hosted control plane. What is a condition that can be alerted on from Anypoint Runtime Manager without any custom components or custom coding?

A.
When a Mule runtime on a given customer-hosted server is experiencing high memory consumption during certain periods
B.
When an SSL certificate used by one of the deployed Mule applications is about to expire
C.
When the Mule runtime license installed on a Mule runtime is about to expire
D.
When a Mule runtime's customer-hosted server is about to run out of disk space
Suggested answer: A

Explanation:

The correct answer is: When a Mule runtime on a given customer-hosted server is experiencing high memory consumption during certain periods.

Using Anypoint Monitoring, you can configure two different types of alerts:

* Basic alerts for servers and Mule apps. Limit per organization: up to 50 basic alerts for users who do not have a Titanium subscription to Anypoint Platform. You can set up basic alerts to trigger email notifications when a metric you are measuring passes a specified threshold. For on-premises servers and CloudHub apps, you can create basic alerts for the following metrics: CPU utilization, memory utilization, and thread count.

* Advanced alerts for graphs in custom dashboards in Anypoint Monitoring. You must have a Titanium subscription to use this feature. Limit per organization: up to 20 advanced alerts.


An XA transaction is being configured that involves a JMS connector listening for incoming JMS messages. What is the meaning of the timeout attribute of the XA transaction, and what happens after the timeout expires?

A.
The time that is allowed to pass between committing the transaction and the completion of the Mule flow. After the timeout, flow processing triggers an error.
B.
The time that is allowed to pass between receiving JMS messages on the same JMS connection. After the timeout, a new JMS connection is established.
C.
The time that is allowed to pass without the transaction being ended explicitly. After the timeout, the transaction is forcefully rolled back.
D.
The time that is allowed to pass for stale JMS consumer threads to be destroyed. After the timeout, a new JMS consumer thread is created.
Suggested answer: C

Explanation:

* Setting a transaction timeout for the Bitronix transaction manager:

Set the transaction timeout either

-- in wrapper.conf

-- in CloudHub, in the Properties tab of the Mule application deployment

The default is 60 seconds. It is defined as, for example:

mule.bitronix.transactiontimeout = 120

This property defines the timeout for each transaction created by this manager. If the transaction has not terminated before the timeout expires, it is automatically rolled back.

Additional info on transaction management:

Bitronix is available as the XA transaction manager for Mule applications. To use Bitronix, declare it as a global configuration element in the Mule application:

<bti:transaction-manager />

Each Mule runtime can have only one instance of a Bitronix transaction manager, which is shared by all Mule applications. For customer-hosted deployments, define the XA transaction manager in a Mule domain, then share this global element among all Mule applications in the Mule runtime.
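For illustration, a minimal sketch of a JMS listener starting an XA transaction that a database operation then joins, assuming hypothetical config names, queue, and table:

<bti:transaction-manager />

<flow name="xaOrderFlow">
  <!-- Begin an XA transaction when a message is received -->
  <jms:listener config-ref="jmsConfig" destination="ordersQueue"
      transactionalAction="ALWAYS_BEGIN" transactionType="XA"/>

  <!-- Both resources participate in the same XA transaction; if it is not
       committed before the timeout, it is forcefully rolled back -->
  <db:insert config-ref="dbConfig" transactionalAction="ALWAYS_JOIN">
    <db:sql>INSERT INTO orders (body) VALUES (:body)</db:sql>
    <db:input-parameters>#[{ body: payload }]</db:input-parameters>
  </db:insert>
</flow>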


An organization has decided on a CloudHub migration strategy that aims to minimize the organization's own IT resources. Currently, the organization has all of its Mule applications running on premises and uses an on-premises load balancer that exposes all APIs under the base URL https://api.acme.com

As part of the migration strategy, the organization plans to migrate all of its Mule applications and the load balancer to CloudHub.

What is the most straightforward and cost-effective approach to the Mule applications' deployment and load balancing that preserves the public URLs?

A.
Deploy the Mule applications to CloudHub. Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of a CloudHub dedicated load balancer (DLB). Apply mapping rules in the DLB to map URLs to their corresponding Mule applications.
B.
For each migrated Mule application, deploy an API proxy Mule application to CloudHub, with all applications under the control of a dedicated load balancer (DLB). Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of a CloudHub dedicated load balancer (DLB). Apply mapping rules in the DLB to map each API proxy application to its corresponding Mule applications.
C.
Deploy the Mule applications to CloudHub. Create a CNAME record for api.acme.com in the CloudHub shared load balancer (SLB), pointing to the A record of the on-premises load balancer. Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
D.
Deploy the Mule applications to CloudHub. Update the CNAME record for api.acme.com in the organization's DNS server, pointing to the A record of the CloudHub shared load balancer (SLB). Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
Suggested answer: A

Explanation:

A CloudHub dedicated load balancer (DLB) supports custom domains such as api.acme.com (via a CNAME pointing to the DLB's A record) and URL mapping rules, whereas the shared load balancer supports neither custom domains/certificates nor mapping rules. Only option A therefore preserves the public URLs without deploying extra proxy applications.

Reference: https://help.mulesoft.com/s/feed/0D52T000055pzgsSAA
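For illustration, hypothetical DLB mapping rules of the kind entered in Runtime Manager (the application names are assumptions); each rule maps an inbound path on https://api.acme.com to a CloudHub application:

Input URI            App name         App URI
/orders/{path}       orders-api       /{path}
/customers/{path}    customers-api    /{path}

With the CNAME for api.acme.com pointing at the DLB's A record, a request to https://api.acme.com/orders/123 is forwarded to the orders-api application unchanged, preserving the existing public URLs.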
