ExamGecko

Salesforce Certified MuleSoft Platform Architect I

Vendor: Salesforce
Exam Questions: 152
Learners: 2,370

The Certified MuleSoft Platform Architect I exam is a crucial step for anyone looking to excel in architecting solutions on the MuleSoft platform. To increase your chances of success, practicing with real exam questions shared by those who have already passed can be incredibly helpful. In this guide, we’ll provide practice test questions and answers, offering insights directly from successful candidates.

Why Use Certified MuleSoft Platform Architect I Practice Test?

  • Real Exam Experience: Our practice tests accurately mirror the format and difficulty of the actual Certified MuleSoft Platform Architect I exam, providing you with a realistic preparation experience.
  • Identify Knowledge Gaps: Practicing with these tests helps you pinpoint areas that need more focus, allowing you to study more effectively.
  • Boost Confidence: Regular practice builds confidence and reduces test anxiety.
  • Track Your Progress: Monitor your performance to see improvements and adjust your study plan accordingly.

Key Features of Certified MuleSoft Platform Architect I Practice Test

  • Up-to-Date Content: Our community regularly updates the questions to reflect the latest exam objectives and technology trends.
  • Detailed Explanations: Each question comes with detailed explanations, helping you understand the correct answers and learn from any mistakes.
  • Comprehensive Coverage: The practice tests cover all key topics of the Certified MuleSoft Platform Architect I exam, including API-led connectivity, application networks, and MuleSoft architecture.
  • Customizable Practice: Tailor your study experience by creating practice sessions based on specific topics or difficulty levels.

Exam Details

  • Exam Number: MuleSoft Platform Architect I
  • Exam Name: Certified MuleSoft Platform Architect I Exam
  • Length of Test: 120 minutes
  • Exam Format: Multiple-choice and scenario-based questions
  • Exam Language: English
  • Number of Questions in the Actual Exam: 60 questions
  • Passing Score: 70%

Use the member-shared Certified MuleSoft Platform Architect I Practice Tests to ensure you're fully prepared for your certification exam. Start practicing today and take a significant step towards achieving your certification goals!

Related questions

A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?

A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response
B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses
C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment
D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures
Suggested answer: A

Explanation:

In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response.

>> The requirement in the given scenario is to respond in the least possible time.
>> The option suggesting to first try the API in the primary environment and then fall back to the API in the DR environment would produce a successful response, but NOT in the least possible time. So this is not the right implementation choice for the given requirement.
>> The option suggesting to invoke ONLY the API in the primary environment, with timeout and retry logic, may also produce a successful response after retries, but again NOT in the least possible time. So this is also not the right choice.
>> The option suggesting to invoke the API in the primary environment and the API in the DR environment in parallel using Scatter-Gather would return the wrong response, because Scatter-Gather merges the results. Moreover, although Scatter-Gather does run its routes in parallel, it completes its scope only after ALL routes inside it have finished, so it cannot return as soon as the first response arrives. Again, not the right choice.

The correct choice is to invoke the API in the primary environment and the API in the DR environment in parallel, and use ONLY the first response received from either of them.
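For illustration only, here is a minimal sketch of this "race both deployments, take the first response" idea in plain Java (this is not a Mule flow or the reference implementation; the URLs and timeouts are hypothetical placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;

// Sketch: call the primary and DR deployments of the system API in parallel and
// use whichever response arrives first. URLs and timeouts are made-up examples.
public class FastestResponseClient {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofMillis(100))
            .build();

    static CompletableFuture<String> call(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofMillis(500))
                .GET()
                .build();
        return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }

    public static void main(String[] args) {
        CompletableFuture<String> primary = call("https://sys-api.primary.example.com/orders/42");
        CompletableFuture<String> dr      = call("https://sys-api.dr.example.com/orders/42");

        // anyOf completes as soon as either future completes; the slower call is simply ignored.
        // (A production version would also handle the case where the first completion is a failure.)
        Object firstResponse = CompletableFuture.anyOf(primary, dr).join();
        System.out.println("Fastest response: " + firstResponse);
    }
}
```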

How are an API implementation, API client, and API consumer combined to invoke and process an API?

A. The API consumer creates an API implementation, which receives API invocations from an API such that they are processed for an API client
B. The API client creates an API consumer, which receives API invocations from an API such that they are processed for an API implementation
C. The API consumer creates an API client, which sends API invocations to an API such that they are processed by an API implementation
D. The API client creates an API consumer, which sends API invocations to an API such that they are processed by an API implementation
Suggested answer: C

Explanation:

The API consumer creates an API client, which sends API invocations to an API such that they are processed by an API implementation.

Terminology:
>> API client - a piece of code or a program that is written to invoke an API.
>> API consumer - the owner or entity who owns the API client. API consumers write API clients.
>> API - the provider of the API functionality; typically an API instance on API Manager, where APIs are managed and operated.
>> API implementation - the actual piece of code written by the API provider in which the functionality of the API is implemented; typically a Mule application running on Runtime Manager.

What are the major benefits of the MuleSoft-proposed IT Operating Model?

A. 1. Decrease the IT delivery gap. 2. Meet various business demands without increasing the IT capacity. 3. Focus on creating reusable assets first; only after all possible assets have been created, inform the LOBs in the organization to start using them.
B. 1. Decrease the IT delivery gap. 2. Meet various business demands by increasing the IT capacity and forming various IT departments. 3. Make consumption of assets keep pace with the rate of production.
C. 1. Decrease the IT delivery gap. 2. Meet various business demands without increasing the IT capacity. 3. Make consumption of assets keep pace with the rate of production.
Suggested answer: C

Explanation:

1. Decrease the IT delivery gap.
2. Meet various business demands without increasing the IT capacity.
3. Make consumption of assets keep pace with the rate of production.

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?

A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore
B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers
D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%
Suggested answer: D

Explanation:

Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%.

The scenario clearly states that the usual traffic throughout the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.

Based on that, we need neither to permanently increase the size of each worker nor to permanently increase the number of workers: outside those occasional peaks the extra resources would sit idle and be wasted.

Two options are left: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker. Two things need to be considered:

1. CPU
2. Order submission rate to the JMS queue

>> From the CPU perspective, both options (horizontal and vertical scaling) solve the issue; both bring utilization back below 90%.
>> With vertical scaling, however, the application is still load balanced across only two workers, so there may be little improvement in the incoming request processing rate and the order submission rate to the JMS queue. The throughput stays roughly the same; only the CPU utilization comes down.
>> With horizontal scaling, new workers are spawned and load balanced, adding capacity and increasing throughput. This addresses both the CPU load and the order submission rate.

Hence, a horizontal CloudHub autoscaling policy is the right answer.
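As a hedged illustration of the decision rule such a horizontal autoscaling policy encodes (this is not the CloudHub API; in practice the policy is configured in Runtime Manager, and the thresholds and worker limits below are assumptions based on the scenario), a small Java sketch:

```java
// Sketch of the scale-out/scale-in rule a horizontal autoscaling policy expresses:
// add workers while sustained CPU stays above 70%, and release them again once the
// seasonal peak passes. Thresholds and worker limits are illustrative assumptions.
public class HorizontalScalingRule {

    private static final double SCALE_OUT_CPU = 0.70; // scale out above 70% CPU
    private static final double SCALE_IN_CPU  = 0.30; // scale back in below 30% CPU
    private static final int MIN_WORKERS = 2;         // normal load: 2 x 0.2 vCore workers
    private static final int MAX_WORKERS = 8;         // cap sized for the 4x order peak

    static int desiredWorkers(int currentWorkers, double avgCpuUtilization) {
        if (avgCpuUtilization > SCALE_OUT_CPU && currentWorkers < MAX_WORKERS) {
            return currentWorkers + 1; // more workers behind the load balancer -> more throughput
        }
        if (avgCpuUtilization < SCALE_IN_CPU && currentWorkers > MIN_WORKERS) {
            return currentWorkers - 1; // shed idle capacity outside the peak
        }
        return currentWorkers;
    }

    public static void main(String[] args) {
        System.out.println(desiredWorkers(2, 0.92)); // during the order spike -> 3
        System.out.println(desiredWorkers(3, 0.25)); // after the spike subsides -> 2
    }
}
```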

What is true about API implementations when dealing with legal regulations that require all data processing to be performed within a certain jurisdiction (such as in the USA or the EU)?

A. They must avoid using the Object Store as it depends on services deployed ONLY to the US East region
B. They must use a jurisdiction-local external messaging system such as ActiveMQ rather than Anypoint MQ
C. They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction
D. They must ensure ALL data is encrypted both in transit and at rest
Suggested answer: C

Explanation:

They must be deployed to Anypoint Platform runtime planes that are managed by Anypoint Platform control planes, with both planes in the same jurisdiction.

>> Per the legal regulations, all data processing must be performed within a certain jurisdiction: data in the USA must stay within the USA, and data in the EU must stay within the EU.
>> Simply encrypting the data in transit and at rest does not make the implementation compliant; the data must also never leave the jurisdiction.
>> The data in question is not just the messages published to Anypoint MQ. It includes the running applications, transaction state, application logs, events, metrics, and other metadata. So merely replacing Anypoint MQ with a locally hosted ActiveMQ does NOT help.
>> Likewise, the data is not just the key/value pairs stored in the Object Store; it includes published messages, running applications, transaction state, application logs, events, metrics, and other metadata. So merely avoiding the Object Store does NOT help.
>> The only remaining option, and the correct one among the given choices, is to deploy the applications to runtime and control planes that are both within the jurisdiction.

An API implementation is being designed that must invoke an Order API, which is known to repeatedly experience downtime.

For this reason, a fallback API is to be called when the Order API is unavailable.

What approach to designing the invocation of the fallback API provides the best resilience?

A. Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API
B. Create a separate entry for the Order API in API Manager, and then invoke this API as a fallback API if the primary Order API is unavailable
C. Redirect client requests through an HTTP 307 Temporary Redirect status code to the fallback API whenever the Order API is unavailable
D. Set an option in the HTTP Requester component that invokes the Order API to instead invoke a fallback API whenever an HTTP 4xx or 5xx response status code is returned from the Order API
Suggested answer: A

Explanation:

Search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API.

>> Redirecting clients with an HTTP 3xx temporary redirect status code is not a good approach unless there is a pre-approved agreement with the API clients that they will receive such a redirect and will implement the fallback logic on their side to call another API.
>> Creating a separate entry for the same Order API in API Manager just creates another instance on top of the same API implementation. Using a clone of the same API as a fallback does no good; a fallback API should ideally be a different API implementation, not the same one as the primary.
>> The Anypoint HTTP connector currently provides no option to automatically invoke a fallback API when certain HTTP status codes are returned.

The only TRUE statement among the given options is to search Anypoint Exchange for a suitable existing fallback API, and then implement invocations to this fallback API in addition to the Order API.
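A minimal sketch of this fallback pattern in plain Java (not Mule configuration; both endpoints are hypothetical placeholders): try the Order API first, and call the separately implemented fallback API only when the primary is unavailable.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch: invoke the Order API, and fall back to a different API implementation
// (found on Anypoint Exchange) when the primary is unreachable or returns a 5xx.
public class OrderApiWithFallback {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    static String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 500) {
            throw new IllegalStateException("Order API returned " + response.statusCode());
        }
        return response.body();
    }

    static String getOrder(String orderId) {
        try {
            return get("https://orders.example.com/api/orders/" + orderId);              // primary
        } catch (Exception primaryFailure) {
            try {
                return get("https://orders-fallback.example.com/api/orders/" + orderId); // fallback
            } catch (Exception fallbackFailure) {
                throw new RuntimeException("Both the Order API and the fallback API failed",
                        fallbackFailure);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(getOrder("42"));
    }
}
```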

What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?

A. A decrease in the number of connections within the application network supporting the business process
B. A higher number of discoverable API-related assets in the application network
C. A better response time for the end user as a result of the APIs being smaller in scope and complexity
D. An overall lower usage of resources because each fine-grained API consumes fewer resources
Suggested answer: B

Explanation:

A higher number of discoverable API-related assets in the application network.

>> We do NOT get faster response times with a fine-grained approach compared to a coarse-grained approach.
>> In fact, a network of coarse-grained APIs typically gives faster response times than a network of fine-grained APIs, for the following reasons.

A fine-grained approach:
1. has more APIs than a coarse-grained approach;
2. so more orchestration is needed to achieve a piece of functionality in the business process;
3. which means many more API calls, so more connections must be established - more hops, more network I/O, and more integration points than a coarse-grained approach, where fewer APIs have broader functionality embedded in them;
4. because of all these extra hops and added latencies, the fine-grained approach has somewhat higher response times than the coarse-grained one;
5. in addition to the extra latency and connections, more resources are used because of the larger number of APIs.

That is why fine-grained APIs are good for exposing a larger number of reusable, discoverable assets in the application network, but they need more maintenance and care around integration points, connections, and resources, with a small compromise in network hops and response times.

An organization makes a strategic decision to move towards an IT operating model that emphasizes consumption of reusable IT assets using modern APIs (as defined by MuleSoft).

What best describes each modern API in relation to this new IT operating model?

A. Each modern API has its own software development lifecycle, which reduces the need for documentation and automation
B. Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers)
C. Each modern API must be easy to consume, so it should avoid complex authentication mechanisms such as SAML or JWT
D. Each modern API must be REST and HTTP based
Suggested answer: B

Explanation:

Each modern API must be treated like a product and designed for a particular target audience (for instance, mobile app developers).

Which of the following best fits the definition of API-led connectivity?

A. API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
B. API-led connectivity is a 3-layered architecture covering Experience, Process and System layers
C. API-led connectivity is a technology which enables us to implement Experience, Process and System layer-based APIs
Answers
Suggested answer: A

Explanation:

API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization.

What API policy would LEAST likely be applied to a Process API?

A. Custom circuit breaker
B. Client ID enforcement
C. Rate limiting
D. JSON threat protection
Suggested answer: D

Explanation:

JSON threat protection.

Fact: technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered before blindly applying policies to APIs. That is why this question asks for the policy that would LEAST likely be applied to a Process API.

From the given options:
>> All policies except JSON threat protection can be applied without hesitation to APIs in the Process tier.
>> The JSON threat protection policy ideally fits Experience APIs, to block suspicious JSON payloads coming from external API clients. It covers a security aspect by stopping possibly malicious and harmful JSON payloads from external clients calling Experience APIs.

Because external API clients are never allowed to call Process APIs directly, and because such malicious or harmful JSON payloads are already stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again to a Process-layer API.
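To make the policy's intent concrete, here is a small, self-contained Java sketch of the kind of structural check a JSON threat protection policy performs at the Experience layer (a hypothetical nesting-depth limit; the real policy also bounds things like string lengths and array sizes):

```java
// Sketch of a JSON-threat-protection-style structural check: reject a payload whose
// object/array nesting depth exceeds a limit before any business logic sees it.
// This is not the gateway policy itself, just an illustration of the core idea.
public class JsonDepthGuard {

    private static final int MAX_DEPTH = 10; // hypothetical limit

    static boolean isWithinDepthLimit(String json) {
        int depth = 0;
        boolean inString = false;
        for (int i = 0; i < json.length(); i++) {
            char c = json.charAt(i);
            if (inString) {
                if (c == '\\') { i++; }               // skip the escaped character inside a string
                else if (c == '"') { inString = false; }
            } else if (c == '"') {
                inString = true;
            } else if (c == '{' || c == '[') {
                if (++depth > MAX_DEPTH) { return false; } // excessively nested payload
            } else if (c == '}' || c == ']') {
                depth--;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isWithinDepthLimit("{\"order\":{\"id\":42}}"));   // true
        System.out.println(isWithinDepthLimit("[[[[[[[[[[[[1]]]]]]]]]]]]")); // false (depth 12)
    }
}
```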
