Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers, Page 11


What is a key difference between synchronous and asynchronous logging from Mule applications?

A.
Synchronous logging writes log messages in a single logging thread but does not block the Mule event being processed by the next event processor

B.
Asynchronous logging can improve Mule event processing throughput while also reducing the processing time for each Mule event

C.
Asynchronous logging produces more reliable audit trails with more accurate timestamps

D.
Synchronous logging within an ongoing transaction writes log messages in the same thread that processes the current Mule event
Suggested answer: B

Explanation:

Types of logging:

A) Synchronous:

The execution of the thread that is processing the Mule event is interrupted to wait for the log message to be fully written before it can continue.

Performance degrades because of synchronous logging.

Used when the log serves as an audit trail or when logging ERROR/CRITICAL messages.

If the logger fails to write to disk, the exception is raised on the same thread that is processing the current Mule event, so if logging is critical for you, the transaction can be rolled back.

B) Asynchronous:

The logging operation occurs in a separate thread, so the actual processing of your message won't be delayed to wait for the logging to complete

Substantial improvement in throughput and latency of message processing

Mule runtime engine (Mule) 4 uses Log4j 2 asynchronous logging by default

The disadvantage of asynchronous logging is error handling: if the logger fails to write to disk, the thread doing the processing is not aware of any issue writing to disk, so nothing can be rolled back. Because the actual writing of the log is deferred, log messages might never make it to disk and be lost if Mule were to crash before the buffers are flushed.
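For illustration, a minimal log4j2.xml sketch (file paths and the audit logger category are hypothetical) showing the asynchronous root logger that Mule 4 uses by default alongside a synchronous logger for an audit category:

<?xml version="1.0" encoding="utf-8"?>
<Configuration>
    <Appenders>
        <!-- Rolling file appender for the application log -->
        <RollingFile name="file"
                     fileName="${sys:mule.home}/logs/app.log"
                     filePattern="${sys:mule.home}/logs/app-%i.log">
            <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="10 MB"/>
            </Policies>
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- Synchronous logger: blocks the processing thread, suitable for audit trails -->
        <Logger name="com.acme.audit" level="INFO" additivity="false">
            <AppenderRef ref="file"/>
        </Logger>
        <!-- Asynchronous root logger (the Mule 4 default): better throughput and latency -->
        <AsyncRoot level="INFO">
            <AppenderRef ref="file"/>
        </AsyncRoot>
    </Loggers>
</Configuration>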

------------------------------------------------------------------------------------------------------------------

So the correct answer is: Asynchronous logging can improve Mule event processing throughput while also reducing the processing time for each Mule event

A global, high-volume shopping Mule application is being built and will be deployed to CloudHub. To improve performance, the Mule application uses a Cache scope that maintains cache state in a CloudHub object store. Web clients will access the Mule application over HTTP from all around the world, with peak volume coinciding with business hours in the web client's geographic location. To achieve optimal performance, what Anypoint Platform region should be chosen for the CloudHub object store?

A.
Choose the same region as where the Mule application is deployed

B.
Choose the US-West region, the only supported region for CloudHub object stores

C.
Choose the geographically closest available region for each web client

D.
Choose a region that is the traffic-weighted geographic center of all web clients
Suggested answer: A

Explanation:

The CloudHub object store should be in the same region as the one where the Mule application is deployed; this gives optimal performance.

Before looking at the Cache scope and object store in Mule 4, let us first review what caching is in general.

WHAT DOES "CACHING" MEAN?

Caching is the process of storing frequently used data in memory, a file system, or a database, which saves the processing time and load of retrieving it from the original source location every time.

In computing, a cache is a high-speed data storage layer which stores a subset of data, so that future requests for that data are served up faster than is possible by accessing the data's primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.

How does Caching work?

The data in a cache is generally stored in fast access hardware such as RAM (Random-access memory) and may also be used in correlation with a software component. A cache's primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.

Caching in MULE 4

In Mule 4, caching can be achieved using the Cache scope and/or the Object Store. The Cache scope internally uses an Object Store to store the data.

What is an Object Store?

Object Store lets applications store data and state across batch processes, Mule components, and applications, from within an application. When used on CloudHub, the object store is shared between the workers of the deployed application.

The Cache scope is used in the following cases:

Need to store the whole response from the outbound processor

Data returned from the outbound processor does not change very frequently

As the Cache scope internally handles the cache hit and cache miss scenarios, it is more readable

The Object Store is used in the following cases:

Need to store custom/intermediary data

To store watermarks

Sharing data/state across applications, schedulers, and batch jobs

If the CloudHub object store is in the same region as the one where the Mule application is deployed, data access is fast and performance is optimal.
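As a minimal sketch (store, strategy, and flow names are illustrative), a Cache scope backed by a persistent object store; on CloudHub the persistent store lives in the same region as the deployed application:

<os:object-store name="quoteStore" persistent="true"
                 entryTtl="1" entryTtlUnit="HOURS"/>

<ee:object-store-caching-strategy name="cachingStrategy"
                                  objectStore="quoteStore"/>

<flow name="getProductsFlow">
    <http:listener config-ref="httpListenerConfig" path="/products"/>
    <!-- Cached responses are served without invoking the backend again -->
    <ee:cache cachingStrategy-ref="cachingStrategy">
        <http:request config-ref="backendRequestConfig" method="GET" path="/products"/>
    </ee:cache>
</flow>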

An organization is evaluating using the CloudHub shared Load Balancer (SLB) vs creating a CloudHub dedicated load balancer (DLB). They are evaluating how this choice affects the various types of certificates used by CloudHub deployed Mule applications, including MuleSoft-provided, customer-provided, or Mule application-provided certificates. What type of restrictions exist on the types of certificates for the service that can be exposed by the CloudHub Shared Load Balancer (SLB) to external web clients over the public internet?

A.
Underlying Mule applications need to implement their own certificates

B.
Only MuleSoft-provided certificates can be used for the server-side certificate

C.
Only self-signed certificates can be used

D.
All certificates which can be used in the shared load balancer need to be approved by raising a support ticket
Suggested answer: B

Explanation:

The correct answer is: Only MuleSoft-provided certificates can be used for the server-side certificate.

* The CloudHub Shared Load Balancer terminates TLS connections and uses its own server-side certificate.

* You would need to use a dedicated load balancer, which enables you to define SSL configurations to provide custom certificates and, optionally, enforce two-way SSL client authentication.

* To use a dedicated load balancer in your environment, you must first create an Anypoint VPC. Because you can associate multiple environments with the same Anypoint VPC, you can use the same dedicated load balancer for your different environments.
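For context, a minimal sketch (config names are illustrative) of how an application behind the SLB typically listens on plain HTTP via the reserved http.port placeholder, while TLS is terminated at the load balancer with the MuleSoft-provided certificate:

<http:listener-config name="httpListenerConfig">
    <!-- CloudHub routes SLB traffic to the worker on ${http.port};
         the application itself holds no server-side certificate -->
    <http:listener-connection host="0.0.0.0" port="${http.port}"/>
</http:listener-config>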


An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store connector to persist the cache's state?

A.
When there is one deployment of the API implementation to CloudHub and another one to a customer-hosted Mule runtime that must share the cache state.

B.
When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state.

C.
When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.

D.
When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state.
Suggested answer: C

Explanation:

The Object Store Connector is a Mule component that allows for simple key-value storage. Although it can serve a wide variety of use cases, it is mainly designed for:

- Storing synchronization information, such as watermarks.

- Storing temporal information, such as access tokens.

- Storing user information.

Additionally, the Mule runtime uses Object Stores to support some of its own components, for example:

- The Cache module uses an Object Store to maintain all of the cached data.

- The OAuth module (and every OAuth-enabled connector) uses Object Stores to store the access and refresh tokens.

Object Store data is kept in the same region as the worker where the app is initially deployed. For example, if you deploy to the Singapore region, the object store persists in the Singapore region. MuleSoft reference: https://docs.mulesoft.com/object-store-connector/1.1/

Data can be shared between different instances (workers) of the same Mule application, but this is not recommended for inter-application communication. Coming to the question, the object store cannot be used to share cached data between separate Mule applications or across separate business groups. Hence the correct answer is: When there is one CloudHub deployment of the API implementation to three workers that must share the cache state.
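A minimal sketch (store, key, and config names are illustrative) of persisting the cached quote so that all three workers of the same application share it:

<os:object-store name="quoteCache" persistent="true"/>

<flow name="getQuoteFlow">
    <http:listener config-ref="httpListenerConfig" path="/quote"/>
    <!-- A persistent store is shared by all workers of this application -->
    <os:retrieve key="todaysQuote" objectStore="quoteCache" target="cachedQuote">
        <os:default-value>#[null]</os:default-value>
    </os:retrieve>
    <choice>
        <when expression="#[vars.cachedQuote == null]">
            <http:request config-ref="quoteApiConfig" method="GET" path="/quote"/>
            <os:store key="todaysQuote" objectStore="quoteCache">
                <os:value>#[payload]</os:value>
            </os:store>
        </when>
        <otherwise>
            <set-payload value="#[vars.cachedQuote]"/>
        </otherwise>
    </choice>
</flow>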

An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications. The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations. What out-of-the-box Anypoint Platform policy can address exposure to this threat?

A.
Apply a Header injection and removal policy that detects the malicious data before it is used

B.
Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors

C.
Shut out bad actors by using HTTPS mutual authentication for all API invocations

D.
Apply a JSON threat protection policy to all APIs to detect potential threat vectors
Suggested answer: D

Explanation:

We need to note a few things about the scenario, which will help us reach the correct solution.

Point 1: The APIs are all publicly available and are associated with several mobile applications and web applications. This means applying an IP blacklist policy is not a viable option, as blacklisting IPs covers only part of the web traffic and is not useful for traffic from mobile applications.

Point 2: The organization does NOT want to use any authentication or compliance policies for these APIs. This means we cannot apply an HTTPS mutual authentication scheme.

Header injection or removal will not serve the purpose either.

By its nature, JSON is vulnerable to JavaScript injection. When you parse the JSON object, the malicious code inflicts its damages. An inordinate increase in the size and depth of the JSON payload can indicate injection. Applying the JSON threat protection policy can limit the size of your JSON payload and thwart recursive additions to the JSON hierarchy.
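As an illustrative sketch only (the values are arbitrary, and the parameter names should be verified against the policy version in use), the policy's configurable limits look roughly like:

{
    "maxContainerDepth": 10,
    "maxStringValueLength": 1024,
    "maxObjectEntryNameLength": 64,
    "maxObjectEntryCount": 100,
    "maxArrayElementCount": 50
}

A payload nested deeper than the allowed depth, or containing oversized strings or huge arrays, is then rejected before it ever reaches the API implementation.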

Hence the correct answer is: Apply a JSON threat protection policy to all APIs to detect potential threat vectors

A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms maximum (99th percentile) response time. The corresponding API implementation needs to sequentially invoke 3 downstream APIs of very similar complexity. The first of these downstream APIs offers the following SLA for its response time: median: 100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms. If possible, how can a timeout be set in the upstream API for the invocation of the first downstream API to meet the new upstream API's desired SLA?

A.
Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete

B.
Do not set a timeout; the invocation of this API is mandatory and so we must wait until it responds

C.
Set a timeout of 50 ms; this times out more invocations of that API but gives additional room for retries

D.
No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or invoke an alternative API
Suggested answer: D

Explanation:

Before we answer this question, we need to understand what the median (50th percentile) and the 80th percentile mean. If the 50th percentile (median) of a response time is 500 ms, that means 50% of the transactions are as fast as or faster than 500 ms.

If the 90th percentile of the same transaction is 1000 ms, it means that 90% are as fast or faster and only 10% are slower. Now, as per the upstream SLA, the 99th percentile is 800 ms, which means 99% of incoming requests must have a response time of 800 ms or less. But as per the first downstream API, its 95th percentile is 1000 ms, meaning 5% of its requests take more than 1000 ms. As there are three sequential API invocations from the upstream API, no timeout can be set that meets the desired SLA, because the downstream SLAs do not support it.

Let see why other answers are not correct.

1) Do not set a timeout --> This can potentially violate the SLA of the upstream API.

2) Set a timeout of 100 ms ---> This will not work, as the downstream API has a median of 100 ms, meaning only 50% of requests are answered within this time and the other 50% time out. An important thing to note here: all APIs are executed sequentially, so if the first API times out, there is no point in calling the second and third APIs. As a service provider you would not want to keep 50% of your consumers dissatisfied, so this is not the best option.

* To quote an example, assume you have built an API to update customer contact details.

- The first API fetches the customer number based on login credentials

- The second API fetches info from a table and returns a unique key

- The third API, using the unique key from the second API as the primary key, updates the remaining details

* Now consider: if the first API times out and cannot fetch the customer number, it is useless to call APIs 2 and 3, which is why the question specifically states that all APIs are executed sequentially.

3) Set a timeout of 50 ms --> Again not possible, for the same reason as above.

Hence the correct answer is: No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or invoke an alternative API
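To make the arithmetic concrete: an 800 ms budget across three sequential calls of similar complexity leaves roughly 267 ms per call, yet the first API's 80th percentile is already 500 ms, so any such timeout would fail well over 20% of invocations. Had a workable value existed, it would be set on the HTTP requester (a sketch with illustrative names; responseTimeout is in milliseconds):

<http:request config-ref="firstDownstreamApiConfig" method="GET" path="/first"
              responseTimeout="267"/>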

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0 following accepted semantic versioning practices and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?

A.
The update should be identified as a project risk and full regression testing of the functionality that uses this API should be run.

B.
The API producer should be contacted to understand the change to existing functionality.

C.
The API producer should be requested to run the old version in parallel with the new one.

D.
The API client code ONLY needs to be changed if it needs to take advantage of new features.
Suggested answer: D

Explanation:

* Semantic Versioning is a 3-component number in the format X.Y.Z, where:

X stands for a major version.

Y stands for a minor version.

Z stands for a patch.

So, SemVer is of the form Major.Minor.Patch. Coming to our question, the minor version of the API has been changed, which is backward compatible (for example, 3.1.1 to 3.2.0 only adds functionality, whereas a jump to 4.0.0 could break existing clients). Hence no change is required on the API client's end. Only if clients want to make use of new features added as part of the minor version change might they need to change code at their end. Hence the correct answer is: The API client code ONLY needs to be changed if it needs to take advantage of new features.

When designing an upstream API and its implementation, the development team has been advised to not set timeouts when invoking the downstream API, because the downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?

A.
The invocation of the downstream API will run to completion without timing out.

B.
An SLA for the upstream API CANNOT be provided.

C.
A default timeout of 500 ms will automatically be applied by the Mule runtime in which the upstream API implementation executes.

D.
A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes.
Suggested answer: B

Explanation:

An SLA for the upstream API CANNOT be provided. Without a timeout on its only downstream dependency, the upstream API's response time is bounded only by the downstream API, which offers no SLA; therefore no response-time guarantee can be promised to the upstream API's clients.

What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?

A.
Compile, package, unit test, validate unit test coverage, deploy

B.
Compile, package, unit test, deploy, integration test

C.
Compile, package, unit test, deploy, create associated API instances in API Manager

D.
Import from API designer, compile, package, unit test, deploy, publish to Anypoint Exchange
Suggested answer: A

Explanation:

The correct answer is: Compile, package, unit test, validate unit test coverage, deploy.

Anypoint Platform supports continuous integration and continuous delivery using industry-standard tools.

Mule Maven Plugin: The Mule Maven plugin can automate building, packaging, and deployment of Mule applications from source projects. Using the Mule Maven plugin, you can automate your Mule application deployment to CloudHub, to Anypoint Runtime Fabric, or on-premises, using any of the following deployment strategies:

* CloudHub deployment

* Runtime Fabric deployment

* Runtime Manager REST API deployment

* Runtime Manager agent deployment

MUnit Maven Plugin: The MUnit Maven plugin can automate test execution and ties in with the Mule Maven plugin. It provides a full suite of integration and unit test capabilities and is fully integrated with Maven and Surefire for integration with your continuous deployment environment. Since MUnit 2.x, the coverage-report goal is integrated with the Maven reporting section; coverage reports are generated during Maven's site lifecycle, during the coverage-report goal. One of the features of MUnit Coverage is to fail the build if a certain coverage level is not reached.

MUnit is not used for integration testing. Also, publishing to Anypoint Exchange and creating associated API instances in API Manager are not parts of a CI/CD pipeline that can be achieved using the MuleSoft-provided Maven plugins.
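A minimal pom.xml sketch of the two plugins (versions and the coverage threshold are illustrative; deployment configuration is omitted):

<!-- Builds, packages, and deploys the Mule application -->
<plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <version>3.8.2</version>
    <extensions>true</extensions>
</plugin>

<!-- Runs MUnit tests and validates unit test coverage -->
<plugin>
    <groupId>com.mulesoft.munit.tools</groupId>
    <artifactId>munit-maven-plugin</artifactId>
    <version>2.3.15</version>
    <executions>
        <execution>
            <phase>test</phase>
            <goals>
                <goal>test</goal>
                <goal>coverage-report</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <coverage>
            <runCoverage>true</runCoverage>
            <failBuild>true</failBuild>
            <requiredApplicationCoverage>80</requiredApplicationCoverage>
        </coverage>
    </configuration>
</plugin>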

Persistent Object Store is the correct answer when a watermark must be shared across all CloudHub workers of an application.

* Mule Object Stores: An object store is a facility for storing objects in or across Mule applications. Mule uses object stores to persist data for eventual retrieval.

Mule provides two types of object stores:

1) In-memory store -- stores objects in local Mule runtime memory. Objects are lost on shutdown of the Mule runtime, so we cannot use the in-memory store in this scenario, where the watermark must be shared across all CloudHub workers.

2) Persistent store -- Mule persists data when an object store is explicitly configured to be persistent. Hence the watermark remains available even if any of the workers goes down.
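A minimal sketch (store name, key, and polling frequency are illustrative) of a persistent watermark shared by all workers:

<os:object-store name="watermarkStore" persistent="true"/>

<flow name="syncFlow">
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="60000"/>
        </scheduling-strategy>
    </scheduler>
    <!-- Read the last watermark; fall back to an epoch default on first run -->
    <os:retrieve key="lastSync" objectStore="watermarkStore" target="lastSync">
        <os:default-value>#['1970-01-01T00:00:00Z']</os:default-value>
    </os:retrieve>
    <!-- ... fetch records updated since vars.lastSync ... -->
    <os:store key="lastSync" objectStore="watermarkStore">
        <os:value>#[now() as String]</os:value>
    </os:store>
</flow>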

An automobile company wants to share inventory updates with dealers D1 and D2 asynchronously and concurrently via queues Q1 and Q2. Dealer D1 must consume messages from queue Q1 and dealer D2 must consume messages from queue Q2.

Dealer D1 has implemented a retry mechanism to reprocess the transaction in case of any errors while processing the inventory updates. Dealer D2 has not implemented any retry mechanism.

How should the dealers acknowledge the messages to avoid message loss and minimize the impact on the current implementation?

A.
Dealer D1 must use auto acknowledgement and dealer D2 can use manual acknowledgement and acknowledge the message after successful processing

B.
Dealer D1 can use auto acknowledgement and dealer D2 can use IMMEDIATE acknowledgement and acknowledge the message after successful processing

C.
Dealer D1 and dealer D2 must use AUTO acknowledgement and acknowledge the message after successful processing

D.
Dealer D1 can use AUTO acknowledgement and dealer D2 must use manual acknowledgement and acknowledge the message after successful processing
Suggested answer: D
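
Explanation:

Dealer D1 already has its own retry mechanism, so AUTO acknowledgement is acceptable: a transaction that fails during processing is reprocessed by D1's retry logic. Dealer D2 has no retry mechanism, so it must use MANUAL acknowledgement and acknowledge only after successful processing; if processing fails, the unacknowledged message stays on Q2 and is redelivered, avoiding message loss with minimal change to the current implementation.

A minimal sketch for dealer D2, assuming Anypoint MQ (the IMMEDIATE mode named in option B is Anypoint MQ terminology) with illustrative config and flow names:

<flow name="dealerD2Flow">
    <!-- MANUAL mode: the message is not acknowledged on receipt -->
    <anypoint-mq:subscriber config-ref="amqConfig" destination="Q2"
                            acknowledgementMode="MANUAL"/>
    <flow-ref name="processInventoryUpdate"/>
    <!-- Acknowledge only after successful processing; on error the
         message remains on Q2 and is redelivered -->
    <anypoint-mq:ack config-ref="amqConfig" ackToken="#[attributes.ackToken]"/>
</flow>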