Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers, Page 5

Refer to the exhibit.

A Mule application is being designed to be deployed to several CloudHub workers. The Mule application's integration logic is to replicate changed Accounts from Salesforce to a backend system every 5 minutes.

A watermark will be used to only retrieve those Salesforce Accounts that have been modified since the last time the integration logic ran.

What is the most appropriate way to implement persistence for the watermark in order to support the required data replication integration logic?

A. Persistent Anypoint MQ Queue
B. Persistent Object Store
C. Persistent Cache Scope
D. Persistent VM Queue
Suggested answer: B

Explanation:

* An object store is a facility for storing objects in or across Mule applications. Mule uses object stores to persist data for eventual retrieval.

* Mule provides two types of object stores:

1) In-memory store -- stores objects in local Mule runtime memory. Objects are lost on shutdown of the Mule runtime.

2) Persistent store -- Mule persists data when an object store is explicitly configured to be persistent.

In a standalone Mule runtime, Mule creates a default persistent store in the file system. If you do not specify an object store, the default persistent object store is used.

MuleSoft Reference: https://docs.mulesoft.com/mule-runtime/3.9/mule-object-stores
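The following is a minimal sketch of this approach, assuming Mule 4 and the Object Store connector; the store name, key, default value, and the Salesforce query placeholder are illustrative assumptions, not part of the question. On CloudHub, an object store configured as persistent is backed by the platform Object Store service, so the watermark survives restarts and is visible to all workers of the application.

<!-- Persistent object store holding the watermark -->
<os:object-store name="watermarkStore" persistent="true"/>

<flow name="replicate-changed-accounts">
    <!-- Trigger the replication logic every 5 minutes -->
    <scheduler>
        <scheduling-strategy>
            <fixed-frequency frequency="5" timeUnit="MINUTES"/>
        </scheduling-strategy>
    </scheduler>

    <!-- Read the previous watermark; fall back to a default value on the first run -->
    <os:retrieve key="lastRunTimestamp" objectStore="watermarkStore" target="watermark">
        <os:default-value>#["1970-01-01T00:00:00.000Z"]</os:default-value>
    </os:retrieve>

    <!-- Query only the Salesforce Accounts modified since vars.watermark,
         then replicate them to the backend system (operations omitted) -->

    <!-- Persist the new watermark so the next run, possibly on another worker, sees it -->
    <os:store key="lastRunTimestamp" objectStore="watermarkStore">
        <os:value>#[now() as String {format: "yyyy-MM-dd'T'HH:mm:ss.SSS"}]</os:value>
    </os:store>
</flow>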

A new Mule application under development must implement extensive data transformation logic. Some of the data transformation functionality is already available as external transformation services that are mature and widely used across the organization; the rest is highly specific to the new Mule application.

The organization follows a rigorous testing approach, where every service and application must be extensively acceptance tested before it is allowed to go into production.

What is the best way to implement the data transformation logic for this new Mule application while minimizing the overall testing effort?

A. Implement and expose all transformation logic as microservices using DataWeave, so it can be reused by any application component that needs it, including the new Mule application
B. Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services
C. Extend the existing transformation services with new transformation logic and invoke them from the new Mule application
D. Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible
Suggested answer: D

Explanation:

Correct answer is 'Implement transformation logic in the new Mule application using DataWeave, invoking existing transformation services when possible'. The key requirement is minimizing the overall testing effort:

* 'Extend the existing transformation services with new transformation logic' is not feasible because the additional functionality is highly specific to the new Mule application, so it should not become part of commonly used services. This option is ruled out.

* 'Implement transformation logic in the new Mule application using DataWeave, replicating the transformation logic of existing transformation services' would duplicate logic that is already mature and tested, and the replicated logic would have to be acceptance tested again. This option is ruled out.

* 'Implement and expose all transformation logic as microservices using DataWeave, so it can be reused by any application component that needs it' is ruled out because the question states that part of the transformation logic is specific to the new Mule application and will not be used outside it.
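A hedged sketch of this approach follows; the HTTP request configuration name, the service path and the field mapping are illustrative assumptions only. The existing, already acceptance-tested transformation service is invoked over HTTP, and only the application-specific mapping is new DataWeave.

<flow name="new-app-transformation">
    <!-- Reuse an existing, already acceptance-tested transformation service -->
    <http:request method="POST" config-ref="Transformation_Service_Config"
                  path="/transform/customer"/>

    <!-- Implement only the application-specific mapping in DataWeave -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
    customerId: payload.id,
    displayName: (payload.firstName default "") ++ " " ++ (payload.lastName default "")
}]]></ee:set-payload>
        </ee:message>
    </ee:transform>
</flow>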

A Mule application uses the Database connector.

What condition can the Mule application automatically adjust to or recover from without needing to restart or redeploy the Mule application?

A. One of the stored procedures being called by the Mule application has been renamed
B. The database server was unavailable for four hours due to a major outage but is now fully operational again
C. The credentials for accessing the database have been updated and the previous credentials are no longer valid
D. The database server has been updated and hence the database driver library/JAR needs a minor version upgrade
Suggested answer: B

Explanation:

* Any change required within the Mule application itself needs a restart or redeployment; the application can only recover automatically from conditions that are external to it. For the situations below, you would need to change and redeploy the application:

-- One of the stored procedures being called by the Mule application has been renamed: the Mule application must be changed to call the new stored procedure name.

-- The database driver library/JAR needs a minor version upgrade: the driver is packaged with the application, so the application must be rebuilt and redeployed.

-- The credentials for accessing the database have been updated and the previous credentials are no longer valid: depending on how the credentials are configured (connector configuration or properties), the application must be updated and restarted or redeployed.

* So the correct answer is 'The database server was unavailable for four hours due to a major outage but is now fully operational again', as this is the only condition external to the application. With a reconnection strategy configured on the Database connection, the application recovers automatically once the database is reachable again, as shown in the sketch below.
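A minimal sketch of such a reconnection strategy, assuming a Mule 4 Database connector configuration; the MySQL connection type, property placeholders and retry frequency are illustrative assumptions.

<db:config name="Database_Config">
    <db:my-sql-connection host="${db.host}" port="${db.port}"
                          user="${db.user}" password="${db.password}"
                          database="${db.schema}">
        <!-- Keep retrying every 30 seconds until the database is available again -->
        <reconnection failsDeployment="false">
            <reconnect-forever frequency="30000"/>
        </reconnection>
    </db:my-sql-connection>
</db:config>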

Refer to the exhibit.

Anypoint Platform supports role-based access control (RBAC) to features of the platform. An organization has configured an external Identity Provider for identity management with Anypoint Platform.

What aspects of RBAC must ALWAYS be controlled from the Anypoint Platform control plane and CANNOT be controlled via the external Identity Provider?

A. Controlling the business group within Anypoint Platform to which the user belongs
B. Assigning Anypoint Platform permissions to a role
C. Assigning Anypoint Platform role(s) to a user
D. Removing a user's access to Anypoint Platform when they no longer work for the organization
Suggested answer: B

Explanation:

* By default, Anypoint Platform performs its own user management.

-- For user management, one external IdP can be integrated with the Anypoint Platform organization (note: not at the business group level).

-- Permissions and access control are still enforced inside Anypoint Platform and CANNOT be controlled via the external Identity Provider.

* As the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).

* You can map users in a federated organization's group to a role, which also gives the flexibility of controlling the business group within Anypoint Platform to which the user belongs. A user can also be removed from the external identity management system when they no longer work for the organization, so they will no longer be able to authenticate via SSO to log in to Anypoint Platform.

* Using an external identity provider, you cannot change the permissions assigned to a role in Anypoint Platform.

* So the correct answer is 'Assigning Anypoint Platform permissions to a role'.

An organization uses Mule runtimes which are managed by Anypoint Platform - Private Cloud Edition. What MuleSoft component is responsible for feeding analytics data to non-MuleSoft analytics platforms?

A. Anypoint Exchange
B. The Mule runtimes
C. Anypoint API Manager
D. Anypoint Runtime Manager
Suggested answer: D

Explanation:



Correct answer is Anypoint Runtime Manager

MuleSoft Anypoint Runtime Manager (ARM) provides connectivity to Mule runtime engines deployed across your organization to deliver centralized management, monitoring, and analytics reporting. However, most enterprise customers find it necessary for these on-premises runtimes to integrate with their existing non-MuleSoft analytics/monitoring systems, such as Splunk and ELK, to support a single pane of glass view across the infrastructure.

* You can configure the Runtime Manager agent to export data to external analytics tools.

Using either the Runtime Manager cloud console or Anypoint Platform Private Cloud Edition, you can:

--> Send Mule event notifications, including flow executions and exceptions, to Splunk or ELK.

--> Send API Analytics to Splunk or ELK.

Note: Sending data to third-party tools in this way is not supported for applications deployed on CloudHub; for those, you can use the CloudHub custom log appender to integrate with your logging system.

Reference: https://docs.mulesoft.com/runtime-manager/ https://docs.mulesoft.com/release-notes/runtime-manager-agent/runtime-manager-agent-release-notes

Additional Info:

It can be achieved in 3 steps:

1) Register the Runtime Manager agent with Runtime Manager,

2) Configure a gateway to enable API analytics to be sent to the non-MuleSoft analytics platform (Splunk, for example), and

3) Set up dashboards.

A Mule application is being designed to do the following:

Step 1: Read a SalesOrder message from a JMS queue, where each SalesOrder consists of a header and a list of SalesOrderLineItems.

Step 2: Insert the SalesOrder header and each SalesOrderLineItem into different tables in an RDBMS.

Step 3: Insert the SalesOrder header and the sum of the prices of all its SalesOrderLineItems into a table in a different RDBMS.

No SalesOrder message can be lost and the consistency of all SalesOrder-related information in both RDBMSs must be ensured at all times.

What design choice (including choice of transactions) and order of steps addresses these requirements?

A. 1) Read the JMS message (NOT in an XA transaction) 2) Perform BOTH DB inserts in ONE DB transaction 3) Acknowledge the JMS message
B. 1) Read the JMS message (NOT in an XA transaction) 2) Perform EACH DB insert in a SEPARATE DB transaction 3) Acknowledge the JMS message
C. 1) Read the JMS message in an XA transaction 2) In the SAME XA transaction, perform BOTH DB inserts but do NOT acknowledge the JMS message
D. 1) Read and acknowledge the JMS message (NOT in an XA transaction) 2) In a NEW XA transaction, perform BOTH DB inserts
Suggested answer: C

Explanation:

Option B says 'Perform EACH DB insert in a SEPARATE DB transaction'. In this case, if the first DB insert succeeds and the second one fails, the first insert is not rolled back, causing inconsistency. This option is ruled out.

Option A says perform BOTH DB inserts in ONE DB transaction. The two inserts target different RDBMSs, and a local (non-XA) transaction supports only one resource; the rule of thumb is that when more than one resource is involved, an XA transaction must be used. So this option is also ruled out.

Option D acknowledges the JMS message before the DB processing, so the message is removed from the queue. In case of a system failure at a later point, the message cannot be retrieved, so a SalesOrder could be lost. This option is ruled out.

Option C is valid: although it says 'do NOT acknowledge the JMS message', the message is automatically acknowledged when the XA transaction commits. Here is how all components can be made part of an XA transaction: https://docs.mulesoft.com/jms-connector/1.7/jms-transactions
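A hedged configuration sketch of option C follows; the queue name, configuration names, table names and SQL are placeholders, and an XA transaction manager (for example the Bitronix module) plus XA-capable JMS and DB connections are assumed.

<!-- XA transaction manager provided by the Bitronix module -->
<bti:transaction-manager/>

<flow name="process-sales-order">
    <!-- The XA transaction begins when the SalesOrder message is consumed -->
    <jms:listener config-ref="JMS_Config" destination="salesOrderQueue"
                  transactionalAction="ALWAYS_BEGIN" transactionType="XA"/>

    <!-- Insert the SalesOrder header into the first RDBMS, joining the XA transaction
         (line-item inserts omitted for brevity) -->
    <db:insert config-ref="Orders_DB" transactionalAction="ALWAYS_JOIN">
        <db:sql>INSERT INTO sales_order_header (order_id, customer) VALUES (:orderId, :customer)</db:sql>
        <db:input-parameters>#[{orderId: payload.header.id, customer: payload.header.customer}]</db:input-parameters>
    </db:insert>

    <!-- Insert the header and the summed line-item prices into the second RDBMS -->
    <db:insert config-ref="Summary_DB" transactionalAction="ALWAYS_JOIN">
        <db:sql>INSERT INTO sales_order_summary (order_id, total_price) VALUES (:orderId, :total)</db:sql>
        <db:input-parameters>#[{orderId: payload.header.id, total: sum(payload.lines.price default [])}]</db:input-parameters>
    </db:insert>

    <!-- If any step fails, the XA transaction rolls back and the message returns to the
         queue; on success, the commit also acknowledges the JMS message -->
</flow>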

Additional Information about transactions:

XA Transactions - You can use an XA transaction to group together a series of operations from multiple transactional resources, such as JMS, VM or JDBC resources, into a single, very reliable, global transaction.

The XA (eXtended Architecture) standard is an X/Open group standard which specifies the interface between a global transaction manager and local transactional resource managers.

The XA protocol defines a 2-phase commit protocol which can be used to more reliably coordinate and sequence a series of 'all or nothing' operations across multiple servers, even servers of different types.

Use JMS ack if

-- Acknowledgment should occur eventually, perhaps asynchronously

-- The performance of the message receipt is paramount

-- The message processing is idempotent

-- For the choreography portion of the SAGA pattern

Use JMS transactions

-- For all other times in the integration you want to perform an atomic unit of work

-- When the unit of work comprises more than the receipt of a single message

-- To simplify and unify the programming model (begin/commit/rollback)

What metrics about API invocations are available for visualization in custom charts using Anypoint Analytics?

A. Request size, request HTTP verbs, response time
B. Request size, number of requests, JDBC Select operation result set size
C. Request size, number of requests, response size, response time
D. Request size, number of requests, JDBC Select operation response time
Suggested answer: C

Explanation:

Correct answer is 'Request size, number of requests, response size, response time'.

Anypoint Analytics can provide insight into how your APIs are being used and how they are performing. From API Manager, you can access the Analytics dashboard, create a custom dashboard, create and manage charts, and create reports. From API Manager, you can get the following types of analytics:

- API viewing analytics

- API events analytics

- Charted metrics in API Manager

It can be accessed using: http://anypoint.mulesoft.com/analytics

API Analytics provides a summary in chart form of requests, top apps, and latency for a particular duration.

The custom dashboard in Anypoint Analytics contains a set of charts for a single API or for all APIs. Each chart displays various API characteristics:

-- Request size: Line chart representing the size of requests in KBs

-- Requests: Line chart representing the number of requests over a period

-- Response size: Line chart representing the size of responses in KBs

-- Response time: Line chart representing response time in ms

* To check this, You can go to API Manager > Analytics > Custom Dashboard > Edit Dashboard > Create Chart > Metric

Additional Information:

The default dashboard contains a set of charts

-- Requests by date: Line chart representing number of requests

-- Requests by location: Map chart showing the number of requests for each country of origin

-- Requests by application: Bar chart showing the number of requests from each of the top five registered applications

-- Requests by platform: Ring chart showing the number of requests broken down by platform

What aspects of a CI/CD pipeline for Mule applications can be automated using MuleSoft-provided Maven plugins?

A. Compile, package, unit test, deploy, create associated API instances in API Manager
B. Import from API designer, compile, package, unit test, deploy, publish to Anypoint Exchange
C. Compile, package, unit test, validate unit test coverage, deploy
D. Compile, package, unit test, deploy, integration test
Suggested answer: C
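For context on why C is the suggested answer: the MuleSoft-provided mule-maven-plugin automates packaging and deployment, while the munit-maven-plugin runs MUnit unit tests and can fail the build when a coverage requirement is not met. Below is a hedged pom.xml sketch; the plugin versions, application name, environment and coverage threshold are placeholder assumptions, and deployment credentials are omitted.

<build>
    <plugins>
        <!-- Packages the Mule application and deploys it (here, to CloudHub) -->
        <plugin>
            <groupId>org.mule.tools.maven</groupId>
            <artifactId>mule-maven-plugin</artifactId>
            <version>3.8.2</version>
            <extensions>true</extensions>
            <configuration>
                <cloudHubDeployment>
                    <uri>https://anypoint.mulesoft.com</uri>
                    <muleVersion>4.4.0</muleVersion>
                    <applicationName>my-mule-app</applicationName>
                    <environment>Sandbox</environment>
                </cloudHubDeployment>
            </configuration>
        </plugin>
        <!-- Runs MUnit unit tests and validates unit test coverage -->
        <plugin>
            <groupId>com.mulesoft.munit.tools</groupId>
            <artifactId>munit-maven-plugin</artifactId>
            <version>2.3.15</version>
            <executions>
                <execution>
                    <goals>
                        <goal>test</goal>
                        <goal>coverage-report</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <coverage>
                    <runCoverage>true</runCoverage>
                    <failBuild>true</failBuild>
                    <requiredApplicationCoverage>80</requiredApplicationCoverage>
                </coverage>
            </configuration>
        </plugin>
    </plugins>
</build>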

A Mule application currently writes to two separate SQL Server database instances across the internet using a single XA transaction. It is proposed to split this one transaction into two separate non-XA transactions with no other changes to the Mule application.

What non-functional requirement can be expected to be negatively affected when implementing this change?

A. Throughput
B. Consistency
C. Response time
D. Availability
Suggested answer: B

Explanation:

Correct answer is Consistency, as XA transactions are implemented to achieve exactly this. XA transactions are added to an implementation to achieve the ACID properties. In the context of transaction processing, the acronym ACID refers to the four key properties of a transaction: atomicity, consistency, isolation, and durability.

Atomicity: All changes to data are performed as if they are a single operation; either all of the changes are performed, or none of them are. For example, in an application that transfers funds from one account to another, atomicity ensures that, if a debit is made successfully from one account, the corresponding credit is made to the other account.

Consistency: Data is in a consistent state when a transaction starts and when it ends. For example, in an application that transfers funds from one account to another, consistency ensures that the total value of funds in both accounts is the same at the start and end of each transaction.

Isolation: The intermediate state of a transaction is invisible to other transactions. As a result, transactions that run concurrently appear to be serialized. For example, in an application that transfers funds from one account to another, isolation ensures that another transaction sees the transferred funds in one account or the other, but not in both, nor in neither.

Durability: After a transaction successfully completes, changes to data persist and are not undone, even in the event of a system failure. For example, in an application that transfers funds from one account to another, durability ensures that the changes made to each account will not be reversed.

MuleSoft reference: https://docs.mulesoft.com/mule-runtime/4.3/xa-transactions
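As a hedged sketch of the proposed change (configuration names, table names and SQL are placeholders): each insert now runs in its own local, single-resource transaction inside a Try scope, so if the second transaction fails after the first has committed, the two SQL Server databases are left inconsistent.

<flow name="write-to-both-sql-servers">
    <!-- First local transaction: commits independently of the second -->
    <try transactionalAction="ALWAYS_BEGIN" transactionType="LOCAL">
        <db:insert config-ref="SqlServer_A" transactionalAction="ALWAYS_JOIN">
            <db:sql>INSERT INTO orders_a (order_id) VALUES (:orderId)</db:sql>
            <db:input-parameters>#[{orderId: payload.id}]</db:input-parameters>
        </db:insert>
    </try>

    <!-- Second local transaction: if this fails, the first insert is NOT rolled back -->
    <try transactionalAction="ALWAYS_BEGIN" transactionType="LOCAL">
        <db:insert config-ref="SqlServer_B" transactionalAction="ALWAYS_JOIN">
            <db:sql>INSERT INTO orders_b (order_id) VALUES (:orderId)</db:sql>
            <db:input-parameters>#[{orderId: payload.id}]</db:input-parameters>
        </db:insert>
    </try>
</flow>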

A Mule application contains a Batch Job with two Batch Steps (Batch_Step_1 and Batch_Step_2). A payload with 1000 records is received by the Batch Job.

How many threads are used by the Batch Job to process records, and how does each Batch Step process records within the Batch Job?

A. Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN PARALLEL within and between the two Batch Steps
B. Each Batch Job uses a SINGLE THREAD for all Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN ORDER, first through Batch_Step_1 and then through Batch_Step_2
C. Each Batch Job uses a SINGLE THREAD to process a configured block size of records. Each Batch Step instance receives A BLOCK OF records as the payload, and BLOCKS of records are processed IN ORDER
D. Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and BATCH STEP INSTANCES execute IN PARALLEL to process records and Batch Steps in ANY order as fast as possible
Suggested answer: A

Explanation:

* Each Batch Job uses SEVERAL THREADS for the Batch Steps.

* Each Batch Step instance receives ONE record at a time as the payload. Records are not received as a block, and a record does not wait for other records to complete before moving to the next Batch Step. (So option C is ruled out.)

* RECORDS are processed IN PARALLEL within and between the two Batch Steps, but each individual record still moves through the Batch Steps in order, so Batch Steps do not execute in ANY order. (So option D is ruled out.)

* RECORDS are not processed in a strict order across the job: if the second record completes Batch_Step_1 before record 1, it moves on to Batch_Step_2 before record 1. (So option B is ruled out.)

* A batch job is the scope element in an application in which Mule processes a message payload as a batch of records. The term batch job includes all three phases of processing: Load and Dispatch, Process, and On Complete.

* A batch job instance is an occurrence in a Mule application whenever a Mule flow executes a batch job. Mule creates the batch job instance in the Load and Dispatch phase. Every batch job instance is identified internally by a unique String known as the batch job instance ID.
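A minimal sketch of the Batch Job described in the question, assuming Mule 4 batch syntax; the flow name, job name, block size and logger messages are illustrative. Records are queued during Load and Dispatch, dispatched in blocks to multiple threads, and each Batch Step instance still receives one record at a time as the payload.

<flow name="process-sales-records">
    <batch:job jobName="accountBatchJob" blockSize="100">
        <batch:process-records>
            <batch:step name="Batch_Step_1">
                <!-- Each step instance receives ONE record as the payload -->
                <logger level="INFO" message="#['Batch_Step_1 processing: ' ++ write(payload)]"/>
            </batch:step>
            <batch:step name="Batch_Step_2">
                <logger level="INFO" message="#['Batch_Step_2 processing: ' ++ write(payload)]"/>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <!-- The On Complete phase receives a summary of the batch job result -->
            <logger level="INFO" message="#['Batch job finished: ' ++ write(payload)]"/>
        </batch:on-complete>
    </batch:job>
</flow>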
