
Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers, Page 4


Question 31


An organization is creating a set of new services that are critical for their business. The project team prefers using REST for all services but is willing to use SOAP with common WS-* standards if a particular service requires it.

What requirement would drive the team to use SOAP/WS-* for a particular service?

A. Must use XML payloads for the service and ensure that it adheres to a specific schema
B. Must publish and share the service specification (including data formats) with the consumers of the service
C. Must support message acknowledgement and retry as part of the protocol
D. Must secure the service, requiring all consumers to submit a valid SAML token
Suggested answer: D

Explanation:

Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SP).

SAML transactions use Extensible Markup Language (XML) for standardized communications between the identity provider and service providers.

SAML is the link between the authentication of a user's identity and the authorization to use a service.

WS-Security is the key extension that supports many authentication models, including basic username/password credentials, SAML, OAuth, and more.

A common way that SOAP APIs are authenticated is via SAML Single Sign-On (SSO). SAML works by facilitating the exchange of authentication and authorization credentials across applications. However, there is no specification that describes how to add SAML to REST web services.
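To make the SOAP/WS-Security relationship concrete, the sketch below shows the general shape of a SOAP request carrying a SAML assertion inside a `wsse:Security` header. This is a minimal illustrative skeleton, not a valid token: a real assertion also carries a signature, conditions, and subject-confirmation data, and the issuer/subject values here are made up. The namespace URIs are taken from the OASIS WS-Security and SAML 2.0 specifications.

```python
import xml.etree.ElementTree as ET

# Illustrative skeleton only: a SOAP envelope whose WS-Security header
# carries a (heavily abbreviated) SAML 2.0 assertion.
SOAP_WITH_SAML = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <soapenv:Header>
    <wsse:Security>
      <saml:Assertion ID="_abc123" Version="2.0">
        <saml:Issuer>https://idp.example.com</saml:Issuer>
        <saml:Subject>
          <saml:NameID>jdoe@example.com</saml:NameID>
        </saml:Subject>
      </saml:Assertion>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>
    <!-- service-specific request payload goes here -->
  </soapenv:Body>
</soapenv:Envelope>"""

ns = {
    "soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
    "wsse": "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
    "saml": "urn:oasis:names:tc:SAML:2.0:assertion",
}
root = ET.fromstring(SOAP_WITH_SAML)
# A WS-Security-aware service would locate and validate the assertion here.
assertion = root.find("soapenv:Header/wsse:Security/saml:Assertion", ns)
print(assertion.get("Version"))  # prints "2.0"
```

The point of the structure: the token travels in the SOAP header, standardized by WS-Security, while the body stays untouched. REST has no equivalent standardized header envelope for SAML tokens.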


Question 32


Refer to the exhibit.

[Exhibit: Salesforce Certified MuleSoft Integration Architect I, Question 32 image]

A business process involves two APIs that interact with each other asynchronously over HTTP. Each API is implemented as a Mule application. API 1 receives the initial HTTP request and invokes API 2 (in a fire-and-forget fashion), while API 2, upon completion of its processing, calls back into API 1 to notify it about completion of the asynchronous process.

Each API is deployed to multiple redundant Mule runtimes behind its own load balancer, and each API is deployed to a separate network zone.

In the network architecture, how must the firewall rules be configured to enable the above interaction between API 1 and API 2?

A. To authorize the certificate to be used by both APIs
B. To enable communication from each API's Mule runtimes and network zone to the load balancer of the other API
C. To open direct two-way communication between the Mule runtimes of both APIs
D. To allow communication between the load balancers used by each API
Suggested answer: B

Explanation:

* If your API implementation involves putting a load balancer in front of your APIkit application, configure the load balancer to redirect URLs that reference the baseUri of the application directly. If the load balancer does not redirect URLs, any calls that reach the load balancer looking for the application do not reach their destination.

* When you receive incoming traffic through the load balancer, the responses go out the same way. However, traffic that originates from your instance does not pass through the load balancer; it is sent directly from the public IP address of your instance out to the internet. The load balancer is not involved in that scenario.

* The question says "each API is deployed to multiple redundant Mule runtimes", which hints at a customer-hosted Mule runtime cluster. Allow inbound traffic to each load balancer, and allow outbound traffic from each runtime so it can make requests out.

* Hence the correct approach is to enable communication from each API's Mule runtimes and network zone to the load balancer of the other API. Because the interaction is asynchronous, each side initiates its own outbound call through the other side's load balancer.


Question 33


An organization is designing the following two Mule applications that must share data via a common persistent object store instance:

- Mule application P will be deployed within their on-premises datacenter.

- Mule application C will run on CloudHub in an Anypoint VPC.

The object store implementation used by CloudHub is the Anypoint Object Store v2 (OSv2).

What type of object store(s) should be used, and what design gives both Mule applications access to the same object store instance?

A. Application P uses the Object Store connector to access a persistent object store. Application C accesses this persistent object store via the Object Store REST API through an IPsec tunnel.
B. Applications C and P both use the Object Store connector to access the Anypoint Object Store v2.
C. Application C uses the Object Store connector to access a persistent object store. Application P accesses the persistent object store via the Object Store REST API.
D. Applications C and P both use the Object Store connector to access a persistent object store.
Suggested answer: C

Explanation:

The correct answer is: Application C uses the Object Store connector to access a persistent object store, and Application P accesses the persistent object store via the Object Store REST API.

* Object Store v2 lets CloudHub applications store data and state across batch processes, Mule components, and applications, from within an application or by using the Object Store REST API.
* On-premises Mule applications cannot use Object Store v2 through the connector.
* You can select Object Store v2 as the implementation for Mule 3 and Mule 4 in CloudHub by checking the Object Store V2 checkbox in Runtime Manager at deployment time.
* CloudHub Mule applications can use the Object Store connector to write to the object store.
* The only way on-premises Mule applications can access Object Store v2 is via the Object Store REST API.
* You can configure a Mule app to use the Object Store REST API to store and retrieve values from an object store in another Mule app.
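As a rough illustration of what the on-premises side (application P) would do, the sketch below builds, but does not send, an HTTP PUT against the Object Store v2 REST API. The regional base URL, path shape, IDs, and token are all placeholder assumptions; confirm the exact endpoint format against the Object Store v2 REST API reference for your control plane before using it.

```python
from urllib.request import Request

# Hypothetical identifiers -- substitute real org/env/store IDs and a
# bearer token obtained from Anypoint Access Management.
ORG_ID = "my-org-id"
ENV_ID = "my-env-id"
STORE_ID = "my-store-id"
TOKEN = "my-access-token"

# Assumed regional endpoint and path shape; verify against the official
# Object Store v2 REST API documentation.
BASE = "https://object-store-us-east-1.anypoint.mulesoft.com/api/v1"

def build_put_request(key: str, value: str) -> Request:
    """Build (but do not send) a request that stores `value` under `key`."""
    url = f"{BASE}/organizations/{ORG_ID}/environments/{ENV_ID}/stores/{STORE_ID}/keys/{key}"
    return Request(
        url,
        data=value.encode("utf-8"),
        method="PUT",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_put_request("customer-42", '"pending"')
print(req.method, req.full_url)
```

Application C, meanwhile, simply uses the Object Store connector in its flow; only the on-premises application needs to fall back to the REST API.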


Question 34


What limits whether a particular Anypoint Platform user can discover an asset in Anypoint Exchange?

A. Design Center and RAML were both used to create the asset
B. The existence of a public Anypoint Exchange portal to which the asset has been published
C. The type of the asset in Anypoint Exchange
D. The business groups to which the user belongs
Suggested answer: D

Explanation:

* 'The existence of a public Anypoint Exchange portal to which the asset has been published' is incorrect: the question does not mention a public portal, and besides, the public portal is open to anyone on the internet.
* If you cannot find an asset in the current business group scope, search in other scopes. In the left navigation bar, click All assets (assets provided by MuleSoft and your own master organization), Provided by MuleSoft, or a business group scope. A user belonging to one business group can see only the assets related to that group.

The correct answer is: The business groups to which the user belongs.

Reference: https://docs.mulesoft.com/exchange/to-find-info and https://docs.mulesoft.com/exchange/asset-details


Question 35


When using Anypoint Platform across various lines of business with their own Anypoint Platform business groups, what configuration of Anypoint Platform is always performed at the organization level as opposed to at the business group level?

A. Environment setup
B. Identity management setup
C. Role and permission setup
D. Dedicated Load Balancer setup
Suggested answer: B

Explanation:

* Configure identity management in the Anypoint Platform master organization. As the Anypoint Platform organization administrator, you can configure identity management in Anypoint Platform to set up users for single sign-on (SSO).
* Roles and permissions can be set up at both the business group level and the organization level, but identity management setup is done only at the organization level.
* Business groups are self-contained resource groups that contain Anypoint Platform resources such as applications and APIs. Business groups provide a way to separate and control access to Anypoint Platform resources, because users have access only to the business groups they belong to.


Question 36


Mule application A receives an Anypoint MQ request message REQU with a payload containing a variable-length list of request objects. Application A uses the For Each scope to split the list into individual objects and sends each object as a message to an Anypoint MQ queue.

Service S listens on that queue, processes each message independently of all other messages, and sends a response message to a response queue.

Application A listens on that response queue and must in turn create and publish a response Anypoint MQ message RESP with a payload containing the list of responses sent by service S, in the same order as the request objects originally sent in REQU.

Assume successful response messages are returned by service S for all request messages.

What is required so that application A can ensure that the length and order of the list of objects in RESP and REQU match, while at the same time maximizing message throughput?

A. Use a Scatter-Gather within the For Each scope to ensure response message order. Configure the Scatter-Gather with a persistent object store.
B. Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.
C. Use an Async scope within the For Each scope and collect response messages in a second For Each scope in the order in which they arrive, then send RESP using this list of responses.
D. Keep track of the list length and all object indices in REQU, both in the For Each scope and in all communication involving service S. Use persistent storage when creating RESP.
Suggested answer: B

Explanation:

The correct answer is: Perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.

* Anypoint MQ offers two types of queues. Standard queues do not guarantee a specific message order and are the best fit for applications in which messages must be delivered quickly. FIFO (first in, first out) queues ensure that messages arrive in order and are the best fit for applications requiring strict message ordering and exactly-once delivery, at the cost of delivery speed.
* A FIFO queue does not appear in any of the options, and it would also decrease throughput.
* A persistent object store is not the preferred approach when maximizing message throughput, which rules out one option.
* Scatter-Gather does not support an object store, which rules out another option.
* Standard Anypoint MQ queues do not guarantee a specific message order, so collecting responses in a second For Each scope in arrival order will not satisfy the ordering requirement.
* Considering all of the above, the feasible approach is to perform all communication involving service S synchronously from within the For Each scope, so objects in RESP are in the exact same order as request objects in REQU.
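The ordering argument can be modeled in a few lines. In this sketch, assumed for illustration, `call_service_s` stands in for the request/response exchange with service S, and the loop models the For Each scope making that call synchronously, one object at a time, so the response list necessarily lines up with the request list.

```python
# Minimal model of the recommended approach: process each request object
# synchronously, in order, so RESP lines up one-to-one with REQU.

def call_service_s(request_obj: dict) -> dict:
    # Stand-in for service S: echo the id back with a completion status.
    return {"id": request_obj["id"], "status": "done"}

def process_requ(requ_payload: list) -> list:
    resp = []
    for obj in requ_payload:              # For Each scope, one object at a time
        resp.append(call_service_s(obj))  # synchronous call: order preserved
    return resp

requ = [{"id": i} for i in range(5)]
resp = process_requ(requ)
print([r["id"] for r in resp])  # prints [0, 1, 2, 3, 4]
```

With an async dispatch and a standard (non-FIFO) queue, the append order would instead depend on arrival order, which is exactly what the rejected options cannot guarantee.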


Question 37


Refer to the exhibit.

A Mule application is being designed to expose a SOAP web service to its clients.

What language is typically used inside the web service's interface definition to define the data structures that the web service is expected to exchange with its clients?

[Exhibit: Salesforce Certified MuleSoft Integration Architect I, Question 37 image]

A. WSDL
B. XSD
C. JSON Schema
D. RAML
Suggested answer: B

Explanation:

Answer: XSD. A WSDL describes the web service's interface (operations, bindings, endpoints), but the data structures themselves are defined with XML Schema (XSD), embedded in or imported into the WSDL's types section. In the contract-first approach to developing a web service, you begin with this schema.

Reference: https://www.w3schools.com/xml/schema_intro.asp
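To show what "defining data structures with XSD" looks like, here is a small illustrative schema (the `Order` type and its fields are invented for this example), embedded as a string and walked with the standard library to list the fields it declares. In a real service, a schema like this would sit inside the WSDL's types section or be imported from a separate .xsd file.

```python
import xml.etree.ElementTree as ET

# Illustrative XSD: one data structure a SOAP service might exchange.
ORDER_XSD = """<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/orders"
           elementFormDefault="qualified">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="orderId" type="xs:string"/>
        <xs:element name="quantity" type="xs:int"/>
        <xs:element name="total" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

schema = ET.fromstring(ORDER_XSD)
# Collect the typed leaf fields declared for the Order element.
fields = [
    el.get("name")
    for el in schema.iter("{http://www.w3.org/2001/XMLSchema}element")
    if el.get("type")
]
print(fields)  # prints ['orderId', 'quantity', 'total']
```

The typed `xs:sequence` is what gives SOAP clients and servers a shared, validatable contract for the payload, which is exactly the role XSD plays inside the interface definition.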


Question 38


An organization has various integrations implemented as Mule applications. Some of these Mule applications are deployed to customer-hosted Mule runtimes (on-premises) while others execute in the MuleSoft-hosted runtime plane (CloudHub). To perform the integration functionality, these Mule applications connect to various backend systems, with multiple applications typically needing to access the same backend systems.

How can the organization most effectively avoid creating duplicates in each Mule application of the credentials required to access the backend systems?

A. Create a Mule domain project that maintains the credentials as Mule domain-shared resources. Deploy the Mule applications to the Mule domain, so the credentials are available to the Mule applications.
B. Store the credentials in properties files in a shared folder within the organization's data center. Have the Mule applications load the properties files from this shared location at startup.
C. Segregate the credentials for each backend system into environment-specific properties files. Package these properties files in each Mule application, from where they are loaded at startup.
D. Configure or create a credentials service that returns the credentials for each backend system, and that is accessible from both customer-hosted and MuleSoft-hosted Mule runtimes. Have the Mule applications load the properties at startup by invoking that credentials service.
Suggested answer: D

Explanation:

* 'Create a Mule domain project that maintains the credentials as Mule domain-shared resources' is wrong because domain projects are not supported on CloudHub.
* The goal is to avoid duplicating credentials in each Mule application, but two of the options cause exactly that duplication: storing the credentials in properties files in a shared folder within the organization's data center, and packaging environment-specific properties files in each Mule application. These choices are therefore also wrong.
* A credentials service is the best approach in this scenario. It keeps a single authoritative copy of each credential, whereas maintaining multiple copies of configuration values makes them difficult to maintain.
* To protect the credentials themselves, use the Mule Credentials Vault to encrypt data in a .properties file. The properties file stores data as key-value pairs, which may contain information such as usernames, first and last names, and credit card numbers. A Mule application may access this data as it processes messages, for example to acquire login credentials for an external web service. Because this sensitive, private data must be stored in a properties file for Mule to access, it must also be protected against unauthorized, and potentially malicious, use by anyone with access to the Mule application.
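A rough sketch of the credentials-service pattern is below. The endpoint, response shape, and `erp` backend name are all hypothetical; a real deployment would use TLS with client authentication, and the service would typically sit in front of a secrets manager. The HTTP call is injectable so the sketch runs offline.

```python
import json

# Hypothetical central endpoint; in practice this must be reachable from
# both customer-hosted and CloudHub runtimes.
CREDENTIALS_SERVICE_URL = "https://credentials.internal.example.com/api/credentials"

def fetch_credentials(backend: str, http_get=None) -> dict:
    """Fetch the credentials for `backend` once, at application startup.

    `http_get` is injectable so this sketch can run without a network.
    """
    http_get = http_get or _default_http_get
    raw = http_get(f"{CREDENTIALS_SERVICE_URL}/{backend}")
    return json.loads(raw)

def _default_http_get(url: str) -> str:
    from urllib.request import urlopen
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Offline stub standing in for the credentials service.
def fake_http_get(url: str) -> str:
    assert url.endswith("/erp")
    return json.dumps({"username": "svc-erp", "password": "s3cret"})

creds = fetch_credentials("erp", http_get=fake_http_get)
print(creds["username"])  # prints "svc-erp"
```

Each Mule application would invoke such a service at startup and hold the result in its configuration, so rotating a backend credential means updating one service rather than redeploying every application.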


Question 39


Refer to the exhibit.

[Exhibit: Salesforce Certified MuleSoft Integration Architect I, Question 39 image]

A Mule application is deployed to a cluster of two customer-hosted Mule runtimes. The Mule application has a flow that polls a database and another flow with an HTTP Listener.

HTTP clients send HTTP requests directly to individual cluster nodes.

What happens to database polling and HTTP request handling in the time after the primary (master) node of the cluster has failed, but before that node is restarted?

A. Database polling continues. Only HTTP requests sent to the remaining node continue to be accepted.
B. Database polling stops. All HTTP requests continue to be accepted.
C. Database polling continues. All HTTP requests continue to be accepted, but requests to the failed node incur increased latency.
D. Database polling stops. All HTTP requests are rejected.
Suggested answer: A

Explanation:

The correct answer is: Database polling continues; only HTTP requests sent to the remaining node continue to be accepted. When node 1 is down, database polling continues via node 2, which takes over as the primary node. Requests arriving directly at node 2 are also accepted and processed as usual. What stops working are requests sent to node 1's HTTP Listener. The flaw in this architecture is that HTTP clients send requests directly to individual cluster nodes rather than through a load balancer. By default, clustering Mule runtime engines ensures high system availability: if a Mule runtime engine node becomes unavailable due to failure or planned downtime, another node in the cluster assumes the workload and continues to process existing events and messages.



Question 40


A global organization operates datacenters in many countries. There are private network links between these datacenters because all business data (but NOT metadata) must be exchanged over these private network connections.

The organization does not currently use AWS in any way.

The strategic decision has just been made to rigorously minimize IT operations effort and investment going forward.

What combination of deployment options of the Anypoint Platform control plane and runtime plane(s) best serves this organization at the start of this strategic journey?

A. MuleSoft-hosted Anypoint Platform control plane; CloudHub Shared Worker Cloud in multiple AWS regions
B. Anypoint Platform Private Cloud Edition; customer-hosted runtime plane in each datacenter
C. MuleSoft-hosted Anypoint Platform control plane; customer-hosted runtime plane in multiple AWS regions
D. MuleSoft-hosted Anypoint Platform control plane; customer-hosted runtime plane in each datacenter
Suggested answer: D

Explanation:

The correct answer is: MuleSoft-hosted Anypoint Platform control plane with a customer-hosted runtime plane in each datacenter. Two points in the question lead to this answer:

* Business data must be exchanged over the private network connections, which rules out the MuleSoft-hosted CloudHub option for the runtime plane. That leaves either a customer-hosted runtime in an external cloud provider or a customer-hosted runtime on the organization's own premises. Since the organization does not currently use AWS, a customer-hosted runtime plane in multiple AWS regions is not an immediate option, so the most suitable runtime plane is a customer-hosted runtime plane in each datacenter.
* Metadata is not restricted to the organization's premises, so the MuleSoft-hosted Anypoint Platform control plane can be used as the strategic solution.

This hybrid model (MuleSoft-hosted control plane, customer-hosted runtime plane) is the best starting point; as the organization matures in its cloud migration, everything can move to MuleSoft-hosted.
