Salesforce Certified MuleSoft Integration Architect I Practice Test - Questions Answers, Page 12


A company is using MuleSoft to develop APIs and deploy them to CloudHub and on-premises targets. Recently, it decided to enable the Runtime Fabric deployment option as well, and the infrastructure for this option has been set up.

What can be used to deploy applications to Runtime Fabric?

A. Anypoint CLI
B. Anypoint Platform REST APIs
C. Directly uploading a JAR file from Runtime Manager
D. Mule Maven plugin
Suggested answer: D
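
The Mule Maven plugin automates deployment to Runtime Fabric from a Maven build. Below is a minimal, hypothetical pom.xml sketch of the plugin's runtimeFabricDeployment section; the target name, application name, version numbers, and credential properties are placeholders, not values taken from this question:

  <plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <version>3.8.0</version>
    <extensions>true</extensions>
    <configuration>
      <runtimeFabricDeployment>
        <uri>https://anypoint.mulesoft.com</uri>
        <provider>MC</provider>
        <muleVersion>4.4.0</muleVersion>
        <username>${anypoint.username}</username>
        <password>${anypoint.password}</password>
        <applicationName>orders-api</applicationName>
        <environment>Sandbox</environment>
        <target>rtf-cluster</target>
        <deploymentSettings>
          <replicationFactor>2</replicationFactor>
        </deploymentSettings>
      </runtimeFabricDeployment>
    </configuration>
  </plugin>

With this in place, running mvn clean deploy -DmuleDeploy packages the application and deploys it to the configured Runtime Fabric target.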

As an enterprise architect, for which two reasons would you use a canonical data model in a new integration project using the MuleSoft Anypoint Platform? (Choose two answers.)

A. To have a consistent data structure aligned across processes
B. To isolate areas within a bounded context
C. To incorporate industry-standard data formats
D. There are multiple canonical definitions of each data type
E. Because the model isolates the back-end systems and supporting Mule applications from change
Suggested answer: A, B

A company is planning to migrate its deployment environment from an on-premises cluster to a Runtime Fabric (RTF) cluster. It also has a requirement to enable Mule applications deployed to a Mule runtime instance to store and share data across application replicas and restarts.

How can these requirements be met?

A. Anypoint Object Store V2 to share data between replicas in the RTF cluster
B. Install the object store pod on one of the cluster nodes
C. Configure Persistence Gateway in any of the servers using Mule Object Store
D. Configure Persistence Gateway at the RTF cluster
Suggested answer: D
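
Persistence Gateway is enabled at the Runtime Fabric level by the cluster operator (it is backed by an external database configured through a Kubernetes custom resource). Once it is available, Mule applications only need to declare their Object Stores as persistent for data to survive replica restarts and be shared across replicas. A minimal, hypothetical sketch of the application side; the store name, key, and flow are illustrative only:

  <os:object-store name="orderStatusStore" persistent="true"/>

  <flow name="recordStatus">
    <!-- With Persistence Gateway enabled on the RTF cluster, this write is persisted
         outside the replica and is visible to other replicas and across restarts -->
    <os:store key="#[vars.orderId]" objectStore="orderStatusStore">
      <os:value>#[payload.status]</os:value>
    </os:store>
  </flow>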

An organization is designing a hybrid, load-balanced, single-cluster production environment. Due to performance service level agreement goals, it is looking into running its Mule applications in an active-active multi-node cluster configuration.

What should be considered when running its Mule applications in this type of environment?

A. All event sources, regardless of type, can be configured as the target source by the primary node in the cluster
B. An external load balancer is required to distribute incoming requests throughout the cluster nodes
C. A Mule application deployed to multiple nodes runs in isolation from the other nodes in the cluster
D. Although the cluster environment is fully installed, configured, and running, it will not process any requests until an outage condition is detected by the primary node in the cluster.
Suggested answer: C
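
One cluster-specific behavior worth knowing when reasoning about these options: some event sources should fire on only one node at a time, and many Mule 4 connectors expose a primaryNodeOnly attribute for exactly this. A minimal, hypothetical sketch (the JMS configuration name and queue name are placeholders):

  <flow name="pollOrders">
    <!-- primaryNodeOnly="true" restricts this source to the cluster's primary node;
         if that node fails, another node becomes primary and takes over -->
    <jms:listener config-ref="JMS_Config" destination="ordersQueue" primaryNodeOnly="true"/>
    <logger level="INFO" message="#['Consumed on primary node: ' ++ correlationId]"/>
  </flow>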

An organization has decided on a cloud migration strategy to minimize the organization's own IT resources. Currently, the organization has all of its new applications running on premises and uses an on-premises load balancer that exposes all APIs under the base URL (https://api.rutujar.com).

As part of the migration strategy, the organization is planning to migrate all of its new applications and the load balancer to CloudHub.

What is the most straightforward and cost-effective approach to Mule application deployment and load balancing that preserves the public URLs?

A. Deploy the Mule applications to CloudHub. Create a CNAME record for the base URL (https://api.rutujar.com) in the CloudHub shared load balancer that points to the A record of the on-premises load balancer. Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
B. Deploy the Mule applications to CloudHub. Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub dedicated load balancer. Apply mapping rules in the DLB to map URLs to their corresponding Mule applications.
C. Deploy the Mule applications to CloudHub. Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub shared load balancer. Apply mapping rules in the SLB to map URLs to their corresponding Mule applications.
D. For each migrated Mule application, deploy an API proxy application to CloudHub, with all traffic to the Mule applications routed through a CloudHub dedicated load balancer (DLB). Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub DLB. Apply mapping rules in the DLB to map each API proxy application to its corresponding Mule application.
Suggested answer: B
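
The DNS side of the suggested answer amounts to a single record change. A hypothetical zone-file sketch, where the DLB host name is a placeholder (each DLB gets a DNS name of the form <name>.lb.anypointdns.net):

  ; delegate the public base URL to the CloudHub dedicated load balancer
  api.rutujar.com.  IN  CNAME  myorg-dlb.lb.anypointdns.net.

The DLB's mapping rules then route each path under https://api.rutujar.com to the correct Mule application inside the VPC.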

What condition requires using a CloudHub Dedicated Load Balancer?

A. When cross-region load balancing is required between separate deployments of the same Mule application
B. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C. When API invocations across multiple CloudHub workers must be load balanced
D. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients
Suggested answer: D

Explanation:

The correct answer is: When server-side load-balanced TLS mutual authentication is required between API implementations and API clients.

CloudHub dedicated load balancers (DLBs) are an optional component of Anypoint Platform that enable you to route external HTTP and HTTPS traffic to multiple Mule applications deployed to CloudHub workers in a Virtual Private Cloud (VPC). Dedicated load balancers enable you to:

* Handle load balancing among the different CloudHub workers that run your application.
* Define SSL configurations to provide custom certificates and optionally enforce two-way SSL client authentication.
* Configure proxy rules that map your applications to custom domains. This enables you to host your applications under a single domain.
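
The proxy (mapping) rules mentioned above are configured on the DLB in Runtime Manager. A hypothetical sketch of the documented pattern style, where {app} is a variable captured from the request path; a rule like the one below would send https://api.example.com/orders-api/... to the orders-api application:

  Input path: /{app}/    Target app: {app}    Output path: /

Two-way TLS is configured on the same DLB by uploading a client verification certificate alongside the server certificate, so client authentication terminates at the load balancer rather than in each Mule application.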

A company is building an application network and has deployed four Mule APIs: one experience API, one process API, and two system APIs. The logs from all the APIs are aggregated in an external log aggregation tool. The company wants to trace messages that are exchanged between multiple API implementations. What is the most idiomatic (based on its intended use) identifier that should be used to implement Mule event tracing across the multiple API implementations?

A. Mule event ID
B. Mule correlation ID
C. Client's IP address
D. DataWeave UUID
Suggested answer: B

Explanation:

The correct answer is Mule correlation ID. By design, correlation IDs cannot be changed within a flow in Mule 4 applications and can be set only at the source. This ID is part of the event context and is generated as soon as the message is received by the application. When an HTTP request is received, the request is inspected for an 'X-Correlation-Id' header. If the header is present, the HTTP connector uses it as the correlation ID; if it is not present, a correlation ID is randomly generated.

For incoming HTTP requests: in order to set a custom correlation ID, the client invoking the HTTP request must set the 'X-Correlation-Id' header. This ensures that the Mule flow uses this correlation ID.

For outgoing HTTP requests: you can also propagate the existing correlation ID to downstream APIs. By default, all outgoing HTTP requests send the 'X-Correlation-Id' header. However, you can choose to set a different value for the 'X-Correlation-Id' header or set 'Send Correlation Id' to NEVER.
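
In a Mule 4 application, the correlation ID is exposed to expressions as the correlationId variable, and the HTTP connector propagates it downstream by default. A minimal, hypothetical sketch (configuration names and paths are placeholders):

  <flow name="orders-flow">
    <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
    <!-- correlationId was taken from the incoming X-Correlation-Id header,
         or generated at the source if the header was absent -->
    <logger level="INFO" message="#['Processing order, correlationId=' ++ correlationId]"/>
    <!-- the outgoing request carries X-Correlation-Id automatically;
         setting sendCorrelationId="NEVER" on the operation would suppress it -->
    <http:request config-ref="HTTP_Request_config" method="POST" path="/inventory"/>
  </flow>

Logging the correlationId in every API, as above, is what lets the external log aggregation tool stitch one transaction together across the experience, process, and system APIs.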

A Mule application is deployed to a customer-hosted runtime. Asynchronous logging was implemented to improve the throughput of the system, but it was observed over a period of time that a few of the important exception log messages, which were used to roll back transactions, are not working as expected, causing huge losses to the organization. The organization wants to avoid these losses. The application also has constraints due to which it cannot compromise much on throughput.

What is the possible option in this case?

A. Logging needs to be changed from asynchronous to synchronous
B. An external log appender needs to be used in this case
C. Persistent memory storage should be used in such scenarios
D. A mixed configuration of asynchronous and synchronous loggers should be used, so that exceptions are logged synchronously
Suggested answer: D

Explanation:

The correct approach is to use a mixed configuration of asynchronous and synchronous loggers so that exceptions are logged synchronously. Asynchronous logging poses a performance-reliability trade-off: you may lose some messages if Mule crashes before the logging buffers flush to disk. In this case, consider that you can have a mixed configuration of asynchronous and synchronous loggers in your app. The best practice is to use asynchronous logging over synchronous with a minimum logging level of WARN for a production application. In some cases, enable the INFO logging level when you need to confirm events such as successful policy installation or to perform troubleshooting. Configure your logging strategy by editing your application's src/main/resources/log4j2.xml file.
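
A minimal log4j2.xml sketch of such a mixed configuration, assuming the critical exception messages are logged through a dedicated category (the package name and file name here are hypothetical): a synchronous Logger handles the transaction-critical category so those messages are written before the flow continues, while AsyncRoot keeps everything else asynchronous for throughput:

  <Configuration>
    <Appenders>
      <File name="file" fileName="app.log">
        <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
      </File>
    </Appenders>
    <Loggers>
      <!-- synchronous: exception messages used for rollbacks are flushed reliably -->
      <Logger name="com.acme.transactions" level="ERROR" additivity="false">
        <AppenderRef ref="file"/>
      </Logger>
      <!-- asynchronous: everything else, buffered for throughput -->
      <AsyncRoot level="WARN">
        <AppenderRef ref="file"/>
      </AsyncRoot>
    </Loggers>
  </Configuration>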

As part of a business requirement, an old CRM system needs to be integrated using a Mule application. The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration architect who follows an API-led approach, which of the steps below will you perform so that you can share the specification document with the CRM team?

A. Create a RAML specification using Design Center
B. Create a SOAP API specification using Design Center
C. Create a WSDL specification using a text editor
D. Create a WSDL specification using Design Center
Suggested answer: C

Explanation:

The correct answer is Create a WSDL specification using a text editor. SOAP services are specified using WSDL. A client program connecting to a web service can read the WSDL to determine what functions are available on the server. A WSDL specification cannot be created in Design Center, so an external text editor must be used.
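
For illustration, a minimal, hypothetical WSDL skeleton of the kind that could be authored in a text editor and shared with the CRM team (the service name, namespace, and operation are placeholders):

  <definitions name="CrmService"
               targetNamespace="http://example.com/crm"
               xmlns="http://schemas.xmlsoap.org/wsdl/"
               xmlns:tns="http://example.com/crm"
               xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <message name="GetCustomerRequest">
      <part name="customerId" type="xsd:string"/>
    </message>
    <message name="GetCustomerResponse">
      <part name="customerName" type="xsd:string"/>
    </message>
    <portType name="CrmPortType">
      <operation name="GetCustomer">
        <input message="tns:GetCustomerRequest"/>
        <output message="tns:GetCustomerResponse"/>
      </operation>
    </portType>
    <binding name="CrmBinding" type="tns:CrmPortType">
      <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
      <operation name="GetCustomer">
        <soap:operation soapAction="http://example.com/crm/GetCustomer"/>
        <input><soap:body use="literal"/></input>
        <output><soap:body use="literal"/></output>
      </operation>
    </binding>
    <service name="CrmService">
      <port name="CrmPort" binding="tns:CrmBinding">
        <soap:address location="http://example.com/crm"/>
      </port>
    </service>
  </definitions>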

An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime plane. As part of the requirements, the application should be scalable and highly available. There is also a regulatory requirement that demands logs be retained for at least 2 years.

As an integration architect, what step will you recommend in order to achieve this?

A. It is not possible to store logs for 2 years in a CloudHub deployment. An external log management system is required.
B. When deploying an application to CloudHub, the log retention period should be selected as 2 years
C. When deploying an application to CloudHub, the worker size should be sufficient to store 2 years of data
D. The logging strategy should be configured accordingly in the log4j file deployed with the application.
Suggested answer: A

Explanation:

The correct answer is It is not possible to store logs for 2 years in a CloudHub deployment; an external log management system is required. CloudHub has a specific log retention policy, as described in the documentation: the platform stores logs of up to 100 MB per app and per worker, or for up to 30 days, whichever limit is hit first. Once this limit has been reached, the oldest log information is deleted in chunks and is irretrievably lost. The recommended approach is to persist your logs to an external logging system of your choice (such as Splunk, for instance) using a log appender. Note that this solution results in the logs no longer being stored on the platform, so any support cases you lodge will require you to provide the appropriate logs for review and case resolution.
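
A minimal log4j2.xml sketch of shipping logs to an external system, here using Log4j2's built-in Http appender; the endpoint URL and token are placeholders, and a vendor-specific appender (for example, Splunk's) could be used instead. Note that CloudHub only honors a custom log4j2 configuration once its own log handling is disabled for the application, so check the current CloudHub logging documentation before relying on this:

  <Configuration>
    <Appenders>
      <Http name="external" url="https://logs.example.com/ingest">
        <Property name="Authorization" value="Bearer ${env:LOG_TOKEN}"/>
        <JsonLayout compact="true" eventEol="true"/>
      </Http>
    </Appenders>
    <Loggers>
      <Root level="INFO">
        <AppenderRef ref="external"/>
      </Root>
    </Loggers>
  </Configuration>

Retention then becomes a property of the external system (for example, a two-year retention policy configured in Splunk), which satisfies the regulatory requirement independently of CloudHub's 100 MB / 30-day limits.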
