Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 7

Question 61

What Mule application can have API policies applied by Anypoint Platform to the endpoint exposed by that Mule application?

A) A Mule application that accepts requests over HTTP/1.x
B) A Mule application that accepts JSON requests over TCP but is NOT required to provide a response
C) A Mule application that accepts JSON requests over WebSocket
D) A Mule application that accepts gRPC requests over HTTP/2

Suggested answer: A

Explanation:

Option A. Anypoint API Manager can apply API policies to any type of HTTP/1.x API. Policies are not applicable to WebSocket APIs, HTTP/2 APIs, or gRPC APIs, which rules out the other options.
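For context, a minimal Mule 4 sketch of the kind of endpoint this answer refers to: a flow exposing an HTTP/1.x listener that API Manager can govern once the API is autodiscovered. The config names, port, and path below are illustrative assumptions, not part of the question.

```xml
<!-- Hypothetical Mule 4 config (namespace declarations omitted).
     API Manager policies are enforced in front of HTTP/1.x listeners like
     this one; WebSocket, HTTP/2, and gRPC endpoints have no such support. -->
<http:listener-config name="apiHttpListenerConfig">
    <http:listener-connection host="0.0.0.0" port="8081"/>
</http:listener-config>

<flow name="orders-api-main">
    <!-- Plain HTTP/1.x endpoint, eligible for API policies -->
    <http:listener config-ref="apiHttpListenerConfig" path="/api/*"/>
    <logger level="INFO" message="#['Received ' ++ attributes.requestPath]"/>
</flow>
```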

Question 62

Which Anypoint Connectors support transactions?

A) Database, JMS, VM
B) Database, JMS, HTTP
C) Database, JMS, VM, SFTP
D) Database, VM, File
Suggested answer: A
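For illustration, a minimal hedged Mule 4 sketch of a transaction using one of these connectors: a Try scope begins a local transaction that two Database operations join, so both inserts commit or roll back together. The config name, tables, and SQL are assumptions. Note that a local transaction spans a single resource; combining multiple transactional connectors (for example Database and JMS) in one transaction requires XA.

```xml
<!-- Hypothetical flow: both inserts run in the same local transaction.
     An error inside the Try scope rolls both of them back. -->
<flow name="record-order">
    <try transactionalAction="ALWAYS_BEGIN" transactionType="LOCAL">
        <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql>INSERT INTO orders (id, total) VALUES (:id, :total)</db:sql>
            <db:input-parameters>#[{id: payload.id, total: payload.total}]</db:input-parameters>
        </db:insert>
        <db:insert config-ref="Database_Config" transactionalAction="ALWAYS_JOIN">
            <db:sql>INSERT INTO order_audit (order_id) VALUES (:id)</db:sql>
            <db:input-parameters>#[{id: payload.id}]</db:input-parameters>
        </db:insert>
    </try>
</flow>
```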

Question 63

An organization wants MuleSoft-hosted runtime plane features (such as HTTP load balancing, zero downtime, and horizontal and vertical scaling) in its Azure environment. What runtime plane minimizes the organization's effort to achieve these features?

A) Anypoint Runtime Fabric
B) Anypoint Platform for Pivotal Cloud Foundry
C) CloudHub
D) A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes
Suggested answer: A

Explanation:

Anypoint Runtime Fabric.

A hybrid model with some Mule runtimes hosted on Azure and some hosted by MuleSoft adds complexity without addressing the requirement, so it is not a good fit. CloudHub is a MuleSoft-hosted runtime plane that runs on AWS; it cannot be customized to point at a customer's Azure environment. Anypoint Platform for Pivotal Cloud Foundry is specific to infrastructure provided by Pivotal Cloud Foundry.

Anypoint Runtime Fabric is the right answer: it is a container service that automates the deployment and orchestration of Mule applications and API gateways, and it runs within customer-managed infrastructure on AWS, Azure, virtual machines (VMs), and bare-metal servers. Some of its capabilities include:

- Isolation between applications by running a separate Mule runtime per application
- Ability to run multiple versions of Mule runtime on the same set of resources
- Scaling applications across multiple replicas
- Automated application failover
- Application management with Anypoint Runtime Manager

Question 64

Suppose there is a legacy CRM system called CRM-Z that offers the following functions:

1. Customer creation

2. Amend details of an existing customer

3. Retrieve details of a customer

4. Suspend a customer

What is the best way to expose these functions through system APIs?

A) Implement a single system API named customerManagement that has all the functionality wrapped in it as various operations/resources
B) Implement different system APIs named createCustomer, amendCustomer, retrieveCustomer, and suspendCustomer, as they are modular and have separation of concerns
C) Implement different system APIs named createCustomerInCRMZ, amendCustomerInCRMZ, retrieveCustomerFromCRMZ, and suspendCustomerInCRMZ, as they are modular and have separation of concerns
Suggested answer: B

Explanation:

Implement different system APIs named createCustomer, amendCustomer, retrieveCustomer, and suspendCustomer, as they are modular and have separation of concerns.

It is quite normal for a single API to offer different verb + resource combinations, but that style fits an Experience API or a Process API better; it is not the best architecture for system APIs. So the option with just one customerManagement API is not the best choice here.

The createCustomerInCRMZ-style option is the next-closest choice with respect to modularization and maintenance, but those API names are directly coupled to the legacy system. A more future-proof approach is to name APIs in a way that abstracts the backend system, which allows seamless replacement or migration of any backend system at any time. So this option is not correct either.

createCustomer, amendCustomer, retrieveCustomer, and suspendCustomer is the best fit: the APIs are modular, their names are decoupled from the backend system, and they cover all the requirements of a system API.
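To make the naming point concrete, here is a hedged RAML sketch of one of the recommended system APIs; the title, paths, and types are illustrative assumptions. Nothing in the contract mentions CRM-Z, so the backend can be swapped without renaming or re-versioning the API.

```raml
#%RAML 1.0
title: createCustomer System API   # backend-agnostic name, no CRM-Z coupling
version: v1

/customers:
  post:
    description: Create a new customer in the underlying CRM system.
    body:
      application/json:
        type: object
```

amendCustomer, retrieveCustomer, and suspendCustomer would follow the same pattern, each as its own small, independently deployable system API.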

Question 65

An Anypoint Platform organization has been configured with an external identity provider (IdP) for identity management and client management. What credentials or token must be provided to Anypoint CLI to execute commands against the Anypoint Platform APIs?

A) The credentials provided by the IdP for identity management
B) The credentials provided by the IdP for client management
C) An OAuth 2.0 token generated using the credentials provided by the IdP for client management
D) An OAuth 2.0 token generated using the credentials provided by the IdP for identity management
Suggested answer: A

Explanation:

The credentials provided by the IdP for identity management.

Anypoint CLI does not support OAuth 2.0 tokens from client or identity providers for authentication. The only tokens it accepts are bearer tokens generated using an Anypoint organization/environment client ID and secret from https://anypoint.mulesoft.com/accounts/login, not the client credentials of the client provider, so the OAuth 2.0 options are out. Moreover, such a token is mainly for API Manager purposes and is not associated with a user; per the relevant MuleSoft knowledge article, it cannot be used to call most Anypoint Platform APIs (for example, CloudHub).

The other mechanism Anypoint CLI allows is client credentials. Using the client credentials of a client provider is possible, but it requires setting up Connected Apps in client management, and no such details are given in the scenario. So the only option left is to use user credentials from the identity provider.
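As a hedged illustration, user credentials from the identity provider can be supplied to Anypoint CLI via environment variables; the variable and command names below follow the v3 CLI documentation and should be treated as assumptions to verify against your CLI version.

```bash
# Hypothetical sketch: authenticate Anypoint CLI with IdP user credentials
export ANYPOINT_USERNAME=jane.doe     # user managed by the external IdP
export ANYPOINT_PASSWORD='s3cret'     # assumption: env vars per v3 CLI docs
anypoint-cli account environment list # subsequent commands run as that user
```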

Question 66

Report
Export
Collapse

A company requires Mule applications deployed to CloudHub to be isolated between non-production and production environments. This is so Mule applications deployed to non-production environments can only access backend systems running in their customer-hosted non-production environment, and so Mule applications deployed to production environments can only access backend systems running in their customer-hosted production environment. How does MuleSoft recommend modifying Mule applications, configuring environments, or changing infrastructure to support this type of per-environment isolation between Mule applications and backend systems?

A) Modify properties of Mule applications deployed to the production Anypoint Platform environments to prevent access from non-production Mule applications
B) Configure firewall rules in the infrastructure inside each customer-hosted environment so that only IP addresses from the corresponding Anypoint Platform environments are allowed to communicate with corresponding backend systems
C) Create non-production and production environments in different Anypoint Platform business groups
D) Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments
Suggested answer: D

Explanation:

Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments.

Creating different business groups makes no difference to access between the non-production and production customer-hosted environments; applications in both business groups could still reach both environments unless proper network restrictions are put in place.

We should not couple Mule application implementations to the environment through properties. Only basic settings such as endpoint URLs belong in properties, never environment-level access restrictions.

IP addresses on CloudHub are dynamic unless static addresses are specifically assigned, so firewall rules in the customer-hosted infrastructure cannot reliably identify the Mule applications. Even with static IPs, there could be hundreds of applications running on CloudHub, and maintaining firewall rules for all of them would be unmanageable and is definitely not good practice.

The best practice recommended by MuleSoft (and, in fact, by any cloud provider) is to keep separate Anypoint VPCs for production and non-production, and to use VPC peering or VPN tunneling to connect each Anypoint VPC to the network of the corresponding customer-hosted environment.

Question 67

An API in Anypoint Exchange has been updated by its API producer from version 3.1.1 to 3.2.0, following accepted semantic versioning practices, and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?

A) The API producer should be requested to run the old version in parallel with the new one
B) The API producer should be contacted to understand the change to existing functionality
C) The API client code only needs to be changed if it needs to take advantage of the new features
D) The API clients need to update the code on their side and need to do full regression
Suggested answer: C
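
Explanation:

Under semantic versioning, a change from 3.1.1 to 3.2.0 is a MINOR version increment, which by definition adds functionality in a backward-compatible way. Because the endpoint is unchanged and existing behavior is preserved, current clients continue to work as-is; their code only needs to change if they want to take advantage of the new features.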

Question 68

An API implementation returns three X-RateLimit-* HTTP response headers to a requesting API client. What type of information do these response headers indicate to the API client?

A) The error codes that result from throttling
B) A correlation ID that should be sent in the next request
C) The HTTP response size
D) The remaining capacity allowed by the API implementation
Suggested answer: D

Explanation:

The remaining capacity allowed by the API implementation.

When a rate-limiting or SLA-based policy is applied, the API returns three headers that tell the client about its remaining quota: X-RateLimit-Limit (the number of requests allowed in the current window), X-RateLimit-Remaining (the number of requests still available in that window), and X-RateLimit-Reset (the time remaining, in milliseconds, until the window resets).
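For illustration, a hedged example of what a response carrying these headers might look like; the values are made up.

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 347
X-RateLimit-Reset: 29000
Content-Type: application/json
```

Here the client may send 500 requests in the current window, has 347 left, and the window resets in 29 seconds.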

Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers

Question 69

A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?

A) Use a CloudHub autoscaling policy to add CloudHub workers
B) Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C) Increase the size of the CloudHub worker(s)
D) Increase the number of CloudHub workers
Suggested answer: C

Explanation:

Increase the size of the CloudHub worker(s).

The key details from the scenario are:

- The API implementation uses a database bulk insert command to submit all the purchase data to a database.
- The JDBC driver processes the data into a set of temporary disk files on the CloudHub worker.
- Sometimes a request fails, and the logs show an out-of-file-space message from the JDBC driver.

Based on these details:

- Neither autoscaling option helps, because autoscaling rules cannot be triggered by error messages; they are based on CPU and memory usage, not on disk-space conditions.
- Increasing the number of CloudHub workers also does not help, because the failures are not caused by CPU or memory pressure; they are caused by a lack of disk space.
- Moreover, the API performs a bulk insert of each received batch, so all the data in a batch is handled by one worker at a time. The disk-space issue must therefore be tackled per worker: with multiple workers, a batch can still fail on whichever worker runs out of disk space.

The right way to resolve the issue is to increase the vCore size of the worker, so that a worker with more disk space is provisioned.

Question 70

Due to a limitation in the backend system, a system API can only handle up to 500 requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?

A) Rate limiting
B) HTTP caching
C) Rate limiting - SLA based
D) Spike control
Suggested answer: D

Explanation:

Spike control.

First, the HTTP Caching policy serves a different purpose than protecting the backend from overload, so it is out. Rate Limiting and Spike Control (throttling) both limit API access, but with different intentions: rate limiting protects an API by applying a hard limit on its access, while Spike Control shapes access by smoothing spikes in traffic, queuing requests rather than simply rejecting them. That is why Spike Control is the right option here.
