Salesforce Certified MuleSoft Platform Architect I Practice Test - Questions Answers, Page 9
List of questions
Question 81

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0, following accepted semantic versioning practices, and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?
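The reasoning here follows standard semantic versioning: a minor-version increase (3.1.1 to 3.2.0) is backward compatible, and the endpoint is unchanged. Below is a minimal, illustrative Python sketch (the function and its messages are hypothetical, not part of the question) of how a client team might interpret such a version bump:

```python
# Hypothetical helper for illustration only; not part of the question.
def client_action(old_version: str, new_version: str) -> str:
    """Interpret a semantic version bump from the API client's point of view."""
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    if new_major > old_major:
        # A major bump may contain breaking changes.
        return "Plan and test a client migration"
    # Minor and patch bumps are backward compatible under semantic versioning,
    # and the endpoint is unchanged, so existing clients keep working.
    return "No change required; optionally adopt new features when convenient"

print(client_action("3.1.1", "3.2.0"))
```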
Question 82

An API implementation returns three X-RateLimit-* HTTP response headers to a requesting API client. What type of information do these response headers indicate to the API client?
The remaining capacity allowed by the API implementation.
Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers
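For illustration, here is a minimal Python sketch of an API client reading these headers; the endpoint URL is hypothetical, and the header names follow the MuleSoft rate-limiting policy documentation linked above:

```python
import requests  # the endpoint URL below is hypothetical

response = requests.get("https://api.example.com/orders")

# The rate-limiting/SLA-based policies report the client's remaining capacity
# in the current window through these response headers.
limit = response.headers.get("X-RateLimit-Limit")          # requests allowed per window
remaining = response.headers.get("X-RateLimit-Remaining")  # requests still available in this window
reset = response.headers.get("X-RateLimit-Reset")          # milliseconds until the window resets

if remaining is not None and int(remaining) == 0:
    print(f"Capacity exhausted; retry after {reset} ms")
else:
    print(f"{remaining} of {limit} requests remaining in the current window")
```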
Question 83

A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?
Increase the size of the CloudHub worker(s)
The key details from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase data to a database.
>> The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker.
>> Sometimes a request fails, and the logs show an out-of-file-space message from the JDBC driver.
Based on these details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be triggered by error messages. They are triggered by CPU or memory usage, not by disk-space issues.
>> Increasing the number of CloudHub workers also does NOT help, because the failures are not caused by CPU or memory pressure; they are caused by a lack of disk space.
>> Moreover, the API performs a bulk insert of the received batch data, so each batch is handled by ONE worker at a time. The disk-space issue must therefore be tackled per worker; adding workers does not help, because a batch can still fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker, so that a new worker with more disk space is provisioned.
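As a purely illustrative diagnostic (not the fix itself), a sketch like the following could confirm that the worker's temp volume is the bottleneck; the 500 MB threshold is an assumption for illustration only:

```python
import shutil
import tempfile

# Illustrative diagnostic only, not the fix: confirm that the worker's temp
# volume (where the JDBC driver writes its temporary files) is running low.
# Larger CloudHub workers are provisioned with more disk space, which is why
# increasing the worker size resolves the out-of-file-space failures.
usage = shutil.disk_usage(tempfile.gettempdir())
free_mb = usage.free / (1024 * 1024)
print(f"Free space on the temp volume: {free_mb:.0f} MB")

if free_mb < 500:  # 500 MB threshold is an assumption for illustration
    print("Low disk space: bulk-insert temp files are likely to fail")
```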
Question 84

Due to a limitation in the backend system, a system API can only handle up to 500 requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?
Spike Control
>> First things first: the HTTP Caching policy serves purposes other than protecting the backend system from overload, so it is out.
>> Rate Limiting and Throttling/Spike Control policies are both designed to limit API access, but with different intentions.
>> Rate limiting protects an API by applying a hard limit on its access.
>> Throttling/Spike Control shapes API access by smoothing spikes in traffic.
That is why Spike Control is the right option.
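To illustrate the difference in behavior, here is a minimal Python sketch of the smoothing idea behind Spike Control: excess calls are delayed until the next window instead of being rejected outright. This is a hypothetical illustration, not the actual Mule policy implementation; the 500-requests-per-second figure comes from the scenario:

```python
import threading
import time

class SpikeControl:
    """Illustrative smoothing limiter: excess calls wait for the next
    window instead of being rejected outright (hypothetical sketch)."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.lock = threading.Lock()
        self.window_start = time.monotonic()
        self.count = 0

    def acquire(self) -> None:
        while True:
            with self.lock:
                now = time.monotonic()
                if now - self.window_start >= self.window_seconds:
                    # A new window has started; reset the counter.
                    self.window_start = now
                    self.count = 0
                if self.count < self.max_requests:
                    self.count += 1
                    return
                wait = self.window_seconds - (now - self.window_start)
            # Smooth the spike: delay the caller instead of rejecting it.
            time.sleep(max(wait, 0.001))

# Usage: at most 500 calls per second reach the backend; bursts are delayed.
limiter = SpikeControl(max_requests=500, window_seconds=1.0)

def call_backend():
    limiter.acquire()
    # ... forward the request to the backend system here ...
```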
Question 85

A company has created a successful enterprise data model (EDM). The company is committed to building an application network by adopting modern APIs as a core enabler of the company's IT operating model. At what API tiers (experience, process, system) should the company require reusing the EDM when designing modern API data models?
At the process and system tiers
>> Experience layer APIs are modeled and designed exclusively for the end user's experience, so their data models vary with the nature and type of the API consumer. For example, mobile consumers need lightweight data models that transfer easily over the wire, whereas web-based consumers need detailed data models to render most of the information on web pages.
>> Enterprise data models therefore fit the purpose of canonical models but are of little use for experience APIs.
>> That is why EDMs should be reused extensively in the process and system tiers, but NOT in the experience tier.
Question 86

The application network is recomposable: it is built for change because it 'bends but does not break'
>> An application network is a recomposable architecture.
>> This means it can be altered without disturbing the entire architecture and its components.
>> It bends to accommodate requirement or design changes, but does not break.
Question 87

A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?
In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response.
>> The requirement in the given scenario is to respond in the least possible time.
>> The option suggesting to first try the API in the primary environment and then fall back to the API in the DR environment would produce a successful response, but NOT in the least possible time. So it is NOT the right implementation choice for this requirement.
>> The option suggesting to ONLY invoke the API in the primary environment and add a timeout and retries may also produce a successful response after retries, but NOT in the least possible time. So it is also NOT the right choice.
>> The option suggesting to invoke the API in the primary environment and the API in the DR environment in parallel using Scatter-Gather would produce the wrong API response, because Scatter-Gather returns merged results; moreover, although Scatter-Gather runs its routes in parallel, it completes only after ALL routes have finished. So again, it is NOT the right choice.
The correct choice is to invoke the API in the primary environment and the API in the DR environment in parallel, and use ONLY the first response received from either of them.
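In a Mule flow this pattern would be built with parallel invocation and "first response wins" logic; the following Python sketch (URLs and timeout values are hypothetical) illustrates the idea of firing both calls in parallel and returning whichever succeeds first:

```python
import concurrent.futures
import requests  # the endpoint URLs below are hypothetical

PRIMARY_URL = "https://system-api.primary.example.com/resource"
DR_URL = "https://system-api.dr.example.com/resource"

def invoke(url: str) -> requests.Response:
    response = requests.get(url, timeout=2)
    response.raise_for_status()
    return response

def fastest_response() -> requests.Response:
    # Fire both requests in parallel and return the first one that succeeds.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(invoke, url) for url in (PRIMARY_URL, DR_URL)]
    try:
        for future in concurrent.futures.as_completed(futures):
            try:
                return future.result()  # first successful response wins
            except requests.RequestException:
                continue  # that environment failed; wait for the other one
        raise RuntimeError("Both primary and DR invocations failed")
    finally:
        pool.shutdown(wait=False)  # do not wait for the slower call
```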
Question 88

Which of the following best fits the definition of API-led connectivity?
API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization.
Question 89

What are the major benefits of the MuleSoft-proposed IT Operating Model?
1. Decrease the IT delivery gap.
2. Meet various business demands without increasing IT capacity.
3. Enable consumption of assets at the rate of production.
Question 90

A Mule application exposes an HTTPS endpoint and is deployed to three CloudHub workers that do not use static IP addresses. The Mule application expects a high volume of client requests in short time periods. What is the most cost-effective infrastructure component that should be used to serve the high volume of client requests?
The CloudHub shared load balancer
The scenario in this question can be broken down as follows:
>> There are 3 CloudHub workers (so there is already a good number of workers to handle a high volume of requests).
>> The workers do not use static IP addresses (so customer-hosted load-balancing solutions that require static IPs cannot be used).
>> We are looking for the most cost-effective component to load balance the client requests among the workers.
Based on these details:
>> Runtime autoscaling is NOT at all cost-effective, as it incurs extra cost. Moreover, there are already 3 workers running, which is a good number.
>> A customer-hosted load balancer is also NOT the most cost-effective option (it requires a custom load balancer to maintain, plus licensing), and at the same time the Mule application does not have static IP addresses, which rules out custom load balancing.
>> An API proxy is irrelevant here, as it plays no role in handling high volumes or load balancing.
So, the only option that fits the scenario and is the most cost-effective is the CloudHub shared load balancer.