Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 3
List of questions
Question 21

How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?
By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example.
https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling#response-headers
https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers
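For illustration, a minimal RAML fragment showing how such headers could be documented on a response. The resource path and example values are made up; the header names follow the MuleSoft rate-limiting documentation linked above:

#%RAML 1.0
title: Illustrative API
/items:
  get:
    responses:
      200:
        headers:
          x-ratelimit-limit:
            description: Number of requests allowed in the current rate-limit window
            type: integer
            example: 100
          x-ratelimit-remaining:
            description: Number of requests remaining in the current rate-limit window
            type: integer
            example: 94
          x-ratelimit-reset:
            description: Time (in milliseconds) until the current window resets
            type: integer
            example: 3500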
Question 22

An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications.
The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations.
What out-of-the-box Anypoint Platform policy can address exposure to this threat?
Apply a JSON threat protection policy to all APIs to detect potential threat vectors.
>> Usually, if the APIs are designed and developed for specific consumers (known consumers/customers), we would whitelist their IPs to ensure that traffic only comes from them.
>> However, as this scenario states that the APIs are publicly available and used by many mobile and web applications, it is NOT possible to identify and blacklist all possible bad actors.
>> So, a JSON threat protection policy is the best option to prevent bad JSON payloads from such bad actors, as sketched below.
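As a rough illustration, these are the kinds of limits a JSON threat protection policy enforces. The parameter names below only approximate the API Manager configuration fields, and the values are examples rather than recommendations:

# Illustrative JSON threat protection limits (names and values are examples only)
jsonThreatProtection:
  maxContainerDepth: 10          # reject excessively nested objects/arrays
  maxStringValueLength: 5000     # reject oversized string values
  maxObjectEntryNameLength: 100  # reject very long property names
  maxObjectEntryCount: 50        # reject objects with too many properties
  maxArrayElementCount: 100      # reject arrays with too many elements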
Question 23

What API policy would LEAST likely be applied to a Process API?
JSON threat protection.
Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered before blindly applying policies to APIs. That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except JSON threat protection can be applied without hesitation to APIs in the Process tier.
>> A JSON threat protection policy ideally fits Experience APIs, where it blocks suspicious JSON payloads coming from external API clients. It addresses the security concern of keeping potentially malicious and harmful JSON payloads from external clients out of the Experience layer.
>> Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are already stopped at the Experience layer by this policy, it is LEAST likely that the same policy would be applied again to a Process API.
Question 24

What is a key performance indicator (KPI) that measures the success of a typical C4E that is immediately apparent in responses from the Anypoint Platform APIs?
The number of API specifications in RAML or OAS format published to Anypoint Exchange.
>> The success of a C4E depends on its contribution to the number of reusable assets it has helped build and publish to Anypoint Exchange.
>> It is NOT measured by the number of outages, manual vs. CI/CD deployments, or publicly accessible HTTP endpoints.
>> The Anypoint Platform APIs let us quickly query the number of RAML/OAS assets published to Anypoint Exchange. The number of assets returned in the response directly reflects how successful the C4E team is; a hypothetical query is sketched below.
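As a sketch of how such a KPI could be pulled from the platform, the request below (expressed as YAML for readability) queries Anypoint Exchange for published REST API specifications. The endpoint path, query parameters, and organization ID are assumptions to be checked against the Exchange API documentation for your organization:

# Hypothetical query for counting published API specifications (sketch only)
request:
  method: GET
  url: https://anypoint.mulesoft.com/exchange/api/v2/assets   # assumed endpoint
  headers:
    Authorization: Bearer <access-token>
  query:
    organizationId: <your-org-id>
    types: rest-api          # RAML/OAS API specifications
# KPI = number of assets returned in the response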
Question 25

An organization is implementing a Quote of the Day API that caches today's quote.
What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?
When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state.
Key detail in the scenario:
>> Use the CloudHub Object Store via the Object Store connector.
Considering the above:
>> CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.
>> We CANNOT use an application's CloudHub Object Store, via the Object Store connector, to share state among multiple Mule applications running in different regions, business groups, or customer-hosted Mule runtimes.
>> If it is really necessary, Anypoint Platform does allow access to another application's CloudHub Object Store through the Object Store REST API, but NOT through the Object Store connector.
So, the only scenario in which we can use the CloudHub Object Store via the Object Store connector to persist the cache's state is when there is one CloudHub deployment of the API implementation to multiple CloudHub workers that must share the cache state.
Question 26

What condition requires using a CloudHub Dedicated Load Balancer?
When server-side load-balanced TLS mutual authentication is required between API implementations and API clients.
Fact / memory tip: Although a CloudHub Dedicated Load Balancer (DLB) has many benefits, TWO things that should come to mind when considering it are:
>> Having URL endpoints with custom DNS names for CloudHub-deployed apps.
>> Configuring custom certificates for HTTPS and for two-way (mutual) TLS authentication.
Coming to the options provided for this question:
>> We CANNOT use a DLB to perform cross-region load balancing between separate deployments of the same Mule application.
>> We can define mapping rules so that more than one DLB URL points to the same Mule app, but the reverse (more than one Mule app behind the same DLB URL) is NOT possible.
>> It is true that a DLB helps set up custom DNS names for CloudHub-deployed Mule apps, but NOT for apps deployed to customer-hosted Mule runtimes.
>> It is true that a DLB can load-balance API invocations across multiple CloudHub workers, but it is NOT a must; the Shared Load Balancer (SLB) achieves the same load balancing, so a DLB is not required for that.
So the only option that requires a DLB is when TLS mutual authentication is required between API implementations and API clients. A configuration sketch follows.
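The sketch below illustrates the two DLB capabilities called out above: a custom HTTPS certificate with client-certificate verification for two-way TLS, and mapping rules routing DLB URLs to an app. Field names mirror the Runtime Manager UI rather than an exact API payload, and all values are illustrative:

# Illustrative Dedicated Load Balancer setup (labels and values are examples only)
dedicatedLoadBalancer:
  name: acme-dlb
  vpc: acme-prod-vpc
  certificate:
    serverCertificate: api.acme.com      # custom HTTPS certificate / custom DNS name
    clientCertificateVerification: on    # two-way (mutual) TLS with API clients
  mappingRules:
    - inputUri: /orders/v1/              # two DLB URLs ...
      appName: orders-exp-api
      appUri: /api/v1/
    - inputUri: /order-management/v1/    # ... can point to the same Mule app
      appName: orders-exp-api
      appUri: /api/v1/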
Question 27

What do the API invocation metrics provided by Anypoint Platform provide?
Data on past API invocations to help identify anomalies and usage patterns across various APIs.
API invocation metrics provided by Anypoint Platform:
>> Do NOT provide any return on investment (ROI) information, so the option suggesting that is out.
>> Do NOT provide any information about how APIs are reused or whether APIs are being used effectively.
>> Do NOT provide any predictive information to help us proactively identify future policy violations.
So, the kind of data we can get from such metrics is data on past API invocations, which helps identify anomalies and usage patterns across various APIs.
Question 28

What is true about the technology architecture of Anypoint VPCs?
Traffic between Mule applications deployed to an Anypoint VPC and on-premises systems can stay within a private network.
>> The private IP address range of an Anypoint VPC is NOT automatically chosen by CloudHub; we choose it at VPC creation time using CIDR blocks.
CIDR block: the size of the Anypoint VPC in Classless Inter-Domain Routing (CIDR) notation. For example, if you set it to 10.111.0.0/24, the Anypoint VPC is granted 256 IP addresses, from 10.111.0.0 to 10.111.0.255. Ideally, the CIDR blocks you choose for the Anypoint VPC come from a private IP space and should not overlap with any other Anypoint VPC's CIDR blocks, or with any CIDR blocks in use in your corporate network.
>> It is NOT true that each CloudHub environment requires a separate Anypoint VPC. Once an Anypoint VPC is created, it can be shared by multiple environments, although it is a recommended practice to keep separate Anypoint VPCs for non-prod and prod environments.
>> We use Anypoint VPN, NOT VPC peering, to link the underlying AWS VPC to an on-premises (non-AWS) private network.
The only true statement among the given choices is that traffic between Mule applications deployed to an Anypoint VPC and on-premises systems can stay within a private network; a short sketch follows.
https://docs.mulesoft.com/runtime-manager/vpc-connectivity-methods-concept
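The sketch below ties the points above together. Field names are assumptions modeled on the Runtime Manager VPC settings, and all values are illustrative:

# Illustrative Anypoint VPC definition (names and values are examples only)
anypointVpc:
  name: acme-prod-vpc
  region: us-east-1
  cidrBlock: 10.111.0.0/24           # 2^(32-24) = 256 addresses: 10.111.0.0 - 10.111.0.255
  environments:                      # a single VPC can be shared by multiple environments
    - Production
  onPremConnectivity: Anypoint VPN   # keeps traffic to on-premises systems on a private network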
Question 29

An API implementation is deployed on a single worker on CloudHub and invoked by external API clients (outside of CloudHub). How can an alert be set up that is guaranteed to trigger AS SOON AS that API implementation stops responding to API invocations?
Configure a 'Worker not responding' alert in Anypoint Runtime Manager.
>> All of the options would eventually generate the required alert when the application stops responding.
>> However, handling exceptions within the calling API client and raising an alert from there is inappropriate: there could be many API clients invoking the API implementation, and it is not realistic to set this up consistently in all of them.
>> Implementing a health check/heartbeat within the API and calling it from outside to determine its health sounds OK, but it needs extra setup, and there is a good chance of false alarms whenever there are intermittent network issues between the external tool and the API implementation; the API implementation itself may be healthy while false alarms go out.
>> Creating an alert in API Manager when the API receives no requests within a specified time period would generate realistic alerts, but even here false alarms may go out when there are genuinely no requests from API clients.
The best way to achieve this requirement is to set up an alert in Runtime Manager with the condition 'Worker not responding'. This generates an alert AS SOON AS the worker becomes unresponsive, as sketched below.
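For reference, the alert described above amounts to something like the following. Labels mirror the Runtime Manager alert settings, not a literal API payload; the application name and recipient are made up:

# Illustrative Runtime Manager alert definition (sketch only)
alert:
  name: api-worker-not-responding
  severity: Critical
  source: quote-api-impl            # hypothetical CloudHub application
  condition: Worker not responding  # triggers as soon as the worker stops responding
  actions:
    - email: ops-team@example.com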
Question 30

Refer to the exhibit.
What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?
API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.
>> We CANNOT use the Shared Load Balancer to load-balance APIs on customer-hosted runtimes.
>> For hybrid deployment models, the on-premises runtimes are first connected to Runtime Manager using the Runtime Manager agent. So the connection is initiated from on-premises to Runtime Manager, after which everything can be managed from Runtime Manager.
>> Anypoint Runtime Manager CANNOT ensure automatic HA; clusters/server groups must be configured beforehand.
The only TRUE statement among the given choices is that API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane. There are several references below that justify this statement.
https://docs.mulesoft.com/runtime-manager/deployment-strategies#hybrid-deployments
https://help.mulesoft.com/s/article/On-Premise-Runtimes-Disconnected-From-US-Control-Plane-June-18th-2018
https://help.mulesoft.com/s/article/Runtime-Manager-cannot-manage-On-Prem-Applications-and-Servers-from-US-Control-Plane-June-25th-2019
https://help.mulesoft.com/s/article/On-premise-Runtimes-Appear-Disconnected-in-Runtime-Manager-May-29th-2018
Question