
Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 3

How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?

A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remaining Requests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform rate-limit-enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example

Suggested answer: D

Explanation:

By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example.

https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling#response-headers
https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies#response-headers
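As an illustration of the suggested answer, here is a minimal RAML sketch (the API title, resource, and example values are assumptions made for this sketch; x-ratelimit-limit, x-ratelimit-remaining, and x-ratelimit-reset are the response headers documented for the Anypoint Platform rate limiting policies):

    #%RAML 1.0
    title: Example API
    /orders:
      get:
        responses:
          200:
            headers:
              x-ratelimit-limit:
                description: Number of requests allowed in the current rate limiting window
                type: integer
                example: 100
              x-ratelimit-remaining:
                description: Number of requests remaining in the current window
                type: integer
                example: 94
              x-ratelimit-reset:
                description: Time, in milliseconds, until the current window resets
                type: integer
                example: 30000
            body:
              application/json:
                type: object

Documenting these headers in the API specification makes the policy's runtime behavior visible to consumers browsing the API in Anypoint Exchange, without changing the API implementation itself.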

An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications.

The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations.

What out-of-the-box Anypoint Platform policy can address exposure to this threat?

A. Shut out bad actors by using HTTPS mutual authentication for all API invocations
B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors
C. Apply a Header injection and removal policy that detects the malicious data before it is used
D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors

Suggested answer: D

Explanation:

Apply a JSON threat protection policy to all APIs to detect potential threat vectors.

>> Usually, if the APIs are designed and developed for specific, known consumers, we would IP-whitelist those consumers to ensure that traffic only comes from them.
>> However, as this scenario states that the APIs are publicly available and used by many mobile and web applications, it is NOT possible to identify and blacklist all possible bad actors.
>> So, a JSON threat protection policy is the best option for preventing malicious JSON payloads from such bad actors.

What API policy would LEAST likely be applied to a Process API?

A. Custom circuit breaker
B. Client ID enforcement
C. Rate limiting
D. JSON threat protection

Suggested answer: D

Explanation:

JSON threat protection.

Fact: Technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered before blindly applying policies to APIs, which is why this question asks for the policy that would LEAST likely be applied to a Process API.

From the given options:
>> All policies except JSON threat protection can be applied without hesitation to APIs in the Process tier.
>> A JSON threat protection policy ideally fits Experience APIs, where it prevents suspicious JSON payloads coming from external API clients. It covers the security aspect of blocking possibly malicious and harmful JSON payloads from external clients calling Experience APIs.
>> Because external API clients are NEVER allowed to call Process APIs directly, and such malicious payloads are already stopped at the Experience API layer by this policy, it is LEAST likely that the same policy would be applied again to a Process API.

What is a key performance indicator (KPI) that measures the success of a typical C4E that is immediately apparent in responses from the Anypoint Platform APIs?

A. The number of production outage incidents reported in the last 24 hours
B. The number of API implementations that have a publicly accessible HTTP endpoint and are being managed by Anypoint Platform
C. The fraction of API implementations deployed manually relative to those deployed using a CI/CD tool
D. The number of API specifications in RAML or OAS format published to Anypoint Exchange

Suggested answer: D

Explanation:

The number of API specifications in RAML or OAS format published to Anypoint Exchange.

>> The success of a C4E is always measured by its contribution to the number of reusable assets it has helped to build and publish to Anypoint Exchange.
>> It is NOT measured by factors such as the number of outages, manual vs. CI/CD deployments, or publicly accessible HTTP endpoints.
>> The Anypoint Platform APIs make it easy to query the number of RAML/OAS assets published to Anypoint Exchange, so the number of assets returned in the response directly reflects how successful a C4E team is.

An organization is implementing a Quote of the Day API that caches today's quote.

What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?

A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state
C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state
D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state

Suggested answer: D

Explanation:

When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state.

Key detail in the scenario:
>> Use the CloudHub Object Store via the Object Store connector.

Considering the above:
>> CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.
>> We CANNOT use an application's CloudHub Object Store, via the Object Store connector, to share state among multiple Mule applications running in different regions or business groups, or on customer-hosted Mule runtimes.
>> If it is really necessary, Anypoint Platform does allow access to another application's CloudHub Object Store through the Object Store REST API, but NOT through the Object Store connector.

So the only scenario where the CloudHub Object Store can be used via the Object Store connector to persist the cache's state is when a single CloudHub deployment of the API implementation runs on multiple CloudHub workers that must share the cache state.
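As an illustration of the suggested answer, here is a minimal Mule 4 configuration sketch using the Object Store connector (the object store name, key, and flow names are assumptions made for this sketch). On CloudHub, a persistent object store defined this way is backed by the application's CloudHub Object Store, so all three workers of the single deployment read and write the same cached quote:

    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:os="http://www.mulesoft.org/schema/mule/os"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/os http://www.mulesoft.org/schema/mule/os/current/mule-os.xsd">

      <!-- Persistent object store; on CloudHub this is the application's CloudHub Object Store,
           shared by every worker of this one application (name is illustrative) -->
      <os:object-store name="quoteCache" persistent="true" />

      <flow name="cache-quote-of-the-day">
        <!-- Store today's quote under a fixed key so any worker can serve it -->
        <os:store key="quote-of-the-day" objectStore="quoteCache">
          <os:value>#[payload]</os:value>
        </os:store>
      </flow>

      <flow name="read-quote-of-the-day">
        <!-- Retrieve the cached quote; behaves the same on any of the three workers -->
        <os:retrieve key="quote-of-the-day" objectStore="quoteCache" />
      </flow>
    </mule>

The same connector configuration cannot reach the Object Store of a different application, region, or business group, which is why the other scenarios in this question do not work.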

What condition requires using a CloudHub Dedicated Load Balancer?

A. When cross-region load balancing is required between separate deployments of the same Mule application
B. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C. When API invocations across multiple CloudHub workers must be load balanced
D. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients

Suggested answer: D

Explanation:

When server-side load-balanced TLS mutual authentication is required between API implementations and API clients.

Fact / memory tip: Although there are many benefits of the CloudHub Dedicated Load Balancer (DLB), the TWO important reasons to consider it are:
>> Having URL endpoints with custom DNS names on CloudHub-deployed apps
>> Configuring custom certificates, for both HTTPS and two-way (mutual) authentication

Coming to the options provided for this question:
>> We CANNOT use a DLB to perform cross-region load balancing between separate deployments of the same Mule application.
>> We can use mapping rules so that more than one DLB URL points to the same Mule app, but the reverse (more than one Mule app behind the same DLB URL) is NOT possible.
>> It is true that a DLB helps set up custom DNS names for CloudHub-deployed Mule apps, but NOT for apps deployed to customer-hosted Mule runtimes.
>> It is true that we can load balance API invocations across multiple CloudHub workers using a DLB, but it is not a MUST; the same load balancing can be achieved with the Shared Load Balancer (SLB), so a DLB is not required for it.

So the only option that fits the scenario and requires a DLB is when server-side load-balanced TLS mutual authentication is required between API implementations and API clients.

What do the API invocation metrics provided by Anypoint Platform provide?

A. ROI metrics from APIs that can be directly shared with business users
B. Measurements of the effectiveness of the application network based on the level of reuse
C. Data on past API invocations to help identify anomalies and usage patterns across various APIs
D. Proactive identification of likely future policy violations that exceed a given threat threshold

Suggested answer: C

Explanation:

Data on past API invocations to help identify anomalies and usage patterns across various APIs.

The API invocation metrics provided by Anypoint Platform:
>> Do NOT provide any return on investment (ROI) related information, so that option is out.
>> Do NOT provide any information about how APIs are reused or whether APIs are being used effectively.
>> Do NOT provide any predictions to help proactively identify future policy violations.

So the kind of data we can get from these metrics is data on past API invocations, which helps identify anomalies and usage patterns across various APIs.

What is true about the technology architecture of Anypoint VPCs?

A. The private IP address range of an Anypoint VPC is automatically chosen by CloudHub
B. Traffic between Mule applications deployed to an Anypoint VPC and on-premises systems can stay within a private network
C. Each CloudHub environment requires a separate Anypoint VPC
D. VPC peering can be used to link the underlying AWS VPC to an on-premises (non AWS) private network

Suggested answer: B

Explanation:

Traffic between Mule applications deployed to an Anypoint VPC and on-premises systems can stay within a private network.

>> The private IP address range of an Anypoint VPC is NOT automatically chosen by CloudHub; it is chosen by us at the time of creating the VPC, using CIDR blocks.
CIDR block: the size of the Anypoint VPC in Classless Inter-Domain Routing (CIDR) notation. For example, if you set it to 10.111.0.0/24, the Anypoint VPC is granted 256 IP addresses, from 10.111.0.0 to 10.111.0.255. Ideally, the CIDR blocks you choose for the Anypoint VPC come from a private IP space and should not overlap with any other Anypoint VPC's CIDR blocks, or with any CIDR blocks in use in your corporate network.
>> It is NOT true that each CloudHub environment requires a separate Anypoint VPC. Once an Anypoint VPC is created, the same VPC can be used by multiple environments. However, it is generally a recommended practice to have separate Anypoint VPCs for non-production and production environments.
>> Anypoint VPN, NOT VPC peering, is used to link the underlying AWS VPC to an on-premises (non-AWS) private network.

The only true statement in the given choices is that traffic between Mule applications deployed to an Anypoint VPC and on-premises systems can stay within a private network.

https://docs.mulesoft.com/runtime-manager/vpc-connectivity-methods-concept

An API implementation is deployed on a single worker on CloudHub and invoked by external API clients (outside of CloudHub). How can an alert be set up that is guaranteed to trigger AS SOON AS that API implementation stops responding to API invocations?

A. Implement a heartbeat/health check within the API and invoke it from outside the Anypoint Platform and alert when the heartbeat does not respond
B. Configure a 'worker not responding' alert in Anypoint Runtime Manager
C. Handle API invocation exceptions within the calling API client and raise an alert from that API client when the API is unavailable
D. Create an alert for when the API receives no requests within a specified time period

Suggested answer: B

Explanation:

Configure a 'Worker not responding' alert in Anypoint Runtime Manager.

>> All of the options could eventually generate the required alert when the application stops responding.
>> However, handling exceptions within the calling API client and raising an alert from that client is inappropriate: there could be many API clients invoking the API implementation, and it is not realistic to set this up consistently in all of them.
>> Implementing a health check/heartbeat within the API and calling it from outside to determine its health sounds OK, but it needs extra setup, and there is a good chance of false alarms whenever there are intermittent network issues between the external tool and the API implementation; the API implementation itself may be healthy, yet false alarms still go out.
>> Creating an alert in API Manager for when the API receives no requests within a specified time period would generate fairly realistic alerts, but even here false alarms may go out when there are genuinely no requests from API clients.

The best and correct way to achieve this requirement is to set up an alert in Runtime Manager with the condition 'Worker not responding'. This generates an alert AS SOON AS a worker becomes unresponsive.


Refer to the exhibit.

What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?

A. Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications
B. The MuleSoft-hosted Shared Load Balancer can be used to load balance API invocations to the Mule runtimes
C. API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane
D. Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure

Suggested answer: C

Explanation:

API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.

>> We CANNOT use the Shared Load Balancer to load balance APIs on customer-hosted runtimes.
>> For hybrid deployment models, the on-premises runtimes are first connected to Runtime Manager using the Runtime Manager agent, so the connection is initiated from on-premises to Runtime Manager; all control is then exercised from Runtime Manager.
>> Anypoint Runtime Manager CANNOT ensure automatic HA; clusters/server groups must be configured beforehand.

The only TRUE statement in the given choices is that API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane. The references below support this statement.

https://docs.mulesoft.com/runtime-manager/deployment-strategies#hybrid-deployments
https://help.mulesoft.com/s/article/On-Premise-Runtimes-Disconnected-From-US-Control-Plane-June-18th-2018
https://help.mulesoft.com/s/article/Runtime-Manager-cannot-manage-On-Prem-Applications-and-Servers-from-US-Control-Plane-June-25th-2019
https://help.mulesoft.com/s/article/On-premise-Runtimes-Appear-Disconnected-in-Runtime-Manager-May-29th-2018
