Salesforce Certified MuleSoft Platform Architect I Practice Test - Questions Answers, Page 2
List of questions
Question 11

In an organization, the InfoSec team is investigating Anypoint Platform-related data traffic.
From where does most of the data available to Anypoint Platform for monitoring and alerting originate?
From the Mule runtime, irrespective of the deployment model.
>> Monitoring and alerting metrics always originate from Mule runtimes, irrespective of the deployment model.
>> It may seem that some metrics (Runtime Manager) originate from the Mule runtime while others (API invocations / API Analytics) originate from API Manager. However, this is not true: API Manager is only a management tool for API instances, and all policies applied to APIs are ultimately executed on Mule runtimes (either embedded or via an API proxy).
>> Similarly, all API implementations also run on Mule runtimes.
So most of the data required for monitoring and alerting originates from Mule runtimes, irrespective of whether the deployment model is MuleSoft-hosted, customer-hosted, or hybrid.
Question 12

When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because that downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API.
Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?
An SLA for the upstream API CANNOT be provided.
>> First things first: the default HTTP response timeout for the HTTP connector is 10000 ms (10 seconds), NOT 500 ms.
>> The Mule runtime does NOT apply any 'load-dependent' timeouts; no such behavior currently exists in Mule.
>> Because the HTTP connector has a default 10000 ms timeout, we cannot guarantee that every invocation of the downstream API will run to completion without timing out, given its unreliable response times. If a response takes longer than 10 seconds, the request may time out.
The main impact of this advice is that a proper SLA for the upstream API CANNOT be provided.
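To make the trade-off concrete, here is a minimal Mule 4 sketch (not part of the question) of how a response timeout would be set explicitly on the HTTP requester; the host, path, and flow name are hypothetical. If responseTimeout is omitted, the connector falls back to the 10000 ms default described above.

<!-- Sketch only: hypothetical downstream host, path, and flow name.
     Assumes the Mule 4 HTTP connector namespace (xmlns:http) is declared on the <mule> root. -->
<http:request-config name="Downstream_API_Config">
    <http:request-connection host="downstream.example.com" port="443" protocol="HTTPS"/>
</http:request-config>

<flow name="upstream-invokes-downstream">
    <!-- responseTimeout is in milliseconds; when omitted, the default of 10000 ms (10 s) applies -->
    <http:request method="GET" path="/quotes"
                  config-ref="Downstream_API_Config"
                  responseTimeout="30000"/>
</flow>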
Question 13

What best explains the use of auto-discovery in API implementations?
It makes API Manager aware of API implementations and hence enables it to enforce policies.
>> API autodiscovery is a mechanism for managing an API from API Manager by pairing the deployed application with an API created on the platform.
>> API management includes tracking, enforcing any policies you apply, and reporting API analytics.
>> Critical to the autodiscovery process is identifying the API by providing the API name and version.
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
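For illustration, a minimal Mule 4 sketch of the autodiscovery element that pairs a deployed flow with an API instance created in API Manager; the apiId property and flow name are hypothetical and would come from your own API Manager instance.

<!-- Sketch only: assumes the api-gateway namespace
     (http://www.mulesoft.org/schema/mule/api-gateway) is declared on the <mule> root,
     and that api.id is supplied as a deployment property. -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="my-api-main-flow"/>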
Question 14

What should be ensured before sharing an API through a public Anypoint Exchange portal?
The visibility level of the API instances of that API that need to be publicly accessible should be set to public visibility.
https://docs.mulesoft.com/exchange/to-share-api-asset-to-portal
Question 15

Refer to the exhibit.
A RAML definition has been proposed for a new Promotions Process API, and has been published to Anypoint Exchange.
The Marketing Department, who will be an important consumer of the Promotions API, has important requirements and expectations that must be met.
What is the most effective way to use Anypoint Platform features to involve the Marketing Department in this early API design phase?
A) Ask the Marketing Department to interact with a mocking implementation of the API using the automatically generated API Console
B) Organize a design workshop with the DBAs of the Marketing Department in which the database schema of the Marketing IT systems is translated into RAML
C) Use Anypoint Studio to implement the API as a Mule application, then deploy that API implementation to CloudHub and ask the Marketing Department to interact with it
D) Export an integration test suite from API Designer and have the Marketing Department execute the tests in that suite to ensure they pass
Ask the Marketing Department to interact with a mocking implementation of the API using the automatically generated API Console.
As per MuleSoft's IT Operating Model:
>> API consumers need NOT wait until the full API implementation is ready.
>> NO technical test suites need to be shared with end users for them to interact with APIs.
>> Anypoint Platform offers a mocking capability for every API specification published to Anypoint Exchange, which will also be rich in documentation covering all details of the API's functionality and behavior.
>> There is no need to arrange days of workshops with end users for feedback.
API consumers can use Anypoint Exchange features on the platform and interact with the API through its mocking feature. Feedback can be shared quickly on the same specification to incorporate any changes.
Question 16

Refer to the exhibit.
What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?
API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane.
>> We CANNOT use the shared load balancer to load balance APIs on customer-hosted runtimes.
>> For hybrid deployment models, the on-premises runtimes are first connected to Runtime Manager using the Runtime Manager agent. The connection is therefore initiated from on-premises to Runtime Manager, and only then can everything be controlled from Runtime Manager.
>> Anypoint Runtime Manager CANNOT ensure automatic HA; clusters, server groups, etc. must be configured beforehand.
The only TRUE statement among the given choices is that API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane. The references below justify this statement.
https://docs.mulesoft.com/runtime-manager/deployment-strategies#hybrid-deployments
https://help.mulesoft.com/s/article/On-Premise-Runtimes-Disconnected-From-US-Control-Plane-June-18th-2018
https://help.mulesoft.com/s/article/Runtime-Manager-cannot-manage-On-Prem-Applications-and-Servers-from-US-Control-Plane-June-25th-2019
https://help.mulesoft.com/s/article/On-premise-Runtimes-Appear-Disconnected-in-Runtime-Manager-May-29th-2018
Question 17

A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?
SLA-based rate limiting.
>> The Client ID enforcement policy addresses a 'compliance'-related NFR and does not help maintain quality of service (QoS). It cannot, and is not meant to, protect backend systems from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement address 'security'-related NFRs and likewise do not help maintain QoS. They cannot, and are not meant to, protect backend systems from scalability challenges.
Rate Limiting, Rate Limiting - SLA-Based, Throttling, and Spike Control are the policies that address 'quality of service (QoS)' NFRs and are meant to protect backend systems from being overloaded.
https://dzone.com/articles/how-to-secure-apis
Question 18

Refer to the exhibit.
What is a valid API in the sense of API-led connectivity and application networks?
A) Java RMI over TCP
B) XML over HTTP
C) CORBA over IIOP
D) XML over UDP
XML over HTTP.
>> API-led connectivity and application networks call for APIs built on HTTP-based protocols in order to build the most effective APIs and networks on top of them.
>> HTTP-based APIs allow the platform to apply a wide variety of policies to address many NFRs.
>> HTTP-based APIs also allow many standard and effective implementation patterns that adhere to HTTP-based W3C rules.
Question 19

Refer to the exhibit.
Three business processes need to be implemented, and the implementations need to communicate with several different SaaS applications.
These processes are owned by separate (siloed) LOBs and are mainly independent of each other, but do share a few business entities. Each LOB has one development team and its own budget.
In this organizational context, what is the most effective approach to choose the API data models for the APIs that will implement these business processes with minimal redundancy of the data models?
A) Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities
B) Build distinct data models for each API to follow established micro-services and Agile API-centric practices
C) Build all API data models using XML schema to drive consistency and reuse across the organization
D) Build one centralized Canonical Data Model (Enterprise Data Model) that unifies all the data types from all three business processes, ensuring the data model is consistent and non-redundant
Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.
>> The options about building API data models using XML schema or following established microservices and Agile API-centric practices are irrelevant to the scenario given in the question, so these two are INVALID.
>> Building an EDM (Enterprise Data Model) is not feasible or the right fit for this scenario, because the teams and LOBs work in silos and all have different initiatives, budgets, etc. Building an EDM requires intensive coordination among all the teams, which is evidently not possible in this scenario.
So the right fit for this scenario is to build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.
Question 20

What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?
The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region.
>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.
>> It does NOT matter which region the app is being deployed to.
>> Although it is true that the generated FQDN includes the region (e.g. exp-salesorder-api.au-s1.cloudhub.io), this does NOT mean the same name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it alone determines the FQDN for the shared load balancers.