ExamGecko

Salesforce Certified MuleSoft Developer I Practice Test - Questions Answers, Page 4

A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?

A. IP whitelist
B. SLA-based rate limiting
C. OAuth 2.0 token enforcement
D. Client ID enforcement
Suggested answer: B

Explanation:

SLA-based rate limiting.

>> Client ID enforcement is a compliance-related NFR and does not help maintain Quality of Service (QoS). It cannot, and is not meant to, protect backend systems from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement are security-related NFRs and likewise do not help maintain QoS; they are not meant to protect backend systems from scalability challenges either.
>> Rate Limiting, Rate Limiting - SLA, Throttling, and Spike Control are the QoS-related policies, and they are the ones meant to protect backend systems from being overloaded.

Reference: https://dzone.com/articles/how-to-secure-apis
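To illustrate how an SLA-based rate limiting policy shields a backend, here is a minimal Python sketch. The tier names, limits, and the fixed-window algorithm are illustrative assumptions, not how API Manager actually implements the policy:

```python
import time

# Hypothetical SLA tiers (names and per-minute limits are made up).
SLA_TIERS = {
    "free": 10,      # requests per minute
    "premium": 100,  # requests per minute
}

class SlaRateLimiter:
    """Fixed-window rate limiter keyed by client ID and SLA tier.

    This is a QoS mechanism: it caps load on the backend, unlike
    security policies (IP whitelist, OAuth enforcement) which only
    decide *who* may call, not *how often*.
    """

    def __init__(self, tiers, window_seconds=60):
        self.tiers = tiers
        self.window = window_seconds
        self.counters = {}  # client_id -> (window_start, count)

    def allow(self, client_id, tier, now=None):
        now = time.monotonic() if now is None else now
        limit = self.tiers[tier]
        start, count = self.counters.get(client_id, (now, 0))
        if now - start >= self.window:   # new window: reset the counter
            start, count = now, 0
        if count >= limit:               # over quota: reject (HTTP 429)
            self.counters[client_id] = (start, count)
            return False
        self.counters[client_id] = (start, count + 1)
        return True

limiter = SlaRateLimiter(SLA_TIERS)
results = [limiter.allow("app-1", "free", now=0.0) for _ in range(11)]
print(results.count(True))  # 10 -- the 11th request in the window is rejected
```

The backend never sees the rejected request, which is exactly the safeguard the question asks about.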

Refer to the exhibit.

What is a valid API in the sense of API-led connectivity and application networks?

A) Java RMI over TCP

B) Java RMI over TCP

C) CORBA over IIOP

D) XML over HTTP

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: D

Explanation:

XML over HTTP.

>> API-led connectivity and application networks call for APIs built on HTTP-based protocols, which yield the most effective APIs and the networks built on top of them.
>> HTTP-based APIs allow the platform to apply a wide variety of policies to address many NFRs.
>> HTTP-based APIs also allow many standard, effective implementation patterns that adhere to HTTP-based W3C rules.


Refer to the exhibit.

Three business processes need to be implemented, and the implementations need to communicate with several different SaaS applications.

These processes are owned by separate (siloed) LOBs and are mainly independent of each other, but they do share a few business entities. Each LOB has one development team and its own budget.

In this organizational context, what is the most effective approach to choose the API data models for the APIs that will implement these business processes with minimal redundancy of the data models?

A) Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities

B) Build distinct data models for each API to follow established micro-services and Agile API-centric practices

C) Build all API data models using XML schema to drive consistency and reuse across the organization

D) Build one centralized Canonical Data Model (Enterprise Data Model) that unifies all the data types from all three business processes, ensuring the data model is consistent and non-redundant

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: A

Explanation:

Build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.

>> The options about building API data models using XML schema or following Agile API-centric practices are irrelevant to the scenario in the question, so both are invalid.
>> Building an Enterprise Data Model (EDM) is not feasible or a right fit here, as the teams and LOBs work in silos and have different initiatives, budgets, etc. Building an EDM needs intensive coordination among all the teams, which is evidently not possible in this scenario.

So the right fit for this scenario is to build several Bounded Context Data Models that align with coherent parts of the business processes and the definitions of associated business entities.

What best describes the Fully Qualified Domain Names (FQDNs), also known as DNS entries, created when a Mule application is deployed to the CloudHub Shared Worker Cloud?

A. A fixed number of FQDNs are created, IRRESPECTIVE of the environment and VPC design
B. The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region
C. The FQDNs are determined by the application name, but can be modified by an administrator after deployment
D. The FQDNs are determined by both the application name and the Anypoint Platform organization
Suggested answer: B

Explanation:

The FQDNs are determined by the application name chosen, IRRESPECTIVE of the region.

>> When deploying applications to the Shared Worker Cloud, the FQDN is always determined by the application name chosen.
>> It does not matter which region the app is deployed to.
>> Although the generated FQDN does include the region (e.g. exp-salesorder-api.au-s1.cloudhub.io), this does not mean the same name can be reused when deploying to another CloudHub region.
>> The application name must be universally unique, irrespective of region and organization, and it alone determines the FQDN for Shared Load Balancers.
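The naming rule above can be sketched in a few lines of Python (the domain pattern follows the example FQDN in the explanation; treat it as illustrative rather than an exhaustive account of CloudHub's DNS scheme):

```python
def shared_lb_fqdn(app_name: str, region: str) -> str:
    """FQDN the CloudHub Shared Load Balancer derives for a deployed app.

    The region label appears in the hostname, but the application name
    itself must be globally unique across regions and organizations --
    it is the only input the deployer controls.
    """
    return f"{app_name}.{region}.cloudhub.io"

# Reproduces the example FQDN from the explanation above:
print(shared_lb_fqdn("exp-salesorder-api", "au-s1"))
# exp-salesorder-api.au-s1.cloudhub.io
```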

When using CloudHub with the Shared Load Balancer, what is managed EXCLUSIVELY by the API implementation (the Mule application) and NOT by Anypoint Platform?

A. The assignment of each HTTP request to a particular CloudHub worker
B. The logging configuration that enables log entries to be visible in Runtime Manager
C. The SSL certificates used by the API implementation to expose HTTPS endpoints
D. The number of DNS entries allocated to the API implementation
Suggested answer: C

Explanation:

The SSL certificates used by the API implementation to expose HTTPS endpoints.

>> The assignment of each HTTP request to a particular CloudHub worker is handled by Anypoint Platform itself. We need not manage it explicitly in the API implementation; in fact, we cannot.
>> The logging configuration that enables log entries to be visible in Runtime Manager is always managed in the API implementation, not just when using the SLB, so it is not something done exclusively for the SLB.
>> We do not manage the number of DNS entries allocated to the API implementation inside the code; Anypoint Platform takes care of this.

It is the SSL certificates used by the API implementation to expose HTTPS endpoints that must be managed EXCLUSIVELY by the API implementation; Anypoint Platform does not do this when using SLBs.
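In practice this means the Mule application's own HTTPS listener configuration references the keystore holding the SSL certificate. A hedged Mule 4 sketch (the config name, keystore file, property placeholders, and port are illustrative, not mandated values):

```xml
<!-- TLS termination happens inside the Mule app: the keystore below is
     packaged with, and managed by, the API implementation itself. -->
<http:listener-config name="api-httpsListenerConfig">
  <http:listener-connection host="0.0.0.0" port="8082" protocol="HTTPS">
    <tls:context>
      <tls:key-store type="jks"
                     path="keystore.jks"
                     keyPassword="${keystore.key.password}"
                     password="${keystore.password}"/>
    </tls:context>
  </http:listener-connection>
</http:listener-config>
```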

Refer to the exhibit.

What is the best way to decompose one end-to-end business process into a collaboration of Experience, Process, and System APIs?

A) Handle customizations for the end-user application at the Process API level rather than the Experience API level

B) Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs

C) Always use a tiered approach by creating exactly one API for each of the 3 layers (Experience, Process and System APIs)

D) Use a Process API to orchestrate calls to multiple System APIs, but NOT to other Process APIs

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: B

Explanation:

Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs.

>> All customizations for the end-user application should be handled in the Experience API only, not in the Process API.
>> We should use a tiered approach, but not always by creating exactly one API per layer. There may be a single Experience API, but there are often multiple Process and System APIs. System APIs will almost always number more than one, as they are the smallest modular APIs built in front of end systems.
>> Process APIs can call System APIs as well as other Process APIs; there is no anti-pattern in API-led connectivity saying Process APIs must not call other Process APIs.

So the right answer, per API-led connectivity principles, is to allow System APIs to return data that is not currently required by the identified Process or Experience APIs. That way, future Process APIs can make use of that data without the System layer APIs having to be touched again and again.

An API experiences a high rate of client requests (TPS) with small message payloads. How can usage limits be imposed on the API based on the type of client application?

A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type
B. Use a spike control policy that limits the number of requests for each client application type
C. Use a cross-origin resource sharing (CORS) policy to limit resource sharing between client applications, configured by the client application type
D. Use a rate limiting policy and a client ID enforcement policy, each configured by the client application type
Suggested answer: A

Explanation:

Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type.

>> SLA tiers come into play whenever limits are to be imposed on APIs based on the type of client.

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios.

What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?

A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry
B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers
C. Create API Notebooks and include them in the relevant Anypoint Exchange entries
D. Make relevant APIs discoverable via an Anypoint Exchange entry
Suggested answer: C

Explanation:

Create API Notebooks and include them in the relevant Anypoint Exchange entries.

>> API Notebooks are the Anypoint Platform feature that enables code-centric API documentation: consumers can read and execute API client source code demonstrating representative scenarios.

Refer to the exhibit. An organization is running a Mule standalone runtime and has configured Active Directory as the Anypoint Platform external Identity Provider. The organization does not have budget for other system components.

What policy should be applied to all instances of APIs in the organization to most effectively restrict access to a specific group of internal users?

A. Apply a basic authentication - LDAP policy; the internal Active Directory will be configured as the LDAP source for authenticating users
B. Apply a client ID enforcement policy; the specific group of users will configure their client applications to use their specific client credentials
C. Apply an IP whitelist policy; only the specific users' workstations will be in the whitelist
D. Apply an OAuth 2.0 access token enforcement policy; the internal Active Directory will be configured as the OAuth server
Suggested answer: A

Explanation:

Apply a basic authentication - LDAP policy; the internal Active Directory will be configured as the LDAP source for authenticating users.

>> IP whitelisting does not fit this purpose; moreover, the users' workstations may not have static IPs on the network.
>> OAuth 2.0 enforcement requires a client provider, which is not among the organization's system components.
>> It is not an effective approach to have every user create separate client credentials and configure them for their own use.

The effective way is to apply a basic authentication - LDAP policy, with the internal Active Directory configured as the LDAP source for authenticating users.
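To show the mechanics the policy relies on, here is a Python sketch of parsing an HTTP Basic Authentication header. The actual lookup of the extracted credentials against Active Directory/LDAP is out of scope here, and the username and password are made up:

```python
import base64

def parse_basic_auth(header: str) -> tuple[str, str]:
    """Extract (username, password) from an HTTP Basic auth header.

    A basic authentication - LDAP policy performs this decoding and then
    validates the pair against the configured LDAP source (here, the
    organization's internal Active Directory).
    """
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        raise ValueError("not a Basic authentication header")
    # Per RFC 7617, the token is base64("user:password").
    user, _, password = base64.b64decode(token).decode("utf-8").partition(":")
    return user, password

header = "Basic " + base64.b64encode(b"jdoe:s3cret").decode("ascii")
print(parse_basic_auth(header))  # ('jdoe', 's3cret')
```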

What is a best practice when building System APIs?

A. Document the API using an easily consumable asset like a RAML definition
B. Model all API resources and methods to closely mimic the operations of the backend system
C. Build an Enterprise Data Model (Canonical Data Model) for each backend system and apply it to System APIs
D. Expose to API clients all technical details of the API implementation's interaction with the backend system
Suggested answer: B

Explanation:

Model all API resources and methods to closely mimic the operations of the backend system.

>> There are no fixed best practices for choosing data models for APIs; the choice is entirely contextual and depends on a number of factors. Based on those factors, an enterprise can choose an Enterprise (Canonical) Data Model, Bounded Context Models, etc.
>> One should never expose the technical details of the API implementation to API clients; only the API interface/RAML is exposed.
>> It is true that the RAML definitions of APIs should be as detailed as possible and should reflect most of the documentation. But that alone is not enough to call an API well documented; there should be further documentation on Anypoint Exchange, with API Notebooks etc., to create a developer-friendly API and repository.
>> The best practice when creating System APIs is to design their API interfaces by modeling resources and methods that closely reflect the operations and functionality of the backend system.
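As an illustration, a minimal RAML definition for a hypothetical System API whose resources mirror a backend system's own operations might look like this (the API title, resource paths, and descriptions are all invented for the example):

```raml
#%RAML 1.0
title: Customer System API
version: v1

# Resources and methods mirror the backend system's operations,
# without leaking implementation details to API clients.
/customers:
  get:
    description: List customers held in the backend system
  post:
    description: Create a customer record in the backend system
  /{customerId}:
    get:
      description: Retrieve a single customer by its backend identifier
```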

Total 95 questions