
Nutanix NCP-DB-6.5 Practice Test - Questions Answers, Page 2


What are two status values that can be set within the Alerts Dashboard? (Choose two.)

A. Auto
B. Resolved
C. Data Resiliency
D. Acknowledged
Suggested answer: B, D

Explanation:

The correct answers are B and D, because these are the two status values that can be set within the Alerts Dashboard in NDB. Resolved means that the alert has been fixed and no longer requires attention. Acknowledged means that the alert has been seen and is being worked on. Option A is incorrect because Auto is not a status value but a mode that automatically resolves alerts based on predefined rules. Option C is incorrect because Data Resiliency is not a status value but a feature that ensures the availability and integrity of the data in NDB.

Nutanix Database Management & Automation (NDMA) course, Module 3: Monitoring Alerts and Storage Usage Within an NDB Implementation, Lesson 3.1: Monitoring Alerts

Nutanix Certified Professional - Database Automation (NCP-DB) v6.5, Knowledge Objectives, Section 3 - Monitor Alerts and Storage Usage Within an NDB Implementation

Nutanix Database Service (NDB) User Guide, Chapter 3: Monitoring Alerts and Storage Usage Within an NDB Implementation, Section 3.1: Monitoring Alerts

Nutanix Support & Insights, Resolving All Alerts Related to a Database or Database Server VM in NDB

What is the purpose of Data Access Management policies in NDB Multi-Cluster?

A. To register multiple Nutanix clusters in NDB
B. To perform snapshot operations on a single Nutanix cluster
C. To manage time machine data availability across all registered Nutanix clusters in NDB
D. To remove data accessibility of a time machine across all registered Nutanix clusters in NDB
Suggested answer: C

Explanation:

Data Access Management (DAM) policies are a feature of NDB Multi-Cluster that lets you control the access and availability of time machine data across different Nutanix clusters. You can use DAM policies to specify which clusters can access the time machine data of a source database, and which clusters can replicate that data for backup or disaster recovery purposes. DAM policies help you optimize storage and network resources and ensure the security and compliance of your database workloads.

The purpose of DAM policies is not to register multiple Nutanix clusters in NDB; that is done with the Add Cluster option on the NDB settings page. Nor is it to perform snapshot operations on a single Nutanix cluster, which is done with the Time Machine feature in the NDB dashboard, or to remove data accessibility of a time machine across all registered clusters, which is done with the Delete option on the Time Machine page.

Reference:

Nutanix Database Management & Automation Training Course, Module 6: Managing NDB Multi-Cluster, Lesson 2: Data Access Management Policies, Slide 3: Data Access Management Policies

Nutanix Certified Professional - Database Automation (NCP-DB) v6.5 Exam, Section 6: Administer an NDB Environment, Objective 6.5: Apply procedural concepts to create Data Access Management (DAM) policies
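The cluster-access control that a DAM policy provides can be pictured with a small data model. This is a minimal illustrative sketch in Python, not NDB's actual implementation; the `DAMPolicy` class and its method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DAMPolicy:
    """Hypothetical model of a Data Access Management policy: which
    registered clusters may access a time machine's data."""
    time_machine: str
    source_cluster: str
    allowed_clusters: set = field(default_factory=set)

    def grant(self, cluster: str) -> None:
        # Make the time machine's data available on another cluster.
        self.allowed_clusters.add(cluster)

    def can_access(self, cluster: str) -> bool:
        # The source cluster always has access to its own time machine data.
        return cluster == self.source_cluster or cluster in self.allowed_clusters

policy = DAMPolicy("finance-tm", source_cluster="cluster-a")
policy.grant("cluster-b")
print(policy.can_access("cluster-b"))  # True
print(policy.can_access("cluster-c"))  # False
```

The point of the sketch is the access check: a clone or snapshot replication request targeting a cluster not covered by the policy would be refused.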

How does NDB send notifications when alerts are generated?

A. SNMP
B. APIs
C. Pulse
D. Email
Suggested answer: D

Explanation:

NDB sends notifications when alerts are generated via email. The email notifications can be configured to send to one or more recipients, and can be customized to include the alert severity, category, description, and resolution steps. The email notifications help to inform the database administrator and other stakeholders about the status and issues of the NDB-managed databases and operations.

NDB does not send notifications via SNMP, APIs, or Pulse. SNMP is a protocol for collecting and organizing information about managed devices on a network. APIs are interfaces for communicating and exchanging data between different applications or systems. Pulse is a feature of the Nutanix cluster that collects and sends diagnostic and usage data to Nutanix for analysis and support.

Nutanix Database Management & Automation Training Course, Module 3: Nutanix Era Deployment, Lesson 3.2: Nutanix Era Deployment, slide 11.

Nutanix Database Management & Automation Training Course, Module 5: Nutanix Era Operations, Lesson 5.1: Nutanix Era Operations, slide 6.

Nutanix Database Management & Automation Training Course, Module 5: Nutanix Era Operations, Lesson 5.2: Nutanix Era Alerts and Notifications, slides 5-7.
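The notification fields the explanation mentions (severity, category, description, resolution steps) can be sketched as a message-building helper. This is an illustrative Python sketch using the standard library, not NDB's actual notification code; `build_alert_email` and its parameters are hypothetical, and actual SMTP delivery is omitted.

```python
from email.message import EmailMessage

def build_alert_email(severity, category, description, resolution, recipients):
    """Compose an alert notification email carrying the alert's
    severity, category, description, and resolution steps."""
    msg = EmailMessage()
    msg["Subject"] = f"[{severity}] {category} alert"
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"Description: {description}\nResolution: {resolution}")
    return msg

msg = build_alert_email("Critical", "Storage", "Log drive at 95% capacity",
                        "Extend the log drive or purge old logs",
                        ["dba@example.com"])
print(msg["Subject"])  # [Critical] Storage alert
```

In practice the message would then be handed to an SMTP server configured for the environment; that step is left out here to keep the sketch self-contained.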

An administrator is tasked with auditing NDB SLAs. What data will the administrator be reviewing?

A. Snapshot schedules
B. Clone Management
C. Data retention policies
D. Recovery Time Objective
Suggested answer: C

Explanation:

NDB SLAs are service level agreements that define the data protection and recovery objectives for NDB-managed databases. They consist of data retention policies that specify how long the snapshots and log backups of a database are kept in the Time Machine. Retention policies can be customized to meet different business and compliance requirements, such as daily, weekly, monthly, or yearly retention periods. SLAs also determine the frequency and schedule of the snapshots and log backups, as well as the storage location and replication options.

An administrator tasked with auditing NDB SLAs will be reviewing the data retention policies of each database and Time Machine, as well as the snapshot and log backup history and status. The administrator can also monitor the storage usage and performance of the SLAs, and modify or delete them as needed.

The other options are not part of the NDB SLAs, but rather separate features or concepts of NDB. Snapshot schedules are the intervals at which NDB takes snapshots of the databases, which are determined by the SLAs. Clone management is the process of creating, refreshing, or deleting database clones from the Time Machine. Recovery Time Objective (RTO) is the maximum acceptable time for restoring a database after a failure, which is influenced by the SLAs but not defined by them.

Reference:

Nutanix Certified Professional - Database Automation (NCP-DB) v6.5, Section 5 - Protect NDB-managed Databases Using Time Machine, Objective 5.1: Create, delete, and modify SLA retention policies

Nutanix Database Management & Automation (NDMA) Course, Module 4: Nutanix Database Service (NDB) Data Protection, Lesson 4.1: Data Protection Overview, Topic: SLA Concepts

Nutanix Database Service (NDB) User Guide, Chapter 6: SLAs, Section: SLA Overview
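The retention logic an auditor would review can be sketched as a small policy check. This is an illustrative Python sketch, not NDB's actual SLA engine; the `RetentionPolicy` fields, and the assumption that weekly snapshots fall on Mondays and monthly snapshots on the 1st, are hypothetical simplifications.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RetentionPolicy:
    daily: int    # keep every snapshot for this many days
    weekly: int   # keep one snapshot per week for this many weeks
    monthly: int  # keep one snapshot per month for this many months

def is_retained(snapshot_day: date, today: date, policy: RetentionPolicy) -> bool:
    age = (today - snapshot_day).days
    if age <= policy.daily:
        return True
    # Assume the weekly snapshot is the one taken on Monday...
    if snapshot_day.weekday() == 0 and age <= policy.weekly * 7:
        return True
    # ...and the monthly snapshot is the one taken on the 1st.
    if snapshot_day.day == 1 and age <= policy.monthly * 31:
        return True
    return False

policy = RetentionPolicy(daily=7, weekly=4, monthly=12)
today = date(2024, 3, 15)
print(is_retained(date(2024, 3, 10), today, policy))  # True  (within daily window)
print(is_retained(date(2024, 2, 1), today, policy))   # True  (monthly snapshot)
print(is_retained(date(2024, 2, 2), today, policy))   # False (expired)
```

Auditing an SLA amounts to checking that the snapshots actually kept in the Time Machine match the windows a policy like this defines.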

A development team has requested that an administrator provide them a copy of the production Finance database. The business requires that any financial data is masked before going into development.

How should the administrator create a clone with masked data for the development environment?

A. From the Time Machine, create a clone and paste the masking commands in the post-clone field of the Pre-Post Commands section.
B. 1. Create a masking script on the source DB VM, Dev VM or SW Profile VM. 2. Create the clone from the Time Machine and define the post-clone option with the full path\name of the masking script.
C. 1. Create a script to mask the data. 2. Create the clone from the Time Machine and define the post-clone option with the full path\name of the masking script.
D. From the Time Machine, create a clone and paste the masking commands in the pre-clone field of the Pre-Post Commands section.
Suggested answer: B

Explanation:

According to the Nutanix Database Automation (NCP-DB) course, the Pre-Post Commands section allows the administrator to specify custom scripts that are executed before or after the clone operation [1]. The masking script can be created on any of the VMs that have access to the source database, such as the source DB VM, the Dev VM, or the SW Profile VM [2]. The script should contain the commands to mask the sensitive data in the Finance database, such as replacing the real values with dummy values or encrypting the data [2]. The administrator can then create the clone from the Time Machine and define the post-clone option with the full path and name of the masking script [1]. This ensures that the script runs after the clone is created, so the data is masked before it is available to the development team [1].

The other options are not correct: they either use the wrong field (pre-clone instead of post-clone) or do not specify where to create or store the masking script.

Reference:

1: Nutanix Database Automation (NCP-DB) course, Module 4: Database Cloning, Lesson 4.4: Pre-Post Commands, slide 5

2: Nutanix Database Automation (NCP-DB) course, Module 4: Database Cloning, Lesson 4.4: Pre-Post Commands, slide 7
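A post-clone masking script of the kind option B describes can be sketched in Python. This is a hypothetical illustration, not a Nutanix-provided script: it masks account numbers in an in-memory SQLite table standing in for the Finance database, keeping only the last four digits (a common masking pattern).

```python
import sqlite3

def mask_accounts(conn):
    """Replace real account numbers with dummy values so the clone
    carries no usable financial data into development."""
    rows = conn.execute("SELECT id, account_no FROM accounts").fetchall()
    for row_id, acct in rows:
        masked = "XXXX-XXXX-" + acct[-4:]   # keep only the last four digits
        conn.execute("UPDATE accounts SET account_no = ? WHERE id = ?",
                     (masked, row_id))
    conn.commit()

# Demonstrate against an in-memory table standing in for the Finance DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, account_no TEXT)")
conn.execute("INSERT INTO accounts (account_no) VALUES ('1234-5678-9012')")
mask_accounts(conn)
print(conn.execute("SELECT account_no FROM accounts").fetchone()[0])
# XXXX-XXXX-9012
```

In the scenario above, a script like this would live on the DB server VM and be named (full path and file name) in the clone's post-clone option, so NDB runs it once the clone is up.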

Refer to the exhibit.

An administrator attempts to provision their first clustered database environment with NDB. The operation fails with the Operation Error shown in the exhibit.

Which database engine was being deployed during this operation?

A. Oracle
B. MySQL
C. Microsoft SQL
D. PostgreSQL
Suggested answer: B

Explanation:

The error message in the exhibit indicates that the operation failed during the "Create and Register Database Server VMs" step because "Provisioning of all the observers simultaneously took more than two hours." This type of error is associated with MySQL, as it involves observers, which are part of MySQL Group Replication, used for ensuring high availability [1]. The other options are not related to the error message, as those engines do not use observers or Group Replication for clustering.

Reference:

1: Nutanix Database Automation (NCP-DB) course, Module 5: Database High Availability, Lesson 5.2: MySQL Group Replication, slide 7

Which two options can NDB leverage to refresh a database clone? (Choose two.)

A. Cerebro logs
B. Snapshots
C. Transaction logs
D. Templates
Suggested answer: B, C

Explanation:

NDB can leverage snapshots and transaction logs to refresh a database clone to the latest state of the source database. Snapshots are point-in-time copies of the database that are stored on the Nutanix cluster. Transaction logs are records of the changes made to the database after the snapshot was taken. NDB can use either snapshots or transaction logs, or a combination of both, to refresh a database clone. Cerebro logs and templates are not used for refreshing database clones. Cerebro logs are used for log catch-up operations, which are different from refresh operations. Templates are used for provisioning new databases, not for refreshing existing ones.

Reference:

Nutanix Database Management & Automation (NDMA) course, Module 4, Lesson 4.3 - Refreshing Clones

Nutanix Support & Insights, Nutanix NDB User Guide v2.5, Clone Database Management
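The snapshot-plus-transaction-log combination can be sketched as a point-in-time selection: restore from the latest snapshot at or before the target time, then replay the logs recorded after it. This is an illustrative Python sketch of the concept, not NDB's refresh implementation; `refresh_clone` and its data shapes are hypothetical.

```python
def refresh_clone(snapshots, tx_logs, target_time):
    """Pick the latest snapshot at or before target_time, then collect
    the transaction logs to replay on top of it up to target_time."""
    base = max((s for s in snapshots if s["time"] <= target_time),
               key=lambda s: s["time"])
    replay = [log for log in tx_logs
              if base["time"] < log["time"] <= target_time]
    return base, replay

snapshots = [{"time": 0, "name": "snap-0"}, {"time": 10, "name": "snap-10"}]
tx_logs = [{"time": t} for t in (11, 12, 13, 14)]
base, replay = refresh_clone(snapshots, tx_logs, target_time=13)
print(base["name"], len(replay))  # snap-10 3
```

Refreshing to exactly a snapshot time needs no log replay at all; refreshing to a point between snapshots needs the base snapshot plus every log up to that point, which is why both sources qualify as answers.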

Which NDB feature collects logs and snapshots from databases?

A. Database Restore
B. Time Machine
C. SLA
D. One-click Patching
Suggested answer: B

Explanation:

The correct answer is B because the Time Machine feature of NDB collects logs and snapshots from databases and stores them in a distributed file system. The Time Machine enables the administrator to protect, clone, and restore databases using the SLA policies and the NDB UI or API. The Time Machine also manages the replication of database snapshots in an NDB multicluster environment.

The other options describe different features or functions of NDB. Option A is not correct because Database Restore is an operation that uses the Time Machine to restore a source database or a clone to a previous point in time. Option C is not correct because SLA is a policy that defines the frequency and retention of database snapshots and logs. Option D is not correct because One-click Patching is a feature that allows the administrator to test, publish, and apply database patches using the NDB UI or API.

Reference:

Nutanix Database Management & Automation (NDMA) course

Nutanix Certified Professional - Database Automation (NCP-DB) certification

Nutanix NCP-DB Certification Exam Syllabus and Study Guide

Nutanix Support & Insights

What happens to the primary member in a MongoDB Server Cluster during the NDB patching process?

A. It is patched last and is restored to its original state.
B. It becomes a read-only member during the patching process.
C. It is skipped during the patching process to ensure no downtime.
D. It is patched first and then becomes a secondary member.
Suggested answer: D

Explanation:

According to the NDB documentation, the NDB patching process for a MongoDB Server Cluster follows these steps [1]:

NDB identifies the primary member of the MongoDB Server Cluster and patches it first.

NDB triggers a failover to elect a new primary member from the remaining secondary members.

NDB patches the former primary member, which becomes a secondary member after the failover.

NDB patches the remaining secondary members one by one.

NDB verifies the patching status and the cluster health.

This process ensures that the MongoDB Server Cluster always has a primary member available to handle write operations, while minimizing the downtime and the impact on the cluster performance.
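The patching sequence above can be sketched as a simple ordering function. This is an illustrative Python sketch, not NDB's patching code; `rolling_patch_order` is a hypothetical helper that only captures the order (primary first, then the secondaries one by one).

```python
def rolling_patch_order(members, primary):
    """Return the order in which cluster members are patched, following
    the steps above: the primary is patched first; after the failover it
    rejoins as a secondary while the remaining members are patched."""
    secondaries = [m for m in members if m != primary]
    return [primary] + secondaries

order = rolling_patch_order(["node-1", "node-2", "node-3"], primary="node-2")
print(order)  # ['node-2', 'node-1', 'node-3']
```

The key property, matching answer D, is that the original primary always appears first in the order and holds only a secondary role for the rest of the process.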

Refer to the exhibit.

A request is received to provision a new Oracle SIHA DB & VM to test ASMLIB on OEL79 and Oracle 19c. When walking through the provisioning workflow, only ASMFD is available in the ASM Driver drop down.

What is necessary to provision the requested SIHA DB and DB VM with ASMLIB?

A. Update the software profile to include the ASMLIB driver.
B. Install ASMLIB on the NDB server.
C. Update the NDB driver config to enable ASMLIB for Oracle.
D. Install ASMLIB on the database server.
Suggested answer: A

Explanation:

In the context of Nutanix Database Automation (NCP-DB), when provisioning a new Oracle SIHA DB & VM, if only ASMFD is available in the ASM Driver drop-down, it indicates that ASMLIB is not included in the current software profile. To provision the requested SIHA DB and DB VM with ASMLIB, it's essential to update the software profile to include the ASMLIB driver. This action will enable ASMLIB as an option in the ASM Driver drop-down during the provisioning workflow.

Nutanix Database Automation (NCP-DB) Course Details, Section 2.3: Provisioning Oracle Databases

Nutanix Database Automation (NCP-DB) Certification Details, Objective 2.3: Provision Oracle Databases

Nutanix Database Automation (NCP-DB) YouTube Playlist, Video 2.3: Provisioning Oracle Databases

Nutanix Database Automation (NCP-DB) User Guide, Section 2.3: Provision Oracle Databases

Total 146 questions