ExamGecko

CompTIA CS0-003 Practice Test - Questions Answers, Page 25

During a security test, a security analyst found a critical application with a buffer overflow vulnerability. Which of the following would be best to mitigate the vulnerability at the application level?

A. Perform OS hardening.
B. Implement input validation.
C. Update third-party dependencies.
D. Configure address space layout randomization.
Suggested answer: B

Explanation:

Implementing input validation is the best way to mitigate a buffer overflow vulnerability at the application level. Input validation checks data supplied by users or attackers against a set of rules or constraints, such as data type, length, format, or range, before the application processes it. For a buffer overflow specifically, enforcing a maximum input length ensures that data can never exceed the size of the buffer allocated to hold it. Input validation also prevents related attacks such as SQL injection, cross-site scripting (XSS), and command injection, which exploit unvalidated input to execute malicious code or commands on the server or the client side. By validating input before it is accepted, the application can reject or sanitize malicious or unexpected data.

Reference: How to detect, prevent, and mitigate buffer overflow attacks - Synopsys, How to mitigate buffer overflow vulnerabilities | Infosec
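As an illustration of the idea, the length and format checks described above can be sketched in Python (the field name, size limit, and pattern here are hypothetical examples, not part of the question):

```python
import re

# Hypothetical limit: a username field backed by a fixed-size 64-byte buffer.
MAX_LEN = 64
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]+$")

def validate_username(value: str) -> bool:
    """Reject input that is too long or contains unexpected characters."""
    if not value or len(value) > MAX_LEN:
        return False                                # length check prevents buffer overflow
    return bool(USERNAME_PATTERN.match(value))      # format check blocks injection payloads

print(validate_username("alice_01"))   # True
print(validate_username("A" * 500))    # False: exceeds the buffer size
```

The length check alone addresses the overflow; the character whitelist additionally rejects injection-style payloads, which is why input validation covers several attack classes at once.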

An organization discovered a data breach that resulted in PII being released to the public. During the lessons learned review, the panel identified discrepancies regarding who was responsible for external reporting, as well as the timing requirements. Which of the following actions would best address the reporting issue?

A. Creating a playbook denoting specific SLAs and containment actions per incident type
B. Researching federal laws, regulatory compliance requirements, and organizational policies to document specific reporting SLAs
C. Defining which security incidents require external notifications and incident reporting in addition to internal stakeholders
D. Designating specific roles and responsibilities within the security team and stakeholders to streamline tasks
Suggested answer: B

Explanation:

Researching federal laws, regulatory compliance requirements, and organizational policies to document specific reporting SLAs is the best action to address the reporting issue. Reporting SLAs are service level agreements that specify the time frame and format for notifying the relevant authorities and affected individuals of a data breach. Reporting SLAs vary depending on the type and severity of the breach, the type and location of the data, the industry and jurisdiction of the organization, and the organization's internal policies. By researching and documenting the reporting SLAs for different scenarios, the organization can ensure that it complies with its legal and ethical breach notification obligations and avoid the penalties, fines, or lawsuits that may result from failing to report a breach in a timely and appropriate manner.

Reference: When and how to report a breach: Data breach reporting best practices, Incident and Breach Management

Which of the following would an organization use to develop a business continuity plan?

A. A diagram of all systems and interdependent applications
B. A repository for all the software used by the organization
C. A prioritized list of critical systems defined by executive leadership
D. A configuration management database in print at an off-site location
Suggested answer: C

Explanation:

A prioritized list of critical systems defined by executive leadership is the best input for developing a business continuity plan. A business continuity plan (BCP) is a system of prevention and recovery from potential threats to a company; it ensures that personnel and assets are protected and can resume functioning quickly in the event of a disaster. A BCP should include a business impact analysis, which identifies the critical systems and processes essential to continued business operations and the potential impacts of their disruption. Executive leadership should be involved in defining the critical systems and their priorities, as they have the strategic vision and authority to make decisions that affect the whole organization. A diagram of all systems and interdependent applications, a repository for all the software used by the organization, and a printed configuration management database at an off-site location are all useful tools for documenting and managing the IT infrastructure, but none of them is sufficient on its own to develop a comprehensive BCP.

Reference: What Is a Business Continuity Plan (BCP), and How Does It Work?, Business continuity plan (BCP) in 8 steps, with templates, Business continuity planning | Business Queensland, Understanding the Essentials of a Business Continuity Plan

A security analyst reviews the following results of a Nikto scan:

Which of the following should the security administrator investigate next?

A. tiki
B. phpList
C. shtml.exe
D. sshome
Suggested answer: C

Explanation:

The security administrator should investigate shtml.exe next, as it is a potential vulnerability that allows remote code execution on the web server. The Nikto scan results indicate that the web server is running Apache on Windows and that the shtml.exe file is accessible in the /scripts/ directory. This file is part of the Server Side Includes (SSI) feature, which allows dynamic content generation on web pages. If the SSI feature is not configured properly, it can allow attackers to execute arbitrary commands on the web server by injecting malicious code into the URL or the web page. The security administrator should therefore check the SSI configuration and permissions, and remove or disable the shtml.exe file if it is not needed.

Reference: Nikto-Penetration testing. Introduction, Web application scanning with Nikto
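If the server is indeed Apache and SSI is not required, the remediation described above amounts to a small configuration change; a sketch (the directory path is hypothetical, and an actual fix would also remove or restrict the shtml.exe binary itself):

```apache
# Hypothetical httpd.conf excerpt: stop parsing Server Side Includes
<Directory "C:/inetpub/scripts">
    # Drop Includes from the allowed options so SSI directives are not executed
    Options -Includes
</Directory>

# If an SSI output filter mapping exists elsewhere in the configuration,
# comment it out as well, e.g.:
# AddOutputFilter INCLUDES .shtml
```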

A cybersecurity analyst is doing triage in a SIEM and notices that the time stamps between the firewall and the host under investigation are off by 43 minutes. Which of the following is the most likely scenario occurring with the time stamps?

A. The NTP server is not configured on the host.
B. The cybersecurity analyst is looking at the wrong information.
C. The firewall is using UTC time.
D. The host with the logs is offline.
Suggested answer: A

Explanation:

The most likely scenario is that the NTP server is not configured on the host. NTP, the Network Time Protocol, is used to synchronize the clocks of computers over a network. NTP uses a hierarchical system of time sources in which each level is assigned a stratum number: the most accurate sources, such as atomic clocks or GPS receivers, are at stratum 0; the devices that synchronize with them are at stratum 1; and so on. NTP clients can query multiple NTP servers and use algorithms to select the best time source and adjust their clocks accordingly. If no NTP server is configured on the host, the host relies on its own hardware clock, which drifts over time and becomes inaccurate. This causes discrepancies in the time stamps between the host and other devices on the network, such as the firewall, which may be synchronized with a different NTP server or use a different time zone. Such discrepancies complicate the security analysis and correlation of events, as well as compliance and auditing of the network.

Reference: How the Windows Time Service Works, Time Synchronization - All You Need To Know, Firewall rules logging: a closer look at our new network compliance and ...
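On a Linux host, for example, the fix is a small configuration change; a sketch assuming chrony is the NTP client (the pool name is the common public default):

```
# Hypothetical /etc/chrony.conf excerpt
pool pool.ntp.org iburst    # query a pool of public NTP servers; iburst speeds initial sync
makestep 1.0 3              # step the clock if the offset exceeds 1 s during the first 3 updates
```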

Each time a vulnerability assessment team shares the regular report with other teams, inconsistencies regarding versions and patches in the existing infrastructure are discovered. Which of the following is the best solution to decrease the inconsistencies?

A. Implementing credentialed scanning
B. Changing from a passive to an active scanning approach
C. Implementing a central place to manage IT assets
D. Performing agentless scanning
Suggested answer: C

Explanation:

Implementing a central place to manage IT assets is the best solution to decrease the inconsistencies regarding versions and patches in the existing infrastructure. A central asset management system, such as a configuration management database (CMDB), gives the vulnerability assessment team an accurate, up-to-date inventory of all hardware and software components in the network, along with their relationships and dependencies. A CMDB also tracks the changes and updates made to IT assets and provides a single source of truth that the vulnerability assessment team and other teams can use to compare and verify the versions and patches of the infrastructure. Implementing credentialed scanning, changing from a passive to an active scanning approach, and performing agentless scanning can all improve the vulnerability scanning process, but they do not address the root cause of the inconsistencies, which is the lack of a central place to manage IT assets.

Reference: What is a Configuration Management Database (CMDB)?, How to Use a CMDB to Improve Vulnerability Management, Vulnerability Scanning Best Practices
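As a minimal sketch of why a single source of truth helps, the comparison between scan output and a CMDB inventory can be automated (host names and versions below are fabricated for illustration):

```python
def find_inconsistencies(cmdb: dict, scan: dict) -> dict:
    """Return hosts whose scanned software version differs from the CMDB record."""
    mismatches = {}
    for host, scanned_version in scan.items():
        recorded = cmdb.get(host)           # None if the host is missing from the CMDB
        if recorded != scanned_version:
            mismatches[host] = {"cmdb": recorded, "scan": scanned_version}
    return mismatches

# Hypothetical inventory vs. scan results:
cmdb = {"web01": "apache 2.4.58", "db01": "postgres 15.4"}
scan = {"web01": "apache 2.4.52", "db01": "postgres 15.4"}

print(find_inconsistencies(cmdb, scan))
# {'web01': {'cmdb': 'apache 2.4.58', 'scan': 'apache 2.4.52'}}
```

With every team reading from and writing to the same inventory, this diff shrinks over time instead of reappearing in every report.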

A security analyst has found the following suspicious DNS traffic while analyzing a packet capture:

* DNS traffic while a tunneling session is active.

* The mean time between queries is less than one second.

* The average query length exceeds 100 characters.

Which of the following attacks most likely occurred?

A. DNS exfiltration
B. DNS spoofing
C. DNS zone transfer
D. DNS poisoning
Suggested answer: A

Explanation:

DNS exfiltration is a technique that uses the DNS protocol to transfer data from a compromised network or device to an attacker-controlled server. DNS exfiltration can bypass firewall rules and security products that do not inspect DNS traffic. The characteristics of the suspicious DNS traffic in the question match the indicators of DNS exfiltration, such as:

DNS traffic while a tunneling session is active: This implies that the DNS protocol is being used to create a covert channel for data transfer.

The mean time between queries is less than one second: This implies that the DNS queries are being sent at a high frequency to maximize the amount of data transferred.

The average query length exceeds 100 characters: This implies that the DNS queries are encoding large amounts of data in the subdomains or other fields of the DNS packets.

Reference: https://partners.comptia.org/docs/default-source/resources/comptia-cysa-cs0-002-exam-objectives, https://resources.infosecinstitute.com/topic/bypassing-security-products-via-dns-data-exfiltration/, https://www.reddit.com/r/CompTIA/comments/nvjuzt/dns_exfiltration_explanation/
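The indicators listed above translate directly into a detection heuristic; a minimal sketch (the thresholds come from the question, the traffic sample is fabricated):

```python
def looks_like_exfiltration(timestamps, query_names,
                            max_mean_gap=1.0, min_avg_len=100):
    """Flag a DNS query stream with sub-second mean gaps and very long names."""
    if len(timestamps) < 2:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    avg_len = sum(len(q) for q in query_names) / len(query_names)
    return mean_gap < max_mean_gap and avg_len > min_avg_len

# Long, high-frequency queries typical of DNS tunneling:
ts = [0.0, 0.2, 0.5, 0.9]
qs = ["a" * 120 + ".evil.example" for _ in ts]
print(looks_like_exfiltration(ts, qs))   # True
```

Real detections would also look at entropy of the labels and query volume per client, but even these two thresholds separate tunneled traffic from ordinary lookups.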

While configuring a SIEM for an organization, a security analyst is having difficulty correlating incidents across different systems. Which of the following should be checked first?

A. If appropriate logging levels are set
B. NTP configuration on each system
C. Behavioral correlation settings
D. Data normalization rules
Suggested answer: B

Explanation:

The NTP configuration on each system should be checked first, as accurate and consistent time stamps are essential for correlating events across different systems. NTP, the Network Time Protocol, synchronizes the clocks of computers over a network against a hierarchy of time sources, and NTP clients use algorithms to select the best source and adjust their clocks accordingly. If the NTP configuration is missing, inconsistent, or incorrect on any system, the time stamps of its logs and events will differ from those of the other systems, making it difficult for the SIEM to correlate incidents across them and complicating compliance and auditing of the network.

Reference: How the Windows Time Service Works, Time Synchronization - All You Need To Know, What is SIEM? | Microsoft Security
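Once clocks are synchronized, a related correlation step is normalizing every log's time stamps to a single zone such as UTC; a small sketch (the time stamp format and offsets are hypothetical):

```python
from datetime import datetime, timezone, timedelta

def to_utc(local_ts: str, utc_offset_hours: float) -> datetime:
    """Parse a naive local time stamp and convert it to UTC."""
    naive = datetime.strptime(local_ts, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    return naive.replace(tzinfo=tz).astimezone(timezone.utc)

# The same instant, logged by a firewall in UTC and a host in UTC-5:
firewall_event = to_utc("2024-05-01 17:00:00", 0)
host_event = to_utc("2024-05-01 12:00:00", -5)
print(firewall_event == host_event)   # True: the events line up once normalized
```

SIEM normalization rules do this at ingestion time, but the underlying clocks still have to be in sync for the converted time stamps to be meaningful.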

An analyst is conducting routine vulnerability assessments on the company infrastructure. When performing these scans, a business-critical server crashes, and the cause is traced back to the vulnerability scanner. Which of the following is the cause of this issue?

A. The scanner is running without an agent installed.
B. The scanner is running in active mode.
C. The scanner is segmented improperly.
D. The scanner is configured with a scanning window.
Suggested answer: B

Explanation:

The scanner is running in active mode, which is the cause of this issue. Active mode is a type of vulnerability scanning that sends probes or requests to the target systems to test their responses and identify potential vulnerabilities. Active mode can provide more accurate and comprehensive results, but it also generates more network traffic and can cause performance degradation or system instability. In some cases, active mode can trigger denial-of-service (DoS) conditions or crash the target systems, especially if they are not configured to handle the scanning requests or have underlying vulnerabilities that the scanner's probes exercise. The analyst should therefore use caution when performing active scanning, and avoid scanning business-critical or sensitive systems without proper authorization and preparation.

Reference: Vulnerability Scanning for my Server - Spiceworks Community, Negative Impacts of Automated Vulnerability Scanners and How ... - Acunetix, Vulnerability Scanning Best Practices

An analyst is becoming overwhelmed with the number of events that need to be investigated for a timeline. Which of the following should the analyst focus on in order to move the incident forward?

A. Impact
B. Vulnerability score
C. Mean time to detect
D. Isolation
Suggested answer: A

Explanation:

The analyst should focus on the impact of the events in order to move the incident forward. Impact is the measure of the potential or actual damage caused by an incident, such as data loss, financial loss, reputational damage, or regulatory penalties. Impact helps the analyst prioritize which events to investigate based on their severity and urgency, and allocate the appropriate resources and actions to contain and remediate them. Impact also helps the analyst communicate the status and progress of the incident to stakeholders and customers, and justify the decisions and recommendations made during the incident response. Vulnerability score, mean time to detect, and isolation are all important metrics or actions for incident response, but they are not the main focus for moving the incident forward: vulnerability score rates the likelihood and severity of a vulnerability being exploited, mean time to detect is the average time it takes to discover an incident, and isolation is the process of disconnecting an affected system from the network to prevent further damage or spread.

Reference: Incident Response: Processes, Best Practices & Tools - Atlassian, Incident Response Metrics: What You Should Be Measuring, Vulnerability Scanning Best Practices, How to Track Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) to Cybersecurity Incidents, Isolation and Quarantine for Incident Response
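The prioritization described above can be sketched in a few lines (the impact levels and event records are hypothetical):

```python
# Lower number = investigate first.
IMPACT_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(events: list) -> list:
    """Return events ordered from highest to lowest impact."""
    return sorted(events, key=lambda e: IMPACT_ORDER[e["impact"]])

events = [
    {"id": 1, "impact": "low"},
    {"id": 2, "impact": "critical"},
    {"id": 3, "impact": "medium"},
]
print([e["id"] for e in triage(events)])   # [2, 3, 1]
```

Working the sorted list from the top keeps the timeline moving on the events that matter most instead of on whichever event arrived last.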

Total 368 questions