Isaca CISA Practice Test - Questions Answers, Page 108

An IS auditor discovers that validation controls in a web application have been moved from the server side into the browser to boost performance. This would MOST likely increase the risk of a successful attack by:

A. structured query language (SQL) injection.
B. buffer overflow.
C. denial of service (DoS).
D. phishing.
Suggested answer: A

Explanation:

Validation controls check user input before it is processed on the server. If these controls are moved from the server side into the browser, users can modify or bypass them with browser developer tools, the JavaScript console, or an intercepting proxy. This most increases the risk of a successful structured query language (SQL) injection attack, a technique that exploits weaknesses in how an application handles input in order to execute arbitrary SQL commands on the underlying database. SQL injection can result in data theft, data corruption, or unauthorized access to the system.

Buffer overflow, denial of service (DoS), and phishing are not directly related to the validation controls in a web application. Buffer overflow is a type of attack that exploits a memory management flaw in an application or system that allows an attacker to write data beyond the allocated buffer size and overwrite adjacent memory locations. DoS is a type of attack that prevents legitimate users from accessing a service or resource by overwhelming it with requests or traffic. Phishing is a type of attack that uses fraudulent emails or websites to trick users into revealing sensitive information or installing malware.
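To illustrate the control that is lost when validation moves to the browser, the following minimal sketch (hypothetical table and field names, using Python's standard sqlite3 module) shows input being re-validated on the server and passed to the database as a bound parameter, so a bypassed browser check cannot turn the value into executable SQL:

```python
import re
import sqlite3

# Server-side whitelist for the (hypothetical) username field.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")

def get_user(conn: sqlite3.Connection, username: str):
    """Validate input on the server, then query with a bound parameter.

    Browser-side checks can be bypassed with developer tools or a proxy,
    so this validation must run on the server even if the same check also
    exists in client-side JavaScript.
    """
    if not USERNAME_RE.match(username):
        raise ValueError("invalid username")  # reject before touching the database
    # Parameter binding keeps the value from being interpreted as SQL.
    cur = conn.execute("SELECT id, email FROM users WHERE username = ?", (username,))
    return cur.fetchone()
```

Client-side checks can be kept for usability, but only the server-side validation and the parameterized query act as security controls.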

Client-side form validation - Learn web development | MDN

JavaScript: client-side vs. server-side validation - Stack Overflow

SQL Injection - OWASP

An IT strategic plan that BEST leverages IT in achieving organizational goals will include:

A. a comparison of future needs against current capabilities.
B. a risk-based ranking of projects.
C. enterprise architecture (EA) impacts.
D. IT budgets linked to the organization's budget.
Suggested answer: C

Explanation:

An IT strategic plan that best leverages IT in achieving organizational goals will include enterprise architecture (EA) impacts. EA is the practice of analyzing, designing, planning, and implementing enterprise analysis to successfully execute on business strategies1. EA helps organizations structure IT projects and policies to align with business goals, stay agile and resilient in the face of rapid change, and stay on top of industry trends and disruptions1. EA also describes an organization's processes, information systems, personnel, and other organizational subunits and how they align with the organization's core goals and strategies2. By including EA impacts in the IT strategic plan, an organization can ensure that IT initiatives are consistent with the business vision, objectives, and tactics, and that they support the desired business outcomes3.

A comparison of future needs against current capabilities, a risk-based ranking of projects, and IT budgets linked to the organization's budget are all important elements of an IT strategic plan, but they do not necessarily leverage IT in achieving organizational goals. A comparison of future needs against current capabilities can help identify gaps and opportunities for improvement, but it does not provide a clear direction or roadmap for how to achieve them. A risk-based ranking of projects can help prioritize the most critical and beneficial projects, but it does not ensure that they are aligned with the business strategy or that they deliver value to the stakeholders. IT budgets linked to the organization's budget can help allocate resources and monitor costs, but they do not reflect the impact or contribution of IT to the business performance or growth.

Implement Agile IT Strategic Planning with Enterprise Architecture - The Open Group Blog

What is enterprise architecture? A framework for transformation | CIO

Strategic Planning and Enterprise Architecture

An organization's security team created a simulated production environment with multiple vulnerable applications. What would be the PRIMARY purpose of creating such an environment?

A. To collect digital evidence of cyberattacks
B. To attract attackers in order to study their behavior
C. To provide training to security managers
D. To test the intrusion detection system (IDS)
Suggested answer: B

Explanation:

The primary purpose of creating a simulated production environment with multiple vulnerable applications is to attract attackers in order to study their behavior. This is a technique known as honeypotting, which is a form of deception security that lures attackers into a fake system or network that mimics the real one but is isolated and monitored1. Honeypotting can help security teams learn about the attackers' methods, tools, motives, and targets, and collect valuable intelligence that can be used to improve the security posture of the organization1. Honeypotting can also help to divert the attackers' attention from the real assets and to waste their time and resources2.

The other options are not the primary purpose of creating a simulated production environment with multiple vulnerable applications. To collect digital evidence of cyberattacks, security teams would need to use forensic tools and techniques that can preserve and analyze the data from the compromised systems or networks3. To provide training to security managers, security teams would need to use simulation tools and scenarios that can test and enhance their skills and knowledge in responding to cyber incidents4. To test the intrusion detection system (IDS), security teams would need to use penetration testing tools and methods that can evaluate the effectiveness and performance of the IDS in detecting and preventing malicious activities5.
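As a rough illustration of the concept (not a production honeypot), the sketch below uses only Python's standard library to listen on an unused port, accept connections, and log the source address and the first bytes sent; the port number and log format are assumptions for the example:

```python
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222  # an otherwise unused, decoy port (assumption)

def run_honeypot() -> None:
    """Accept connections on a decoy port and log who connects and what they send."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(1024)  # capture the first bytes the attacker sends
                print(f"{datetime.now(timezone.utc).isoformat()} "
                      f"connection from {addr[0]}:{addr[1]} sent {data!r}")

if __name__ == "__main__":
    run_honeypot()
```

Real deployments use isolated, closely monitored environments and purpose-built honeypot tooling; the point of the sketch is only that the decoy service records attacker behavior without processing it.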

What is a Honeypot? | Imperva

Honeypots: A sweet solution for identifying intruders | CSO Online

Digital Forensics - an overview | ScienceDirect Topics

Cybersecurity Training & Exercises - Homeland Security

What is Penetration Testing? | Types & Stages | Imperva

A global organization's policy states that all workstations must be scanned for malware each day. Which of the following would provide an IS auditor with the BEST evidence of continuous compliance with this policy?

A. Penetration testing results
B. Management attestation
C. Anti-malware tool audit logs
D. Recent malware scan reports
Suggested answer: C

Explanation:

Anti-malware tool audit logs would provide an IS auditor with the best evidence of continuous compliance with the global organization's policy that states that all workstations must be scanned for malware each day. Anti-malware tool audit logs are records that capture the activities and events related to the anti-malware software installed on the workstations, such as scan schedules, scan results, updates, alerts, and actions taken1. These logs can help the IS auditor to verify that the anti-malware software is functioning properly, that the scans are performed regularly and effectively, and that any malware incidents are detected and resolved in a timely manner2. Anti-malware tool audit logs can also help the IS auditor to identify any gaps or weaknesses in the anti-malware policy or implementation, and to provide recommendations for improvement3.

The other options are not the best evidence of continuous compliance with the anti-malware policy. Penetration testing results are reports that show the vulnerabilities and risks of the workstations and network from an external or internal attacker's perspective4. While penetration testing can help to assess the security posture and resilience of the organization, it does not provide information on the daily anti-malware scans or their outcomes. Management attestation is a statement or declaration from management that they have complied with the anti-malware policy5. While management attestation can demonstrate commitment and accountability, it does not provide objective or verifiable evidence of compliance. Recent malware scan reports are documents that show the summary or details of the latest anti-malware scans performed on the workstations. While recent malware scan reports can indicate the current status and performance of the anti-malware software, they do not provide historical or comprehensive evidence of compliance.
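As an illustration of how an auditor might test continuous compliance from such logs, the sketch below assumes a hypothetical CSV export from the anti-malware console with "workstation" and "scan_date" columns, and lists, per workstation, the days in the audit period with no recorded scan:

```python
import csv
from collections import defaultdict
from datetime import date, timedelta

def find_missed_scans(log_path: str, start: date, end: date) -> dict[str, list[date]]:
    """Return, per workstation, the days in [start, end] with no recorded scan.

    Assumes a CSV export with 'workstation' and 'scan_date' (YYYY-MM-DD)
    columns; real anti-malware consoles vary in log format. Workstations
    absent from the log entirely need a separate comparison against the
    asset inventory.
    """
    scans = defaultdict(set)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            scans[row["workstation"]].add(date.fromisoformat(row["scan_date"]))

    days = [start + timedelta(days=n) for n in range((end - start).days + 1)]
    return {ws: [d for d in days if d not in seen] for ws, seen in scans.items()}

# Example (hypothetical file):
# gaps = find_missed_scans("scan_log.csv", date(2024, 1, 1), date(2024, 1, 31))
```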

Malwarebytes Anti-Malware (MBAM) log collection and threat reports ...

Malicious Behavior Detection using Windows Audit Logs

PCI Requirement 5.2 -- Ensure all Anti-Virus Mechanisms are Current ...

Management Attestation - an overview | ScienceDirect Topics

How to Read a Malware Scan Report | Techwalla

The PRIMARY objective of a control self-assessment (CSA) is to:

A. educate functional areas on risks and controls.
B. ensure appropriate access controls are implemented.
C. eliminate the audit risk by leveraging management's analysis.
D. gain assurance for business functions that cannot be audited.
Suggested answer: A

Explanation:

The primary objective of a control self-assessment (CSA) is to educate functional areas on risks and controls. CSA is a technique that allows managers and work teams directly involved in business units, functions, or processes to participate in assessing the organization's risk management and control processes1. CSA can help functional areas to obtain a clear and shared understanding of their major activities and objectives, to foster an improved awareness of risk and controls among management and staff, to enhance responsibility and accountability for risks and controls, and to highlight best practices and opportunities to improve business performance2.

The other options are not the primary objective of a CSA. Ensuring appropriate access controls are implemented is a specific type of control that may be assessed by a CSA, but it is not the main goal of the technique. Eliminating the audit risk by leveraging management's analysis is not a realistic or desirable outcome of a CSA, as audit risk can never be completely eliminated, and management's analysis may not be sufficient or reliable without independent verification. Gaining assurance for business functions that cannot be audited is not a valid reason for conducting a CSA, as all business functions should be subject to audit, and a CSA is not a substitute for an audit.

Control Self Assessments - PwC

Control self-assessment - Wikipedia

Control Self Assessment - AuditNet

If a source code is not recompiled when program changes are implemented, which of the following is a compensating control to ensure synchronization of source and object?

A. Comparison of object and executable code
B. Review of audit trail of compile dates
C. Comparison of date stamping of source and object code
D. Review of developer comments in executable code
Suggested answer: C

Explanation:

Source code synchronization is the process of ensuring that the source code and the object code (the compiled version of the source code) are consistent and up to date1. When program changes are implemented, the source code should be recompiled to generate a new object code that reflects the changes. However, if the source code is not recompiled, there is a risk that the object code may be outdated or incorrect. A compensating control is a measure that reduces the risk of an existing control weakness or deficiency2. A compensating control for source code synchronization is to compare the date stamping of the source and object code. Date stamping is a method of recording the date and time when a file is created or modified3. By comparing the date stamps of the source and object code, one can verify whether they are synchronized. If the source code's date stamp is newer than the object code's, the source has been changed but not recompiled. If the object code's date stamp is the same as or later than the source code's, the object was compiled after the last source change, which indicates that the two are synchronized.
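A minimal sketch of this comparison (hypothetical file paths; using Python's os module to read last-modified timestamps) might look like the following:

```python
import os

def is_out_of_sync(source_path: str, object_path: str) -> bool:
    """Flag a source file whose last-modified time is newer than its compiled object.

    A newer source timestamp suggests the program was changed but not
    recompiled; an object timestamp at or after the source timestamp is the
    expected, synchronized state.
    """
    return os.path.getmtime(source_path) > os.path.getmtime(object_path)

# Example (hypothetical paths):
# if is_out_of_sync("payroll.cob", "payroll.obj"):
#     print("Source changed after last compile - recompile required")
```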

Which of the following is the MOST important consideration for a contingency facility?

A. The contingency facility has the same badge access controls as the primary site.
B. Both the contingency facility and the primary site have the same number of business assets in their inventory.
C. The contingency facility is located a sufficient distance away from the primary site.
D. Both the contingency facility and the primary site are easily identifiable.
Suggested answer: C

Explanation:

A contingency facility is a backup site that can be used to resume business operations in the event of a disaster or disruption at the primary site. The most important consideration for a contingency facility is that it is located a sufficient distance away from the primary site, so that it is not affected by the same event that caused the disruption. For example, if the primary site is damaged by a fire, flood, earthquake, or terrorist attack, the contingency facility should be in a different geographic area that is unlikely to experience the same hazard. This way, the organization can continue to provide its services and products to its customers and stakeholders without interruption.

The other options are not as important as the location of the contingency facility. The badge access controls, the number of business assets, and the identifiability of the sites are secondary factors that may affect the security and efficiency of the contingency facility, but they are not essential for its functionality. Therefore, option C is the correct answer.
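As a worked illustration of the distance consideration, the sketch below computes the great-circle (haversine) separation between two sites; the coordinates and the 100 km threshold are assumptions for the example, since what counts as a "sufficient" distance depends on the organization's risk assessment:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Example (hypothetical sites and threshold): require at least 100 km of separation
MIN_SEPARATION_KM = 100
primary, contingency = (40.7128, -74.0060), (42.3601, -71.0589)
if haversine_km(*primary, *contingency) < MIN_SEPARATION_KM:
    print("Contingency facility may share regional hazards with the primary site")
```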

The Importance of Contingency Planning

WHO guidance for contingency planning

A transaction processing system interfaces with the general ledger. Data analytics has identified that some transactions are being recorded twice in the general ledger. While management states a system fix has been implemented, what should the IS auditor recommend to validate the interface is working in the future?

A. Perform periodic reconciliations.
B. Ensure system owner sign-off for the system fix.
C. Conduct functional testing.
D. Improve user acceptance testing (UAT).
Suggested answer: A

Explanation:

A transaction processing system (TPS) is a system that captures, processes, and stores data related to business transactions1. A general ledger is a system that records the financial transactions of an organization in different accounts2. An interface is a connection point between two systems that allows data exchange3. A system fix is a change or update to a system that resolves a problem or improves its functionality4.

The IS auditor should recommend performing periodic reconciliations to validate that the interface between the TPS and the general ledger is working in the future. A reconciliation is a process of comparing and verifying the data in two systems to ensure accuracy and consistency1. By performing periodic reconciliations, the IS auditor can detect and correct any errors or discrepancies in the data, such as duplicate transactions, missing transactions, or incorrect amounts. This way, the IS auditor can ensure the reliability and integrity of the financial data in both systems.

The other options are not as effective as periodic reconciliations to validate the interface. System owner sign-off for the system fix is a form of approval that indicates the system owner agrees with the change and its expected outcome4. However, this does not guarantee that the system fix will work as intended or prevent future errors. Conducting functional testing is a process of verifying that the system performs its intended functions correctly and meets its requirements4. However, this is usually done before or after the system fix is implemented, not on an ongoing basis. Improving user acceptance testing (UAT) is a process of evaluating whether the system meets the needs and expectations of the end users4. However, this is also done before or after the system fix is implemented, not on an ongoing basis. Therefore, option A is the correct answer.
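A minimal sketch of such a periodic reconciliation (assuming, hypothetically, that each general ledger entry carries the originating TPS transaction ID) could flag duplicates and mismatches across the interface like this:

```python
from collections import Counter

def reconcile(tps_ids: list[str], gl_ids: list[str]) -> dict[str, list[str]]:
    """Periodic reconciliation sketch: flag duplicates and mismatches across the interface."""
    gl_counts = Counter(gl_ids)
    tps_set = set(tps_ids)
    return {
        "posted_twice_in_gl": [i for i, n in gl_counts.items() if n > 1],
        "missing_from_gl": [i for i in tps_ids if i not in gl_counts],
        "in_gl_but_not_tps": [i for i in gl_counts if i not in tps_set],
    }

# Example:
# reconcile(["T1", "T2", "T3"], ["T1", "T2", "T2", "T4"])
# -> {'posted_twice_in_gl': ['T2'], 'missing_from_gl': ['T3'], 'in_gl_but_not_tps': ['T4']}
```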

Transaction Interface: Organization, Process, and System

Validation of Interfaces - Ensuring Data Integrity and Quality across Systems

Oracle Payments Implementation Guide

Receiving Transactions Inserted Into Interface Table as BATCH And PENDING Are Not Processed By Receiving Transaction Processor

What Is a Transaction Processing System (TPS)? (Plus Types)

Which of the following would the IS auditor MOST likely review to determine whether modifications to the operating system parameters were authorized?

A. Documentation of exit routines
B. System initialization logs
C. Change control log
D. Security system parameters
Suggested answer: C

Explanation:

Operating system parameters are settings or values that affect the behavior or performance of the operating system1. Modifications to the operating system parameters may be necessary to improve the system functionality, security, or efficiency. However, such modifications may also introduce risks or errors that can affect the system stability, compatibility, or integrity. Therefore, modifications to the operating system parameters should be authorized and documented by the appropriate authority2.

A change control log is a record of all changes made to the system, including the date, time, description, reason, authorization, and impact of each change3. A change control log can help the IS auditor to verify whether modifications to the operating system parameters were authorized by comparing the log entries with the actual system settings and the change approval documents4.
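As an illustration of the comparison described above, the sketch below (hypothetical parameter names and log fields) replays approved entries from a change control log over a baseline and flags any current operating system parameter whose value is not explained by an authorized change:

```python
def unauthorized_changes(baseline: dict[str, str], current: dict[str, str],
                         change_log: list[dict]) -> dict[str, str]:
    """Return parameters whose current value is neither the baseline nor an approved change."""
    expected = dict(baseline)
    for entry in sorted(change_log, key=lambda e: e["date"]):  # replay approved changes in date order
        if entry.get("approved"):
            expected[entry["parameter"]] = entry["new_value"]
    return {param: value for param, value in current.items() if expected.get(param) != value}

# Example (hypothetical parameter and log fields):
# baseline = {"max_sessions": "300", "audit_level": "full"}
# log = [{"date": "2024-03-01", "parameter": "max_sessions", "new_value": "500", "approved": True}]
# unauthorized_changes(baseline, {"max_sessions": "800", "audit_level": "full"}, log)
# -> {'max_sessions': '800'}
```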

Which of the following is the GREATEST risk when relying on reports generated by end-user computing (EUC)?

A. Data may be inaccurate.
B. Reports may not work efficiently.
C. Reports may not be timely.
D. Historical data may not be available.
Suggested answer: A

Explanation:

End-user computing (EUC) is a system in which users are able to create working applications outside the divided development process of design, build, test, and release that is typically followed by software engineers1. Examples of EUC tools include spreadsheets, databases, low-code/no-code platforms, and generative AI applications2. EUC tools can provide flexibility, efficiency, and innovation for the users, but they also pose significant risks if not properly managed and controlled3.

The greatest risk when relying on reports generated by EUC is that the data may be inaccurate. Data accuracy refers to the extent to which the data in the reports reflect the true values of the underlying information4. Inaccurate data can lead to erroneous decisions, misleading analysis, unreliable reporting, and compliance violations. Some of the factors that can cause data inaccuracy in EUC reports are:

Lack of rigorous testing: EUC tools may not undergo the same level of testing and validation as IT-developed applications, which can result in errors, bugs, or inconsistencies in the data processing and output3.

Lack of version and change control: EUC tools may not have a clear record of the changes made to them over time, which can create confusion, duplication, or loss of data. Users may also modify or overwrite the data without proper authorization or documentation3.

Lack of documentation and reliance on end-user who developed it: EUC tools may not have sufficient documentation to explain their purpose, functionality, assumptions, limitations, and dependencies. Users may also rely on the knowledge and expertise of the original developer, who may not be available or may not have followed best practices3.

Lack of maintenance processes: EUC tools may not have regular updates, backups, or reviews to ensure their functionality and security. Users may also neglect to delete or archive obsolete or redundant data3.

Lack of security: EUC tools may not have adequate access controls, encryption, or authentication mechanisms to protect the data from unauthorized access, modification, or disclosure. Users may also store or share the data in insecure locations or devices3.

Lack of audit trail: EUC tools may not have a traceable history of the data sources, inputs, outputs, calculations, and transformations. Users may also manipulate or falsify the data without detection or accountability3.

Overreliance on manual controls: EUC tools may depend on human intervention to input, verify, or correct the data, which can introduce errors, delays, or biases. Users may also lack the skills or training to use the EUC tools effectively and efficiently3.

The other options do not represent as great a risk as data inaccuracy when relying on EUC reports. Reports may not work efficiently, reports may not be timely, and historical data may not be available are all potential risks associated with EUC tools, but they are less severe and less frequent than data inaccuracy. Moreover, these risks can be mitigated by improving the performance, scheduling, and storage of the EUC tools. However, data inaccuracy can have a pervasive and lasting impact on the quality and credibility of the reports and the decisions based on them. Therefore, option A is the correct answer.
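As a simple example of the kind of accuracy check an auditor might recommend for EUC-generated reports, the sketch below (hypothetical file and column names) compares a control total from an end-user spreadsheet export against a total obtained from the source system:

```python
import csv

def control_total(csv_path: str, amount_column: str = "amount") -> float:
    """Sum a numeric column from a CSV export (hypothetical file and column names)."""
    with open(csv_path, newline="") as fh:
        return sum(float(row[amount_column]) for row in csv.DictReader(fh))

def euc_report_matches_source(euc_export: str, source_total: float,
                              tolerance: float = 0.01) -> bool:
    """Compare the EUC report's control total against the source system's total."""
    return abs(control_total(euc_export) - source_total) <= tolerance

# Example (hypothetical values):
# euc_report_matches_source("quarterly_report_export.csv", source_total=1_254_300.25)
```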

What is Data Accuracy?

What Is End User Computing (EUC) Risk?

End-user computing

End-User Computing (EUC) Risks: A Comprehensive Guide
