
ISTQB CTFL-2018 Practice Test - Questions Answers, Page 22

A company purchased a new system which deals with all financial transactions in the company. Which test types call for involvement of an expert from the financial department?

A. Component testing
B. Acceptance tests
C. Maintenance testing
D. System tests
Suggested answer: B

Explanation:

Acceptance tests are the test type that calls for involvement of an expert from the financial department for a new system that deals with all financial transactions in a company. Acceptance tests are conducted to determine whether the requirements of a specification or contract are met by a system or software component prior to its delivery or deployment. They are usually performed by end users or customers who have the domain knowledge and expertise to evaluate whether the system meets their needs and expectations. Acceptance testing is commonly described as follows:

Acceptance Testing is a level of software testing where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.

Acceptance Testing is also known as User Acceptance Testing (UAT), End-User Testing, Operational Acceptance Testing (OAT) or Field (Acceptance) Testing.

Acceptance Testing is performed by end users or customers who have domain knowledge and expertise in evaluating if the system meets their needs and expectations.

A, C, and D are incorrect answers. Component testing, maintenance testing, and system testing are not test types that call for involvement of an expert from the financial department for a new system that deals with all financial transactions in a company. Component testing is testing of individual software components in isolation from other components, usually done by developers. Maintenance testing is testing of a modified system or component after changes have been made to it, usually done by testers. System testing is testing of an integrated system as a whole to verify that it meets specified requirements, usually done by testers.

Which of the following is NOT a major responsibility of a tester?

A. Producing interim test reports.
B. Finding the root cause of a defect.
C. Writing the test specification.
D. Reporting and tracking bugs.
Suggested answer: B

Explanation:

Finding the root cause of a defect is not a major responsibility of a tester. Finding the root cause of a defect is usually done by developers, who have access to the source code and can debug it to identify and fix the defect. Testers are responsible for reporting and tracking defects, but not for finding their root causes. This is commonly stated as follows:

The role of testers is to find defects in software products and report them to developers who are responsible for fixing them. Testers do not need to know how to fix defects or find their root causes, as this requires access to the source code and debugging skills that are typically possessed by developers.

A, C, and D are incorrect answers. Producing interim test reports, writing the test specification, and reporting and tracking bugs are major responsibilities of a tester. Producing interim test reports is part of test monitoring and control, which involves measuring and evaluating test progress and quality against objectives and criteria. Writing the test specification is part of test analysis and design, which involves identifying test conditions based on test basis and designing test cases based on test techniques. Reporting and tracking bugs is part of test implementation and execution, which involves logging incidents when observed outcomes deviate from expected outcomes and tracking their status until closure.

An online form has a 'Title' input field. The valid values for this field are: Mr, Ms, Mrs. Which of the following is a correct list of the equivalence classes of the input values for this field?

A. Any one of: Mr, Mrs., Ms; any other input
B. Mr; Ms; Mrs.; no input; any other input
C. Any one of: Mr, Mrs., Ms; no input; any other input
D. Mr; Mrs.; Ms; any other input
Suggested answer: D

Explanation:

Mr; Mrs.; Ms; any other input is the correct list of the equivalence classes of the input values for this field. Equivalence partitioning is a technique to divide the input domain into partitions that are expected to behave similarly or produce the same output. Each partition should have at least one representative value as a test case. Equivalence partitioning is commonly explained as follows:

Equivalence Partitioning (or Equivalence Class Partitioning) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once.

The fundamental concept of Equivalence Partitioning is that any representative value from an equivalence class is expected to find the same errors as any other value from that class, so one test case per class is sufficient.

The input domain for this field can be divided into four partitions: Mr, Mrs., Ms, and any other input. The first three partitions are valid, as they are the only acceptable values for this field. The last partition is invalid, as it includes any value that is not Mr, Mrs., or Ms, such as Dr, Prof, Miss, etc.
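
As an illustration only, here is a minimal Python (pytest-style) sketch that uses one representative value per equivalence class for the 'Title' field; the validate_title() function is a hypothetical checker and not part of any real system under test.

# A minimal sketch (not from the syllabus): one representative value per
# equivalence class for the 'Title' field. validate_title() is a
# hypothetical checker used only to illustrate the partitions.
import pytest

VALID_TITLES = {"Mr", "Mrs", "Ms"}

def validate_title(value):
    """Accept only the titles listed as valid for the form field."""
    return value in VALID_TITLES

@pytest.mark.parametrize("value,expected", [
    ("Mr", True),    # valid partition 1
    ("Mrs", True),   # valid partition 2
    ("Ms", True),    # valid partition 3
    ("Dr", False),   # invalid partition: any other input
])
def test_title_equivalence_classes(value, expected):
    assert validate_title(value) is expected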

Which of the following statements are 'testing general principles'?

I. Exhaustive testing is impossible

II. The defects found during the pre-release tests, or the operational failures, are uniformly distributed across the system's software modules

III. Testing can show the presence of defects, but cannot demonstrate their absence

IV. Testing is context-independent

A. I, III
B. I, II
C. I, IV
D. II, III
Suggested answer: A

Explanation:

I and III are 'testing general principles'. These principles state that exhaustive testing is impossible and that testing can show the presence of defects but cannot demonstrate their absence. They reflect the limitations and objectives of testing software systems and are commonly stated as follows:

Exhaustive testing is impossible: Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.

Testing shows the presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

B, C, and D are incorrect answers. II and IV are not 'testing general principles'; these statements are false or misleading. The defects found during the pre-release tests or the operational failures are not uniformly distributed across the system's software modules (II), as they tend to cluster in certain areas or components. Testing is not context-independent (IV), as it depends on the specific objectives, requirements, risks, and characteristics of the system under test.

Which of the following statements is LEAST likely to be true of non-functional testing?

A. It covers the evaluation of the interaction of various specified components.
B. It tests 'how' the system works.
C. It may include testing the ease of modification of systems.
D. It may be performed at unit, integration, system and acceptance test levels.
Suggested answer: C

Explanation:

Testing the ease of modification of systems is the statement that is least likely to be true of non-functional testing. Non-functional testing is testing of the non-functional aspects of a system or software component, such as performance, usability, reliability, security, etc. Non-functional testing does not usually include testing the ease of modification of systems, as this is more related to maintainability, which is a quality attribute rather than a non-functional requirement. Non-functional testing is commonly defined as follows:

Non-functional testing is defined as a type of software testing that checks the non-functional aspects (performance, usability, reliability, etc.) of a software application. It is designed to test the readiness of a system against non-functional parameters, which are never addressed by functional testing.

Non-functional testing involves testing of non-functional requirements such as load testing, stress testing, security testing, volume testing, recovery testing, etc. One objective of non-functional testing is to check whether the response time of the software or application is quick enough to meet the business requirement.

A, B, and D are incorrect answers. Evaluating the interaction of various specified components (A), testing how the system works (B), and performing at unit, integration, system and acceptance test levels (D) are statements that are likely to be true of non-functional testing. Non-functional testing can cover the interaction of various components in terms of performance, reliability, compatibility, etc. Non-functional testing can test how the system works in terms of speed, efficiency, stability, etc. Non-functional testing can be performed at different test levels depending on the scope and objectives of the test.
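
As a minimal sketch of what a non-functional check can look like in practice, the Python example below measures response time against an assumed 200 ms limit; the search() stub and the threshold are illustrative assumptions only.

# A minimal sketch of one non-functional check: response time measured
# against a threshold. The search() stub and the 200 ms limit are
# illustrative assumptions, not requirements from the syllabus.
import time

def search(term):
    """Stand-in for the operation whose speed is being checked."""
    time.sleep(0.05)  # simulate some processing work
    return [term.upper()]

def test_search_response_time():
    start = time.perf_counter()
    search("istqb")
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Assumed non-functional requirement: respond within 200 ms.
    assert elapsed_ms < 200, f"search took {elapsed_ms:.1f} ms"

test_search_response_time()
print("response-time check passed")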

When a test case is created based on a Use Case, what type of test is created?

A. Structural Test
B. Functional Test
C. Performance Test
D. Regression Test
Suggested answer: B

Explanation:

A functional test is the type of test that is created when a test case is based on a use case. A use case is a description of how a system interacts with one or more actors (users or other systems) to achieve a specific goal or function. A functional test is a test that verifies that a system or software component performs its specified functions according to its requirements. Functional tests can be derived from use cases by identifying test scenarios and test cases that cover the main flow and alternative flows of each use case. This is commonly explained as follows:

Use cases are one of the most commonly used techniques for analyzing and modeling functional requirements for a system. A use case describes how an actor interacts with a system to accomplish a specific goal.

Functional Testing is a type of software testing whereby the system is tested against the functional requirements/specifications. Functions are tested by feeding them input and examining the output.

Use cases can be used as a source for deriving functional tests by identifying test scenarios and test cases that cover the main flow and alternative flows of each use case.

A, C, and D are incorrect answers. Structural test, performance test, and regression test are not types of tests that are created when a test case is based on a use case. Structural test is a type of test that is based on the internal structure and logic of the code rather than the functionality or requirements. Performance test is a type of test that measures the speed, responsiveness, scalability, or stability of a system under various workloads or conditions. Regression test is a type of test that verifies that previously working functionality still works after changes are made to the system or its environment.
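
As an illustration only, the Python sketch below derives one functional test from the main flow and one from an alternative flow of a hypothetical 'User logs in' use case; the AuthService class and its return values stand in for the real system under test.

# A minimal sketch of deriving functional tests from a use case. The
# 'User logs in' use case, the AuthService class and its return values
# are hypothetical and only illustrate mapping flows to test cases.

class AuthService:
    """Toy system under test standing in for the real application."""
    def __init__(self):
        self._users = {"alice": "s3cret"}

    def login(self, user, password):
        if self._users.get(user) == password:
            return "WELCOME_PAGE"      # main flow outcome
        return "ERROR_INVALID_LOGIN"   # alternative flow outcome

# Main flow: valid credentials lead to the welcome page.
def test_login_main_flow():
    assert AuthService().login("alice", "s3cret") == "WELCOME_PAGE"

# Alternative flow: invalid credentials produce an error message.
def test_login_alternative_flow():
    assert AuthService().login("alice", "wrong") == "ERROR_INVALID_LOGIN"

test_login_main_flow()
test_login_alternative_flow()
print("both use-case flows verified")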

In which of the following test documents would you expect to find test exit criteria described?

A. Test plan
B. Project plan
C. Test design specification
D. Requirements specification
Suggested answer: A

Explanation:

The test plan is the test document where you would expect to find test exit criteria described. Test exit criteria are the conditions or requirements that must be met before testing can be completed or stopped. They are usually defined during the test planning and control phase and evaluated during the evaluating exit criteria and reporting phase. A test plan is a document that describes the scope, approach, resources, schedule, risks, metrics, etc., for the testing activities, and it includes the test exit criteria as part of its contents. The two terms are commonly defined as follows:

Test exit criteria: The set of generic and specific conditions for permitting a process to be officially completed; exit criteria are defined for each test level.

Test plan: A document describing the scope, approach, resources and schedule of intended test activities.
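
As an illustration only, the Python sketch below treats two hypothetical exit criteria (minimum statement coverage and no open critical defects) as checkable conditions; the thresholds and measured values are assumptions, not taken from any real test plan.

# A minimal sketch of treating test exit criteria as checkable conditions.
# The criteria, thresholds and measured values are hypothetical, not taken
# from any real test plan.
exit_criteria = {
    "min_statement_coverage": 0.80,   # at least 80% statement coverage
    "max_open_critical_defects": 0,   # no open critical defects
}

measured = {
    "statement_coverage": 0.85,
    "open_critical_defects": 1,
}

def exit_criteria_met(criteria, results):
    coverage_ok = results["statement_coverage"] >= criteria["min_statement_coverage"]
    defects_ok = results["open_critical_defects"] <= criteria["max_open_critical_defects"]
    return coverage_ok and defects_ok

# With one critical defect still open, testing cannot yet be declared complete.
print("Exit criteria met:", exit_criteria_met(exit_criteria, measured))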

Which type of automation test design is used in the example below?

An automation team designs an automation framework for testing their web-based applications. Realizing that they need to use different data for different test cycles, they decide not to hard-code any data in their scripts. Instead, they read all the data from text files while test execution is in progress.

A. Dynamic test design
B. Data-driven
C. Keyword-driven
D. Data coverage analysis
Suggested answer: B

Explanation:

Data-driven is the type of automation test design used in this example. Data-driven testing is a technique to separate the test data from the test scripts and store it in external sources, such as text files, databases, spreadsheets, etc. Data-driven testing allows the test scripts to read the test data from these sources during test execution, which makes the test scripts more reusable and maintainable. Data-driven testing is commonly explained as follows:

Data-driven testing is a software testing methodology that is used in the automation testing framework to store the test data in a table or spreadsheet format. This allows automation engineers to have a single test script that can execute tests for all the test data in the table.

Data-driven testing helps to increase the efficiency of automated testing by reducing the number of test scripts required for different scenarios. It also helps to improve the quality of testing by covering more variations of input data and expected results.

A, C, and D are incorrect answers. Dynamic test design, keyword-driven, and data coverage analysis are not the types of automation test design used in this example. Dynamic test design is a technique to generate test cases based on dynamic analysis of the system behavior or output during test execution. Keyword-driven testing is a technique to create test scripts using keywords that represent actions or commands that can be executed by an automation tool. Data coverage analysis is a technique to measure and evaluate how much of the input domain or data set has been covered by the test cases.
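
As an illustration only, the Python sketch below applies the data-driven idea from this example: one script whose test data is read from an external text (CSV) file at execution time; the file name, its columns and the discount() function are illustrative assumptions.

# A minimal sketch of data-driven test design: one script, with the test
# data kept in an external text (CSV) file that is read during execution.
# The file name, its columns and the discount() function are illustrative
# assumptions, not part of any real automation framework.
import csv

def discount(order_total):
    """Toy function under test: 10% discount on orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

def run_data_driven_tests(data_file="discount_cases.csv"):
    # Each row of the file holds one test case: an input and its expected result.
    with open(data_file, newline="") as handle:
        for row in csv.DictReader(handle):
            actual = discount(float(row["order_total"]))
            expected = float(row["expected"])
            status = "PASS" if abs(actual - expected) < 0.01 else "FAIL"
            print(f"{status}: discount({row['order_total']}) -> {actual}")

# Example contents of discount_cases.csv:
# order_total,expected
# 50,50
# 100,90
# 200,180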

Which of the following is a valid collection of equivalence classes for the following problem: 'An integer numeric field shall contain values from 1 to 80, both values inclusive'?

A. Less than 0, 1 to 79, 80 and more than 80
B. Less than 0, 1 to 80, more than 80
C. Less than 1, 1 to 80, more than 80
D. Less than 1, 1 to 79, more than 80
Suggested answer: B

Explanation:

Less than 0, 1 to 80, more than 80 is a valid collection of equivalence classes for the problem: 'An integer numeric field shall contain values from 1 to 80, both values inclusive'. Equivalence partitioning is a technique to divide the input domain into partitions that are expected to behave similarly or produce the same output. Each partition should have at least one representative value as a test case. Equivalence partitioning is commonly explained as follows:

Equivalence Partitioning (or Equivalence Class Partitioning) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once.

The fundamental concept of Equivalence Partitioning is that any representative value from an equivalence class is expected to find the same errors as any other value from that class, so one test case per class is sufficient.

The input domain for this problem can be divided into three partitions: less than 0, 1 to 80, and more than 80. The first and the last partitions are invalid, as they are outside the range of acceptable values for this field. The middle partition is valid, as it is within the range of acceptable values for this field.

A, C, and D are incorrect answers. A does not include 0 in any partition ('less than 0' excludes it) and needlessly splits the valid range into 1 to 79 and 80. C starts its lower invalid partition at 'less than 1' rather than 'less than 0', which does not match the partitioning described above. D does not include 80 in any partition, so the valid range is not fully covered.
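
As an illustration only, the Python sketch below checks one representative value from each of the three partitions named above (less than 0, 1 to 80, more than 80); the is_valid() function is a hypothetical checker for the field.

# A minimal sketch of the three partitions for the integer field above.
# is_valid() is a hypothetical checker; the representative values are
# arbitrary picks from each partition.
def is_valid(value):
    """The field accepts integers from 1 to 80, both values inclusive."""
    return 1 <= value <= 80

representatives = {
    "less than 0": (-5, False),
    "1 to 80": (40, True),
    "more than 80": (95, False),
}

for name, (value, expected) in representatives.items():
    assert is_valid(value) is expected, f"partition '{name}' failed for {value}"
    print(f"{name}: is_valid({value}) is {expected}")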

Which of the following lists represents the correct sequence of the main activities of the fundamental test process (leaving out the activity of control which should take place in parallel to all the other activities)?

A. Planning, analysis and reporting, design and implementation, execution, test closure activities, evaluating exit criteria.
B. Planning, analysis, design and implementation, execution, logging, test closure activities, evaluating exit criteria.
C. Planning, analysis and design, execution, logging and reporting, regression testing.
D. Planning, analysis and design, implementation and execution, evaluating exit criteria and reporting, test closure activities.
Suggested answer: D