ISTQB CTFL-2018 Practice Test - Questions Answers, Page 12
Question 111
It is recommended to perform exhaustive tests for covering all combinations of inputs and preconditions.
Explanation:
Exhaustive testing is an approach that covers all possible combinations of inputs and preconditions. However, it is not recommended, because the number of test cases required makes it impractical in all but trivial cases. Instead, risk analysis and priorities should be used to focus the testing effort on the most important and critical aspects of the system under test.
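To see why, a rough back-of-the-envelope count (an illustrative Python sketch with invented field sizes, not data from the syllabus) shows how quickly the number of combinations grows:

```python
from math import prod

# Hypothetical input fields and the number of distinct values each can take.
# Even these modest figures make exhaustive coverage infeasible.
field_value_counts = {
    "age": 120,              # 0-119
    "country_code": 250,     # ISO country codes
    "account_type": 4,
    "currency": 150,
    "opt_in_flags": 2 ** 5,  # five independent boolean flags
}

total_combinations = prod(field_value_counts.values())
print(f"Input combinations to test exhaustively: {total_combinations:,}")
# 576,000,000 combinations; at one test per second this would take nearly
# two decades, which is why risk-based selection is used instead.
```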
Question 112
Refer to the exhibit
Given the following State Transition diagram, match the test cases below with the relevant set of state transitions.
(i) X-Z-V-W
(ii) W-Y-U-U
Explanation:
State transition testing is a technique that uses state transition diagrams as a test basis to derive test cases. A state transition diagram shows the states of a system and the transitions between them, triggered by events or conditions. A test case can cover one or more state transitions, depending on the test objective and coverage criterion. In this question, test cases (i) and (ii) cover the following sets of state transitions:
(i) X-Z-V-W: S1 -- S2 -- S3 -- S4 -- S2
(ii) W-Y-U-U: S4 -- S2 -- S4 -- S4 -- S4
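For illustration, the sketch below (Python, with a transition table inferred from the mapping above rather than from the original exhibit) walks each event sequence through the model and reproduces the state sequences listed:

```python
# Transition table inferred from the answer mapping; the real exhibit may differ.
TRANSITIONS = {
    ("S1", "X"): "S2",
    ("S2", "Z"): "S3",
    ("S3", "V"): "S4",
    ("S4", "W"): "S2",
    ("S2", "Y"): "S4",
    ("S4", "U"): "S4",
}

def walk(start, events):
    """Return the sequence of states visited when firing the given events."""
    states = [start]
    for event in events:
        key = (states[-1], event)
        if key not in TRANSITIONS:
            raise ValueError(f"No transition for event {event!r} in state {states[-1]}")
        states.append(TRANSITIONS[key])
    return states

print(walk("S1", ["X", "Z", "V", "W"]))  # ['S1', 'S2', 'S3', 'S4', 'S2']
print(walk("S4", ["W", "Y", "U", "U"]))  # ['S4', 'S2', 'S4', 'S4', 'S4']
```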
Question 113
A system calculates the amount of customs duty to be paid:
- No duty is paid on goods value up to, and including, $2,000.
- The next $8,000 is taxed at 10%.
- The next $20,000 after that is taxed at 12%.
- Any further amount after that is taxed at 17%.
To the nearest $, which of these groups of numbers fall into three DIFFERENT equivalence classes?
Explanation:
Equivalence partitioning is a technique that divides the input domain of a system into partitions (equivalence classes) that are expected to behave similarly or produce similar outputs. A test case can use one value from each partition, as all values in the same partition are assumed to be equivalent for testing purposes. In this question the partitions are: no duty (up to and including $2,000), 10% duty ($2,001-$10,000), 12% duty ($10,001-$30,000) and 17% duty (above $30,000). The groups of numbers map onto the partitions as follows:
$20,000, $20,001, $30,001 - 12% duty, 12% duty, 17% duty (two different classes)
$2,000, $2,001, $10,000 - no duty, 10% duty, 10% duty (two different classes)
$2,000, $8,000, $20,000 - no duty, 10% duty, 12% duty (three different classes)
$1,500, $2,000, $10,000 - no duty, no duty, 10% duty (two different classes)
Only the group $2,000, $8,000, $20,000 falls into three different equivalence classes.
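For illustration, a small Python sketch of the stated duty rule (the function names are invented) makes the partition boundaries explicit and confirms which class each value falls into:

```python
def customs_duty(value: int) -> float:
    """Duty per the stated rule: 0% up to $2,000, 10% on the next $8,000,
    12% on the next $20,000, 17% on anything above $30,000."""
    duty = 0.0
    duty += max(0, min(value, 10_000) - 2_000) * 0.10
    duty += max(0, min(value, 30_000) - 10_000) * 0.12
    duty += max(0, value - 30_000) * 0.17
    return duty

def duty_class(value: int) -> str:
    """Equivalence class (partition) a goods value belongs to."""
    if value <= 2_000:
        return "no duty"
    if value <= 10_000:
        return "10% duty"
    if value <= 30_000:
        return "12% duty"
    return "17% duty"

# The group covering three different classes:
print([duty_class(v) for v in (2_000, 8_000, 20_000)])
# ['no duty', '10% duty', '12% duty']
```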
Question 114
Which statement BEST describes when test planning should be performed? [K1]
Explanation:
The statement that BEST describes when test planning should be performed is D: test planning is performed continuously in all life cycle processes and activities. Test planning defines the objectives, scope, approach, resources, schedule, risks and deliverables for testing. It is not a one-off activity carried out only at the beginning of the life cycle or at each test level; it continues throughout the life cycle, should be aligned with the development process, and should be updated regularly to reflect changes and feedback from earlier testing activities. It should also consider the different test levels (such as unit, integration, system and acceptance testing) and test types (such as functional, non-functional and regression testing). A detailed explanation of test planning can be found in [A Study Guide to the ISTQB Foundation Level 2018 Syllabus], pages 13-15.
Question 115
Refer to the exhibit
The following test cases need to be run, but time is limited and it is possible that not all will be completed before the end of the test window.
The first activity is to run any re-tests, followed by the regression test script. Users have supplied their priority order for the tests.
Which of the following gives an appropriate test execution schedule, taking account of the prioritisation and other constraints? [K3]
Explanation:
The test execution schedule must take account of the prioritisation and the other constraints given in the question. The first activity is to run any re-tests, followed by the regression test script; the remaining tests are then run in the users' priority order. The re-test of defect no. 52 involves test cases c, a and d, which have priority 1, 2 and 3 respectively, so the schedule starts with c, then a, then d. The regression test script, e, is run next. The remaining test cases then follow in the users' priority order: b, g, i, h and f. The resulting test execution schedule is therefore c, a, d, e, b, g, i, h, f.
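The ordering can be illustrated with a small Python sketch; the phase and priority values below are reconstructed from the explanation above, since the original exhibit is not reproduced here:

```python
# Phase 0 = re-tests, phase 1 = regression script, phase 2 = remaining tests.
tests = [
    {"id": "c", "phase": 0, "priority": 1},
    {"id": "a", "phase": 0, "priority": 2},
    {"id": "d", "phase": 0, "priority": 3},
    {"id": "e", "phase": 1, "priority": 1},
    {"id": "b", "phase": 2, "priority": 1},
    {"id": "g", "phase": 2, "priority": 2},
    {"id": "i", "phase": 2, "priority": 3},
    {"id": "h", "phase": 2, "priority": 4},
    {"id": "f", "phase": 2, "priority": 5},
]

# Sort by phase first (constraints), then by user priority within each phase.
schedule = sorted(tests, key=lambda t: (t["phase"], t["priority"]))
print([t["id"] for t in schedule])  # ['c', 'a', 'd', 'e', 'b', 'g', 'i', 'h', 'f']
```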
Question 116
Which of the following factors will MOST affect the testing effort required to test a software product? [K1]
Explanation:
The testing effort required to test a software product depends on many factors, such as the size and complexity of the product, the quality of the requirements and design documents, the testability of the product, the test strategy and scope, the test environment and tools, the skills and experience of the testers, and the quality expectations and standards of the stakeholders. Among the options given, the requirements for reliability and security in the product will MOST affect the testing effort. Reliability and security are quality attributes that measure how well a software product performs its intended functions under specified conditions and protects itself from unauthorised access or harm. Testing for them requires more rigorous and thorough techniques, such as reliability testing, security testing, penetration testing and stress testing, which in turn demand more time, resources, tools and skills to perform effectively.
Question 117
Which of the following metrics could be used to monitor progress along with test preparation and execution? [K1]
Explanation:
Metrics are quantitative measures that can be used to monitor and control software testing processes and products. They can be used to monitor progress of test preparation and execution by providing information about the status of testing activities such as test planning, test design, test execution, test evaluation and defect management. Among the options given in this question, only C is a suitable metric for this purpose. The failure rate in testing already completed measures how many tests have failed out of the total number of tests executed. It indicates the quality of the software product under test and the effectiveness of the test cases, and helps to identify areas that need more attention or improvement. Therefore, this metric can be used to monitor progress during test preparation and execution.
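As a simple illustration (the figures are invented, not taken from the question), the failure rate metric is just failed tests divided by tests executed:

```python
# Hypothetical execution results used purely to illustrate the metric.
executed = 180   # tests run so far
failed = 27      # of those, tests that failed

failure_rate = failed / executed
print(f"Failure rate in testing already completed: {failure_rate:.1%}")  # 15.0%
```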
Question 118
Test objectives for systems testing of a safety critical system include completion of all outstanding defect correction. Regression testing is required following defect correction at all test levels. Which TWO of the following metrics would be MOST suitable for determining whether the test objective has been met? [K2]
a. Regression tests run and passed in systems testing
b. Incidents closed in systems testing
c. Planned tests run and passed in system testing
d. Planned tests run and passed at all levels of testing
e. Incidents raised and closed at all levels of testing
Explanation:
Test objectives are specific goals or targets that define what testing activities should achieve. Here the test objective for systems testing of a safety-critical system includes completion of all outstanding defect correction, with regression testing required following defect correction at all test levels. To determine whether this objective has been met, the two most suitable metrics are:
Regression tests run and passed in systems testing: this metric measures how many regression tests have been executed and passed in systems testing. Regression testing verifies that previously tested software still performs correctly after changes or defect corrections, so this metric indicates whether all outstanding defect corrections have been completed and verified in systems testing.
Incidents raised and closed at all levels of testing: this metric measures how many incidents have been reported and resolved at all levels of testing. An incident is any event occurring during testing that requires investigation, so this metric indicates whether all defects have been identified and corrected at all levels of testing.
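As an illustration (the figures are invented), the two metrics can be combined into a simple check of whether the objective has been met:

```python
# Hypothetical figures illustrating how the two chosen metrics answer the
# question "has the test objective been met?"
regression_tests_planned = 40
regression_tests_passed = 40

incidents_raised_all_levels = 153
incidents_closed_all_levels = 151

objective_met = (
    regression_tests_passed == regression_tests_planned
    and incidents_closed_all_levels == incidents_raised_all_levels
)
print(objective_met)  # False: two incidents are still open
```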
Question 119
Test script TransVal 3.1 tests transaction validation via screen TRN 003B. According to the specification (PID ver 1.3 10b iv), the validation screen should not accept future-dated transactions. Test script TransVal 3.1 passes. Test script eod 1.4 tests end of day processing and is run after the execution of TransVal 3.1, using data entered during that test.
Which of the following is the BEST detail on an incident report? [K3]
Explanation:
An incident report is a document that records any event occurring during testing that requires investigation. An incident report should contain sufficient information to enable reproduction of the incident and resolution of the defect. According to the IEEE 829 Standard for Software Test Documentation, an incident report should contain the following information:
Identifier: A unique identifier for the incident report
Summary: A brief summary of the incident
Incident description: A description of the incident, including:
Date: The date when the incident was observed
Author: The name of the person who reported the incident
Source: The software or system lifecycle process in which the incident was observed
Test case: The identification of the test case that caused the incident
Execution phase: The phase of test execution when the incident was observed
Environment: The hardware and software environment in which the incident was observed
Description: A description of the anomaly to enable reproduction of the incident
Expected result: The expected result of the test case
Actual result: The actual result of the test case
Reproducibility: An indication of whether the incident can be reproduced or not
Impact analysis: An analysis of the impact of the incident on other aspects of the software or system
Incident resolution: A description of how the incident was resolved, including:
Resolution date: The date when the incident was resolved
Resolver: The name of the person who resolved the incident
Resolution summary: A brief summary of how the incident was resolved
Status: The current status of the incident (e.g., open, closed, deferred)
Classification information: A classification of the cause and effect of the incident for metrics and reporting purposes
Therefore, among the options given in this question, only D provides the best detail on an incident report. It contains a clear title, a reproducibility indicator, a description that includes both the expected and actual results, a reference to the specification document, and a screenshot of the failure. The other options are either missing important information or provide inaccurate or irrelevant details.
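For illustration only, the fields above could be captured in a record such as the following Python sketch; the field names, types and example values are invented and do not represent a standard schema or the wording of option D:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IncidentReport:
    """Illustrative record loosely modelled on the IEEE 829 fields listed above."""
    identifier: str
    summary: str
    date: str
    author: str
    test_case: str
    environment: str
    description: str
    expected_result: str
    actual_result: str
    reproducible: bool
    specification_ref: Optional[str] = None
    attachments: list[str] = field(default_factory=list)
    status: str = "open"

# Example instance with invented values based on the scenario in the question.
report = IncidentReport(
    identifier="INC-001",
    summary="Validation screen TRN 003B accepts a future-dated transaction",
    date="2024-05-14",
    author="Tester name",
    test_case="eod 1.4 (data entered during TransVal 3.1)",
    environment="System test environment",
    description="End of day processing handled a transaction dated in the future.",
    expected_result="Future-dated transactions rejected per PID ver 1.3 10b iv",
    actual_result="Transaction was accepted and processed",
    reproducible=True,
    specification_ref="PID ver 1.3 10b iv",
    attachments=["screenshot_trn003b.png"],
)
print(report.summary)
```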
Question 120
Which TWO of the following test tools would be classified as test execution tools? [K2]
a. Test data preparation tools
b. Test harness
c. Review tools
d. Test comparators
e. Configuration management tools
Explanation:
The test tools that would be classified as test execution tools are D: b and d. Test execution tools automate the execution of test cases or test scripts and compare the actual results with the expected results. They can also record and replay user actions, generate test data and report test results. A test harness and a test comparator are both examples of test execution tools. A test harness creates a test environment for a component or system under test by simulating the required dependencies, such as stubs, drivers or mock objects. A test comparator compares the actual outputs of a component or system under test with the expected outputs and reports any differences or anomalies. A detailed explanation of test execution tools can be found in A Study Guide to the ISTQB Foundation Level 2018 Syllabus, pages 111-112.
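The roles of the two tools can be illustrated with a minimal Python sketch; the component under test, its stubbed dependency and the comparison logic are all invented for the example:

```python
# Minimal sketch of a test harness and a test comparator (illustrative only).

def exchange_rate_stub(currency: str) -> float:
    """Stub standing in for a real rate-service dependency."""
    return {"USD": 1.0, "EUR": 0.9}[currency]

def convert(amount: float, currency: str, rate_lookup) -> float:
    """Component under test: converts an amount using a rate lookup."""
    return round(amount * rate_lookup(currency), 2)

def compare(actual, expected) -> bool:
    """Test comparator: reports whether actual and expected outputs match."""
    if actual != expected:
        print(f"MISMATCH: expected {expected}, got {actual}")
        return False
    return True

# Harness: drive the component with the stubbed dependency and check the result.
result = convert(100.0, "EUR", exchange_rate_stub)
print("PASS" if compare(result, 90.0) else "FAIL")
```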