iSQI CT-AI Practice Test - Questions Answers, Page 2


List of questions

Question 11


Which ONE of the following types of coverage SHOULD be used if test cases need to cause each neuron to achieve both positive and negative activation values?


A. Value coverage

B. Threshold coverage

C. Sign change coverage

D. Neuron coverage

Suggested answer: C
Explanation:

Coverage for Neuron Activation Values: Sign change coverage is used to ensure that test cases cause each neuron to achieve both positive and negative activation values. This type of coverage ensures that the neurons are thoroughly tested under different activation states.

Reference: ISTQB_CT-AI_Syllabus_v1.0, Section 6.2 Coverage Measures for Neural Networks, which details different types of coverage measures, including sign change coverage.
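As a rough illustration of the idea, sign change coverage can be sketched as the fraction of neurons that take both a positive and a negative activation value across a test suite. The function name and the list-of-lists activation format below are illustrative assumptions, not part of the syllabus.

```python
# Hypothetical sketch of sign change coverage: a neuron is covered when
# at least one test case drives it positive and another drives it negative.

def sign_change_coverage(activations):
    """activations: list of per-test-case vectors, where
    activations[t][n] is the value of neuron n on test case t.
    Returns the covered fraction in [0, 1]."""
    n_neurons = len(activations[0])
    covered = 0
    for n in range(n_neurons):
        values = [case[n] for case in activations]
        # Covered only if both signs are observed across the test set.
        if any(v > 0 for v in values) and any(v < 0 for v in values):
            covered += 1
    return covered / n_neurons

# Three test cases, two neurons: neuron 0 flips sign, neuron 1 never does.
acts = [[0.5, 0.2], [-0.3, 0.8], [0.1, 0.4]]
print(sign_change_coverage(acts))  # → 0.5
```

A full test suite would aim to drive this measure toward 1.0 for every layer of the network.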

asked 25/12/2024
javier mungaray
43 questions

Question 12


Which ONE of the following describes a situation of back-to-back testing the LEAST?


A. Comparison of the results of a current neural network ML model implemented in platform A (for example, PyTorch) with a similar neural network ML model implemented in platform B (for example, TensorFlow), for the same data.

B. Comparison of the results of a home-grown neural network ML model with the results of a neural network model implemented in a standard implementation (for example, PyTorch), for the same data.

C. Comparison of the results of a neural network ML model with a current decision tree ML model for the same data.

D. Comparison of the results of the current neural network ML model on the current data set with a slightly modified data set.

Suggested answer: C
Explanation:

Back-to-back testing is a method where the same set of tests is run on multiple implementations of the system and their outputs are compared. This type of testing is typically used to ensure consistency and correctness by comparing the outputs of different implementations under identical conditions. Let's analyze the options given:

A. Comparison of the results of a current neural network ML model implemented in platform A (for example, PyTorch) with a similar neural network ML model implemented in platform B (for example, TensorFlow), for the same data.

This option describes a scenario where two different implementations of the same type of model are being compared using the same dataset. This is a typical back-to-back testing situation.

B. Comparison of the results of a home-grown neural network ML model with the results of a neural network model implemented in a standard implementation (for example, PyTorch), for the same data.

This option involves comparing a custom implementation with a standard implementation, which is also a typical back-to-back testing scenario to validate the custom model against a known benchmark.

C . Comparison of the results of a neural network ML model with a current decision tree ML model for the same data.

This option involves comparing two different types of models (a neural network and a decision tree). This is not a typical scenario for back-to-back testing because the models are inherently different and would not be expected to produce identical results even on the same data.

D . Comparison of the results of the current neural network ML model on the current data set with a slightly modified data set.

This option involves comparing the outputs of the same model on slightly different datasets. This could be seen as a form of robustness testing or sensitivity analysis, but not typical back-to-back testing as it doesn't involve comparing multiple implementations.

Based on this analysis, option C is the one that describes a situation of back-to-back testing the least because it compares two fundamentally different models, which is not the intent of back-to-back testing.
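The core mechanic of back-to-back testing can be sketched as running identical inputs through two implementations of the same specification and flagging disagreements. The two toy functions below are stand-ins for, say, a PyTorch and a TensorFlow implementation; all names and the tolerance value are illustrative assumptions.

```python
# Hypothetical back-to-back testing sketch: two implementations of the
# same specification (f(x) = 2x + 1) are compared on identical inputs.

def model_impl_a(x):
    return 2 * x + 1          # reference implementation

def model_impl_b(x):
    return x + x + 1          # alternative implementation, same spec

def back_to_back(inputs, impl_a, impl_b, tol=1e-9):
    """Return the inputs on which the two implementations disagree
    by more than the tolerance."""
    return [x for x in inputs if abs(impl_a(x) - impl_b(x)) > tol]

mismatches = back_to_back([0.0, 1.5, -2.0], model_impl_a, model_impl_b)
print(mismatches)  # → [] (the implementations agree on every input)
```

Note that this only makes sense when both implementations are expected to produce (near-)identical outputs, which is why comparing a neural network against a decision tree (option C) falls outside the technique.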


Question 13


Which ONE of the following options does NOT describe an AI technology-related characteristic that differentiates AI test environments from other test environments?


A. Challenges resulting from low accuracy of the models.

B. The challenge of mimicking undefined scenarios generated due to self-learning.

C. The challenge of providing explainability to the decisions made by the system.

D. Challenges in the creation of scenarios of human handover for autonomous systems.

Suggested answer: D
Explanation:

AI test environments have several unique characteristics that differentiate them from traditional test environments. Let's evaluate each option:

A . Challenges resulting from low accuracy of the models.

Low accuracy is a common challenge in AI systems, especially during initial development and training phases. Ensuring the model performs accurately in varied and unpredictable scenarios is a critical aspect of AI testing.

B . The challenge of mimicking undefined scenarios generated due to self-learning.

AI systems, particularly those that involve machine learning, can generate undefined or unexpected scenarios due to their self-learning capabilities. Mimicking and testing these scenarios is a unique challenge in AI environments.

C . The challenge of providing explainability to the decisions made by the system.

Explainability, or the ability to understand and articulate how an AI system arrives at its decisions, is a significant and unique challenge in AI testing. This is crucial for trust and transparency in AI systems.

D . Challenges in the creation of scenarios of human handover for autonomous systems.

While important, the creation of scenarios for human handover in autonomous systems is not a characteristic unique to AI test environments. It is more related to the operational and deployment challenges of autonomous systems than to the intrinsic technology-related characteristics of AI.

Given the above points, option D is the correct answer because it describes a challenge related to operational deployment rather than a technology-related characteristic unique to AI test environments.


Question 14


Which ONE of the following combinations of Training, Validation, Testing data is used during the process of learning/creating the model?


A. Training data - validation data - test data

B. Training data - validation data

C. Training data - test data

D. Validation data - test data

Suggested answer: A
Explanation:

The process of developing a machine learning model typically involves the use of three types of datasets:

Training Data: This is used to train the model, i.e., to learn the patterns and relationships in the data.

Validation Data: This is used to tune the model's hyperparameters and to prevent overfitting during the training process.

Test Data: This is used to evaluate the final model's performance and to estimate how it will perform on unseen data.

Let's analyze each option:

A . Training data - validation data - test data

This option correctly includes all three types of datasets used in the process of creating and validating a model. The training data is used for learning, validation data for tuning, and test data for final evaluation.

B . Training data - validation data

This option misses the test data, which is crucial for evaluating the model's performance on unseen data after the training and validation phases.

C . Training data - test data

This option misses the validation data, which is important for tuning the model and preventing overfitting during training.

D . Validation data - test data

This option misses the training data, which is essential for the initial learning phase of the model.

Therefore, the correct answer is A because it includes all necessary datasets used during the process of learning and creating the model: training, validation, and test data.
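A minimal sketch of the three-way split described above is shown below. The function name, the 70/15/15 proportions, and the fixed seed are illustrative assumptions; real projects typically use a library helper such as scikit-learn's `train_test_split` instead.

```python
# Hypothetical three-way split: training data for learning, validation
# data for hyperparameter tuning, test data for the final evaluation.
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)  # shuffle a copy, reproducibly
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]     # remainder goes to training
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))  # → 70 15 15
```

The three subsets are disjoint and together cover the whole dataset, which is exactly the property the question's correct answer relies on.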


Question 15


Which ONE of the following options BEST DESCRIBES clustering?


A. Clustering is classification of a continuous quantity.

B. Clustering is supervised learning.

C. Clustering is done without prior knowledge of output classes.

D. Clustering requires you to know the classes.

Suggested answer: C
Explanation:

Clustering is a type of machine learning technique used to group similar data points into clusters. It is a key concept in unsupervised learning, where the algorithm tries to find patterns or groupings in data without prior knowledge of output classes. Let's analyze each option:

A . Clustering is classification of a continuous quantity.

This is incorrect. Classification typically involves discrete categories, whereas clustering involves grouping similar data points. Classification of continuous quantities is generally referred to as regression.

B . Clustering is supervised learning.

This is incorrect. Clustering is an unsupervised learning technique because it does not rely on labeled data.

C . Clustering is done without prior knowledge of output classes.

This is correct. In clustering, the algorithm groups data points into clusters without any prior knowledge of the classes. It discovers the inherent structure in the data.

D . Clustering requires you to know the classes.

This is incorrect. Clustering does not require prior knowledge of classes. Instead, it aims to identify and form the classes or groups based on the data itself.

Therefore, the correct answer is C because clustering is an unsupervised learning technique done without prior knowledge of output classes.


Question 16


Which ONE of the following options is an example that BEST describes a system with AI-based autonomous functions?


A. A system that utilizes human beings for all important decisions.

B. A fully automated manufacturing plant that uses no software.

C. A system that utilizes a tool like Selenium.

D. A system that is fully able to respond to its environment.

Suggested answer: D
Explanation:

AI-Based Autonomous Functions: An AI-based autonomous system is one that can respond to its environment without human intervention. The other options either involve human decisions or do not use AI at all.

Reference: ISTQB_CT-AI_Syllabus_v1.0, Sections on Autonomy and Testing Autonomous AI-Based Systems.


Question 17


Which of the following is the LEAST appropriate test to be performed for testing a feature related to autonomy?


A. Test for human handover to give rest to the system.

B. Test for human handover when it should actually not be relinquishing control.

C. Test for human handover requiring mandatory relinquishing of control.

D. Test for human handover after a given time interval.

Suggested answer: B
Explanation:

Testing Autonomy: Testing for human handover when it should not be relinquishing control is the least appropriate because it contradicts the very definition of autonomous systems. The other tests are relevant to ensuring smooth operation and transitions between human and AI control.

Reference: ISTQB_CT-AI_Syllabus_v1.0, Sections on Testing Autonomous AI-Based Systems and Testing for Human-AI Interaction.


Question 18


'AllerEgo' is a product that uses self-learning to predict the behavior of a pilot under combat situations for a variety of terrains and enemy aircraft formations. After training, the model was exposed to real-world data and was found to be behaving poorly. A lot of data quality tests had been performed on the data to bring it into a shape fit for training and testing.

Which ONE of the following options LEAST likely describes the possible reason for the fall in performance, especially considering the self-learning nature of the AI system?

A. The difficulty of defining criteria for improvement before the model can be accepted. Defining criteria for improvement is a challenge in the acceptance of AI models, but it is not directly related to the performance drop in real-world scenarios; it concerns the evaluation and deployment phase rather than the model's real-time performance post-deployment.

B. The fast pace of change did not allow sufficient time for testing. This can significantly affect the model's performance: if the system is self-learning, it needs to adapt quickly, and insufficient testing time can lead to incomplete learning and poor performance.

C. The unknown nature and insufficient specification of the operating environment might have caused the poor performance. This is highly likely to affect performance. Self-learning AI systems require detailed specifications of the operating environment to adapt and learn effectively; if the environment is insufficiently specified, the model may fail to perform accurately in real-world scenarios.

D. There was an algorithmic bias in the AI system. Algorithmic bias can significantly impact the performance of AI systems: if the model has biases, it will not perform well across different scenarios and data distributions.

Given the self-learning nature of the system and the need for real-time adaptability, option A is the least likely to describe the fall in performance, because it deals with acceptance criteria rather than real-time performance issues.

Suggested answer: A

Question 19


'In the near future, technology will have evolved, and AI will be able to learn multiple tasks by itself without needing to be retrained, allowing it to operate even in new environments. The cognitive abilities of this AI are similar to those of a child of 1-2 years.'

In the above quote, which ONE of the following options is the correct name of this type of AI?


A. Technological singularity

B. Narrow AI

C. Super AI

D. General AI

Suggested answer: D
Explanation:

A. Technological singularity

Technological singularity refers to a hypothetical point in the future when AI surpasses human intelligence and can continuously improve itself without human intervention. This scenario involves capabilities far beyond those described in the question.

B. Narrow AI

Narrow AI, also known as weak AI, is designed to perform a specific task or a narrow range of tasks. It does not have general cognitive abilities and cannot learn multiple tasks by itself without retraining.

C. Super AI

Super AI refers to an AI that surpasses human intelligence and capabilities across all fields. This is an advanced concept and not aligned with the description of having cognitive abilities similar to a young child.

D. General AI

General AI, or strong AI, has the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human cognitive abilities. It aligns with the description of AI that can learn multiple tasks and operate in new environments without needing retraining.


Question 20


An image classification system is being trained for classifying faces of humans. The distribution of the data is 70% ethnicity A and 30% for ethnicities B, C and D. Based ONLY on the above information, which of the following options BEST describes the situation of this image classification system?


A. This is an example of expert system bias.

B. This is an example of sample bias.

C. This is an example of hyperparameter bias.

D. This is an example of algorithmic bias.

Suggested answer: B
Explanation:

A . This is an example of expert system bias.

Expert system bias refers to bias introduced by the rules or logic defined by experts in the system, not by the data distribution.

B . This is an example of sample bias.

Sample bias occurs when the training data is not representative of the overall population that the model will encounter in practice. In this case, the over-representation of ethnicity A (70%) compared to B, C, and D (30%) creates a sample bias, as the model may become biased towards better performance on ethnicity A.

C . This is an example of hyperparameter bias.

Hyperparameter bias relates to the settings and configurations used during the training process, not the data distribution itself.

D . This is an example of algorithmic bias.

Algorithmic bias refers to biases introduced by the algorithmic processes and decision-making rules, not directly by the distribution of training data.

Based on the provided information, option B (sample bias) best describes the situation because the training data is skewed towards ethnicity A, potentially leading to biased model performance.
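A simple distribution check makes the sample-bias diagnosis mechanical: compare each class's share of the training data against a target (here, uniform) share and flag over-represented classes. The function name, the tolerance, and the 10%-each split of ethnicities B, C, and D are illustrative assumptions (the question only states 30% combined).

```python
# Hypothetical sample-bias check: flag classes whose share of the data
# exceeds a uniform share by more than a tolerance.
from collections import Counter

def over_represented(labels, tolerance=0.05):
    """Return the sorted list of classes whose share of the data
    exceeds a uniform share by more than the tolerance."""
    counts = Counter(labels)
    total = len(labels)
    uniform = 1 / len(counts)  # target share if classes were balanced
    return sorted(c for c, n in counts.items()
                  if n / total - uniform > tolerance)

# 70% ethnicity A vs (assumed) 10% each for B, C, and D.
sample = ["A"] * 70 + ["B"] * 10 + ["C"] * 10 + ["D"] * 10
print(over_represented(sample))  # → ['A']
```

In practice a skew like this would be addressed by collecting more data for the under-represented classes or by re-sampling/re-weighting during training.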

Total 80 questions