
iSQI Exam CT-AI Topic 9 Question 22 Discussion

Actual exam question for iSQI's CT-AI exam
Question #: 22
Topic #: 9

A wildlife conservation group would like to use a neural network to classify images of different animals. The algorithm is going to be used on a social media platform to automatically pick out pictures of the chosen animal of the month. This month's animal is set to be a wolf. The test team has already observed that the algorithm could classify a picture of a dog as being a wolf because of the similar characteristics between dogs and wolves. To handle such instances, the team is planning to train the model with additional images of wolves and dogs so that the model is able to better differentiate between the two.

What test method should you use to verify that the model has improved after the additional training?

A. Metamorphic testing
B. Adversarial testing
C. Pairwise testing
D. Back-to-back testing

Suggested Answer: D

Back-to-back testing is used to compare two different versions of an ML model, which is precisely what is needed in this scenario.

The model initially misclassified dogs as wolves due to feature similarities.

The test team retrains the model with additional images of dogs and wolves.

The best way to verify whether this additional training improved classification accuracy is to compare the original model's output with the newly trained model's output using the same test dataset.
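A minimal sketch of that comparison, assuming two hypothetical stand-in classifiers (`model_v1` for the original model, `model_v2` for the retrained one) rather than real neural networks: both versions are run over the same test dataset, and their accuracies and points of disagreement are compared.

```python
# Back-to-back testing sketch: run the SAME test dataset through both
# model versions and compare the outputs.
# model_v1 / model_v2 are hypothetical stand-ins, not real networks.

def model_v1(image_id):
    # Original model: over-predicts "wolf", misclassifying some dogs.
    return "wolf" if image_id in {1, 2, 3, 4} else "dog"

def model_v2(image_id):
    # Retrained model: better separation between the two classes.
    return "wolf" if image_id in {1, 2} else "dog"

# Shared test dataset: (image_id, expected_label) pairs.
test_set = [(1, "wolf"), (2, "wolf"), (3, "dog"), (4, "dog"), (5, "dog")]

def accuracy(model):
    return sum(model(i) == label for i, label in test_set) / len(test_set)

acc_v1, acc_v2 = accuracy(model_v1), accuracy(model_v2)
disagreements = [i for i, _ in test_set if model_v1(i) != model_v2(i)]

print(f"v1 accuracy: {acc_v1:.2f}")   # 0.60 (dogs 3 and 4 called wolves)
print(f"v2 accuracy: {acc_v2:.2f}")   # 1.00
print(f"inputs where the versions disagree: {disagreements}")  # [3, 4]
```

Because both versions see identical inputs, any difference in output is attributable to the retraining, which is exactly what the test team needs to demonstrate.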

Why Other Options Are Incorrect:

A (Metamorphic Testing): Metamorphic testing is useful for generating new test cases based on existing ones but does not directly compare different model versions.

B (Adversarial Testing): Adversarial testing is used to check how robust a model is against maliciously perturbed inputs, not to verify training effectiveness.

C (Pairwise Testing): Pairwise testing is a combinatorial technique for reducing the number of test cases by focusing on key variable interactions, not for validating model improvements.
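For contrast, here is a minimal sketch of metamorphic testing (option A), assuming a toy stand-in classifier: it derives a follow-up test case from an existing one via a metamorphic relation (a horizontally mirrored image should receive the same label), rather than comparing two model versions.

```python
# Metamorphic testing sketch: check a single model against a metamorphic
# relation, not against another model version.
# `classify` is a hypothetical stand-in that labels an image
# (a list of pixel rows) by its total brightness.

def classify(image):
    brightness = sum(sum(row) for row in image)
    return "wolf" if brightness > 10 else "dog"

def flip_horizontal(image):
    # Metamorphic transformation: mirror each pixel row.
    return [row[::-1] for row in image]

source_image = [[3, 4], [2, 5]]           # total brightness 14 -> "wolf"
followup_image = flip_horizontal(source_image)

# Metamorphic relation: the label must be unchanged by the flip.
assert classify(source_image) == classify(followup_image)
print("relation holds, label:", classify(source_image))
```

Note that this verifies a property of one model in isolation; it says nothing about whether a retrained version outperforms its predecessor, which is why option A does not fit the scenario.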

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 9.3: Back-to-Back Testing)

'Back-to-back testing is used when an updated ML model needs to be compared against a previous version to confirm that it performs better or as expected.'

'The results of the newly trained model are compared with those of the prior version to ensure that changes did not negatively impact performance.'

Conclusion:

To verify that the model's performance improved after retraining, back-to-back testing is the most appropriate method, as it compares the two model versions directly. Hence, the correct answer is D.


Contribute your Thoughts:

Jodi
22 days ago
Option B, adversarial testing, might be overkill here. Unless they suspect the training data is somehow corrupted, I think back-to-back testing is the way to go.
upvoted 0 times
Kris
3 days ago
Yeah, I think it's important to directly compare the model before and after the additional training to see if there's any improvement.
upvoted 0 times
Dana
8 days ago
I agree, back-to-back testing seems like the most practical approach in this case.
upvoted 0 times
Glory
1 month ago
That's a good point, Shenika. Maybe we should consider both back-to-back testing and adversarial testing for a more thorough verification.
upvoted 0 times
Shenika
1 month ago
But wouldn't adversarial testing also be important to make sure no incorrect images were used in the training?
upvoted 0 times
Delbert
1 month ago
Haha, I can just imagine the team trying to train the model to not confuse wolves and dogs. It's like teaching a toddler the difference between a lion and a house cat.
upvoted 0 times
Keneth
10 days ago
Exactly, it's the best way to verify the improvement.
upvoted 0 times
Alpha
17 days ago
Yeah, that way we can compare the model before and after the additional training.
upvoted 0 times
Omer
25 days ago
We should use back-to-back testing to see if the model has improved.
upvoted 0 times
Malinda
1 month ago
I agree with Glory, comparing the model before and after training is the best way to see if it has improved.
upvoted 0 times
Glory
1 month ago
I think we should use back-to-back testing to verify the model's improvement.
upvoted 0 times
Victor
1 month ago
I agree with Eugene. Back-to-back testing is the most straightforward approach to verify the improvement in the model's ability to differentiate between wolves and dogs.
upvoted 0 times
Eugene
2 months ago
Option D seems like the way to go. Back-to-back testing will let you clearly see the impact of the additional training on the model's performance.
upvoted 0 times
Trina
6 days ago
Let's go with that method then.
upvoted 0 times
Winfred
9 days ago
Agreed, back-to-back testing will show us if the additional training made a difference.
upvoted 0 times
Malcom
1 month ago
I think we should use option D for testing.
upvoted 0 times
