ISTQB Certification Practice - Chapter 4

Disclaimer: the following article is my own writing, based upon the ISTQB Foundation Level Syllabus, which is owned and copyrighted by the ISTQB.

The following are my notes for Chapter 4 - Test Design Techniques

4.1 - Test Development Process
Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, execution preconditions) for a test item.
test design: The process of transforming general test objectives into tangible test conditions and test cases.
test execution schedule: A scheme for the execution of test procedures, including their context and order.
test procedure specification / test script: A document specifying a sequence of actions for the execution of a test.
traceability: The ability to identify related items in documentation and software.
---
The test development process can vary from informal to very formal, depending on context (time constraints, regulatory needs, etc.).
The first step is Test Analysis, where documentation is reviewed to identify what to test (test conditions). Traceability should be established between specifications and test conditions, to help perform impact analysis when specifications change, and to determine test coverage.
During test design, the test cases and test data are created. Every test case should have a test condition, input values, execution preconditions, expected results and execution postconditions.
Expected results should be tightly specified to prevent erroneous but plausible results from being misinterpreted as correct. Ideally, expected results should be determined before execution to minimize tester bias.
During test implementation, the test cases are developed, implemented, prioritized and organized into a test procedure specification. These are combined into a test execution schedule.
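
To make these elements concrete, here is a minimal sketch of a single test case written for pytest. The Account class and its behaviour are hypothetical, invented purely for illustration:

```python
# A minimal test case annotated with the elements above.
# NOTE: Account is a hypothetical class, not from any real library.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def test_withdraw_within_balance():
    # Test condition: withdrawals up to the balance succeed.
    # Execution precondition: an account with a balance of 100 exists.
    account = Account(balance=100)
    # Input values: withdraw 40.
    result = account.withdraw(40)
    # Expected result (specified before execution): the new balance, 60.
    assert result == 60
    # Execution postcondition: the stored balance reflects the withdrawal.
    assert account.balance == 60
```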


4.2 Categories of Test Design Techniques
test design technique: Procedure used to derive test conditions, test cases and test data.
White-box test design technique: " based on an analysis of the internal structure of a component or system
Black-box test design technique: " based on an analysis of the specification of a system without reference to its internal structure
experience-based test design technique: " based on the tester's experience, knowledge and intuition
---
There are many test design techniques, which generally fall into three categories: black-box, white-box and experience-based, though some techniques share characteristics of more than one category.
Black-box (specification-based) techniques typically use models of the system under test, with test cases derived systematically from the model.
White-box (structure-based) techniques use information about the code to derive test cases, and use that knowledge to determine how much code coverage the test cases achieve. Further test cases can be systematically added to increase coverage.
Experience-based techniques use the knowledge of developers, testers and stakeholders to derive test cases. Usually this is knowledge about likely defects and their distribution. Knowledge about the environment and about the system's usage can also be factored in.
    
4.3 Black-box (Specification-based) Techniques
equivalence partitioning: Dividing input or output domains into partitions within which, according to the specification, the behaviour is expected to be the same.
Boundary value analysis: Designing tests around inputs which are on the edge of an equivalence partition, or the smallest incremental difference on either side of the edge.
Decision table testing: Designing tests to execute combinations of inputs according to a table of inputs-to-outputs mappings.
state transition testing: Designing tests to execute valid and invalid state transitions.
use case testing: Designing tests to execute use-case scenarios (a sequence of transactions in a dialogue between an actor and a system, with a tangible result).
---
Equivalence partitioning is useful for achieving input/output coverage goals, since one representative test per partition is assumed to cover the whole partition.
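
As a minimal sketch, assume a hypothetical rule that ages 18 to 65 are eligible; the input domain then splits into three partitions, and one representative value per partition is enough:

```python
# Equivalence partitioning sketch for a hypothetical 18-65 eligibility rule.
def is_eligible(age):
    return 18 <= age <= 65

# One representative value per partition covers that whole partition.
def test_below_valid_range():
    assert is_eligible(10) is False   # partition: age < 18

def test_within_valid_range():
    assert is_eligible(40) is True    # partition: 18 <= age <= 65

def test_above_valid_range():
    assert is_eligible(80) is False   # partition: age > 65
```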
Boundary Value Analysis is typically easy to apply and shows a high rate of success in finding defects. It can be considered an extension of Equivalence Partitioning, and will be more effective with detailed specifications (which make the partitions more accurate).
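
Extending the same hypothetical 18-65 rule, a boundary value analysis sketch tests each edge of the valid partition plus the smallest increment on either side (pytest's parametrize keeps the value/expectation table explicit):

```python
import pytest

# Same hypothetical eligibility rule as the partitioning sketch above.
def is_eligible(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
])
def test_boundaries(age, expected):
    assert is_eligible(age) is expected
```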
Decision tables are a good way to capture system requirements with logical conditions, and to document a system's internal design. Once the decision table is created from specifications, it is usually tested with a test for each combination of inputs. This forces unconventional combinations of inputs to be tested, leading to more thorough but also more labour-intensive testing.
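
A sketch of decision table testing, assuming a hypothetical discount rule with two conditions (membership and bulk order); each row of the table becomes one test case, so every combination is exercised:

```python
import pytest

# Hypothetical rule: member + bulk -> 20%, member -> 10%, bulk -> 5%, else 0%.
def discount(is_member, is_bulk_order):
    if is_member and is_bulk_order:
        return 20
    if is_member:
        return 10
    if is_bulk_order:
        return 5
    return 0

# One test per decision table row: all four condition combinations.
@pytest.mark.parametrize("is_member,is_bulk_order,expected", [
    (True,  True,  20),
    (True,  False, 10),
    (False, True,   5),
    (False, False,  0),
])
def test_discount_decision_table(is_member, is_bulk_order, expected):
    assert discount(is_member, is_bulk_order) == expected
```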

When a system takes different actions depending on its history, it is often modelled as having different states and valid/invalid transitions between those states. Once this has been modelled, tests can be created to cover typical sequences of states, valid or invalid sequences, etc.
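
As a minimal sketch, assume a hypothetical order lifecycle (new -> paid -> shipped); one test covers a valid sequence of transitions, another attempts an invalid one:

```python
import pytest

# Valid transitions of a hypothetical order state machine.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
}

class Order:
    def __init__(self):
        self.state = "new"

    def apply(self, event):
        if (self.state, event) not in TRANSITIONS:
            raise ValueError(f"invalid transition: {event} from {self.state}")
        self.state = TRANSITIONS[(self.state, event)]

def test_valid_sequence():
    order = Order()
    order.apply("pay")
    order.apply("ship")
    assert order.state == "shipped"

def test_invalid_transition():
    order = Order()
    with pytest.raises(ValueError):
        order.apply("ship")  # cannot ship an unpaid order
```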
Use-case testing is very useful for customer/user acceptance testing as use-cases explicitly represent their view of the system under test. However, coverage is difficult to establish as the list of a system's use-cases is open-ended.
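
As a sketch, a use-case test for a hypothetical checkout dialogue walks the main success scenario from the actor's first step through to the tangible result:

```python
# Hypothetical system under test for the "checkout" use case.
class Cart:
    def __init__(self):
        self.items = []
        self.paid = False

    def add(self, item):
        self.items.append(item)

    def checkout(self):
        if not self.items:
            raise ValueError("cart is empty")
        self.paid = True
        return "order confirmed"

def test_checkout_main_scenario():
    # Actor (customer) steps, in dialogue order: add an item, check out.
    cart = Cart()
    cart.add("book")
    # Tangible result: a confirmed, paid order.
    assert cart.checkout() == "order confirmed"
    assert cart.paid is True
```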

4.4 Structure-based or White-box Techniques
Code coverage: Which parts of the software have been executed by a test suite, and which have not.
decision coverage: The percentage of decision outcomes exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
statement: The smallest indivisible unit of execution in a programming language.
statement coverage: The percentage of executable statements which have been exercised by a test.
structure-based testing: Testing based on an analysis of the internal structure of a system.
---
White-box techniques can be applied at all test levels, including component, integration and system testing.
Statement testing typically means writing tests to exercise specific statements of code, in order to increase statement coverage after large swathes of code have already been covered by other tests.
Decision testing is similar to Decision Table or State Transition testing, but with tests built around known decision points within the code. Decision coverage involves testing every possible outcome of every decision point within the system, and achieving this coverage guarantees statement coverage (but not vice versa).
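
The asymmetry is easiest to see in a small sketch (the function is hypothetical): a single test can execute every statement while exercising only one outcome of the decision:

```python
def apply_bonus(score):
    bonus = 0
    if score > 80:  # decision point
        bonus = 10
    return bonus

def test_high_score():
    # Alone, this executes every statement (100% statement coverage)
    # but only the True outcome of the decision (50% decision coverage).
    assert apply_bonus(90) == 10

def test_low_score():
    # The False outcome is also needed to reach 100% decision coverage.
    assert apply_bonus(50) == 0
```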


4.5 Experience-based Techniques
Exploratory testing: An informal test design technique where the tester actively controls the design of tests as those tests are performed, and uses the information gained while testing to design new and better tests
(fault) attack: Directed and focused attempt to evaluate the quality (especially reliability) of a test object by attempting to force specific failures to occur.
---
These techniques rely on the tester's skill, intuition and experience with similar systems to devise tests. This is useful for creating tests which would not normally be captured by formal test design techniques, but will yield varying results depending upon the tester.
Error Guessing is a commonly-used version of experience-based testing, where the tester tries to anticipate defects. They may build a list of those defects and then attack the system looking for that list of targets.
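
A sketch of error guessing as a parametrized fault attack, using pytest against a hypothetical username validator; the list collects inputs that experience suggests commonly expose defects:

```python
import pytest

# Hypothetical validator under attack.
def validate_username(name):
    if not isinstance(name, str) or not name.strip():
        raise ValueError("invalid username")
    if len(name) > 32:
        raise ValueError("username too long")
    return name.strip()

# The attack list: defects the tester guesses are likely from experience.
@pytest.mark.parametrize("suspect_input", [
    "",            # empty string
    "   ",         # whitespace only
    None,          # missing value
    "x" * 10_000,  # extreme length
])
def test_error_guessing_attacks(suspect_input):
    with pytest.raises(ValueError):
        validate_username(suspect_input)
```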
Exploratory testing is useful when time or specifications are too limited to use formal test design techniques, or as a complement to other formal test design techniques. 


4.6 Choosing Test Techniques
The choice of a test technique depends on many factors, including the type of system, regulatory demands, customer requirements, level and type of risk, test objectives, available documentation, knowledge of testers, time and budget, development life cycle and previous experiences. 
Some techniques are more applicable to certain situations and test levels, others are applicable to all test levels. When creating test cases, testers usually use a combination of test techniques to ensure adequate coverage.