ISTQB Certification Practice - Chapter 6

Disclaimer: the following article is my own writing, based upon the ISTQB Foundation Level Syllabus, which is owned and copyrighted by the ISTQB.

The following are my notes for Chapter 6 - Tools for Supporting Testing

6.1 Types of Test Tools
Test oracle: A source used to determine expected results, to compare with the actual result of the software under test. The oracle should not be the SUT itself.
Debugging tool: A tool used to reproduce failures, investigate the state of programs, and find the corresponding defect. Typically allows step-by-step execution of program statements.
Dynamic analysis tool: A tool that provides run-time information on the state of the software code. Typically used to identify unassigned pointers, check pointer arithmetic and monitor memory allocation / usage.
Monitoring tool: A tool which runs concurrently with the system under test and supervises/records/analyses its behaviour.
Performance testing tool: A tool to support performance testing, which can typically simulate increasing load and measure test transactions.
Probe effect: The effect that a measurement instrument has on the system being measured. For example, performance measurement tools cause their own performance hit.
Test comparator: A tool to automatically identify the differences between actual and expected test results.
Test design tool: A tool which supports test design by generating test inputs from a specification.
Test harness: A test environment composed of the stubs and drivers needed to execute a test.
Unit test framework tool: A tool providing the test harness needed to test a component in isolation, usually with debugging capabilities.
---
There are three general categories of testing tools: tools which help execute tests, tools which help monitor tests, and tools which help manage the testing process. Any particular tool may fit any or all of those categories. Generally speaking, these tools improve testing efficiency by automating some tasks and by making testing more reliable.
Tools for managing tests come in a few common flavours:
Test Management tools provide interfaces for executing tests, tracking defects and requirements, and support analysis and reporting of the test object. They also support tracing test objects to requirement specifications.
Requirements Management Tools store requirement statements and attributes (including priority) and support tracing the requirements to individual tests. These tools may also help identify inconsistent or missing requirements.
Incident Management (Defect Tracking) Tools manage incident reports, i.e. defects, failures, change requests, etc. They help manage the life-cycle of incidents, and may support statistical analysis.
Configuration Management Tools are useful for testing systems which operate in multiple hardware/software environments (e.g. compilers, browsers).

Tools for static testing include:
Review Tools help with review processes, and may store/communicate checklists and guidelines, record comments, and track defects and effort. These are particularly useful for a large or geographically dispersed team.
Static Analysis Tools directly analyze code to help find defects and enforce coding standards. They may also provide code metrics to help with planning around that code (i.e. risk analysis).
Modeling tools are used to validate software models by enumerating inconsistencies and finding defects. These tools may also generate test cases based on the model.

Tools for Test Specification include:
Test design tools, which generate test inputs, executable tests, or test oracles based on requirements, data models, code, or GUIs.
Test data preparation tools, which manipulate files/databases/data transmissions to set up test data to be used during execution of tests.
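To make the test data preparation idea concrete, here is a minimal sketch of my own (not from the syllabus): it resets a SQLite database to a known state before tests run. The `users` table and its rows are hypothetical.

```python
import sqlite3

def prepare_test_data(db_path=":memory:"):
    """Reset the database to a known state before test execution."""
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS users")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
    # A small, deterministic data set keeps test results reproducible.
    rows = [(1, "alice", 1), (2, "bob", 0), (3, "carol", 1)]
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

conn = prepare_test_data()
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 3
```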

Tools for Executing and Logging include:
Test Execution tools, which enable tests to be executed automatically or semi-automatically, using stored inputs and expected outcomes.
Test Harness / Framework tools, which facilitate the testing of components by simulating the environment in which those components operate, through the provision of mock objects (stubs or drivers); see the harness sketch after this list.
Test comparators determine the differences between expected and actual test results. This may be included as part of a test execution tool, and may be used in conjunction with a test oracle.
Coverage measurement tools determine, through intrusive or non-intrusive means, the percentage of specific types of code structures which have been exercised by a set of tests (e.g. statements, branches/decisions, modules, functions); see the tracing sketch after this list.
Security testing tools evaluate the ability of software to protect data confidentiality, integrity, authentication, authorization, availability and non-repudiation. These tools tend to focus on a particular technology, platform and purpose.
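To illustrate the harness idea, here is a minimal sketch of my own (not from the syllabus). The component under test depends on a payment gateway, which is replaced with a stub so the component can be exercised in isolation; the driver loop doubles as a trivial test comparator. All names are hypothetical.

```python
class PaymentGatewayStub:
    """Stub standing in for the real payment gateway."""
    def charge(self, amount):
        return "approved" if amount <= 100 else "declined"

def checkout(cart_total, gateway):
    """Component under test: delegates the charge to its environment."""
    return gateway.charge(cart_total)

# Driver: feeds stored inputs to the component and compares actual against
# expected outcomes (a trivial test comparator).
tests = [(50, "approved"), (150, "declined")]
for amount, expected in tests:
    actual = checkout(amount, PaymentGatewayStub())
    print(f"checkout({amount}): {'PASS' if actual == expected else 'FAIL'}")
```

And to show what "intrusive" coverage measurement means, here is a sketch of my own that uses Python's sys.settrace hook to record which lines of a function a test actually executes. Real coverage tools are far more sophisticated.

```python
import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record the line number of every "line" event in traced frames.
    if event == "line":
        executed_lines.add(frame.f_lineno)
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(5)            # exercises only the non-negative branch
sys.settrace(None)
print(sorted(executed_lines))   # the "return -x" line never appears
```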

Tools for Performance and Monitoring include:
Dynamic Analysis Tools, which help find defects that are only evident while the software is executing, such as time dependencies or memory leaks.
Performance/Load/Stress testing tools report on how a system behaves under a variety of simulated usage conditions.
Monitoring tools continuously analyze, verify, and report on the usage of specific system resources, and give warnings of possible service problems.
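As a rough illustration of load generation (my own sketch, not from the syllabus): fire simulated transactions concurrently and summarize response times. The target here is a dummy in-process function standing in for the real system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """One simulated transaction; the sleep stands in for a real request."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# Simulate 200 transactions across 20 concurrent virtual users.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = sorted(pool.map(lambda _: transaction(), range(200)))

print(f"median: {timings[len(timings) // 2] * 1000:.1f} ms")
print(f"95th percentile: {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")
```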

Tools for Specific Testing Needs include:
Data Quality Assessment tools, which review and verify that data-conversion and data-migration rules have been followed correctly (see the sketch after this list).
Usability testing tools exist but are not detailed in the syllabus.
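Here is a minimal sketch of my own for the data-migration check idea: verify that every source record survived migration and that a simple conversion rule (lower-casing names) was applied. The records and the rule are hypothetical.

```python
# Hypothetical source and migrated record sets; the conversion rule under
# verification is "names are normalized to lower case".
source = {1: "Alice", 2: "Bob", 3: "Carol"}
migrated = {1: "alice", 2: "bob", 3: "carol"}

missing = set(source) - set(migrated)
violations = [key for key in source
              if key in migrated and migrated[key] != source[key].lower()]

print(f"missing records: {missing or 'none'}")
print(f"rule violations: {violations or 'none'}")
```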

6.2 Effective Use of Tools: Potential Benefits and Risks
Data-driven testing: A scripting technique that stores test input and expected results in a table, so that a single control script can execute all of the tests in the table. This is often used to support test execution tools.
Keyword-driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts which are called by the control script.
Scripting language: A programming language in which executable test scripts are written.
---
Purchasing a test tool does not guarantee useful results. Each type of tool may require additional effort to achieve real benefits. Tools have potential benefits, but also risks.
Potential benefits include:
Reduction of repetitive work, improved consistency and repeatability, unbiased assessment, and ease of access to information.
Risks include:
Unrealistic expectations of functionality and ease of use. Underestimating the time/cost/effort for introducing, gaining benefits from, or maintaining the tools. Over-reliance on the tool, neglecting version control of test assets within the tool, neglecting relationships between tools, poor vendor support, or an unforeseen inability to support certain platforms. In the case of open-source/free tools, there is a risk the entire project may be suspended without recourse.
Test execution tools often require significant effort to generate the automated test scripts which the tool operates on. Simply recording the manual actions of a tester is easy, but does not scale to large numbers of tests and cannot adapt to a changing test environment. Data-driven techniques are preferable, as the difficult scripting work need only be done once, and further data can be generated by less-skilled testers. Data may also be generated algorithmically, further reducing human effort in the long run.
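As a minimal data-driven sketch (my own illustration, with a hypothetical `discount` function under test): the control script is written once, and new tests are added simply by appending rows to the table.

```python
def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# The "table": test input and expected result. In practice this often lives
# in a spreadsheet or CSV file maintained by less-technical testers.
table = [
    (50, 50),
    (100, 90),
    (200, 180),
]

# The control script, written once, runs every row in the table.
for total, expected in table:
    actual = discount(total)
    print(f"discount({total}) = {actual} "
          f"({'PASS' if actual == expected else 'FAIL'})")
```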
Keyword-driven testing is a variant of data-driven testing, where the test data includes certain keywords which correspond to specific actions (think macros). In a sense, the keywords form their own simpler scripting language, which can be used to write further tests with less effort/scripting knowledge.
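A keyword-driven sketch of my own, with hypothetical keywords and a fake in-memory "app": each keyword maps to a supporting script, and the test itself is just rows of data.

```python
# A fake in-memory "app" and three hypothetical keywords.
app = {"logged_in": False, "title": ""}

def login(user):
    app["logged_in"] = True
    app["title"] = f"Welcome, {user}"

def verify_title(expected):
    assert app["title"] == expected, f"expected {expected!r}, got {app['title']!r}"

def logout():
    app["logged_in"] = False

# Each keyword maps to a supporting script.
keywords = {"login": login, "verify_title": verify_title, "logout": logout}

# The test itself is pure data: rows of a keyword plus its arguments.
test = [
    ("login", "alice"),
    ("verify_title", "Welcome, alice"),
    ("logout",),
]

# The control script interprets each row.
for keyword, *args in test:
    keywords[keyword](*args)
print("test passed")
```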

With static analysis tools, you must take care when first applying them to an existing (large) codebase. The first run will often generate a large quantity of messages, which may be overwhelming to the user. It's usually best to apply these tools gradually, filtering out non-critical messages at first and removing those filters over time.
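One way to picture the gradual-filtering approach (my own sketch; the message tuples are made-up analyzer output): start with a strict severity threshold and lower it over time as the backlog shrinks.

```python
# Made-up analyzer output: (file, line, severity, message).
messages = [
    ("src/app.py", 10, "error", "possible null dereference"),
    ("src/app.py", 42, "warning", "unused variable"),
    ("src/util.py", 7, "style", "line too long"),
]

SEVERITY_RANK = {"style": 0, "warning": 1, "error": 2}

# Start strict; lower the threshold to "warning", then "style", as the
# backlog of reported problems is worked off.
threshold = "error"

for path, line, severity, text in messages:
    if SEVERITY_RANK[severity] >= SEVERITY_RANK[threshold]:
        print(f"{path}:{line}: {severity}: {text}")
```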
Test management tools need to interface with other tools or spreadsheets in order to produce results; otherwise they have nothing to "manage."

6.3 Introducing a Tool into an Organization
The main considerations in selecting a tool are:
What parts of the organization's test process could be improved by tools, what training the test team will need, and the overall cost-benefit ratio. Tools should be evaluated against clear and objective criteria, along with an evaluation of the vendor (for future support). Tool evaluation should include a proof-of-concept trial.
When introducing the selected tool into an organization, there should be a pilot project with the following objectives: learn about the tool, evaluate how it fits existing processes and how those processes must change, decide on standard ways of using the tool, and assess whether the benefits will be achieved at a reasonable cost.
Successful deployment of the tool usually includes:
-incremental rollout
-adapting and improving processes to fit the tool
-providing training and coaching/mentoring
-defining usage guidelines
-monitoring tool use and benefits
-providing support for the test team
-gathering lessons learned from all teams