ISTQB Certification Practice - Chapter 2

Disclaimer: the following article is my own writing, based upon the ISTQB Foundation Level Syllabus, which is owned and copyrighted by the ISTQB.

The following are my notes for Chapter 2 - Testing Throughout the Software Life Cycle.

2.1 Software Development Models

Commercial Off-the-Shelf: A software product developed for the general market, delivered to many customers in identical format
iterative-incremental development model: The process of gathering requirements, designing, building and testing a system in a series of short development cycles. Agile development is an example.
validation: confirmation by examination and objective evidence that the requirements for a specific intended use or application have been fulfilled.
verification: confirmation by examination and objective evidence that specified requirements have been fulfilled. A validation might include several verifications.
V-model: A model of software development where the stages are organized in the shape of a V, according to their level of abstraction versus time.
---
Testing does not happen in a vacuum; it is part of a larger software development process. Testing will be done differently depending on which development process is being used.
Typical V-model development often includes four levels of testing: Component, Integration, System and Acceptance. Verification and validation (and early test design) can be carried out while the work products are being developed, before those levels of testing begin.
Iterative-Incremental development requires testing during each increment. Regression testing is particularly important, and verification and validation should be performed in each increment as well.
There are some characteristics of good testing which are shared by every lifecycle model. Every development activity should have a corresponding testing activity. Each test level should have objectives specific to that level. The analysis and design of tests for a given level should begin during the corresponding development activity. And testers should be involved in reviewing documents as soon as drafts are available in the development cycle.

 

2.2 Test Levels

Alpha Testing: Simulated or actual operational testing by potential customers or an independent test team, typically at the developer's site but outside the development team.
Beta Testing/field testing: Operational testing by potential customers at an external site not involved with the developer, to determine if the system satisfies customer needs.
Component Testing: The testing of individual software components
Driver: A software component or test tool that replaces a controlling component of the test object.
Functional Requirement: A requirement which specifies a function that a component or system must perform.
Integration: The process of combining components or systems into larger assemblies
Integration Testing: Testing to expose defects in the interfaces and interactions between integrated components or systems
Non-functional Requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
Robustness Testing: Testing to determine the degree to which a system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Stub: A special-purpose implementation of a software component, used to help develop or test a component which is dependent upon it.
System Testing: The process of testing an integrated system to verify that it meets specified requirements
Test Environment/Test Bed: An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.
Test Harness: A test environment containing the stubs and drivers needed to execute a test
Test Level: A group of test activities that are organized and managed together. For example, component test, integration test, system test and acceptance test.
Test-driven Development: A way of developing software where the test cases are developed before development of the software which is to be tested
User Acceptance Testing: Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
---
For each test level, we can identify generic objectives, the test basis, the test object, typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
Component testing has as its basis: component requirements, detailed design documents and the code itself. Typical test objects include a data-conversion tool or a database module. The purpose of component testing is to find defects and verify the functioning of components which can be isolated from the rest of the system. This testing may include both functional and non-functional characteristics, robustness testing, and structural testing (e.g. decision coverage).
Component testing is typically done with access to the code being tested, often by the code's own developer, and defects are usually fixed as soon as they are found. It can follow test-driven development, where the tests for a given component are written before the component itself is created, as in the sketch below.
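A minimal sketch of that idea in Python (the celsius_to_fahrenheit converter is invented for illustration): in TDD the tests would be written first, fail, and then drive the implementation. The last test doubles as a small robustness check.

```python
import unittest

def celsius_to_fahrenheit(celsius):
    """The component under test, written after (and driven by) the tests."""
    return celsius * 9 / 5 + 32

class TestTemperatureConversion(unittest.TestCase):
    # In TDD these tests exist first, fail, and then drive the
    # implementation of celsius_to_fahrenheit above.
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

    def test_rejects_non_numeric_input(self):
        # Robustness check: invalid input must fail loudly, not silently.
        with self.assertRaises(TypeError):
            celsius_to_fahrenheit("cold")

if __name__ == "__main__":
    unittest.main()
```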

Integration Testing has as its basis: software and system design documents, architecture, workflows and use-cases. Example test objects are subsystems, database implementations, configuration systems, and interfaces.
Integration testing tests the interaction between different parts of a system. This can be done at the component level (between two components) or at the system level (between two systems). Note that as the scope of the integration grows, it becomes harder to isolate and investigate any bugs that appear.
Depending on the architecture of the system, subsystems should be combined as incrementally as possible; as above, integrating many pieces at once ("big bang" integration) makes debugging more difficult. Ideally the integration tests will be planned before the pieces are built, so that the pieces can be built in the order in which they are needed for testing. If A needs B but B doesn't need A, build B first and test it while A is under construction.
Note that integration testing should concern itself ONLY with the interaction between two pieces, not with the internal functions of the individual pieces. That should've been done during component testing.
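Stubs and drivers (defined above) support this incremental approach: while one side of an interface is still under construction, a stand-in takes its place. A minimal sketch, with an invented ReportGenerator component and a stub replacing its database dependency, so that only the interaction across the interface is exercised:

```python
import unittest

class DatabaseStub:
    """Stub: a special-purpose stand-in for the real database component,
    returning canned data so only the interface is exercised."""
    def fetch_rows(self, table):
        return [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

class ReportGenerator:
    """Component under test; it talks to the database only via fetch_rows()."""
    def __init__(self, database):
        self.database = database

    def user_report(self):
        return [row["name"] for row in self.database.fetch_rows("users")]

class TestReportDatabaseInterface(unittest.TestCase):
    def test_report_reads_names_through_the_interface(self):
        # Checks the interaction (the fetch_rows contract), not the
        # internals of either component.
        report = ReportGenerator(DatabaseStub())
        self.assertEqual(report.user_report(), ["alice", "bob"])

if __name__ == "__main__":
    unittest.main()
```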

System Testing has as its basis: system and software requirement specifications, use cases, functional specifications and risk analysis reports. Test objects might include manuals and configuration data.
System testing is concerned with the behaviour of a whole system/product. Tests should investigate both functional and non-functional requirements, and at this stage testers need to deal with any incomplete or undocumented requirements. Functional testing should be done black-box, with white-box testing used to assess the thoroughness of testing with respect to specific elements of the system.
The test environment should match the target production environment, to minimize the risk of environment-specific failures going undetected. System testing is often carried out by an independent team, to reduce bias and because black-box testing doesn't require code knowledge.

Acceptance Testing has as its basis: user requirements, system requirements, use cases, business processes, and risk analysis reports. Typical test objects are business processes, maintenance processes, user procedures, forms, reports and configuration data.
This form of testing tries to establish confidence in the system. The goal isn't to find defects per se, but to assess the system's readiness for deployment. Note that this isn't necessarily the last step of testing, as deployment may require further integrations which themselves need to be tested. Individual components may also be acceptance-tested during component testing.
Acceptance testing also comes in different flavours for different stakeholders (a sketch of an automated user-level scenario follows this list):
User acceptance testing verifies the system is ready for use by general end-users (normal functions)
Operational acceptance testing covers the functions needed by system administrators (security, disaster recovery, etc.)
Contract and regulation acceptance testing verify that the product meets its contractual acceptance criteria or applicable regulations
Alpha/beta/field testing gathers feedback from non-developers, and is particularly useful for COTS (commercial off-the-shelf) products.
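As a sketch of what an automated user acceptance test might look like (the ShopSession API is invented; a real UAT suite would drive the deployed system through its UI or API), the test is phrased as a business scenario rather than a defect hunt:

```python
import unittest

class ShopSession:
    """Invented stand-in for the real system; a real UAT suite would
    drive the deployed application through its UI or API instead."""
    def __init__(self):
        self.cart = []

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self):
        return {"status": "confirmed", "items": list(self.cart)}

class TestPurchaseScenario(unittest.TestCase):
    def test_customer_can_buy_a_product(self):
        # Acceptance criterion phrased as a business process: a customer
        # adds an item to the cart and completes checkout.
        session = ShopSession()
        session.add_to_cart("ISTQB study guide")
        order = session.checkout()
        self.assertEqual(order["status"], "confirmed")
        self.assertIn("ISTQB study guide", order["items"])

if __name__ == "__main__":
    unittest.main()
```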

2.3 Test Types

Black-box testing: Functional or non-functional testing without reference to the internal structure of the system
Code coverage: A measure of which parts of the software have been executed by the test suite.
functional testing: Testing based on the specification of functionality of a system
interoperability testing: Testing the capability of a software product to interact with specified systems
load testing: Performance testing which specifically tests the behaviour of a system under increasing load
maintainability testing: Testing the ease with which a software product can be modified to correct defects, meet new requirements, adapt to a new environment, etc.
performance testing: Testing the degree to which a system accomplishes its functions within given constraints of processing time and throughput rate
portability testing: Testing the ease with which a software product can be transferred from one environment to another
reliability testing: Testing the ability of the software product to perform its required functions under stated conditions for a stated duration
security testing: Testing a software product's ability to prevent unauthorized access to programs and data (both deliberate and accidental)
stress testing: A type of performance testing specifically testing the ability of a system to operate beyond its expected workload or with fewer than expected resources
structural / white box testing: Testing based on analysis of the internal structure of a system
usability testing: Testing to determine the extent to which a software product is understood, easy to learn and operate, and attractive to its end users under specified conditions
---
We distinguish different types of tests based on what aspects of a system they test. We might test functions, non-functional qualities, structure, or look for specific changes/regressions from previous tests.
Functional testing focuses on "what" a system does. These tests can be written from specifications (although some requirements are unstated) and can be done black-box. These tests can be performed at all levels (i.e. component, integration, system, acceptance).
Note that security testing is a specific type of functional testing, covering the ability of the software to deal with external threats. Interoperability testing is another kind of functional test, focusing on the ability of the system to interact with external systems.
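As an illustration of specification-based, black-box functional testing (the discount rule and the apply_discount function are invented): the tests below are derived purely from the stated rule "orders of 100 or more receive 10% off", without reference to the code's internals.

```python
import unittest

def apply_discount(order_total):
    """Invented system-under-test implementing the stated rule."""
    return order_total * 0.9 if order_total >= 100 else order_total

class TestDiscountSpecification(unittest.TestCase):
    # Black-box: derived from the specification, not from the code.
    def test_small_order_pays_full_price(self):
        self.assertEqual(apply_discount(50), 50)

    def test_large_order_gets_ten_percent_off(self):
        self.assertEqual(apply_discount(200), 180.0)

    def test_boundary_order_of_exactly_100_is_discounted(self):
        self.assertEqual(apply_discount(100), 90.0)

if __name__ == "__main__":
    unittest.main()
```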
Non-functional testing looks at "how" the system works, quantifying the qualities of a system on a varying scale. This testing can usually be done black-box, at any level of testing.
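For example, a crude performance check might time an operation against a budget (handle_request and the 50 ms budget here are purely illustrative):

```python
import time
import unittest

def handle_request():
    """Invented operation standing in for the system under test."""
    time.sleep(0.01)  # simulate some work

class TestResponseTime(unittest.TestCase):
    def test_request_completes_within_budget(self):
        # Non-functional: measures "how well", not "what". The 50 ms
        # budget is purely illustrative, not a real requirement.
        start = time.perf_counter()
        handle_request()
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.050)

if __name__ == "__main__":
    unittest.main()
```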
Structural (white-box) testing assesses the thoroughness of other tests, usually by modelling the software as a structure (e.g. statements or decisions in the code, or items in a menu) and measuring how much of that structure the tests have exercised.
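A small illustration of decision coverage (the classify function is invented): the decision age >= 18 must be exercised with both a true and a false outcome to reach 100% decision coverage.

```python
import unittest

def classify(age):
    """Invented function containing a single decision (age >= 18)."""
    if age >= 18:
        return "adult"
    return "minor"

class TestClassifyDecisionCoverage(unittest.TestCase):
    # Together these two tests achieve 100% decision coverage: the
    # age >= 18 decision takes both its true and its false outcome.
    def test_true_outcome(self):
        self.assertEqual(classify(30), "adult")

    def test_false_outcome(self):
        self.assertEqual(classify(12), "minor")

if __name__ == "__main__":
    unittest.main()
```

A coverage tool such as coverage.py (run with its --branch option) can report which decision outcomes a suite has missed.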
Change testing (i.e. re-testing and regression testing) compares the current state of a system to its previous state. Re-testing is used after debugging to confirm bugs have actually been removed; regression testing is applied to the rest of the system after any change, to catch defects which may have been introduced by that change.
Change tests must be repeatable, and regression tests are good candidates for automation because they must be run many times.
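A sketch of what such an automated regression suite might contain (slugify and the defect are invented): each test pins down known-good behaviour so the whole suite can be rerun unchanged after every modification.

```python
import unittest

def slugify(title):
    """Invented function whose known-good behaviour the suite pins down."""
    return title.strip().lower().replace(" ", "-")

class TestSlugifyRegression(unittest.TestCase):
    def test_fixed_defect_stays_fixed(self):
        # Re-test for a (hypothetical) defect involving leading whitespace;
        # rerun after every change to confirm the bug stays gone.
        self.assertEqual(slugify("  Hello World"), "hello-world")

    def test_existing_behaviour_unchanged(self):
        # Plain regression check: behaviour that should not change, doesn't.
        self.assertEqual(slugify("Test Types"), "test-types")

if __name__ == "__main__":
    unittest.main()
```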

2.4 Maintenance Testing

Impact Analysis: The assessment of the changes to the layers of development documentation, test documentation and components required in order to implement a given change to specified requirements
Maintenance Testing: Testing the changes to an operational system or the impact of environmental changes upon the system
---
Once deployed, a software system may be in service for a very long time. During that time, it's likely the system's configuration or environment will change in some way. These changes need to be well-planned to avoid disruption of the (often important) service the system provides.
We distinguish between hotfixes and planned releases. Generally speaking, planned releases are big and planned well in advance, while hotfixes are performed on a short schedule and kept as small as possible.
Maintenance testing does not only test changes which have been made; it also includes regression testing of components which are supposed to be unchanged. The amount of testing needed depends on the impact of the changes, as determined by impact analysis: bigger changes demand more testing.
Note that maintenance testing can be difficult due to out-of-date documents and the gradual loss of domain knowledge among testers (people forget or leave the project).
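One way to picture impact analysis is as a walk over a dependency map, selecting the regression suites for everything a change can reach. A minimal sketch (the modules, dependencies and test-suite paths are all invented):

```python
# All module names, dependencies and test-suite paths are invented.
DEPENDS_ON = {
    "billing": {"database", "tax_rules"},
    "reports": {"database"},
    "ui": {"billing", "reports"},
}

TEST_SUITES = {
    "billing": "tests/test_billing.py",
    "reports": "tests/test_reports.py",
    "ui": "tests/test_ui.py",
}

def impacted_modules(changed):
    """Return every module that directly or indirectly depends on `changed`."""
    impacted = set()
    frontier = {changed}
    while frontier:
        current = frontier.pop()
        for module, deps in DEPENDS_ON.items():
            if current in deps and module not in impacted:
                impacted.add(module)
                frontier.add(module)
    return impacted

# A change to "database" impacts billing, reports and (through them) ui,
# so all three regression suites should be rerun.
for module in sorted(impacted_modules("database")):
    print(TEST_SUITES[module])
```

In practice this knowledge often lives in build tooling or in testers' heads, which is exactly why the out-of-date documentation and lost domain knowledge mentioned above make maintenance testing harder.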