ISTQB Certification Practice - Chapter 1

Disclaimer: the following article is my own writing, based upon the ISTQB Foundation Level Syllabus, which is owned and copyrighted by the ISTQB.


Also, a quick word about terminology: the ISTQB has established its own glossary of terms, which I must use in its exams and thus in these notes. The glossary is a blend of ISO and IEEE standards plus some of the ISTQB's own inventions, so it's likely some of the terms used won't match your personal definitions (they certainly don't match mine).


    Chapter 1 - Fundamentals of Testing
    1.1 Why is Testing Necessary?
error, mistake: A human action that produces an incorrect result
defect, fault, bug: A flaw in a system that can cause the system to fail to perform its required function
failure: Deviation of a system from its expected result
risk: A factor that could result in future negative consequences; usually measured as impact * likelihood (a worked sketch follows at the end of this section)
quality: The degree to which a system meets its specified requirements
---
Software systems are integral to modern life; even if a product doesn't directly use software, software was probably involved in its manufacturing process. When software doesn't work properly, all those dependent things break down too.
Humans can make mistakes, which produce defects in code or in the documentation the code will be written from. When a defect is executed, the system might do something wrong, causing a failure.
Faults do not necessarily lead to failures, and it's possible for environmental factors to cause failures without the presence of a fault. Environmental factors can also cause faults by damaging stored code (don't magnetize your HDD).
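To illustrate the fault/failure distinction, here's a minimal sketch in Python (my own example, not from the syllabus). The fault is present in the code at all times, but a failure only occurs for inputs that execute the defective logic:

    def is_leap_year(year):
        # Intended: Gregorian leap-year rule.
        # Defect: missing the "divisible by 400" exception.
        return year % 4 == 0 and year % 100 != 0

    print(is_leap_year(2024))  # True  - correct, no failure observed
    print(is_leap_year(2023))  # False - correct, no failure observed
    print(is_leap_year(2000))  # False - failure! 2000 was a leap year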
Testing software systems and their documentation can reduce the risk of problems occurring during operation. If defects are identified and fixed, then the code quality is improved.
When a well-designed test does not find defects, it gives us confidence in the quality of the software and reduces the risk associated with it. However, testing alone cannot change the actual quality of the code; only fixing the defects it reveals can do that.
Testing is one part of Quality Assurance as a whole. The process of planning, executing and reviewing tests should teach useful lessons which you can apply in future projects to reach higher quality.
The amount of testing needed depends on the software's technical risks, the business risks, and project constraints such as time and budget. Testing should give stakeholders enough information to make informed decisions about release or further development of the software.
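As promised above, here is a worked sketch of the impact * likelihood formula (the risk items and the 1-5 scales are hypothetical, my own illustration). Risks with the highest exposure get tested first:

    # Hypothetical risk items, each scored on a 1-5 scale.
    risks = [
        {"name": "payment gateway outage", "impact": 5, "likelihood": 2},
        {"name": "typo on help page",      "impact": 1, "likelihood": 4},
        {"name": "data loss on crash",     "impact": 5, "likelihood": 3},
    ]

    # Risk exposure = impact * likelihood; prioritize the highest.
    for r in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
        print(f'{r["impact"] * r["likelihood"]:>2}  {r["name"]}')
    # 15  data loss on crash | 10  payment gateway outage | 4  typo on help page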
  

 1.2 What is Testing?
debugging: The process of finding, analyzing and removing the causes of failures in software
requirement: A condition that a system must meet to satisfy its contract/specifications
review: An evaluation of a product to determine discrepancies from its planned results and to recommend improvements.
test case: A set of input values, preconditions, expected results and postconditions for a particular test condition.
testing: The process of determining if software products meet their requirements. Includes the entire lifecycle.
test objective: The purpose for designing and executing a test.
---
Testing is more than just execution. It also includes planning, choosing test conditions, designing test cases, executing them, checking results, reporting on the testing process, and performing closure tasks. It can also include reviewing documentation and performing static code analysis.
Testing can have several objectives: not just finding defects, but also gaining a general level of confidence about the software's quality, and gathering information for decision-making.
Development Testing tries to cause as many failures as possible, so that defects can be found and fixed.
Acceptance Testing tries to build confidence that the requirements are met.
Maintenance Testing tries to verify that new defects have not been introduced since previous tests.
Operational Testing tries to assess system characteristics such as reliability or availability.
Debugging is not the same thing as testing. Dynamic testing finds failures, while debugging analyzes those failures to locate and fix the source fault.
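To tie the "test case" definition above together with execution, here's a minimal sketch (my own example; the withdraw function is a hypothetical system under test) showing input values, a precondition, an expected result and a postcondition:

    def withdraw(account, amount):
        # Hypothetical system under test.
        if amount > account["balance"]:
            raise ValueError("insufficient funds")
        account["balance"] -= amount
        return account["balance"]

    def test_withdraw_reduces_balance():
        account = {"balance": 100}       # precondition: balance is 100
        actual = withdraw(account, 30)   # input value: amount = 30
        assert actual == 70              # expected result
        assert account["balance"] == 70  # postcondition: balance updated

    test_withdraw_reduces_balance()
    print("test passed")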
 

   1.3 Seven Testing Principles
Exhaustive testing: A test approach where the test suite comprises all combinations of input values and preconditions.
---
Principle 1 - Testing shows the presence of defects, but cannot prove that there are no defects. A test showing no bugs only lowers the probability of bugs existing.
Principle 2 - Exhaustive testing is impossible, except in trivial cases. Use risk analysis and prioritization to focus your limited testing where it will be most useful.
Principle 3 - Early testing is better. Defects are easier to address earlier in the software development life cycle, so testing should start as early in that cycle as possible.
Principle 4 - Defects tend to be clustered within specific modules of a piece of software. When you find a bug in a module, focus a greater proportion of testing on that module.
Principle 5 - The pesticide paradox says that when the same set of tests is repeated over and over, it will eventually stop finding new defects. Tests need to be revised over time to continue finding new defects.
Principle 6 - Testing methods depend on their context. You don't test a nuclear reactor the same way as an e-commerce site.
Principle 7 - The absence of bugs doesn't matter if the system is not built to fulfill users' needs and expectations.
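As a back-of-the-envelope illustration of Principle 2 (my own numbers, assuming a generous million test executions per second): a function with just two independent 32-bit integer inputs already has far too many combinations to test exhaustively:

    values_per_input = 2 ** 32            # ~4.3 billion values per input
    combinations = values_per_input ** 2  # ~1.8e19 input combinations

    seconds = combinations / 1_000_000    # at a million tests per second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{combinations:.2e} combinations ~ {years:,.0f} years")  # ~585,000 years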

    1.4 Fundamental Test Process
Confirmation Testing, Re-testing: Running test cases which previously failed, in order to verify fixes were successful
Exit Criteria: The conditions agreed upon by stakeholders which permit a process to be officially completed.
Incident: An event occurring which requires investigation
Regression Testing: Testing a previously tested program following modification, to ensure that the modification did not introduce new defects or uncover old (previously non-executed) defects
Test Basis: All documents from which the requirements of a system can be inferred. Test cases are based on these documents.
Test Condition: An item or event of a system that could be verified by test cases
Test Coverage: The degree (percentage) to which a coverage item has been exercised by test cases (a worked example follows at the end of this section)
Test Data: Data which exists before a test is executed, and which affects or is affected by the system under test
Test Execution: The process of running a test on a system and producing actual results
Test Log: A chronological record of relevant details about the execution of tests
Test Plan: A record of the test planning process. A document describing the scope, approach, resources and schedule of intended test activities. It identifies the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used, the rationale for their choice, and any risks requiring contingency planning.
Test Procedure, Test Script: The sequence of actions for the execution of a test.
Test Policy: A high-level document describing the principles, approach and major objectives of an organization's testing
Test Suite: A set of several test cases for a system under test. These tests often run in sequence, with the postcondition of one test serving as the precondition for the next.
Test Summary Report: A document summarizing testing activities and results, also containing an evaluation of the corresponding test items against exit criteria.
Testware: Artifacts produced during the test process required to plan, design and execute tests. Documentation, scripts, expected results, clean-up procedures, etc.
---
Tests aren't only executed; they also need planning, preparation and evaluation. The basic steps are as follows:
1) Test planning and control. Defining the objectives of testing and specifying test activities to meet those objectives. "Control" means continuously comparing actual progress against the plan, and taking corrective action when they diverge.
2) Test analysis and design. Transforming general testing objectives into tangible test conditions and test cases. This can be formalized into several steps: reviewing the test basis, evaluating testability of the basis, identifying and prioritizing test conditions, designing and prioritizing high-level test cases, identifying the test data needed to support the test cases, designing the test environment, and creating bi-directional traceability between test basis and test cases.
3) Test implementation and execution. Finalizing and implementing test cases, validating all planned elements thus far, executing the tests, comparing the actual and expected results, reporting and analyzing discrepancies, and repeating tests as necessary.
4) Evaluating exit criteria and reporting. Comparing test results to exit criteria and determining if more tests are needed or if the exit criteria should be changed. Summarize results for stakeholders.
5) Test closure activities. Collect data from the previous activities, close any outstanding issues, analyze lessons learned, and archive any useful materials.
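As a small worked example of the "Test Coverage" definition above (my own illustration; the requirement and test-case IDs are hypothetical), coverage here is the percentage of requirements exercised by at least one test case:

    # Hypothetical traceability: which requirements each test case exercises.
    coverage_map = {
        "TC-01": {"REQ-1", "REQ-2"},
        "TC-02": {"REQ-2"},
        "TC-03": {"REQ-4"},
    }
    all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}

    exercised = set().union(*coverage_map.values())
    coverage = 100 * len(exercised & all_requirements) / len(all_requirements)
    print(f"requirement coverage: {coverage:.0f}%")          # 3 of 5 -> 60%
    print(f"untested: {sorted(all_requirements - exercised)}")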
 

   1.5 The Psychology of Testing
Error guessing: A test design technique where the tester uses their experience to anticipate what defects might be present in a system and design tests specifically to expose them.
Independence: The separation of responsibilities between people, to ensure tests are completed objectively.
---
When testing, there is a tradeoff between familiarity and bias. The developer of a piece of code (or people close to them) is very knowledgeable about the code and capable of good error-guessing, but they may subconsciously (or consciously) try to avoid finding their own mistakes.

When a tester needs to report bugs to a developer, care should be taken to avoid being seen as an "enemy." Remind people of their common goal of achieving high quality, use neutral, fact-focused language, and be very clear.
  

 1.6 Code of Ethics
ISTQB states that its certified testers will:
-Act in the public interest
-Act in the Client and Employer's best interest, consistent with the public interest
-Ensure that the deliverables they provide meet the highest professional standards possible
-Maintain the integrity and independence of their professional judgement
-Advance the integrity and reputation of the profession, consistent with the public interest
-Be fair and supportive to colleagues, and promote cooperation with software developers
-Participate in lifelong learning regarding the practice of their profession, and promote an ethical approach to the practice of the profession