A
Acceptance Criteria: Acceptance Criteria are the exit criteria that a component or system must satisfy in order to be accepted by the end user, customer or any other authorized agency.
Acceptance Testing:
Acceptance Testing is a standard industry practice & is the final testing conducted to enable a user / customer to determine whether to accept a software product. It is normally performed to validate that the software meets a set of agreed acceptance criteria based on specifications provided by the end-user or customer. In theory, when all the acceptance tests pass, the project can be said to be done.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing or cognitive impairments).
Accuracy:
Accuracy is the capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
Actual Result:
Actual Result is the behavior produced or observed when a component or system is tested; it is a by-product of unit or component testing.
Adaptability:
Adaptability is the capability of a software product to be adapted to different specified environments without applying actions or means other than those provided for this purpose for the software considered.
Ad-hoc Testing:
Ad-hoc Testing is a testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. It is a commonly used term for software testing performed without planning and documentation; test design and test execution happen simultaneously. It is part of exploratory testing and the least formal of test methods. It is often criticized for being unstructured, & the tester seeks to find bugs quickly by any means that seem appropriate. For ad-hoc testing the testers should possess a significant understanding of the software before testing it. Ad-hoc Testing can include negative testing as well.
Affinity Diagram: Affinity Diagram is a group technique, mainly driven by brainstorming, in which a large amount of language data (ideas, issues, opinions) is organized into related categories.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
Alpha Testing: Alpha Testing is simulated or actual operational testing by potential users / customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing. It is usually done when the development of the software product is nearing completion; minor design changes may still be made as a result of such testing.
Analyzability:
Analyzability is the capability of a software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified.
Anomaly:
Anomaly refers to a condition which deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Assertion Testing: Assertion Testing is a dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
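As an illustration (a minimal sketch, not tied to any particular tool, with an invented function name), assertions about relationships between program variables can be embedded directly in the code and are evaluated as the program executes:

    def apply_discount(price, rate):
        # Assertions state expected relationships between program variables;
        # they are checked while the program runs (dynamic analysis).
        assert price >= 0, "price must be non-negative"
        assert 0 <= rate <= 1, "rate must be a fraction between 0 and 1"
        discounted = price * (1 - rate)
        # The result must never exceed the original price.
        assert discounted <= price
        return discounted

    apply_discount(100.0, 0.2)    # assertions hold, executes silently
    # apply_discount(100.0, 1.5)  # would raise AssertionError: rate out of range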
Attractiveness:
Attractiveness is the capability of the software product to be attractive to the user.
Audit:
Audit is an activity related to inspection / assessment which verifies the compliance of the end results with the plans, policies and procedures and is aimed at conservation of resources.
Audit Trail:
Audit Trail is a path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as the starting point. This facilitates defect analysis and allows a process audit to be carried out.
Authentication Testing:
Authentication Testing is a type of testing in which the test engineer feeds different combinations of user names and passwords in order to check whether only authorized persons are able to access the application.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing: Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. It involves the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
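For example, a minimal sketch using Python's standard unittest module (one of many possible tools; the function under test is invented) shows test control, precondition set-up and automatic comparison of actual versus expected outcomes:

    import unittest

    def add(a, b):            # hypothetical function under test
        return a + b

    class AddTests(unittest.TestCase):
        def setUp(self):
            # set up test preconditions before each test
            self.operands = (2, 3)

        def test_add_returns_expected_sum(self):
            actual = add(*self.operands)
            # the tool compares actual and expected outcomes and reports the result
            self.assertEqual(actual, 5)

    if __name__ == "__main__":
        unittest.main()       # executes the tests without manual intervention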
Automated Testware:
Automated Testware are the testware employed during automated testing, such as tool scripts.
Availability:
Availability is the degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage.
B
Back-To-Back Testing: Back-To-Back Testing refers to the testing process in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.
Backus-Naur Form: Backus-Naur Form is a meta-language used to formally describe the syntax of a language.
Basic Block: Basic Block is a sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: Basis Path Testing is a white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: Basis Set is the set of tests derived using Basis Path Testing. It is a set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.
Baseline: Baseline is a quantitative measure of the present / existing level of performance which has been formally reviewed or agreed upon, which thereafter serves as the basis for further development. It is the point at which some deliverable produced during the software engineering process is put under formal change control.
Baseline Document: Baseline documents are the documents which have been approved by the customer and will not have any more changes. Baseline Documents cover all the details of the project and have undergone the "walkthrough" process. Once a document is baselined it cannot be changed unless there is a change request duly approved by the customer. Service Level Agreement (SLA) & Business Requirement Document (BRD) are examples of Baseline Documents.
Behavior:
Behavior refers to the response of a component or system to a set of input values and preconditions.
Benchmarking: Benchmarking is a process of drawing comparison of products, services or processes against the known best / competitive practices. It is aimed at laying down the standards for the performance of a Product, Service or Support processes.
Benchmark Testing:
Benchmark Testing involves tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Bespoke Software:
Bespoke Software refers to the software product developed specifically for a set of users or customers. It is an exact contrast of off-the-shelf software.
Best Practice:
Best Practice refers to a superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.
Beta Testing: Beta Testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further evaluation by the users can reveal more faults or bugs in the product. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users. Thus beta testing is done by end-users or others, & not by the programmers or testers.
Big-Bang Testing: Big-Bang Testing refers to the type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.
Binary Portability Testing: Binary Portability Testing involves testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black-Box Testing: Black-Box Testing involves tests based upon specification requirements and functionality. For black box testing, the software tester need not have any knowledge of the internal design of the software or of the code being tested. For this reason, the tester and the programmer can be independent of each other, avoiding programmer bias toward his own work. During black box testing, the tester only knows the "legal" inputs and what the expected outputs should be, but need not know how the program actually arrives at those outputs.
Black Box Test Design Techniques: Black box test design technique refers to a documented procedure to derive and select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
Blocked Test Case: Blocked Test Case refers to a test case which cannot be executed because the preconditions for its execution are not fulfilled.
Bottom Up Testing: Bottom Up Testing is an approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Boundary Testing involves tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value: Boundary Value is an input or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
Boundary Value Analysis: Boundary Value Analysis is a data selection technique in which test data is selected from the "boundaries" of the input or output domain classes, data structures and procedure parameters. Selection mainly includes the actual minimum and maximum boundary values, with a tolerance of (+1 or -1) on the maximum and the minimum values. Boundary Value Analysis or BVA is similar to Equivalence Partitioning but focuses on "corner cases".
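A minimal sketch of boundary value selection for a hypothetical input field that accepts values from 1 to 100 (the validator and range are invented for illustration):

    # Hypothetical validator: accepts integers in the range 1..100 inclusive.
    def is_valid_quantity(n):
        return 1 <= n <= 100

    # Boundary Value Analysis picks the edges of the range and the values
    # at +/- 1 of each edge, rather than arbitrary mid-range values.
    boundary_values = [0, 1, 2, 99, 100, 101]
    for value in boundary_values:
        print(value, "->", "accepted" if is_valid_quantity(value) else "rejected")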
Boundary Value Coverage:
Boundary Value Coverage is the percentage of boundary values, which have been exercised by a test suite.
Branch:
Branch refers to a basic block which can be selected for execution based on a program construct in which one of two or more alternative program paths are available, e.g. case, jump, go to, if then - else.
Branch Testing or Branch Analysis: Branch Testing is a white box test method in which every possible branch of each decision is executed at least once.
Branch Coverage: Branch Coverage is an outcome of a decision, and measures the number of decision outcomes or branches which have been tested. This takes a more in-depth view of the source code rather than a simple "Statement Coverage". A branch is an outcome of a decision. For example Boolean decisions like an "If - Statement", has two outcomes or branches (i.e. True and False).
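As a sketch, the hypothetical function below contains one decision with two branches; 100% branch coverage requires at least one test driving the True outcome and one driving the False outcome:

    def classify(amount):
        # One decision ("if amount > 1000") with two branches.
        if amount > 1000:        # True branch
            return "large"
        return "small"           # False branch

    # Two test cases are needed for 100% branch coverage of this decision:
    assert classify(1500) == "large"   # exercises the True branch
    assert classify(10) == "small"     # exercises the False branch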
Brainstorming: Brainstorming is an idea-generating technique which uses the thinking capacity of a group of people. It is a technique for soliciting a quantity of ideas quickly in a non-critical work environment. It encourages teamwork & creativity.
Breadth Testing: Breadth Testing refers to a test suite that exercises the full functionality of a product but does not test features in detail.
Bug: Bug refers to a fault or defect in a software program which causes the program to perform in an unintended or unanticipated manner. There are two types of bugs: 1) Code Error related bugs and 2) Design Error related bugs.
Bug Priority: Bug Priority indicates how urgently a bug needs to be fixed. It describes the importance of the bug. Bug priority may change according to the schedule of testing.
Bug Severity: Bug Severity indicates how badly the bug can harm the system, i.e. how bad the bug is. Severity is an inherent, constant attribute of the bug.
Build Verification Testing or BVT: Build Verification Testing, also known as Build Acceptance Test, is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. The build acceptance test is generally a short set of tests which exercises the mainstream functionality of the application software. Any build that fails the build verification test is rejected, and testing continues on the previous build (provided there has been at least one build that has passed the acceptance test). BVT is important because it lets developers know right away if there is a serious problem with the build, and it saves the test team time and frustration by avoiding tests of an unstable build.
Business Process-Based Testing:
Business Process-Based Testing is an approach to testing in which test cases are designed based on descriptions or knowledge of the business processes.
C
CASE: CASE refers to Computer Aided Software Engineering.
CAST:
CAST refers to Computer Aided Software Testing.
Call Coverage: Call coverage is a metric, which reports whether you executed each function call. The hypothesis is that bugs commonly occur in interfaces between modules. It is also known as call pair coverage.
Capture / Replay Tool: Capture / Replay Tool is a test tool which records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly it is applied to GUI test tools. Such tools are often used to support automated regression testing.
Cause Effect Graph: Cause Effect Graph is a graphical representation of inputs and the associated outputs which can be used to design the test cases.
Cause Effect Graphing: Cause Effect Graphing is a black box test design technique in which test cases are designed from cause-effect graphs.
Certification:
Certification is the process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.
Certification Testing: Certification Testing is testing performed to obtain acceptance of a software application by an authorized agency, after the software has been validated through practical trials demonstrated to the full satisfaction of an agent nominated by the agency.
Changeability:
Changeability refers to the capability of the software product to enable specified modifications to be implemented.
Checkpoint or Verification Point: Checkpoint or Verification Point is an expected behavior of the application which must be validated with the actual behavior after certain action has been performed on the application.
Classification Tree Method:
Classification Tree Method is a black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains.
Client: Client is a customer who invests his money & pays for the product & becomes the beneficiary of the use of the product.
Client Server Testing: Client/Server testing involves increased size, scope, and duration of the test effort itself. The necessary test phases include build acceptance testing, prototype testing, system reliability testing, multiple phases of regression testing, and beta, pilot, and field testing.
CMM: CMM means "Capability Maturity Model", developed by the Software Engineering Institute (SEI). It is a process capability maturity model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes. CMM is intended as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project.
CMMI: The CMM has now been superseded by the CMMI (Capability Maturity Model Integration). The old CMM has been renamed to Software Engineering CMM (SE-CMM).
Coding:
Coding is a generic term which refers to the generation of a source code.
Code Audit:
Code Audit is an independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. In Code Audit correctness and efficiency may also be evaluated. It is in contrast with code inspection, code review, code walkthrough.
Code Complete:
Code Complete refers to a Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage:
Code Coverage was among the first techniques invented for systematic software testing. It is a measure used in software testing which describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. Test engineers can look at code coverage results to help them devise test cases and input or configuration sets which will increase the coverage over vital functions. The use of code coverage has also been extended to fields such as digital hardware.
Code Freeze or Feature Freeze:
Code Freeze or Feature Freeze represents a point in time in the software development process after which the rules for making changes to the source code or related resources become more strict. A freeze helps move the project forward towards a release or the end of an iteration. The stricter rules may include only allowing changes, which fix bugs, or allowing changes only after thorough review by other members of the development team. It is a particular kind of freeze of features, when all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience.
Code Inspection:
Code Inspection is a formal manually performed testing technique where the programmer reviews source code statement by statement with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: Code Walkthrough is a formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Co-Existence:
Co-Existence is the capability of the software product to co-exist with other independent software in a common environment sharing common resources.
Comparison Testing: Comparison Testing is the process of comparing strengths and weaknesses of the software with that of some better or similar products from the competitors.
Compatibility Testing: Compatibility Testing is used to determine how well the software performs under different environment of varied types of hardware / software / operating system / network etc. It ensures that the software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Complexity:
Complexity is the degree to which a component or system has a design and / or internal structure that is difficult to understand, maintain and verify.
Completeness: A product is termed complete if it meets all the intended requirements.
Compliance:
Compliance is the capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.
Compliance Testing:
Compliance Testing is the process of testing to determine the compliance of a component or system.
Component: Component is a minimal software item for which a separate specification is available & it can be tested in isolation.
Component Integration Testing:
Component Integration Testing is the testing performed to expose defects in the interfaces and interaction between integrated components.
Component Specification:
Component Specification is a description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).
Component Testing: Component Testing is like "Unit Testing" with the difference that all Stubs and Simulators are replaced with the real objects. Here a Unit is a component, and integration of one or more such components is also a Component.
Compound Condition:
Compound Condition refers to two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.
Concurrent Testing or Concurrency Testing: Concurrent Testing is multi-user testing used to determine the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Condition:
Condition is a logical expression that can be evaluated as True or False, e.g. A>B.
Condition Coverage or Condition Testing: Condition Coverage is a white-box testing measure which reports the number or percentage of condition outcomes covered by the designed test cases. For example, 100% condition coverage indicates that every possible outcome (True and False) of every condition has been executed at least once during the testing.
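A sketch contrasting individual condition outcomes with the overall decision, for a compound decision (the function and values are invented for illustration):

    def eligible(age, income):
        # The decision contains two conditions: "age >= 18" and "income > 0".
        return age >= 18 and income > 0

    # For 100% condition coverage each individual condition must evaluate
    # to both True and False at least once:
    assert eligible(25, 500) is True    # age >= 18: True,  income > 0: True
    assert eligible(15, 500) is False   # age >= 18: False (second condition short-circuited)
    assert eligible(25, 0) is False     # age >= 18: True,  income > 0: False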
Condition Determination Coverage:
Condition Determination Coverage is the percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.
Condition Determination Testing:
Condition Determination Testing is a white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.
Condition Outcome:
Condition Outcome is the evaluation of a condition to True or False.
Configuration:
Configuration refers to the composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
Configuration Auditing:
Configuration Auditing refers to the function to check on the contents of libraries of configuration items, e.g. for standards compliance.
Configuration Control:
Configuration Control refers to an element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
Configuration Identification:
Configuration Identification refers to an element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation.
Configuration Item:
Configuration Item is an aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process.
Configuration Management:
Configuration Management is a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
Configuration Management Tool:
Configuration Management Tools are tools used to keep track of changes made to the systems and all related artifacts. They are also called version control tools.
Configuration Testing: Configuration Testing is the Testing of an application on all types of supported hardware and software platforms. It covers various combinations of hardware types, configuration settings and software versions.
Conformance Testing: Conformance Testing is the process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Consistency: Consistency refers to adherence to a given set of rules repeatedly. It is the degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system.
Context Driven Testing:
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Control Flow:
Control Flow refers to an abstract representation of all possible sequences of events (paths) in the execution through a component or system.
Conversion Testing: Conversion Testing validates the effectiveness of data conversion processes, including field-to-field mapping and data translation. It involves testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Correctness:
Correctness refers to the extent to which software is free from design and coding defects. It is a quality attribute pointing towards an extent to which the program satisfies the desired requirements and user objectives.
Cost of Quality: Cost of Quality refers to the money spent over and above the expected product development costs. It is aimed at ensuring that the customer receives a product of the desired quality, giving him due satisfaction. The cost of quality includes prevention, appraisal, and correction or repair costs.
Coverage:
Coverage is the degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
Coverage Analysis:
Coverage Analysis refers to the measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.
Coverage Item:
Coverage Item is an entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.
Coverage Tool:
Coverage Tool refers to a tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by the test suite.
Cross Browser Testing: Cross Browser Testing is used to test an application with different browsers, possibly under different operating systems, for usability testing & compatibility testing.
Customer: Customer is an individual or an organization, internal or external to the producing organization which receives the product.
Cyclomatic Complexity: Cyclomatic Complexity is a software metric (measurement). It is a measure of the logical complexity of an algorithm, used in white-box testing. It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code. Empirically, it equals the number of decision statements plus one.
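A sketch of the "decision statements plus one" rule on a small hypothetical function:

    def grade(score):
        # Two decision statements (the two "if" conditions) appear below,
        # so the cyclomatic complexity is 2 + 1 = 3, i.e. three linearly
        # independent paths through the code.
        if score >= 90:
            return "A"
        if score >= 60:
            return "pass"
        return "fail"

    # One test per independent path gives a basis set of three tests:
    assert grade(95) == "A"
    assert grade(70) == "pass"
    assert grade(30) == "fail"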
D
Data Definition: Data Definition is an executable statement where a variable is assigned a value.
Data Dictionary: Data Dictionary is a database that contains definitions of all data items defined during analysis.
Data Driven Testing: Data Driven Testing is testing in which the actions of a test case are parameterized by externally defined data values, maintained as a file or spreadsheet. It is a common technique in Automated Testing (a brief sketch follows the Data Flow entry below).
Data Flow: Data Flow is an abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction.
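The sketch below is the one referred to from the Data Driven Testing entry above; it parameterizes a single test by rows of externally defined data. The file name, format and function under test are purely illustrative:

    import csv

    def to_upper(text):                    # hypothetical function under test
        return text.upper()

    # Each row of the (illustrative) file testdata.csv holds an input value
    # and its expected result, e.g.:  hello,HELLO
    with open("testdata.csv", newline="") as handle:
        for input_value, expected in csv.reader(handle):
            actual = to_upper(input_value)
            assert actual == expected, f"{input_value!r}: got {actual!r}"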
Data Flow Analysis:
Data Flow Analysis is a form of static analysis based on the definition and usage of variables.
Data Flow Coverage:
Data Flow Coverage refers to the percentage of definition-use pairs that have been exercised by a test case suite. It is a variation of path coverage, which considers only the sub-paths from variable assignments to subsequent references of the variables. The advantage of this metric is that the paths reported have direct relevance to the way the program handles the data. One disadvantage of this metric is that it does not include decision coverage. Another disadvantage is its complexity.
Data Flow Diagram: Data Flow Diagram is a modeling notation that represents a functional decomposition of a system.
Data Flow Test: Data Flow Test is a white box test design technique in which test cases are designed to execute definition and use pairs of variables.
Data Pool: A datapool is a test dataset, a collection of related data records which supplies data values to the variables in a test script during test script playback. Datapools are used to supply realistic data and to stress an application with a realistic amount of data. When we create a data-driven test by using Functional Test, we select the objects in an application-under-test to data-drive. Functional Test creates a datapool in which we can edit and add data. We can use a single test script repeatedly with varying input and response data.
Debugging: Broadly, debugging is the process of fixing the identified bugs. It involves analyzing and rectifying the syntax errors, logic errors and all other types of errors identified during the process of testing. It must be clearly differentiated from "Testing", which refers to locating or identifying the errors or bugs. Debugging occurs as a consequence of successful testing. It is an exercise to connect the external manifestation of the error and the internal cause of the error.
Debugging Tool:
Debugging Tool is a tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
Decision:
Decision is a program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
Decision Coverage: Decision Coverage is a white-box testing measure which reports the number or percentage of decision outcomes (decision directions) exercised by the designed test cases. For example, 100% decision coverage indicates that every decision direction has been executed at least once during the testing; it implies both 100% branch coverage and 100% statement coverage.
Decision Condition Testing:
Decision Condition Testing is a white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
Decision Table:
Decision Table is a tool for documenting unique combinations of conditions and associated results in order to derive unique test cases for validation testing. It is a table showing combinations of inputs and their associated outputs and actions (effects), which can be used to design test cases.
Decision Table Testing:
Decision Table Testing is a black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
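A sketch in which each row of a small decision table becomes one test case (the business rule and values are invented for illustration):

    def shipping_fee(is_member, order_total):   # hypothetical business rule
        if is_member or order_total >= 50:
            return 0
        return 5

    # Decision table: each row is one combination of conditions (causes)
    # together with the expected action (effect).
    decision_table = [
        # (is_member, order_total, expected_fee)
        (True,  20, 0),
        (True,  60, 0),
        (False, 20, 5),
        (False, 60, 0),
    ]

    for is_member, total, expected in decision_table:
        assert shipping_fee(is_member, total) == expected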
Decision Testing:
Decision Testing is a white box test design technique in which test cases are designed to execute decision outcomes.
Decision Outcome:
Decision Outcome is the result of a decision (which therefore determines the branches to be taken).
Defect:
Defect refers to nonconformance to requirements or functional / program specification. It is a flaw in a component or system, which can cause the component, or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system
Defect Density:
Defect Density is a software metric defined as the total number of defects per LOC (lines of code). Alternatively, it can be the total number of defects per size of the project, where the measure of "size of the project" can be the number of Function Points, number of Feature Points, number of Use Cases or KLOC (Kilo Lines of Code), etc.
Defect Detection Percentage or DDP:
Defect Detection Percentage is the number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
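A worked example of the calculation (the numbers are invented for illustration):

    # Suppose system testing found 80 defects and 20 further defects escaped
    # and were found afterwards (e.g. in acceptance testing or production).
    found_by_phase = 80
    found_afterwards = 20

    ddp = found_by_phase / (found_by_phase + found_afterwards)
    print(f"Defect Detection Percentage: {ddp:.0%}")   # -> 80%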
Defect Report:
Defect Report is a document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
Defect Management:
Defect Management is the process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.
Defect Masking:
Defect Masking refers to an occurrence in which one defect prevents the detection of another.
Defect Tracking Tools:
Defect Tracking Tools are the tools to do documentation of defects as detected during the testing and for keeping a track of their status till they are fully resolved.
Definition-Use Pair:
Definition-Use Pair refers to the association of the definition of a variable with the use of that variable. Variable uses include computational (e.g. multiplication) or to direct the execution of a path ("predicate" use).
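A sketch showing a definition of a variable and its two kinds of use (the function is invented for illustration):

    def net_price(gross):
        discount = 0.1 * gross        # definition of "discount" (value assigned)
        if discount > 50:             # predicate use: directs the execution path
            discount = 50             # a new definition of "discount"
        return gross - discount       # computational use: value feeds a calculation

    net_price(600)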
Dependency Testing: Dependency Testing examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: Depth Testing is a test that exercises a feature of a product in full detail.
Deliverable: A deliverable is any (work) product that must be delivered to someone other than the (work) product's author.
Design-Based Testing:
Design-Based Testing is an approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
Desk Check: Desk Check is a verification technique conducted by the author of the artifact to verify the completeness of their own work. This is a standalone technique which does not involve anyone else.
Development Testing:
Development Testing is a formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.
Direct URL Testing:
Direct URL Testing is a type of testing in which the test engineer specifies the direct URLs of secured pages and checks whether those pages can be accessed or not.
Documentation Testing:
Documentation Testing refers to the testing the quality of the documentation, e.g. user guide or installation guide.
Domain:
Domain is the set from which valid input and/or output values can be selected.
Driver:
Driver is a software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
Dynamic Analysis:
Dynamic Analysis is the process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution.
Dynamic Comparison:
Dynamic Comparison refers to the comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.
Dynamic Testing: Dynamic Testing is used to describe the testing of the dynamic behavior of the software code. It involves actual compilation & running of the software by giving input values and checking if the output is as expected. It is the validation portion of Verification and Validation.
E
Efficiency: Efficiency is a quality attribute pointing towards the amount of computing resources and code required by the program to perform a particular function. It is the capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
Efficiency Testing: Efficiency Testing is the process of testing to determine the efficiency of a software product.
Elementary Comparison Testing:
Elementary Comparison Testing is a black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage.
Emulator: Emulator is a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
End-To-End Testing or E2E Testing: End-To-End Testing or E2E Testing is quite similar to system testing but involves testing the application in an environment that simulates real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. The transactions performed simulate the end users' usage of the application.
Endurance Testing:
Endurance Testing checks for memory leaks or other problems that may occur with prolonged execution.
Entrance Criteria: Entrance Criteria refers to the desired conditions and standards for work product quality, which must be present or met for entry into the next stage of the software development process.
Entry Criteria:
Entry Criteria is the set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Entry Point:
Entry Point is the first executable statement within a component.
Equivalence Class: Equivalence Class is a portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partition:
Equivalence Partition is a portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
Equivalence Partition Coverage:
Equivalence Partition Coverage refers to the percentage of equivalence partitions that have been exercised by a test suite.
Equivalence Partitioning: Equivalence Partitioning is a software testing technique with two prime goals: 1) to reduce the number of test cases to a necessary minimum, and 2) to select the right test cases to cover all possible scenarios. It is typically applied to the inputs of a tested component, although in rare cases equivalence partitioning is also applied to the outputs of a software component. The technique uses a subset of data which is representative of a larger class, and is carried out as a substitute for exhaustive testing of every value of data in that larger class.
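A sketch for a hypothetical age field, where one representative value is selected from each partition instead of testing every possible input (the validator and range are invented for illustration):

    def is_valid_age(age):                 # hypothetical input check: 0..120 valid
        return 0 <= age <= 120

    # Three equivalence partitions, one representative value per partition:
    #   invalid (below range), valid (in range), invalid (above range)
    representatives = {-5: False, 30: True, 150: False}

    for value, expected in representatives.items():
        assert is_valid_age(value) is expected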
Error or Defect: Error or Defect is a discrepancy between a computed, observed or measured value or condition as compared to the true, specified or theoretically correct value or condition. It can be human action, which resulted in software containing a fault (e.g. omission or misinterpretation of user requirements in the software specification, incorrect translation or omission of a requirement in the design specification).
Error Guessing: Error Guessing is a software test design technique based on the ability of the tester to draw on his past experience, knowledge and intuition to predict where bugs will be found in the software under test. Some areas to guess at are: empty or null strings, zero instances or occurrences, blank or null characters in strings, and negative numbers. The software tester uses his judgement to select test data values which seem likely to cause defects.
Error Seeding:
Error Seeding is the process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects.
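A commonly used estimate (a sketch of the usual seeding arithmetic, with invented numbers) infers the number of real defects from the proportion of seeded defects that were detected:

    seeded_total = 20      # known defects deliberately inserted
    seeded_found = 15      # seeded defects detected by testing
    real_found = 60        # genuine (non-seeded) defects detected

    # If testing finds seeded and real defects at roughly the same rate,
    # the total number of real defects can be estimated as:
    estimated_real_total = real_found * seeded_total / seeded_found
    estimated_remaining = estimated_real_total - real_found
    print(estimated_real_total, estimated_remaining)   # 80.0 and 20.0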
Error Tolerance:
Error Tolerance is the ability of a system or component to continue normal operation despite the presence of erroneous inputs.
Exception Handling:
Exception Handling is the behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
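A minimal sketch of a component responding to erroneous input in a controlled way rather than failing (the function is invented for illustration):

    def parse_quantity(raw):
        # Respond to erroneous input (non-numeric text) with a controlled
        # outcome instead of an unhandled crash.
        try:
            value = int(raw)
        except ValueError:
            return None          # signal invalid input to the caller
        return value

    assert parse_quantity("42") == 42
    assert parse_quantity("forty-two") is None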
Executable Statement:
Executable Statement is a statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
Exercised:
A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
Exhaustive Testing: Exhaustive Testing refers to executing the program with all possible combinations of input values and preconditions for an element of the software under test.
Exit Criteria: Exit Criteria refers to standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process. Exit criteria comprise a set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered complete when there are still outstanding parts of the task which have not been finished. Exit criteria are used in testing to report against and to plan when to stop testing.
Exit Point:
Exit Point refers to the last executable statement within a component.
Expected Result:
Expected Result refers to the behavior predicted by the specification, or another source, of the component or system under specified conditions.
Exploratory Testing: Exploratory Testing is an approach to software testing which involves simultaneous learning, test design and test execution. It is a type of "Ad-hoc Testing"; the only difference is that in this case the tester does not have much idea about the application & explores the system in an attempt to learn it and test it at the same time. It is a creative & informal style of software testing aimed at finding faults and defects, driven by challenging assumptions. It is not based on formal test plans or test cases. The tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
F
Fail: A test is considered to fail if its actual result does not match its expected result.
Failure:
Failure means actual deviation of the component or system from its expected delivery, service or result.
Failure Mode:
Failure Mode refers to the physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution.
Failure Mode and Effect Analysis (FMEA):
Failure Mode and Effect Analysis is a systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.
Failure Rate:
Failure Rate is the ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
Fault Tolerance:
Fault Tolerance refers to the capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface.
Fault Tree Analysis:
Fault Tree Analysis is a method used to analyze the causes of faults or defects.
Feasible Path:
Feasible Path is a path for which a set of input values and preconditions exists which causes it to be executed.
Feature:
Feature is an attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints).
Feature Freeze or Code Freeze:
Feature Freeze represents a point in time in the software development process after which the rules for making changes to the source code or related resources become more strict. A freeze helps move the project forward towards a release or the end of an iteration. The stricter rules may include only allowing changes, which fix bugs, or allowing changes only after thorough review by other members of the development team. It is a particular kind of freeze of features, when all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience.
Firewall leakage Testing:
Firewall leakage Testing is a type of testing in which someone will enter as one level of user and try to access the other level unauthorized pages in order to check whether the firewall is working properly or not.
Flexibility: Flexibility is a quality attribute pointing towards an effort required to modify an operational program.
Flowchart:
Pictorial representation of data flow and computer logic. It is easier to understand and assess the structure and logic of an application system by developing a flow chart rather than attempting to understand narrative descriptions or verbal explanations.
Force Field Analysis:
Force Field Analysis refers to a group technique used to identify both driving and restraining forces, which influence a current situation.
Formal Analysis:
Formal Analysis refers to technique which uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency and correctness.
Frozen Test Basis:
Frozen Test Basis refers to a test basis document that can only be amended by a formal change control process.
Function Coverage:
Function coverage is a metric, which reports whether we invoked each function or procedure. It is useful during preliminary testing to assure at least some coverage in all areas of the software. Broad, shallow testing finds gross deficiencies in a test suite quickly.
Function Point Analysis:
A function point is a unit of measurement to express the amount of business functionality an information system provides to a user. Function points are an ISO-recognized software metric to size an information system based on the functionality that is perceived by the user of the information system, independent of the technology used to implement it. The method of measuring the size of an information system and expressing it in a number of function points is called function point analysis (FPA). FPA can be used to estimate the testing effort required for the information system; the formula is Number of Test Cases = (Function Points)
Functional Decomposition:
Functional Decomposition is a technique used during planning, analysis and design; it creates a functional hierarchy for the software.
Functional Integration:
Functional Integration is an integration approach which combines the components or systems for the purpose of getting a basic functionality working early.
Functional Requirement:
Functional Requirement refers to a requirement that specifies a function that a component or system must perform.
Functional Specification: Functional Specification is a document that describes in detail the characteristics of the product with regard to its intended features.
Functional Test Design Technique:
Functional Test Design Technique is a documented procedure to derive and select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.
Functional Testing:
Functional Testing refers to testing which ensures that all functional requirements are met, without any consideration of the final program structure. It verifies that the application supplies what the users need. It emulates user actions to ensure that execution paths operate correctly and that the appropriate responses are returned for the given requests. It is black-box testing aimed at validating the functional requirements of an application, and is usually performed by testers.
Functionality:
Functionality is the capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
Functionality Testing:
Functionality Testing is the process of testing to determine the functionality of a software product.
G
Glass Box Testing: Glass Box Testing is an exact contrast of Black Box testing & is structural testing, where test data are derived from direct examination of the code to be tested. Glass-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data.
Gorilla Testing: Gorilla Testing involves heavily testing one particular module or piece of functionality.
Gray Box Testing: Gray Box Testing is a combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
H
Heuristic Evaluation: Heuristic Evaluation refers to a static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").
High Level Test Case:
High Level Test Case refers to the test case without concrete values for input data and expected results.
High Order Tests: High Order Tests are Black-box tests conducted once the software has been integrated.
Histogram:
Histogram is a bar graph which displays the distribution of the measurement data in a data set which are organized according to the frequency or relative frequency of occurrence. It illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation. Height of the bars reflects the number of items in the class and the width reflects the measurement interval. It is a good way to provide a picture of the historical data.
Horizontal Traceability:
Horizontal Traceability refers to the tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).
I
Impact Analysis: Impact Analysis refers to the assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Incident:
Incident refers to any event occurring during testing that requires investigation.
Incident Management:
Incident Management refers to the process of recognizing, investigating, taking action and disposing of incidents. It involves recording incidents, classifying them and identifying the impact.
Incident Management Tool:
Incident Management Tool refers to a tool that facilitates the recording and status tracking of incidents found during testing. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.
Incident Report:
Incident Report is a document reporting on any event that occurs during the testing which requires investigation.
Incremental Development Model:
Incremental Development Model refers to a development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.
Incremental Integration Testing: Incremental Integration Testing Involves continuous testing of an application while new functionality is simultaneously added. It requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed. This testing is done either by programmers or by testers.
Independence:
Independence means separation of responsibilities, which encourages the accomplishment of objective testing.
Independent Test Group (ITG): Independent Test Group (ITG) is a group of people whose primary responsibility is software testing.
Inspection:
Inspection is a manual testing technique in which program documents such as specifications, requirements, design, source code or user's manuals are examined in a very formal and disciplined manner to discover errors, violations of standards and other problems. Checklists are a typical vehicle used in accomplishing this technique. It is a formal assessment of a product conducted by qualified, independent reviewers; inspections of deliverables involve the authors in resolving the issues raised. The inspection is aimed at the identification of defects, but it does not include rectifying them; the authors take corrective actions and organize follow-up reviews as per the need.
Install / Uninstall Testing: Install / Uninstall Testing involves testing of full, partial, or upgrade install / uninstall processes.
Installation Testing: Installation Testing is performed to ensure that all the Installed features and options of the software are functioning properly. It confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions. The main objective of Installation Testing is to verify that all necessary components of the application are actually installed or not without missing out any component.
Intake Test:
Intake Test refers to a special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase.
Integrity: Integrity is a quality attribute pointing towards an extent to which unauthorized persons get prevented / controlled from accessing the software or its data.
Integration Testing:
Integration Testing refers to testing of the application after combining / integrating its various parts to find out if all parts function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. It begins after two or more programs or application components have been successfully unit tested. This type of testing is especially relevant to client/server and distributed systems. It is conducted by the development team to validate the interaction or communication between the individual components being integrated.
Interface Testing:
Interface Testing is the testing conducted to evaluate whether systems or components pass data and control correctly to one another.
Interoperability: Interoperability is a quality attribute pointing towards an effort required to couple one system with the other.
Interoperability Testing:
Interoperability Testing is the process of testing to determine the interoperability of a software product.
Invalid Case Testing:
Invalid Case Testing is a testing technique using erroneous [invalid, abnormal, or unexpected] input values or conditions. See: equivalence class partitioning.
Isolation Testing:
Isolation Testing refers to the testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.
I V & V: I V & V means Independent Verification and Validation. Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. Verification can be done with the help of checklists, issues lists, walkthroughs, and inspection meetings. Whereas Validation typically involves actual testing and takes place after verifications are completed.
K
Keyword Driven Testing:
Keyword Driven Testing is a scripting technique which uses data files to contain not only test data and expected results, but also the keywords related to the application being tested. The keywords are interpreted by special supporting scripts which are called by the control script for the test.
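As a minimal Python sketch of the idea (the add/check keywords and data rows below are purely illustrative, not part of any particular tool), the data table pairs a keyword with its arguments, and a small control loop interprets each row by dispatching to a supporting function:

    # Minimal keyword-driven sketch: data rows pair a keyword with arguments,
    # and a dispatch table maps each keyword to a supporting function.

    def do_add(state, a, b):
        state["result"] = int(a) + int(b)

    def do_check(state, expected):
        assert state["result"] == int(expected), f"expected {expected}, got {state['result']}"

    KEYWORDS = {"add": do_add, "check": do_check}

    # The data file contents would normally live outside the script, e.g. as CSV rows.
    test_table = [
        ("add", "2", "3"),
        ("check", "5"),
    ]

    def run(table):
        state = {}
        for keyword, *args in table:          # control script interprets each row
            KEYWORDS[keyword](state, *args)   # supporting function executes the keyword

    run(test_table)
    print("keyword-driven table passed")

In practice the keywords, their implementations and the data rows are maintained separately, so testers can add new test rows without touching the supporting scripts.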
L
LCSAJ: LCSAJ means "Linear Code Sequence And Jump" and consists of the following three items (conventionally identified by line numbers in a source code listing): 1) the start of the linear sequence of executable statements, 2) the end of the linear sequence, and 3) the target line to which control flow is transferred at the end of the linear sequence.
LCSAJ Coverage:
LCSAJ Coverage is the percentage of LCSAJ’s of a component which have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.
LCSAJ Testing:
LCSAJ testing is a white box test design technique in which test cases are designed to execute LCSAJs.
Life Cycle Testing:
Life Cycle Testing refers to the process of carrying out verification of consistency, completeness and correctness of software at every stage of the development life cycle. It aims at catching the defects as early as possible and thus reduces the cost of fixing them. It achieves this by continuously testing the system during all stages of the development process rather than just limiting testing to the last stage. A separate test team is formed at the beginning of the project. When the project starts, both the system development process and the system test process begin; the development team and the test team start at the same point using the same information.
Load Testing: Load Testing is a test performed with an objective to determine the maximum sustainable load which the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or causing excessive delay in transactions.
Localization Testing: Localization Testing refers to testing a version of the software that has been adapted (localized) for a specific locality or locale.
Loop Coverage: Loop Coverage is a metric which reports whether you executed each loop body zero times, exactly once, and more than once (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once. Its strong feature is determining whether while-loops and for-loops execute more than once, information not reported by other metrics.
Loop Testing: Loop Testing is a white box testing technique that exercises program loops.
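A minimal Python sketch of the idea behind loop coverage (the total() function is a hypothetical example): the three calls exercise the loop body zero times, exactly once, and more than once.

    # Loop coverage asks whether each loop body has run zero times, exactly once,
    # and more than once. The three calls below exercise those cases for one loop.

    def total(values):
        s = 0
        for v in values:   # loop under test
            s += v
        return s

    assert total([]) == 0          # body executed zero times
    assert total([7]) == 7         # body executed exactly once
    assert total([1, 2, 3]) == 6   # body executed more than once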
Low Level Test Case:
Low Level Test Case refers to the test case with concrete (implementation level) values for input data and expected results.
M
Maintenance: Maintenance is the modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
Maintenance Testing:
Maintenance Testing refers to the testing of changes to an operational system or the impact of a changed environment to an operational system.
Maintainability: Maintainability is a quality attribute pointing towards an effort required in locating and fixing an error in an operational program.
Maintainability Testing:
Maintainability Testing is the process of testing to determine the maintainability of a software product.
Management Review:
Management Review refers to a systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management. It monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
Measure of Completeness: In software testing there are two measures of completeness, code coverage and path coverage. Code coverage is a white box testing technique to determine how much of a program’s source code has been tested. There are several fronts on which code coverage is measured. Code coverage provides a final layer of testing because it searches for the errors that were missed by the other test cases. Path coverage establishes whether every potential route through a segment of code has been executed and tested.
Memory Leak:
Memory Leak refers to a defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
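Memory leaks are most often discussed for languages with manual allocation, but the same defect pattern appears in garbage-collected languages when references are retained unintentionally. A hedged Python sketch (the _cache and process() names are illustrative only):

    # Sketch of a leak pattern in a garbage-collected language: a module-level
    # list keeps a reference to every result, so the memory is never reclaimed.

    _cache = []  # grows forever; nothing ever removes entries

    def process(request_payload):
        result = request_payload.upper()   # stand-in for real work
        _cache.append(result)              # defect: result retained after it is no longer needed
        return result

    # A bounded structure (e.g. an explicit eviction policy or functools.lru_cache)
    # would let old results be reclaimed instead of accumulating indefinitely.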
Metric: Metric is a mathematical number that shows a relationship between two variables. A Software Metric is a measure used to quantify some property of a piece of software or its specifications, such as status or results. Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have brought similar approaches to software development. Common software metrics are: source lines of code, cyclomatic complexity, function point analysis, bugs per line of code, code coverage, number of lines of customer requirements, number of classes and interfaces, cohesion, coupling.
Milestone:
Milestones are the intermediate points on the timeline in a project at which defined deliverables and results must be ready.
Moderator:
Moderator refers to the leader and main person responsible for an inspection or other review process.
Monitor:
Monitor is a software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system.
Monkey Testing: Monkey Testing is a type of unit testing which runs with no specific test in mind. It involves testing an application on the fly, i.e. just a few tests here and there to ensure that the application does not crash.
Here the 'monkey' is the producer of arbitrary input data (which can be either file data or input device data). For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input, or provide garbage files to check for loading routines that have blind faith in their data. While doing a monkey test we can press keys at random and check whether the software fails or not.
Multiple Condition Coverage: Multiple Condition Coverage refers to the percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. 100% multiple condition coverage implies 100% condition determination coverage.
Multiple Condition Testing:
Multiple Condition Testing refers to a white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
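As a rough Python illustration (the discount() function is hypothetical), the four tests below exercise every combination of the two single-condition outcomes within one decision:

    # Multiple condition testing for one statement with two sub-conditions:
    # all four combinations of (is_member, total > 100) are exercised.

    def discount(is_member, total):
        if is_member and total > 100:   # decision with two conditions
            return 0.10
        return 0.0

    assert discount(True, 150) == 0.10   # True,  True
    assert discount(True, 50) == 0.0     # True,  False
    assert discount(False, 150) == 0.0   # False, True
    assert discount(False, 50) == 0.0    # False, False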
Mutation Testing: Mutation Testing is a method to find out if a set of test data or test cases is useful or not, by deliberately introducing various code changes (bugs) and re-testing with the original test data / test cases to determine if the 'bugs' get detected.
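A small Python sketch of the idea (the is_adult() function and its mutant are illustrative): a deliberate change to the boundary operator should be detected, i.e. 'killed', by a useful test set.

    # Mutation testing sketch: the original function and a deliberately injected
    # mutant (>= changed to >). A useful test set should kill the mutant.

    def is_adult(age):
        return age >= 18        # original

    def is_adult_mutant(age):
        return age > 18         # mutant: boundary operator changed

    tests = [(17, False), (18, True), (19, True)]

    for impl in (is_adult, is_adult_mutant):
        survives = all(impl(age) == expected for age, expected in tests)
        print(impl.__name__, "passes all tests" if survives else "is killed by the tests")
    # The boundary case (18, True) is what detects ('kills') the mutant.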
N
N-switch Coverage: N-switch Coverage is the percentage of sequences of N+1 transitions which have been exercised by a test suite.
N-switch Testing:
N-switch testing is a form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.
Negative Testing: Negative Testing is aimed at showing that the software does not work. It refers to testing the application for failure-like conditions and involves testing with improper inputs, for example entering special characters in place of a phone number.
N+1 Testing:
N+1 Testing is a variation of Regression Testing. It involves testing conducted with multiple cycles in which errors found in test cycle ‘N’ are resolved and the solution is re-tested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
Non-Functional Testing:
Non-Functional Testing involves testing the attributes of a component or system which do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.
Non-Functional Test Design Techniques:
Non-Functional Test Design Techniques refer to the methods used to design or select tests for nonfunctional testing.
O
Object Code Branch Coverage: Object code branch coverage is a metric, which reports whether each machine language conditional branch instruction both took the branch and fell through. It gives results which depend on the compiler rather than on the program structure since compiler code generation and optimization techniques can create object code that bears little similarity to the original source code structure.
Off-the-Shelf Software:
Off-the-Shelf Software is a software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Operability:
Operability is the capability of the software product to enable the user to operate and control it.
Operational Environment:
Operational Environment refers to the hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
Operational Profile Testing:
Operational Profile Testing is the statistical testing using a model of system operations (short duration tasks) and their probability of typical use.
Operational Testing:
Operational Testing refers to the testing conducted to evaluate a component or system in its operational environment.
Orthogonal Defect Classification or ODC:
Orthogonal defect classification is a measurement method, which uses the defect stream to provide precise measurability into the product and the process.
Output:
Output is a variable (whether stored within a component or outside) that is written by a component.
Output Domain:
Output Domain is the set from which valid output values can be selected.
Output Value:
Output Value is an instance of an output.
P
Pair Programming: Pair Programming is a software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
Pair Testing:
Pair Testing means two testers working together to find defects. Typically, they share one computer and trade control of it while testing.
Parallel Testing: Parallel Testing involves testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison.
Pass:
A test is deemed to pass if its actual result matches its expected result.
Pass / Fail Criteria:
Pass / Fail Criteria is a set of decision rules formulated to ascertain as to whether the software item or its feature passes or fails a test.
Path:
Path is a sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
Path Sensitizing:
Path Sensitizing refers to choosing a set of input values to force the execution of a given path.
Path Testing or Path Coverage:
Path Testing or Path Coverage is a white box method of testing which satisfies the coverage criteria through which the program is tested across each logical path. Usually, paths through the program are grouped into a finite set of classes and one path out of every class is tested. In Path Coverage, flow of execution takes place from the start of a method to its exit. Path Coverage ensures that we test all decision outcomes independently of one another.
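A minimal Python sketch (the classify() function is hypothetical): a function with two independent decisions has four logical paths, and each test below forces one of them.

    # Path coverage sketch: two independent decisions give four logical paths.

    def classify(x, y):
        label = ""
        if x > 0:           # decision 1
            label += "P"
        else:
            label += "N"
        if y > 0:           # decision 2
            label += "P"
        else:
            label += "N"
        return label

    assert classify(1, 1) == "PP"     # path 1
    assert classify(1, -1) == "PN"    # path 2
    assert classify(-1, 1) == "NP"    # path 3
    assert classify(-1, -1) == "NN"   # path 4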
Performance:
Performance is the degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Performance Indicator:
Performance Indicator is a high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. Defect Detection Percentage (DDP) for testing.
Performance Testing:
Performance Testing is a validation test aimed at ensuring that the online response time as well as the batch run time meet the defined performance requirements. Often this is performed using an automated test tool to simulate a large number of users. It is also known as "Load Testing".
Performance Testing Tool:
Performance Testing Tool is a tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
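A very rough Python sketch of the two facilities named above, load generation and transaction measurement, using threads to simulate users; the send_request() function is a hypothetical stand-in for a real transaction, and real tools add far more sophisticated reporting:

    # Load generation (simulated users via threads) plus transaction measurement
    # (response times logged per request and summarized at the end).

    import threading, time, random, statistics

    def send_request():
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for a real transaction

    timings = []
    lock = threading.Lock()

    def simulated_user(requests=10):
        for _ in range(requests):
            start = time.perf_counter()
            send_request()
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)          # log the response time

    users = [threading.Thread(target=simulated_user) for _ in range(5)]
    for u in users:
        u.start()
    for u in users:
        u.join()

    print(f"{len(timings)} transactions, mean {statistics.mean(timings)*1000:.1f} ms, "
          f"max {max(timings)*1000:.1f} ms")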
Phase Test Plan:
Phase Test Plan is a test plan which typically addresses one test level.
Policy:
Policy refers to managerial desires and intents related to either intended objectives of the process or desired attributes of the product.
Population Analysis:
Population Analysis is a process independent of the basic specifications and is aimed at identification of types and frequency of data expected to be processed / produced by the system. The purpose is to verify that the specifications can handle types and frequency of actual data and it can be used further to create validation tests.
Port Testing: Port Testing is a type of testing in which the test engineer checks whether the application functions properly after being deployed into the client's original environment.
Portability: Portability is a quality attribute pointing towards the ease with which a software product can be transferred from hardware of one configuration to the other.
Portability Testing:
Portability Testing is the process of testing to determine the portability of a software product.
Positive Testing:
Positive Testing is aimed at showing that the software works. It is also known as "test to pass".
Post Condition:
Post Condition refers to the environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
Post-execution Comparison:
Post-execution Comparison refers to the comparison of actual and expected results, performed after the software has finished running.
Precondition:
Precondition refers to the environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
Priority:
Priority is the level of (business) importance assigned to an item, e.g. defect.
Procedure:
Procedure is a step-by-step method followed to ensure that the desired standards are followed.
Process:
Process is a set of interrelated activities, which transform inputs into outputs. It is a work effort which produces a product. This includes efforts of the persons and equipment which are guided by the policies, standards and procedures.
Process Cycle Test: Process Cycle Test is a black box test design technique in which test cases are designed to execute business procedures and processes.
Product:
A product is something which is developed based on the manufacturing company's specifications and used by multiple customers.
Project:
A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. In software parlance, it is something developed based on a particular customer's requirement and used by that particular customer only.
Project Plan: Project Plan is a management document describing the approach taken for a project. The plan typically describes work to be done, resources required, methods to be used, the configuration management and quality assurance procedures to be followed, the schedules to be met, the project organization, etc. Project in this context is a generic term. Some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. See: documentation plan, software development plan, test plan, software engineering.
Proof of Correctness:
Proof of Correctness refers to the use of mathematical logical techniques to demonstrate that a relationship between program variables assumed true at the stage of program entry implies that another relationship between program variables holds good at the stage of program exit.
Pseudo Code:
Pseudo Code is a comprehensive set of instructions written in plain English and meant for guiding the actual code developers. It is generally written by the Technical Lead.
Pseudo-Random:
Pseudo-Random refers to a series which appears to be random but is in fact generated according to some prearranged sequence.
Q
Qualification Testing: Qualification Testing is a formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.
Quality:
Quality is the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. A product is considered a quality product if it is free from defects. From the perspective of a producer, a product is a quality product if it meets or conforms to the statement of requirements which defines the product. From the perspective of the customer, quality means "fitness for use."
Quality Assurance (QA):
Quality Assurance (QA) is a part of quality management focused on providing confidence that quality requirements will be fulfilled. It refers to all the planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. It deals with 'prevention' of defects in the product being developed. It is associated with a process. The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that the processes are established and are continuously improved to produce products, which meet specifications and are fit for use.
Quality Attribute:
Quality Attribute is a feature or characteristic that affects an item’s quality.
Quality Audit: Quality Audit is a systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: Quality Circle refers to a group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control (QC):
Quality Control (QC) refers to the operational techniques and the activities used to fulfill and verify requirements of quality. It involves activities focused on defect detection and its removal. Testing is a quality control activity.
Quality Improvement:
Quality Improvement is an activity aimed to change a production process so that the rate at which defective products (defects) are produced gets reduced. Some process changes may even require a change in the entire product itself.
Quality Management: Quality Management is that aspect of the overall management function which determines and implements the quality policy. It is a set of coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.
Quality Policy: Quality Policy refers to the overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: Quality System refers to the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
R
Race Condition / Coverage: Race Condition is a cause of concurrency problems. It refers to multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. Race coverage helps detect failures to synchronize access to resources and is useful for testing multi-threaded programs such as an operating system.
Ramp Testing: Ramp Testing involves continuously raising an input signal until the system breaks down.
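To illustrate the race condition entry above, a minimal Python sketch with two threads performing an unsynchronized read-modify-write on a shared counter; the time.sleep(0) call is only there to widen the race window so the lost updates are easy to observe:

    # Race condition sketch: two threads update a shared counter without a lock,
    # so one thread can overwrite the other's update with a stale value.

    import threading, time

    counter = 0

    def worker(iterations=1000):
        global counter
        for _ in range(iterations):
            current = counter        # read shared state
            time.sleep(0)            # yield, widening the read-modify-write window
            counter = current + 1    # write back a possibly stale value

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("expected 2000, got", counter)   # usually less: updates are lost
    # Guarding the read-modify-write with a threading.Lock() removes the race.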
Random Testing: Random Testing is a black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
Recoverability:
Recoverability refers to the capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.
Recoverability Testing:
Recoverability Testing is the process of testing to determine the recoverability of a software product.
Recovery Testing:
Recovery Testing involves testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. It evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is feasible.
Regression Testing:
Regression Testing involves repetition of testing on a previously verified program or application after the program has undergone some modifications with a view to extend the functionality or rectification of defects & to verify that no new defects have been introduced. Automated testing tools are quite useful in this type of testing.
Relational Operator Coverage:
Relational operator coverage is a metric which reports whether boundary situations occur with relational operators (<, <=, >, >=). The hypothesis is that boundary test cases find off-by-one mistakes and uses of the wrong relational operator, such as < instead of <=. Relational operator coverage reports whether the situation a == b occurs. If a == b occurs and the program behaves correctly, you can assume the relational operator is not supposed to be <=.
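A small Python sketch of the boundary hypothesis (the can_withdraw() functions are illustrative): the a == b case is what distinguishes a correct <= from an off-by-one <.

    # Boundary sketch for relational operator coverage: exercising the equality
    # case exposes an off-by-one choice of relational operator.

    def can_withdraw(amount, balance):
        return amount <= balance          # correct operator

    def can_withdraw_buggy(amount, balance):
        return amount < balance           # off-by-one: wrong operator

    assert can_withdraw(100, 100) is True         # boundary case amount == balance
    assert can_withdraw_buggy(100, 100) is False  # the boundary test exposes the defect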
Release Candidate:
Release Candidate is a pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
Release Note:
Release Note is a document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.
Reliability: Reliability is a quality attribute pointing towards an extent to which the program can be expected to perform its intended functions with desired precision within a specific period of time or for a specified number of operations.
Reliability Testing:
Reliability Testing is the process of testing to determine the reliability of a software product.
Replaceability:
Replaceability is the capability of the software product to be used in place of another specified software product for the same purpose in the same environment.
Requirement: Requirement refers to the statements given by the customer as to what needs to be achieved by the software system. Later on these requirements are converted into specifications which are nothing but feasible or implementable requirements. It is a set of conditions or capabilities needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Requirements-based Testing:
Requirements-based Testing is an approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.
Requirements Management Tool:
Requirements Management Tool is a tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
Requirements Phase:
Requirements Phase is the period of time in the software life cycle during which the requirements for a software product are defined and documented.
Resource Utilization:
Resource Utilization is the capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions.
Resource Utilization Testing:
Resource Utilization Testing is the process of testing to determine the resource-utilization of a software product.
Result:
Result is the consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.
Resumption Criteria:
Resumption Criteria refers to the testing activities that must be repeated when testing is re-started after a suspension.
Re-Testing:
Re-Testing means testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Reusability: Reusability is a quality attribute pointing towards an extent to which a program can be used in other applications.
Review:
Review is an evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Reviewer:
Reviewer is the person involved in the review who shall identify and describe anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
Risk:
Risk is a factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk Assessment:
Risk assessment when being referred to organizations involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats of the organization.
Risk Analysis:
Risk Analysis is the process of assessing identified risks to estimate their impact and probability or likelihood of occurrence. It broadly includes risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. It is also known as Security risk analysis. Risk analysis when being referred to organizations involves identifying the most probable threats to the organization and analyzing the related vulnerabilities of the organization to these threats.
Risk-Based Testing:
Risk-Based Testing is the testing oriented towards exploring and providing information about product risks.
Risk Control:
Risk Control is the process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Risk Identification:
Risk Identification is the process of identifying risks using techniques such as brainstorming, checklists and failure history.
Risk Management:
Risk Management is the systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.
Risk Matrix:
Risk Matrix refers to the representation of controls within an application system used to reduce the identified risk, and of the segment of the application in which those risks exist. One dimension of the matrix is the risk, the second dimension is the segment of the application system, and the controls are present at the intersections within the matrix. For example, if a risk is "incorrect input" and the system segment is "data entry," then the intersection within the matrix would show the controls designed to reduce the risk of incorrect input during the data entry segment of the application system.
Robustness:
Robustness is the degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Root Cause:
Root Cause refers to an underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.
S
Safety:
Safety is the capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use.
Safety Testing:
Safety Testing is the process of testing to determine the safety of a software product.
Safety Test Analysis: Safety Test Analysis is an analysis demonstrating that safety requirements have been correctly implemented and that the software functions safely within its specified environment. Tests may include: unit level tests, interface tests, software configuration item testing, system level testing, stress testing, and regression testing.
Sanity Testing: Sanity Testing is a brief test & typically involves an initial testing effort to find out if the new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing the systems every 5 minutes, bogging down the systems to a crawl, or destroying the databases, then it can be concluded that the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Scatter Plot Diagram:
Scatter Plot Diagram is a graph designed to indicate whether any relationship exists between two changing variables.
Scalability:
Scalability refers to the capability of the software product to be upgraded to accommodate increased loads.
Scalability Testing:
Scalability Testing is the testing performed to determine the scalability of the software product. It is a performance test involving tests designed to prove that both the functionality and the performance of a system are capable of scaling up to meet specified future requirements. It is part of a series of non-functional tests. It is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, or the data volume. Scalability testing can be performed as a series of load tests with different hardware or software configurations, keeping other settings of the testing environment unchanged.
Scribe:
Scribe refers to the person who has to record each defect mentioned and any suggestions for improvement during a review meeting on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Scripting Language:
Scripting Language is a programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/replay tool).
Security:
Security refers to the attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
Security Testing: Security Testing is used to determine how well the system protects against unauthorized internal or external access, willful damage, etc; & may require sophisticated testing techniques. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.
Severity:
Severity refers to the degree of impact that a defect has on the development or operation of a component or system.
Simulation:
Simulation is the representation of selected behavioral characteristics of one physical or abstract system by another system.
Simulator:
Simulator is a device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
Six Sigma: Six Sigma stands for Six Standard Deviations from the mean. Initially it had been defined as a metric for measuring defects and improving quality. It is a methodology aimed to reduce defect levels below 3.4 Defects Per one Million Opportunities. Six Sigma approach improves the process performance, decreases variation and maintains consistent quality of the process output. This leads to defect reduction and improvement in profits, product quality and customer satisfaction.
Smoke Testing: Smoke Testing is a quick-and-dirty non-exhaustive software testing, ascertaining that the most crucial functions of the program work well, without getting bothered about finer details of it. Smoke Testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. The general term of smoke testing has come from leakage testing of sewers & drain lines involving blowing smoke into various parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors.
Soak Testing:
Soak Testing involves running a system at high load for a prolonged period of time, for example running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Metric: Software Metric is a measure used to quantify some property of a piece of software or its specifications, such as status or results. Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have brought similar approaches to software development. Common software metrics are: source lines of code, cyclomatic complexity, function point analysis, bugs per line of code, code coverage, number of lines of customer requirements, number of classes and interfaces, cohesion, coupling.
Software Development Life Cycle or SDLC: "Software Development Life Cycle" or "System Development Life Cycle" or SDLC is a software development process, used by a systems analyst to develop an information system. It comprises activities such as: 1) Project Initiation 2) Requirement Gathering and Documenting 3) Designing 4) Coding and Unit Testing 5) Integration Testing 6) System Testing 7) Installation and Acceptance Testing 8) Support or Maintenance
Software Quality:
Software Quality refers to the totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.
Software Requirements Specification: Software Requirements Specification is a deliverable which describes all data, functional and behavioral requirements, all constraints, and all validation requirements for the software.
Software Testing: Software Testing is a set of activities conducted with the intent of finding errors in software.
Software Testing Life Cycle or STLC: "Software Testing Life Cycle" or STLC identifies what test activities to carry out and when (what is the best time) to accomplish those test activities. The main components of STLC are: 1) Preparation of Requirements Document 2) Preparation of Test Plan 3) Preparation of Test Cases 4) Execution of Test Cases 5) Analysis of Bugs 6) Reporting of Bugs 7) Tracking of Bugs till closure
Software Usability Measurement Inventory or SUMI:
Software Usability Measurement Inventory or SUMI is a questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system.
Special Case Testing:
Special Case Testing is a testing technique using input values that seem likely to cause program errors; e.g., "0", "1", NULL, empty string.
Specification:
Specifications are feasible or implementable requirements derived from various statements given by the customer. The customer describes requirements stating what needs to be achieved by the software system; these requirements are then converted into specifications, which become the starting point for the product development team. A specification refers to a document which specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied.
Spiral Model: Spiral Model is a model of the software development process in which the constituent activities, typically requirements analysis, preliminary and detailed design, coding, integration, and testing, are performed iteratively until the software is complete.
Standard:
Standard refers to the measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.
Stability:
Stability is the capability of the software product to avoid unexpected effects from modifications in the software.
State Diagram:
State Diagram is a diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another.
State Table:
State Table refers to a grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
State Transition:
State Transition refers to a transition between two states of a component or system.
State Transition Testing:
State Transition Testing is a black box test design technique in which test cases are designed to execute valid and invalid state transitions.
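A minimal Python sketch (the door states and events are a hypothetical model): test cases exercise valid transitions and confirm that an invalid transition is rejected.

    # State transition testing sketch for a tiny door model: valid transitions
    # are exercised, and one invalid transition is confirmed to be rejected.

    TRANSITIONS = {
        ("closed", "open_cmd"): "open",
        ("open", "close_cmd"): "closed",
        ("closed", "lock_cmd"): "locked",
        ("locked", "unlock_cmd"): "closed",
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {event} in state {state}")

    # valid transitions
    assert next_state("closed", "open_cmd") == "open"
    assert next_state("closed", "lock_cmd") == "locked"

    # invalid transition
    try:
        next_state("locked", "open_cmd")
    except ValueError:
        pass
    else:
        raise AssertionError("invalid transition was not rejected")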
Statement:
Statement refers to an entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement of Requirements:
Statement of Requirements is the exhaustive list of requirements that define a product.
Statement Coverage:
Statement Coverage is a type of "White-Box Testing" technique, involving execution of all statements at least once. Statement coverage is a simple metric that measures the number of statements in a method or class which have been executed. Its key benefit is its ability to identify which blocks of code have not been executed. The chief advantage of this metric is that it can be applied directly to object code and does not require processing source code. Performance profilers generally use this metric. The main disadvantage of statement coverage is that it is not sensitive to some of the control structures.
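As a rough Python illustration of the metric (the grade() function is hypothetical, and real coverage tools are far more complete), sys.settrace can record which lines of a function actually execute for a given test input:

    # Rough statement-coverage illustration: trace which source lines of grade()
    # run for one test input. Only the lines on the executed branch are recorded.

    import sys

    def grade(score):
        if score >= 90:
            return "A"
        if score >= 75:
            return "B"
        return "C"

    executed = set()

    def tracer(frame, event, arg):
        if frame.f_code is grade.__code__ and event == "line":
            executed.add(frame.f_lineno)   # record the executed line number
        return tracer

    sys.settrace(tracer)
    grade(95)                 # only the 'A' branch runs
    sys.settrace(None)

    print("lines executed for grade(95):", sorted(executed))
    # Additional calls such as grade(80) and grade(50) would be needed
    # before every statement of grade() has been executed at least once.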
Statement Testing:
Statement Testing is a white box test method which executes each statement in a program at least once during the process of program testing.
Static Analysis or Static Testing:
Static Analysis or Static Testing refers to the analysis / testing of a program performed without actually running the software. It includes Document review, code inspections, walkthroughs and desk checks etc.
Static Analyzer: Static Analyzer is a tool which carries out the static analysis.
Static Code Analysis:
Static Code Analysis refers to an analysis of program source code carried out without execution of that software.
Static Code Analyzer:
Static Code Analyzer is a tool that carries out static code analysis. The tool checks source code, for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.
Statistical Testing:
Statistical Testing is a test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing.
Status Accounting:
Status Accounting is an element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes.
Storage Testing: Storage Testing verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing:
Stress Testing is a test, which subjects a system, or components of a system, to varying environmental conditions that defy normal expectations. It involves subjecting the system to an unreasonable load while denying it the adequate resources needed to process that load. The load can be high transaction volume, large database size or restart / recovery circumstances etc. etc.. The resources can be RAM, disc space, mips & interrupts etc. etc. The idea is to stress a system to the breaking point in order to find bugs, which will make the break potentially harmful. The system is not expected to process the overload without adequate resources, but to fail in a decent manner (e.g., failure without corrupting or losing data). In stress testing the load (incoming transaction stream) is often deliberately distorted so as to force the system into resource depletion.
Structural Testing:
Structural Testing is a method of testing in which the test data is derived solely from the program structure. Structural testing compares test program behavior against the apparent intention of the source code. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic. Structural testing is also called path testing since you choose test cases that cause paths to be taken through the structure of the program. Structural testing cannot find errors of omission.
Stub:
Stub is a special code segment which, when invoked by a code segment under test, simulates the behavior of designed and specified modules which have not yet been constructed. A stub is a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
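A minimal Python sketch (the payment gateway interface is purely hypothetical): a stub with canned responses stands in for a component that has not been built yet, so the calling checkout logic can still be tested.

    # Stub sketch: the real payment gateway is not built yet, so a stub with
    # fixed, predictable behavior stands in for it while checkout() is tested.

    class PaymentGatewayStub:
        def charge(self, card_number, amount):
            return {"status": "approved", "amount": amount}   # canned response

    def checkout(cart_total, gateway):
        response = gateway.charge("4111111111111111", cart_total)
        return response["status"] == "approved"

    # driver code invoking the component under test with the stub in place
    assert checkout(49.99, PaymentGatewayStub()) is True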
Subpath:
Subpath is a sequence of executable statements within a component.
Suspension Criteria:
Suspension Criteria refers to the criteria used to (temporarily) stop all or a portion of the testing activities on the test items.
Suitability:
Suitability is the capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.
Syntax Testing:
Syntax Testing is a black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.
System:
System is a collection of components organized to accomplish a specific function or set of functions.
System Integration Testing:
System Integration Testing means testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
System Testing:
System Testing is a type of testing which attempts to discover defects which are the properties of the entire system rather than of its individual components. It is conducted on a complete, integrated system to verify that all-functional, information, structural and quality requirements have been met. It falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. System testing is a more limiting type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.
T
Table Coverage: Table coverage is a metric which indicates whether each entry in a particular array has been referenced. This is useful for programs which are controlled by a finite state machine.
Technical Review:
Technical Review refers to a peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.
Test:
Test is an activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.
Testability: Testability is the capability of the software product to enable modified software to be tested. It is a quality attribute pointing towards an effort required in testing a program to ensure that it performs in accordance with its intended function. It is the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testability Review:
Testability Review is a detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process.
Testable Requirements:
Testable Requirements is the degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met.
Tester:
Tester is a technically skilled professional who is involved in the testing of a component or system.
Testing:
Testing is the process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Testware:
Testware refers to the artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Test Approach:
Test Approach is the implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
Test Automation:
Test Automation refers to the use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.
Test Basis:
Test Basis refers to all documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Test Bed:
Test Bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project enumerates the test bed(s) to be used.
Test Case:
Test Case is a commonly used term for a specific test. Test Case is a document which carries the detailed information pertaining to the inputs, expected results, and execution conditions of a defined test item to verify compliance with a specific requirement.
Test Case Generator:
Test Case Generator is a software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines expected results.
Test Case Specification:
Test Case Specification is a document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
Test Charter:
Test Charter is a statement of test objectives, and possibly test ideas. Test charters are used, amongst others, in exploratory testing. See also exploratory testing.
Test Comparator:
Test Comparator is a test tool to perform automated test comparison.
Test Comparison:
Test Comparison is the process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
Test Condition:
Test Condition is an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.
Test Data:
Test Data is the data, which exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
Test Data Preparation Tool:
Test Data Preparation Tool is a type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
Test Design:
Test Design is a set of documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests. See: testing functional; cause effect graphing; boundary value analysis; equivalence class partitioning; error guessing; testing, structural; branch analysis; path analysis; statement coverage; condition coverage; decision coverage; multiple-condition coverage.
Test Design Specification:
Test Design Specification is a document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.
Test Design Tool:
Test Design Tool is a tool which supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. a requirements management tool, or from specified test conditions held in the tool itself.
Test Design Technique:
Test Design Technique is a method used to derive or select test cases.
Test Documentation:
Test Documentation is a set of documents describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.
Test Driven Development: Test Driven Development refers to testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an equal number of lines of test code to the size of the production code.
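A minimal Python sketch of the cycle (the slugify() function is hypothetical): the unittest test case is written first and fails ('red') until slugify() is implemented to satisfy it ('green'), after which the code can be refactored with the tests kept passing.

    # Test-driven sketch: the unit tests below are written first and fail until
    # slugify() is implemented to satisfy them.

    import unittest

    def slugify(title):
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_already_lowercase(self):
            self.assertEqual(slugify("testing"), "testing")

    if __name__ == "__main__":
        unittest.main()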
Test Driver: Test Driver is a software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results. It is also known as a Test Harness.
Test Environment: Test Environment refers to the hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test Evaluation Report:
Test Evaluation Report is a document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.
Test Execution:
Test Execution is the process of running a test on the component or system under test, producing actual result(s).
Test Execution Automation:
Test Execution Automation is the use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
Test Execution Phase:
Test Execution Phase is the period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.
Test Execution Schedule:
Test Execution Schedule is a scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
Test Execution Technique:
Test Execution Technique is the method used to perform the actual test execution, either manually or automated.
Test Execution Tool:
Test Execution Tool is a type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers write no production code until they have first written a unit test.
Test Harness: Test Harness is a program or test tool used to execute tests. It is also known as a Test Driver.
Test Incident Report:
Test Incident Report is a document reporting on any event that occurs during testing that requires further investigation.
Test Infrastructure:
Test Infrastructure is the organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
Test Item:
Test Item is the individual software item which is the object of testing. There is usually one test object and many test items.
Testing:
Testing is the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. In software engineering parlance it is the process of locating or identifying the errors or bugs in a software system. It must be clearly differentiated from "Debugging", which refers to the process of rectifying the syntax errors, logic errors and all other types of errors identified during testing.
Test Level:
Test Level is a group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
Test Log:
Test Log is a chronological record of all relevant details about the execution of a test.
Test Logging:
Test Logging is the process of recording information about tests executed into a test log.
Test Manager:
Test Manager is the person responsible for testing and evaluating a test object: the individual who directs, controls, administers, plans and regulates the evaluation of a test object.
Test Management:
Test Management is the process of planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
Test Maturity Model or TMM:
Test Maturity Model or TMM is a five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.
Test Object:
Test Object is the component or system to be tested. See also test item.
Test Objective:
Test Objective refers to the reason or purpose for designing and executing a test.
Test Oracle:
Test Oracle is a source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.
Test Performance Indicator:
Test Performance Indicator is a metric, in general high level, indicating to what extent a certain target value or criterion is met. It is often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP).
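For example, one commonly used form of DDP divides the defects found by testing by the total of those plus the defects found afterwards; the figures below are illustrative only:

    # Illustrative Defect Detection Percentage (DDP) calculation.
    # DDP = defects found by testing / (defects found by testing + defects found later) * 100
    found_in_testing = 90
    found_after_release = 10

    ddp = found_in_testing / (found_in_testing + found_after_release) * 100
    print(f"DDP = {ddp:.1f}%")  # prints: DDP = 90.0%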
Test Phase:
Test Phase is the period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software product is evaluated to determine whether or not requirements have been satisfied. It is a distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level.
Test Plan:
Test Plan is a document describing an introduction to the client and company, the intended scope, an overview of the application, the test strategy, the schedule of testing activities, roles and responsibilities, and deliverables and milestones. It describes the test items, features to be tested, testing tasks, details of the personnel performing each task and any risks requiring contingency planning. In summary, a test plan is a document, usually developed by the Test Lead, that captures "what to test", "how to test", "when to test" and "who is going to test".
Test Planning:
Test Planning refers to the activity of establishing or updating a test plan.
Test Policy:
Test Policy is a high level document describing the principles, approach and major objectives of the organization regarding testing.
Test Point Analysis or TPA:
Test Point Analysis or TPA is a formula-based test estimation method based on function point analysis.
Test Procedure:
Test Procedure is a formal document developed from a test plan which provides detailed instructions for the setup, operation and evaluation of the results of one or more Test Cases.
Test Procedure Specification:
Test Procedure Specification is a document specifying a sequence of actions for the execution of a test. It is also known as a test script or manual test script.
Test Process:
Test Process is the fundamental set of testing activities, comprising planning, specification, execution, recording and checking for completion.
Test Process Improvement or TPI:
Test Process Improvement or TPI is a continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.
Test Readiness Review: Test Readiness Review is a review conducted to evaluate preliminary test results for one or more configuration items and to verify that the test procedures for each configuration item are complete, comply with test plans and descriptions, and satisfy test requirements. The aim is to verify that a project is prepared to proceed to formal testing of the configuration items. Contrast with code review, design review, formal qualification review and requirements review.
Test Report:
Test Report is a document describing the conduct and results of the testing carried out for a system or system component.
Test Result Analyzer:
Test Result Analyzer is a software tool used for test output data reduction, formatting, and printing.
Test Repeatability:
Test Repeatability is an attribute of a test indicating whether the same results are produced each time the test is executed.
Test Run:
Test Run refers to the process of execution of a test on a specific version of the test object.
Test Script:
Test Script is commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. It specifies an order of sequential actions that should be performed during a test session. It contains expected results as well. Test scripts may be prepared manually using paper forms, or may be automated using capture / playback tools or other kinds of automated scripting tools.
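A minimal sketch of such a script, expressed in Python, is shown below; the step functions and expected results are hypothetical placeholders for whatever the test tool would drive:

    # An automated test script as an ordered sequence of actions, each paired
    # with an expected result. The step functions are hypothetical.
    def open_login_page():
        return "login page shown"

    def submit_credentials(user, password):
        return "dashboard shown" if password == "secret" else "error shown"

    steps = [
        ("Open the login page", open_login_page, (), "login page shown"),
        ("Log in with valid credentials", submit_credentials, ("alice", "secret"), "dashboard shown"),
    ]

    for description, action, args, expected in steps:
        actual = action(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {description} -> {actual}")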
Test Scenario:
The term "Test Scenario" & "Test Cases" are often used synonymously. Test Scenario are nothing but Test Cases or Test Scripts having the sequence in which they are to be executed. Test Scenario are Test Cases which ensures that all business process flows are tested from end to end. Test Scenario are independent tests, or a series of tests that follow each other , where each of them is dependent upon the output of the previous one. Test Scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test Scenarios are designed to represent both typical & unusual situations that may occur in an application.
The term "Test Scenario" & "Test Cases" are often used synonymously. Test Scenario are nothing but Test Cases or Test Scripts having the sequence in which they are to be executed. Test Scenario are Test Cases which ensures that all business process flows are tested from end to end. Test Scenario are independent tests, or a series of tests that follow each other , where each of them is dependent upon the output of the previous one. Test Scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test Scenarios are designed to represent both typical & unusual situations that may occur in an application.
Test Specification: Test Specification is a document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Strategy:
Test Strategy is a high-level document, usually developed by the Project Manager, containing information such as which testing techniques are to be followed and which modules are to be tested.
Test Summary Report:
Test Summary Report is a document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
Test Suite: Test Suite is a collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may, for example, be several Test Suites for a particular product. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
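For illustration, a small sketch using Python's unittest module to group hypothetical tests into a single suite:

    import unittest

    # Hypothetical tests grouped into one suite because they exercise the
    # same area of the product.
    class TestLogin(unittest.TestCase):
        def test_valid_password_accepted(self):
            self.assertTrue(len("secret") >= 6)

    class TestLogout(unittest.TestCase):
        def test_session_flag_cleared(self):
            self.assertFalse(False)

    def build_suite():
        suite = unittest.TestSuite()
        suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin))
        suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(TestLogout))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(build_suite())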
Test Suite Manager:
Test Suite Manager is a tool which allows testers to organize test scripts either by function or by any other way of grouping.
Test Target:
Test Target is a set of exit criteria.
Test Tool:
Test Tool is a software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis.
Test Type:
Test Type is a group of test activities aimed at testing a component or system with regard to one or more interrelated quality attributes. A test type is focused on a specific test objective, e.g. a reliability test, usability test or regression test, and may take place on one or more test levels or test phases.
Thread Testing: Thread Testing is a variation of Top-Down Testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: Top Down Testing is an incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management: Total Quality Management is a company commitment to develop a process that achieves high quality products and customer satisfaction.
Traceability:
Traceability is the ability to identify related items in documentation and software, such as requirements with associated tests.
Traceability Matrix:
Traceability Matrix is a document showing the relationship between Test Requirements and Test Cases.
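For illustration, a tiny traceability matrix represented as data in Python; the requirement and test case identifiers are made up:

    # An illustrative requirements-to-test-cases traceability matrix.
    traceability = {
        "REQ-001 User can log in":         ["TC-001", "TC-002"],
        "REQ-002 User can reset password": ["TC-003"],
        "REQ-003 User can log out":        [],  # gap: no test coverage yet
    }

    for requirement, test_cases in traceability.items():
        coverage = ", ".join(test_cases) if test_cases else "NOT COVERED"
        print(f"{requirement}: {coverage}")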
U
Understandability: Understandability is the capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.
Unit Testing:
Unit Testing involves testing individual programs, modules, or components to demonstrate that the program executes as per the specifications and to validate the design and technical quality of the application. The focus is on ensuring that the detailed logic within the component is accurate and reliable according to the pre-defined specifications. In Unit Testing, the called components (or communicating components) are replaced with stubs, simulators, or trusted components; testing stubs or drivers are used to simulate the behavior of interfacing modules. Unit testing is typically done by the programmers and not by the testers.
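A minimal sketch of a stubbed unit test in Python follows; the checkout function and the PaymentGatewayStub are hypothetical:

    import unittest

    # The real payment gateway (a communicating component) is replaced by a
    # stub so the unit under test, `checkout`, is exercised in isolation.
    class PaymentGatewayStub:
        def charge(self, amount):
            return True  # always succeeds, simulating the real component

    def checkout(cart_total, gateway):
        if cart_total <= 0:
            return "nothing to pay"
        return "paid" if gateway.charge(cart_total) else "payment failed"

    class TestCheckout(unittest.TestCase):
        def test_positive_total_is_charged(self):
            self.assertEqual(checkout(25.0, PaymentGatewayStub()), "paid")

        def test_empty_cart_is_not_charged(self):
            self.assertEqual(checkout(0, PaymentGatewayStub()), "nothing to pay")

    if __name__ == "__main__":
        unittest.main()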
Unreachable Code:
Unreachable Code is the code that cannot be reached and therefore is impossible to execute.
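A small hypothetical Python example:

    # The final print statement can never execute because every path
    # through the function returns before reaching it.
    def classify(n):
        if n < 0:
            return "negative"
        else:
            return "non-negative"
        print("this line is unreachable")  # dead code: no path reaches it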
Usability: Usability is a quality attribute pointing towards the effort required in learning, operating, preparing inputs for, and interpreting the outputs of a software program. It is the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Usability Testing:
Usability Testing involves testing the software for its 'user-friendliness' or testing the ease with which users can learn and use a product. This is highly subjective, and will depend on the targeted end-user or the customer. The purpose is to review the application user interface and other human factors of the application with the people involved in the use of the application. This is to ensure that the design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used. Programmers and testers are usually not appropriate as usability testers.
User Acceptance Testing:
User Acceptance Testing (UAT) is a formal product evaluation performed by a customer as a condition of purchase. It is conducted to ensure that the system meets the needs of the organization and the end user / customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the right system has been built.
Use Case:
A use case describes how an end user uses a specific functionality in the application. It is a summary of user actions and the system's responses to those actions. Use cases tend to focus on operating the software as an end user would conduct their day-to-day activities. A use case covers flows such as the typical flow, alternate flows and exceptional flows, along with a precondition and a postcondition.
Use Case Testing:
Use Case Testing is a black box test design technique in which test cases are designed to execute user scenarios.
User Test:
User Test is a test whereby real-life users are involved to evaluate the usability of a component or system.
V
Validation:
Validation refers to the determination of the correctness of the final program or software product produced from a development project with respect to the user needs and requirements. The techniques for validation are testing, inspection and reviewing. Validation typically involves actual testing and takes place after verifications are completed. As per the definition of ISO-9000, Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Validation Protocol:
Validation Protocol is a written plan stating how validation will be conducted, including test parameters, product characteristics, production equipment, and decision points on what constitutes acceptable test results.
Variable:
Variable is an element of storage in a computer that is accessible by a software program by referring to it by a name.
Verification:
Verification is the process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. Verification involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications to confirm whether or not items, processes, services, or documents conform to specified requirements. The techniques for verification are testing, inspection and reviewing. This can be done with the help of checklists, issues lists, walkthroughs, and inspection meetings. The purpose of verification is to determine whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. As per the definition of ISO-9000, verification means confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Vertical Traceability:
Vertical Traceability is the tracing of requirements through the layers of development documentation to components.
V Model or Life Cycle Testing:
V Model or Life Cycle Testing is the process of carrying out verification of consistency, completeness and correctness of software at every stage of the development life cycle. It aims at catching defects as early as possible and thus reduces the cost of fixing them. It achieves this by continuously testing the system during all stages of the development process rather than limiting testing to the last stage. A separate test team is formed at the beginning of the project. When the project starts, both the system development process and the system test process begin: the development team and the test team start at the same point using the same information.
Volume Testing: Volume Testing is testing where the system is subjected to large volumes of data. It confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
W
Walkthrough:
Walkthrough is a step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. It is an informal meeting for evaluation or informational purposes; little or no preparation is usually required. During a walkthrough, the producer of a product "walks through" or paraphrases the product's content, while a team of other individuals follows along. The objective of the team is to raise questions and issues about the product, which may lead to the identification of defects.
Weak Mutation Coverage:
Weak Mutation Coverage is a metric quite similar to Relational Operator Coverage but much more general. It reports whether the test cases would expose the use of wrong operators and wrong operands. It works by reporting coverage of conditions derived by substituting (mutating) the program's expressions with alternate operators, such as "-" substituted for "+", and with alternate variables substituted.
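An illustrative sketch in Python, mutating "+" into "-"; the function and the test data are made up:

    # The original expression uses "+", the mutant substitutes "-". Test data
    # covers this mutation only if it produces different results for the two
    # versions (e.g. tax = 0 would fail to expose the mutant).
    def total_price(price, tax):
        return price + tax          # original

    def total_price_mutant(price, tax):
        return price - tax          # mutated operator

    price, tax = 100, 20
    if total_price(price, tax) != total_price_mutant(price, tax):
        print("mutant exposed: the test data distinguishes '+' from '-'")
    else:
        print("mutant not exposed: choose different test data")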
White Box Test Design Technique:
White Box Test Design Technique is a documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system.
White-Box Testing:
White-Box Testing is based on an analysis of internal workings and structure of a piece of software. It is a testing technique which assumes that the path of the logic in a program unit or component is known. It usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.
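For illustration, a minimal white-box sketch in Python that chooses one test input per branch of a hypothetical function:

    # Test cases are derived from the internal structure of `grade` so that
    # every branch is executed at least once. The thresholds are made up.
    def grade(score):
        if score >= 90:
            return "A"
        elif score >= 60:
            return "pass"
        else:
            return "fail"

    assert grade(95) == "A"      # first branch
    assert grade(70) == "pass"   # second branch
    assert grade(40) == "fail"   # third branch
    print("all branches exercised")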
Waterfall Model:
Waterfall Model is a sequential software development model (a process for the creation of software) in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. In the traditional waterfall model, testing comes at the very end of the development process; no testing is done during the requirements gathering, design and development phases.
Workflow Testing:
Workflow Testing is scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
Wide Band Delphi:
Wide Band Delphi is an expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.