Software Testing Terms Guide

We firmly believe in the importance of software testing in building and developing systems and software. To reinforce knowledge sharing and raise awareness, TestCrew provides this short guide to the most important QA and software testing terms. It covers the terms most commonly used during the software testing process, and through it we aim to:

  • Raise awareness of software testing and QA concepts
  • Establish a reference guide to software quality terms
  • Facilitate quick access to software testing terms

Who is this testing terms guide for?

  • Managers in companies and business sectors
  • Newly-formed software testing teams
  • Systems and applications programmers and developers
  • Those wishing to acquire ISTQB certification
  • Interested individuals and scholars

Testing Terms

Acceptance Testing
Testing conducted to determine whether the system is acceptable to the client or the user. Acceptance testing examines the degree to which the system satisfies the customer's conditions, requirements, and needs.

Accessibility Testing
Testing to determine the ease with which users with disabilities can use the system or application.

Alpha Testing
Actual or simulated testing of a system or software by potential users/customers, or by a software testing team that is independent of the development team. It is usually conducted at the developers' site and is primarily aimed at new software. Alpha testing is a form of internal acceptance testing.

Analyzability
The ease with which the software or system can be evaluated to identify deficiencies, defects, and causes of failures, or to identify the parts that are to be modified.

Audit
Independent evaluation of software products or processes, conducted by a third party, to ensure compliance with required standards, specifications, and characteristics.

Availability
The degree to which a system is available, accessible, and usable when required for use. Often expressed as a percentage.
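As a rough illustration (the figures below are assumed example values, not taken from this guide), availability is commonly computed as uptime divided by total time:

```python
# Illustrative sketch: availability as uptime / (uptime + downtime), expressed
# as a percentage. The figures are assumed example values, not measurements.
uptime_hours = 8751.24    # hours the system was usable during the year
downtime_hours = 8.76     # hours of outage during the same year

availability = uptime_hours / (uptime_hours + downtime_hours) * 100
print(f"Availability: {availability:.1f}%")   # Availability: 99.9%
```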

Beta Testing
Actual testing or simulation of the operation of a system or software. It is conducted by potential users/customers at an external site, without the presence of the main development team.

Black Box Testing
Software testing, either functional or non-functional, without reference to the internal structure of the system or software.
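As a minimal sketch (the function and test names are invented for illustration, assuming a pytest-style runner), a black box test calls only the public interface and checks observable behavior, with no knowledge of how the code works internally:

```python
# Black-box style test sketch: the test exercises the public interface and
# checks the observable result; it makes no assumptions about the internals.

def is_even(n: int) -> bool:
    return n % 2 == 0          # implementation details are irrelevant to the test

def test_is_even_observable_behaviour():
    assert is_even(4) is True
    assert is_even(7) is False
```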

Branch Testing
A white-box test technique in which test cases are designed so that every branch in the software code is executed.
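As an illustrative sketch (assuming pytest as the test runner; the function is invented for this example), the two test cases below are chosen so that each branch of the decision is executed at least once:

```python
# Branch testing sketch: one decision, two branches, one test case per branch.

def classify_amount(amount: float) -> str:
    if amount < 0:             # branch taken for negative amounts
        return "invalid"
    return "ok"                # branch taken for non-negative amounts

def test_negative_amount_executes_invalid_branch():
    assert classify_amount(-1) == "invalid"

def test_non_negative_amount_executes_ok_branch():
    assert classify_amount(10) == "ok"
```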

Changeability
The capability of the software product to enable specific modifications to be implemented with ease.

Code Coverage
An analysis method that determines which parts of the software have been exercised by a set of tests and which parts have not.
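As a hedged illustration (the module and function names are invented; pytest and coverage.py are one possible tool choice), the test below exercises only one of two functions, so a coverage report would flag the other as uncovered:

```python
# Coverage sketch: only add() is exercised by the test, so a coverage tool
# would report the body of divide() as not covered.

def add(a, b):
    return a + b

def divide(a, b):
    return a / b               # never executed by the test below

def test_add():
    assert add(2, 3) == 5

# Running the suite under a coverage tool (for example "coverage run -m pytest"
# followed by "coverage report") would list the lines of divide() as missed.
```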

Compatibility
The degree to which a component or system is able to exchange information with other components or systems.

Component Testing
The testing of individual software components independently.

Data Driven Testing
A scripting technique in which test inputs and expected outputs are stored in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support test execution tools such as capture/playback tools.
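As a minimal sketch (assuming pytest; the data and names are purely illustrative), the inputs and expected outputs live in a table, and the same test logic runs once per row:

```python
# Data-driven testing sketch: the table holds inputs and expected outputs,
# and pytest executes the same test once for each row.
import pytest

ADDITION_TABLE = [
    # (a, b, expected_sum)
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

@pytest.mark.parametrize("a, b, expected", ADDITION_TABLE)
def test_addition(a, b, expected):
    assert a + b == expected
```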

Data Flow
An abstract representation of the sequence of possible changes to the state of data objects. The state of an object is one of the following: creation, usage, or destruction.

Debugging
The process of finding, analyzing, and removing the causes of failures in software.

Defect
A flaw or deficiency in the software product that causes it to fail to meet its requirements or required characteristics.

Efficiency
The degree to which the software uses resources appropriately to achieve the required performance and enable users to achieve specified goals.

Error
A human action that produces an incorrect result.

Fail
A test is deemed to fail if its actual results do not match its expected results.

Failure Rate
The ratio of the number of failures of a given category to a given unit of measure, for example failures per hour of operation or per number of transactions.

Feature
An attribute of a component or a system specified by the requirements documentation such as reliability, usability, or design constraints.

Functional Integration
An integration approach that combines a set of components or systems for the purpose of getting basic functionality working.

Functional Testing
A test conducted to evaluate the compliance of components or systems with the functional requirements.

Impact Analysis
Determining all the work products affected by a change, including an estimate of the resources needed to implement the change.

Incident
Any event occurring that requires subsequent investigation and follow-up.

Integration
The process of combining components or systems into larger assemblies.

Integration Testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Learnability
The capability of a software product to enable the user to learn how to use it correctly to meet the user’s needs and expectations.

Load Testing
A type of performance testing conducted to evaluate the behavior of a component or system under varying load, for example different numbers of concurrent users or transactions. It is often performed at low, normal, and peak load levels.

Maintenance
The process of modifying a software component or system after delivery to users to correct defects, improve quality attributes, or adapt the software product to the modified environment.

Maintainability
The degree to which a software component or system can be modified to correct defects or adapted to the changed environment.

Milestone
A point in time in the project’s life where final or intermediate versions of the product are ready to be delivered.

Non-functional Requirement
A requirement that describes how well, rather than what, a software product performs a process or function. A non-functional requirement does not relate to a specific functionality but to quality attributes such as efficiency, usability, maintainability, and portability.

Non-functional Testing
A test conducted to evaluate the compliance of a software component or system with the non-functional requirements.

Operability
The capability of a system to enable users to operate it, work on it, or control it.

Operational Testing
Testing conducted to evaluate the performance of a software component or system in the operational environment.

Performance Testing
Testing conducted to evaluate the performance of a software product, for example its response time or throughput under a given workload.

Portability
The ease with which the software product can be transferred from one environment to another environment that may have different hardware and/or software.

Quality
The degree to which a software component or system meets user needs, requirements, and expectations.

Quality Assurance
Part of quality management focused on providing confidence and assurance that quality requirements will be fulfilled.

Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly affected in the event of a failure during operation.

Regression Testing
Testing of a previously tested software component or system following modification, to ensure that the changes have not introduced or uncovered defects in unchanged areas of the software.

Reliability
The degree to which the software product can perform its specified function under stated conditions for a specified period of time.

Requirement
A condition or capability needed by a user to solve a problem or achieve an objective, and that the system must satisfy.

Scalability
The capability of the software product to be upgraded to accommodate increased loads.

Specification
A document that specifies, ideally in a complete, precise, and verifiable manner, the characteristics and requirements of a software product, often together with the procedures for determining whether these have been satisfied.

Stability
The capability of the software product to avoid unexpected effects and to keep working correctly after modification.

Stress Testing
A type of performance testing conducted to evaluate the behavior of the software product at or beyond the limits of its anticipated workload, or with reduced availability of resources.

Test Control
A test management task concerned with developing and applying corrective actions to bring a test project back on track when monitoring shows a deviation from the plan.

Testability
The degree to which the software product can be tested efficiently and effectively, including after it has been modified.

Usability Testing
Testing conducted to evaluate how well users can understand, learn, and operate the software product efficiently and effectively, and how attractive it is to them.

Use Case
A sequence of interactions between an actor and the software system that produces a tangible result. The actor is either a human user or another system that can exchange information with the software system.

Validation
The process of confirming, through testing and objective evidence, that the requirements and characteristics described for a software product have been fulfilled.

About us

TestCrew is a Saudi company that specializes in providing software testing services to organizations worldwide.

We have a team of experts specialized in providing high-quality software testing services across several industries and for both private and governmental institutions. We deliver our services with quality and distinction so that you can earn the trust and satisfaction of your customers for the products you provide. Our professional, ISTQB-certified teams offer a range of services, including advisory services, functional testing, non-functional testing, User Acceptance Testing (UAT), integration testing, performance testing, security testing, and the automation of web services and software interfaces.
