OVERVIEW
The analytical test strategy identifies the conditions to be tested after analyzing the test basis, whether that basis is risks or requirements. In risk-based testing, the team writes and prioritizes test cases based on the level of risk, whereas in requirements-based testing, the team writes and prioritizes test cases based on the priority of the requirements.
A test strategy answers how the software testing team will test the software application. This implies describing the process the team will implement when the development team provides the software application for testing.
There are seven types of test strategies: Methodical, Reactive, Analytical, Standards-compliant or Process-compliant, Model-based, Regression-averse, and Consultative. Out of these, the analytical test strategy analyzes a specific factor, which can be a risk or a requirement.
Before exploring the analytical test strategy, let’s begin with a brief overview of the test strategy.
A test strategy is an organizational document that outlines the general test approach, that is, what needs to be achieved and how to achieve it. Project-specific testing requirements are not outlined in this document, as it sits outside the scope of any single Software Testing Life Cycle (STLC). Instead, it establishes the testing principles for all projects within the organization.
The testing strategy document includes the roles and responsibilities of the testing team resources. Therefore, it needs to be aligned with the test policy of the organization.
The testing strategy document includes the following information:
In analytical test strategy, the QA team analyzes the test basis, which can be risks or requirements, and then defines the testing conditions. Further, the team designs and executes the tests in a manner that fulfills the risks or requirements.
After execution of the tests, the team creates a record of the tests pertaining to the risks or requirements using the following classification:
In the next section of this analytical test strategy tutorial, we will look at the two test strategy methods: requirements-based analytical test strategy and risk-based analytical test strategy.
The testing team uses several methods, such as ambiguity analysis, identification of test conditions, and cause-effect graphing. An ambiguity review of the requirements document produces a list of defects; the team uses this list to pinpoint ambiguous requirements and then removes the ambiguities. A comprehensive study of the requirements document determines the test conditions one can consider.
In some projects, the requirements have a pre-defined order of importance. The team leverages this predefined order to distribute the effort and order test cases. When the requirements do not have a pre-defined order, the team combines the requirements-based analytical test strategy with the risk-based analytical test strategy to allocate the correct effort.
The testing team creates a cause-effect graph to cover the test conditions. This graph has multiple uses. First, it reduces a large testing problem into a set of test cases whose count is easily manageable.
Second, the graph provides complete coverage of the basic functionality under test. Third, the team can detect gaps while designing the test cases, which leads to the detection of defects in the initial phases of the Software Development Life Cycle. Note, however, that generating these graphs manually, without tooling, can become very complex.
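To make the idea concrete, here is a minimal Python sketch, assuming a hypothetical login feature with three causes and a single AND-style effect; it enumerates the cause combinations into a small, reviewable set of expected outcomes, which is the kind of reduction a cause-effect graph (or the decision table derived from it) gives you. The feature, cause names, and rule are illustrative assumptions, not output from any specific tool.

```python
# A minimal sketch (not a full cause-effect graphing tool) showing how causes
# (input conditions) and an effect rule can be expanded into a small,
# manageable set of test cases. The login example below is hypothetical.
from itertools import product

causes = ["valid_username", "valid_password", "account_active"]

def effect_login_succeeds(values: dict) -> bool:
    # The effect holds only when every cause is true (a simple AND relationship).
    return all(values[c] for c in causes)

test_cases = []
for combo in product([True, False], repeat=len(causes)):
    values = dict(zip(causes, combo))
    test_cases.append({**values, "expected_login": effect_login_succeeds(values)})

print(f"{len(test_cases)} test cases derived")  # 2^3 = 8, a manageable count
for case in test_cases:
    print(case)
```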
A commonly detected problem in requirements-based analytical test strategy is the existence of confusing specifications that are incomplete, cumbersome to test, and sometimes not available for the testing team. If the organization fails to resolve this problem, the QA team should abandon the requirements-based analytical test strategy and opt for another one, i.e., a risk-based analytical test strategy.
The possibility of an unwanted occurrence, result, or adverse impact is termed a ‘risk.’ If an issue or problem lowers the confidence of customers, users, or stakeholders in a project's successful completion or its quality, the project carries a risk.
If the risk can ruin the software quality, the problem is referred to as a ‘product risk,’ ‘quality risk,’ or ‘product quality risk.’ If the risk can impact the project's success, the problem is referred to as a ‘planning risk’ or ‘project risk.’
A common problem the QA team faces is selecting a limited set of test conditions from a practically unlimited set of possible tests. After selecting these test conditions, the team has to assign the appropriate resources for creating test cases. The next step is to finalize a sequence for executing the test cases that optimizes overall test effectiveness and efficiency.
Typically, Agile sprints last about two weeks. That timeframe allows you to test only some features of a software application. Testing also becomes more complex as development progresses, because the application itself grows more complex. There is no way to run thousands of tests quickly, so testers must prioritize what they need to test. This is where a risk-based analytical test strategy can help you decide how to allocate time and effort in each sprint.
You can go for a requirements-based analytical test strategy when you have enough time to test. However, one thing to note is that even though all requirements are tested, you still need to do a software risk analysis.
In the further sections of this analytical test strategy tutorial, we will explore the risk-based testing approach and how it can implement Agile principles in your testing process.
In risk-based testing, the QA team identifies risks related to product quality. The team uses these product quality risks to select the test conditions, estimate the effort needed for end-to-end testing, and prioritize the resulting test cases.
To execute risk-based testing, the team can select from various testing techniques. The prime intention of risk-based testing is to minimize the quality risks to an acceptable level. It is impossible to reach a situation with zero quality risk. The team can detect and review quality and product risks while performing a risk analysis of the product quality. In this task, the testing team collaborates with the stakeholders.
After completing the risk analysis, the team performs tasks such as test design, test implementation, and test execution. The goal of these tasks is to minimize the risks. In this context, the term ‘quality’ encompasses the features and behaviors that can affect the satisfaction of end users, customers, and relevant stakeholders. The QA team finds defects before the product release and determines how to address them, which decreases the quality risks.
Some instances of quality risks are the following.
If testing reveals no defects in the product, the work done during testing has still decreased the quality risks by confirming that the product functions correctly under the tested conditions.
This way, RBT lets you achieve the objectives of Agile development and testing. In addition, to reduce testing costs, always choose cloud-based testing platforms that allow your QA teams to access various browsers, devices, and platforms.
Continuous quality cloud testing platforms like LambdaTest help devs and QAs by providing an online browser farm of 3000+ real browsers, devices, and OS combinations. It lets you perform manual and automation testing of web and mobile apps, saving operational and resource costs.
In this section of the analytical test strategy tutorial, we will look into the different phases of RBT.
Risk-based testing (or RBT) consists of four phases, which are not sequential but overlapping. These are the identification, assessment, mitigation, and management of risks. The expenses related to these phases are regarded as the ‘cost of quality.’
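As a rough illustration of how a team might track a risk through these overlapping phases, here is a minimal Python sketch of a quality-risk register entry. The field names, placeholder scales, and the advance helper are assumptions made for illustration, not a prescribed format.

```python
# A minimal sketch of a quality-risk register entry used to track a risk
# through the identification, assessment, mitigation, and management phases.
from dataclasses import dataclass, field

PHASES = ("identification", "assessment", "mitigation", "management")

@dataclass
class QualityRisk:
    description: str
    likelihood: str = "unknown"   # e.g., very low .. very high (set during assessment)
    impact: str = "unknown"       # e.g., very low .. very high (set during assessment)
    phase: str = "identification"
    notes: list = field(default_factory=list)

    def advance(self, note: str = "") -> None:
        """Move the risk to the next phase and record what was done."""
        idx = PHASES.index(self.phase)
        if idx < len(PHASES) - 1:
            self.phase = PHASES[idx + 1]
        if note:
            self.notes.append(note)

risk = QualityRisk("Checkout page slow under peak load")
risk.advance("Assessed with business and technical stakeholders")
print(risk.phase)  # assessment
```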
The team that handles the identification and assessment phases consists of resources from all stakeholder groups. This implies that the resources come from the product development team or the entire project team. In real-life scenarios, a few stakeholders take on the responsibility of representing additional stakeholders.
Let us consider that Product A has a wide range of customers. While Product A is undergoing development, a few of these customers help the organization identify defects and quality risks. The testing team is involved in this risk identification phase because it can leverage its experience in defect identification and quality risk analysis. In this scenario, the small group of customers is considered representative of the entire customer base.
The testing team and the representative group of customers, along with some other stakeholders (if there are any), select any of the following methods for risk identification:
In this phase, the role of the stakeholders is critical: the more stakeholders involved, the higher the percentage of crucial product quality risks detected. This phase generates other outcomes as well. At times, issues are identified that cannot be categorized as product quality risks. Examples include issues in documentation (such as requirements specifications) and generic issues pertinent to the product.
After identifying risks, the assessment phase begins. In risk assessment, the QA team analyzes and evaluates the identified risks. The tasks in this phase are
The testing team uses parameters such as reliability, functionality, and performance to classify the risks. For this classification, software organizations are now embracing the ISO/IEC 25000 series of standards instead of the older ISO/IEC 9126 standard.
For risk classification, the team leverages the same checklist used in the identification phase. The resources of the organization draft checklists for their usage, and the testing team can capitalize on them. Sometimes, the team performs risk identification and risk classification at the same time.
The team establishes the likelihood that each specific risk will occur and the impact such an occurrence would generate, and uses this information to arrive at the risk level. The likelihood of a quality risk is the likelihood that the underlying problem exists in the product at the time of testing. You can also gauge this likelihood by evaluating the level of technical risk. The following factors impact the likelihood:
When a risk materializes, its impact on customers and users is crucial. The following factors affect the impact of product and project risks:
You can evaluate the risk level on a qualitative or a quantitative basis. For a quantitative evaluation, you multiply the risk probability by its impact to obtain the risk priority number.
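Here is a minimal sketch of that quantitative calculation, assuming hypothetical 1–5 scales for probability and impact and a few made-up risks; it computes the risk priority number for each risk and orders them for testing.

```python
# A minimal sketch of a quantitative evaluation: risk priority number (RPN) =
# probability x impact, then sort risks by descending RPN.
# The 1-5 scales and the sample risks are assumptions for illustration.
risks = [
    {"name": "Payment gateway timeout", "probability": 4, "impact": 5},
    {"name": "Broken profile-picture upload", "probability": 3, "impact": 2},
    {"name": "Report export off by one day", "probability": 2, "impact": 3},
]

for risk in risks:
    risk["rpn"] = risk["probability"] * risk["impact"]

# Higher RPN => test earlier and with more effort.
for risk in sorted(risks, key=lambda r: r["rpn"], reverse=True):
    print(f'{risk["name"]}: RPN = {risk["rpn"]}')
```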
Typically, however, the risk level is determined only on a qualitative basis. You can categorize the probability of a risk occurring as very high, high, medium, low, or very low, but it is impossible to compute a percentage value of that probability to any specific level of accuracy. Likewise, you can classify the risk impact as very high, high, medium, low, or very low, but you cannot express it as a precise financial figure. Nevertheless, this qualitative evaluation of risk levels should not be seen as inferior to quantitative methods.
If a quantitative evaluation of risk levels is used improperly, it misleads the stakeholders about how well the risks are understood and managed. Unless the risk analysis is based on extensive, statistically validated risk data, it depends on the perspectives of the stakeholders: the programmers, testers, business analysts, architects, and project managers.
Each of these stakeholders has their own subjective view of a risk's probability and impact. Therefore, their opinions about any given risk differ and are, at times, extremely varied.
At a minimum, the risk analysis process must include a mechanism for reaching a consensus. You can arrive at this mutually agreed risk level by computing simple statistics, such as the mode, median, or mean of the individual ratings.
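A minimal sketch of that consensus step follows, assuming a hypothetical 1–5 rating scale and example scores from five stakeholders; it shows how the mean, median, and mode of the individual ratings can anchor the discussion.

```python
# A minimal sketch of reaching consensus on a risk rating from individual
# stakeholder scores using simple statistics. The 1-5 scale and the sample
# scores are assumptions for illustration.
from statistics import mean, median, mode

# Likelihood ratings for one risk from tester, developer, BA, architect, PM.
stakeholder_ratings = [4, 2, 5, 4, 3]

print("mean:  ", round(mean(stakeholder_ratings), 1))
print("median:", median(stakeholder_ratings))
print("mode:  ", mode(stakeholder_ratings))
# A team might adopt the median as the agreed rating and discuss the outliers
# (here the 2 and the 5) before finalizing the risk level.
```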
The risk levels should be distributed appropriately across the available range so that they provide meaningful guidance for assigning effort, priority, and execution sequence to the individual test cases.
To mitigate risks, you have to begin with an analysis of quality risks. This includes identifying and evaluating risks regarding the quality of the products. This analysis of quality risks is the foundation of the test plans. When you perform test designing, test implementation, and test execution, you mitigate the risks by following the test plan.
The effort assigned to design, implement, and execute the test plan is directly proportional to the risk level.
For high-level risks, you design more thorough techniques, such as pairwise testing. For low-level risks, you design lighter techniques, such as equivalence partitioning. When the time available for a risk is limited, you can opt for less detailed techniques, such as exploratory testing.
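The mapping below is a minimal sketch of that idea, assuming an illustrative three-level scale; the technique choices and effort weights are examples of how a team might encode "effort proportional to risk," not a prescribed standard.

```python
# A minimal sketch of allocating test techniques and relative effort by risk
# level. The mapping below is an illustrative assumption.
TECHNIQUE_BY_RISK = {
    "high":   {"technique": "pairwise / combinatorial testing", "effort_weight": 3},
    "medium": {"technique": "equivalence partitioning",         "effort_weight": 2},
    "low":    {"technique": "exploratory testing (time-boxed)", "effort_weight": 1},
}

def plan_for(risk_level: str) -> str:
    choice = TECHNIQUE_BY_RISK[risk_level]
    return f"{risk_level}: use {choice['technique']} with effort weight {choice['effort_weight']}"

for level in ("high", "medium", "low"):
    print(plan_for(level))
```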
Based on the risk level, you can decide the development and execution priority of a test, along with the following decisions:
As the project continues, you gain additional information, which has the potential to alter the quality risks of the project and the impact level of those risks. The QA team must take cognizance of such information and tweak the tests as the scenario changes.
At the key milestones of the project, the team should make adjustments such as assessing the efficacy of completed risk mitigation tasks, re-evaluating the risk levels, and detecting new risks.
Following is an example of such adjustments:
Before you commence the execution of test cases, you can already minimize product quality risks. If, during risk identification, you detect issues in the requirements, you can mitigate them through reviews immediately after detection.
In the product development life cycle, performing such mitigation before the subsequent phases minimizes the number of tests needed during the later quality risk testing processes.
In this section of the analytical test strategy tutorial, we will explore how to manage risks in SDLC.
Throughout the Software Development Life Cycle, you must make continuous efforts to manage risks. The documents that describe the test strategy and test policy should include the following points:
If performance is a risk factor for product quality, the team tests performance at several levels, such as unit testing, integration testing, and system testing.
Organizations that have experienced resources in this arena identify the risks quickly and move ahead to detect the sources and consequences of risks. Frequently, the team implements root cause analysis to have a detailed understanding of the source of the risks. Then, the team can plan the improvements that are essential to prevent the occurrences of defects in the future. The team executes the mitigation of risks throughout the complete life cycle.
The risk analysis in mature, experienced organizations includes factors such as liability risk analysis, end-user risk analysis, product risk analysis, cost-based risk assessment, system behavior analysis, and related work activities.
In such organizations, the horizon of risk analysis is much more than that of software testing. The QA team proposes the need for risk analysis and becomes a part of this analysis for the entire program.
A large percentage of risk-based testing techniques combine methods that leverage the risk level to decide the sequence or priority of the tests. Through this process, the testing team verifies that a large percentage of defects are detected during test execution and that the most important parts of the product are covered.
There are two types of risk-based testing techniques.
With both of the above techniques, the time allocated to testing is often exhausted before all the testing is done. Therefore, when risk-based testing is performed and the testing window closes, the team furnishes a report to management with information about the risk levels that remain untested.
Leveraging this report, the management decides whether or not to continue testing.
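A minimal sketch of such a residual-risk report follows, assuming a simple in-memory list of risks with a tested flag and a three-level scale; the entries are made up for illustration.

```python
# A minimal sketch of the residual-risk report handed to management when the
# testing window closes: list the risks whose planned tests were not executed,
# ordered by risk level. Data structure and sample entries are assumptions.
risks = [
    {"name": "Payment declined flow",  "level": "high",   "tested": True},
    {"name": "Currency rounding",      "level": "high",   "tested": False},
    {"name": "Slow search on mobile",  "level": "medium", "tested": False},
    {"name": "Footer link styling",    "level": "low",    "tested": False},
]

untested = [r for r in risks if not r["tested"]]
order = {"high": 0, "medium": 1, "low": 2}

print("Residual risks at the end of the testing window:")
for risk in sorted(untested, key=lambda r: order[r["level"]]):
    print(f'  [{risk["level"].upper()}] {risk["name"]}')
# Management can use this list to decide whether to extend testing or release.
```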
If the management decides not to perform more testing, the people responsible for addressing the residual risks are the operational staff, help desk, technical support staff, end users, customers, or a combination of some or all of these people.
While testing is in progress, risk-based testing enables the Senior Manager, Product Manager, Project Manager, and other stakeholders to monitor and steer the Software Development Life Cycle. As these stakeholders monitor the development cycle, they can decide whether to proceed with the product release based on the residual risk levels.
To facilitate the stakeholders to make decisions in the life cycle, the Test Manager must present the risk-based testing results in a format that can be easily digested.
When a team plans software testing, including the potential risks within the software project is essential. The procedure to detect such risks is explained earlier in ‘The Identification Phase’ section of this analytical test strategy tutorial.
The team needs to share the identified risks with their Project Manager to chalk out the steps for their mitigation. In a practical scenario, it is generally observed that the testing team cannot mitigate all the risks.
However, this team can address the following risks:
Once the testing team decides to address the project risks, the following steps become mandatory:
The testing team identifies risks and analyzes them. Further, the team can select one of the following four risk management methods:
When the testing team weighs these four methods to choose one, it must consider the answers to the following queries:
When the team devises contingency plans, it must predetermine the plan owner and the risk trigger.
Risk-based testing supports many techniques, and a large number of them are informal. With some informal techniques, the tester performs exploratory tests during which quality risks are detected. The tester drives the entire testing process in this manner, but the focus is on the likelihood that a defect will occur rather than on its impact.
In these informal methods, the inputs from cross-functional stakeholders are not given due consideration. The methods are generally subjective and depend entirely on the experience and expertise of the tester. Overall, informal techniques cannot realize the full benefits of RBT.
To leverage risk-based testing while minimizing expenses, testers and software managers opt for a lightweight approach to risk-based testing. In this approach, the consensus-building ability of formal techniques is blended with the flexibility of informal techniques. The types of risk-based testing are the following:
The above risk-based testing types have the following features.
These risk-based testing types can reduce quality risks considerably. The outputs of the risk analysis affect the design specification and its implementation. These types depend heavily on the group involved: a wide cross-section of people from all stakeholder groups, both technical and business. Such groups are most effective during the initial stages of the project.
Techniques such as Systematic Software Testing can be implemented only when the team has the required specifications available as input. This condition ensures that the risk-based testing has encompassed all the requirements.
Even then, the requirement specifications may miss some potential risks, particularly non-functional ones. The Test Manager is responsible for verifying that the team does not ignore such risks.
When the requirements specification contains well-written, prioritized requirements, there is a robust relationship between the risk levels and the requirement priorities.
The PRAM and PRisMa types blend the requirements-based analytical test strategy with the risk-based analytical test strategy. The requirements specification is the main input to these RBT types, but stakeholder input also feeds into them.
The stakeholders can leverage the risk detection and analysis process to reach a consensus about the correct RBT type they should adopt. For this to happen, the stakeholders must allocate time to conduct group discussions and one-to-one sessions.
If too few stakeholders participate in the group discussions and sessions, the outcome is gaps in the risk analysis. Another factor that creates an impact is that a specific stakeholder can hold varying opinions about a risk level at varying times.
Risk analysis also includes lightweight techniques that resemble the formal techniques: they judge the probability that a risk will occur and the factors that can influence this probability, and their output is the identification of business and technical risks.
These lightweight techniques use two factors: the likelihood of the risk and its impact. They then apply simple, qualitative judgments and scales to rate the risks.
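A minimal sketch of that two-factor, qualitative approach follows, assuming three-point scales and an illustrative matrix; real teams calibrate the scales and the matrix to their own context.

```python
# A minimal sketch of a lightweight, two-factor qualitative assessment: map a
# (likelihood, impact) pair onto a simple risk level using a small matrix.
# The three-point scales and the matrix values are illustrative assumptions.
SCALE = ("low", "medium", "high")

RISK_MATRIX = {
    ("high", "high"): "high",   ("high", "medium"): "high",     ("high", "low"): "medium",
    ("medium", "high"): "high", ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low", "high"): "medium",  ("low", "medium"): "low",       ("low", "low"): "low",
}

def risk_level(likelihood: str, impact: str) -> str:
    assert likelihood in SCALE and impact in SCALE
    return RISK_MATRIX[(likelihood, impact)]

print(risk_level("medium", "high"))  # high
```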
Risk analysis also includes heavyweight techniques, which are formal. A Test Manager can select one of the following heavyweight techniques for identifying and analyzing risks.
For successful RBT, the organization must involve the right stakeholders to detect and analyze risks. Every stakeholder has a unique perception of product quality, the areas of significance, and the priority items. Stakeholders are categorized into two types: technical and business.
After RBT is completed, it is time for test closure, during which the testing team can determine its level of success. They need to answer the following questions by leveraging pre-defined metrics:
If the answers to all the preceding questions are ‘Yes,’ the conclusion is that the RBT was successful. Lastly, the Test Manager has the following responsibilities:
In a methodical test strategy, you use a standard test basis on different applications. For instance, you may be performing payment testing, and the payment scheme requires mandatory tests for a specific type of transaction.
The following are different types of test strategies: Analytical Test Strategy, Model-based Test Strategy, Methodical Test Strategy, Standards Compliant Test Strategy, Regression-averse Strategy, Consultative Test Strategy, and Reactive Test Strategy.
Author's Profile
Irshad Ahamed
Irshad Ahamed is an optimistic and versatile software professional and a technical writer who brings to the table around four years of robust working experience in various companies. He delivers excellence at work, applies his expertise and skills wherever required, and adapts to changing technology while upgrading the skills his profession demands.
Reviewer's Profile
Salman Khan
Salman works as a Digital Marketing Manager at LambdaTest. With over four years in the software testing domain, he brings a wealth of experience to his role of reviewing blogs, learning hubs, product updates, and documentation write-ups. Holding a Master's degree (M.Tech) in Computer Science, Salman's expertise extends to various areas including web development, software testing (including automation testing and mobile app testing), CSS, and more.