“For each satisfied customer there is a development team taking verification very seriously.”

Product verification is, in one way or another, a fundamental part of any development project. In this post I will share some of our best practices, methods and findings in this domain. The definition of verification varies depending on the context. However, its purpose can be summarized as follows:

  • Uncover conditions which might negatively impact the customer.
  • Ensure that the target fulfills the set requirements, regulations and specifications.
  • Measure the maturity of the target.

The first point is emphasized: the ultimate goal is to develop a product which maximizes customer satisfaction, and verification is one of the key elements in achieving this. The purpose is not to find each and every defect of the product, as this is nearly impossible and gives a poor return on investment. Poor customer satisfaction with a product is typically caused by:

  • Inaccessible features due to system failures. This is like buying a Swiss Army knife with dozens of tools, of which only three can be used while the others cause wounds.
  • Regression. It was verified, the customer was pleased with it… and now it is broken. Just bad.
  • Lost data. This is the same as lost time and money.
  • Poor usability. No matter how defect-free the system is, bad vibes are directly proportional to bad UX.

Generic verification guidelines

Here are some generic verification guidelines which can be applied to all kinds of products.

  • Plan your verification in the early phase of the development. Select the tools and methods which suit the product being developed best.
  • Make a maintenance strategy. How is the product quality maintained after the first commercial release? How are changes applied to products already released?
  • Prioritize the defects. It might be required to have all of them fixed, but just like features their impact varies, so the fixing order is essential.
  • Defect occurrence probability is irrelevant. If a specific defect can manifest, it will manifest.

Verification phasing

In traditional methodologies verification is concentrated heavily at the end of the project. In fact, some traditional development projects do not even consider any verification activities (apart from planning them) until the separate development phase ends. This causes some problems which I cover later.

Typically the total time spent on verification activities varies from a tenth to as much as a third of the whole project. This time depends on the following aspects:

  • Complexity of the developed product.
  • Number of requirement changes applied during the development.
  • Number of related regulatory requirements and standards.
  • Number and quality of third-party dependencies (components and system interfaces).
  • Maturity of the development team.
  • Clarity of the final system needs.

This time is typically hard to estimate accurately, adding even more unpredictability to the total time needed for the development. Distributing the verification activities more evenly throughout the development can significantly reduce both this unpredictability and the total verification time. In addition, we gain the following advantages:

  • Intermediate product releases are more mature, making it possible to e.g. demonstrate the product to the stakeholders in an early phase of the development;
  • Deviations from the estimates can be seen much earlier;
  • Changes to the product are easier to introduce when its maturity is maintained throughout the development.

Agile methodologies state that each increment to the product (carried out in short development iterations) is potentially shippable. This means that the increments developed in short iterations include the verification activities needed to meet the set quality and customer targets. In fact, an iteration can be seen as a very short traditional development project with its own planning, development and verification activities. Following this automatically leads to a more evenly distributed verification load. You should consider:

  • making clear product milestones with a subset of all features, e.g. a minimum viable product;
  • carrying out the development in short (e.g. 3-week) iterations and increasingly including the verification activities in them;
  • planning your verification activities and methods in an early phase of the development project.

Verification methods

There are many categorizations for verification methods. Perhaps the most common is to divide the methods into white box and black box verification, but we categorize them into three distinct verification levels: unit, integration and system.

Unit level

Unit level testing methods require that the internals of the product, like individual software components and source code, are accessible at the time of the verification. Although this does not verify the required features directly, it reduces the total verification time by catching defects caused by e.g. integration and changes.

Static software analysis covers verification methods like reviews, inspections and walkthroughs, applied without actually using any of the product features. Studies imply that these methods do not contribute much to the product quality when performed manually. However, better results are obtained with the following:

  • Use static analysis tools to catch (potential) flaws in the source code frequently; a sketch of the kind of flaw such tools catch follows this list.
  • Define practices for coding style and its documentation (commenting) to help readability and maintainability.
  • Rotate development responsibilities to have more eyes on all parts of the source code.
  • Apply the strictest rules for the compilers and let them assist you with their warnings.
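
As a minimal illustration, here is a deliberately flawed Python snippet of the kind a static analysis tool such as pylint or pyflakes would typically flag (the function and file names are hypothetical):

```python
# flawed_example.py -- deliberately flawed code, for illustration only.

def append_item(item, items=[]):
    # Mutable default argument: the same list object is shared between
    # calls, so items "leak" across invocations. pylint reports this as
    # W0102 (dangerous-default-value); it is easy to miss in a review.
    items.append(item)
    return items

def compute_total(prices):
    total = 0
    for price in prices:
        total += price
    count = len(prices)  # assigned but never used: flagged by pyflakes
    return total
```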

Unit tests verify that a specific part of the software operates as required. They are an essential part of the verification, reducing the defects caused by source code modifications. Although writing unit tests is additional work, it pays itself back in the total verification time. Some guidelines:

  • Ensure that unit tests become as natural a part of the development work as the product feature development itself.
  • Apply unit tests frequently, e.g. before contributing new source code to the code base, to minimize the risk that the new code breaks existing functionality.
  • Automate the unit test execution as a part of the release build.
  • Think about the ways that could cause the unit being tested to fail and implement unit tests for them. Also use invalid inputs.
  • When encountering a defect, try to manifest it with a unit test which fails until the defect is fixed. This way the unit test coverage extends throughout the development (see the sketch after this list).
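
The following is a minimal sketch of the last two guidelines using Python’s built-in unittest module; parse_quantity is a hypothetical unit under test:

```python
import unittest

def parse_quantity(text):
    """Hypothetical unit under test: parse a positive integer quantity."""
    value = int(text)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_invalid_inputs(self):
        # Exercise the ways the unit could fail, including invalid inputs.
        for bad in ("0", "-1", "abc", ""):
            with self.assertRaises(ValueError):
                parse_quantity(bad)

    def test_regression_whitespace_input(self):
        # Added when a defect with surrounding whitespace was encountered;
        # this test failed until the defect was fixed and now guards
        # against the regression.
        self.assertEqual(parse_quantity(" 42 "), 42)

if __name__ == "__main__":
    unittest.main()
```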

Unit tests are also used as a development method in so-called test-driven development. Here the developer first writes the unit tests, which can be seen as the interface requirements for the unit being developed. Only after this is the actual unit implemented to fulfill the unit tests.
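
A small sketch of this order of work, again with a hypothetical unit:

```python
import unittest

# Step 1: write the test first. It acts as the interface requirement for
# the unit and, when first written, failed with a NameError because
# slugify (a hypothetical unit) did not exist yet.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_replaces_spaces(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: implement just enough to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```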

Integration level

On the integration level the verification methods verify that all the contributed components of the system operate together as required. Typically the integration involves building the software into a complete system, during which unit tests are applied to the individual components. Guidelines:

  • Utilize continuous integration systems like Jenkins.
  • Build the software frequently. A good practice is to trigger builds nightly and after each source code commit.
  • Run the unit tests automatically in each build; a sketch of such a build step follows this list.
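
As an illustration, a continuous integration job (e.g. a Jenkins build step) could invoke a small script like this hypothetical one, failing the build whenever a test fails:

```python
#!/usr/bin/env python3
"""Hypothetical CI build step: discover and run all unit tests."""
import sys
import unittest

def main():
    # Discover test modules named test_*.py under the tests/ directory.
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A non-zero exit code makes the CI system mark the build as failed.
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main()
```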

System level

On the system level the internals of the product are hidden and the product is handled the same way its end user would handle it. The verification methods on this level include dynamic testing like carrying out predefined test cases, performing exploratory testing and evaluating performance with performance testing.

Test cases are designed by the test engineers based on the product’s individual features and requirements. A good test case defines the detailed steps of how the test case is carried out, what the inputs are and what the expected outcome is (a hypothetical example follows the guidelines below). Our guidelines:

  • Favor the use case/user story style in the requirements. This allows you to define the test cases based on a specific requirement more easily.
  • Some organizations hide the test case details from the development team to prevent developers from fixing only the specific issue a test case defines. There is little evidence that this contributes positively, so avoid doing this.
  • In risk management you should identify the critical and risky areas/features. Focus more on these.
  • The user interface, although the most visible part to the end user, is usually the part most prone to changes. Avoid designing test cases purely for UI elements (graphics, layout etc.).
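
For example, a test case derived from a user story might look like the following (the feature and values here are hypothetical):

  • Title: Save a draft document.
  • Preconditions: the user is logged in and has an open, unsaved document.
  • Steps: 1) edit the document body, 2) press Save, 3) close and reopen the document.
  • Inputs: any non-empty document body text.
  • Expected outcome: the reopened document contains exactly the saved text and no error is shown.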

Exploratory testing is a verification method with more degrees of freedom compared to test case based verification. It emphasizes constant learning as a result of e.g. testing a specific feature. It also addresses the problem that a technical person in the development team tends to always use the product in the same way, resulting in poor verification coverage. Think about the situation where a product is used by an end user for the first time close to its release. Quite commonly the product almost magically starts to manifest never-before-seen problems and defects. The reason is almost always the one mentioned: the development team fell into a routine and used the product repeatedly in the same way, unlike the end user, who had no prior experience with the product. It is the freedom of exploratory testing that helps fight this phenomenon: it tries to find new ways to use a specific feature while learning it in more detail. Guidelines:

  • Determine clear feature groups and break these into smaller subgroups. A mind map is a good tool for this.
  • Focus on one group at a time, going deeper and deeper into a specific feature.
  • From time to time, think about new ways to use the feature. Think also of the “invalid” ways, aiming to find a way the software designer has not thought of.

Performance testing focuses on verifying how the system being tested performs in terms of stability and responsiveness when a specific stress or workload is applied. These methods can be used at various levels, from server software to e.g. a user interface and graphics rendering. Various tools are available for this, depending on the verified target.
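
As a minimal sketch of the idea (the URL and request counts are hypothetical placeholders), a simple load test could fire concurrent requests and report latency statistics:

```python
"""Hypothetical minimal load test: fire concurrent HTTP requests and
report latency statistics. The URL and counts are placeholders."""
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/api/health"  # hypothetical endpoint
REQUESTS = 100
WORKERS = 10  # concurrent clients applying the workload

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = list(pool.map(timed_request, range(REQUESTS)))

print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
print(f"max latency:    {max(latencies) * 1000:.1f} ms")
```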