
A crucial but often overlooked aspect of software development is quality assurance (QA). If you have an app in progress, you will likely hear this term throughout the development life cycle. It may seem that coding is the bulk of the development work, since without code your app doesn’t exist, but quality assurance often accounts for up to 50% of the total project effort (1) (and part of the QA effort is itself coding). Without quality assurance, your app may exist, but it is unlikely to function well, meet the objectives of your users, or be maintainable in the future. QA is important, but what exactly is it?

QA factors

Software quality assurance is a collection of processes and methods employed during the software development life cycle (SDLC) to ensure that the end product meets its specified requirements and fulfills the stated or implied needs of the customer and end user. Software quality, the degree to which a software product meets those requirements and needs, comprises six factors defined by ISO/IEC 9126-1: functionality, reliability, usability, efficiency, maintainability, and portability. The following sections describe each factor in more detail and explain how each can be assessed.

Functionality

Functionality, as an aspect of software quality, refers to all of the required and specified capabilities of a system. Quality is high in this aspect when the implemented functionality works as described in the specifications. Arguably, a software product with high functionality but none of the remaining qualities could still be useful to some extent; the same cannot be said for the other quality factors.

The key to ensuring correct functionality in a software product is to start specifying functionality early, in the discovery phase. Requirements need to be teased out, defined, and recorded. This can be done in a discovery workshop or through other forms of requirements gathering, and continues throughout the SDLC. Requirements often change over the course of a project, and it’s important that any changes be documented and communicated to all parties.

With documented specifications, functionality can be assessed during development with white box testing techniques like unit tests or subtests and black box testing techniques like exploratory testing.
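
As a concrete sketch, here is what a Python unit test for a hypothetical requirement might look like. The `calculate_discount` function and its discount rule are invented for the example; the point is that each test (and subtest) encodes a documented specification so a regression is caught automatically:

```python
import unittest


def calculate_discount(order_total):
    """Hypothetical spec: orders of $100 or more get a 10% discount."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0


class CalculateDiscountTests(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        # Spec: exactly $100 qualifies for the discount.
        self.assertEqual(calculate_discount(100.00), 10.00)

    def test_no_discount_below_threshold(self):
        # Spec: smaller orders receive no discount.
        self.assertEqual(calculate_discount(99.99), 0.0)

    def test_several_cases_with_subtests(self):
        # subTest keeps checking the remaining cases even if one fails.
        for total, expected in [(0, 0.0), (50, 0.0), (200, 20.00)]:
            with self.subTest(total=total):
                self.assertEqual(calculate_discount(total), expected)


if __name__ == "__main__":
    unittest.main()
```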

At Caktus, white box testing is primarily handled by our developers, while black box testing is the domain of our Quality Assurance Analysts. Functionality assessment occurs in every step of the development process, from initial discovery to deployment (and future maintenance).

Reliability

Reliability is defined as the ability of a system to continue functioning under expected usage conditions over a specified period. To assess reliability, it’s important to identify early in the development process how the software will be used. How many requests per second should the app support? Do you anticipate large spikes in traffic tied to scheduled events (e.g., the beginning of the school year, the end of the fiscal year, conferences)?

Expected usage can inform the technology stack and infrastructure decisions in the beginning phases of development. Reliability testing can include load testing and forced failures of the system to test ease and timing of recoverability.
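
As a sketch of what load testing can look like in practice, here is a minimal scenario using Locust, an open-source load-testing tool (the tool choice, host, endpoints, and traffic weights are assumptions for the example):

```python
# locustfile.py -- run with: locust -f locustfile.py --host https://example.com
from locust import HttpUser, task, between


class TypicalUser(HttpUser):
    # Simulated users pause 1-5 seconds between requests,
    # roughly mimicking real browsing behavior.
    wait_time = between(1, 5)

    @task(3)
    def view_home(self):
        # Weighted 3x: most traffic hits the home page (assumed path).
        self.client.get("/")

    @task(1)
    def view_reports(self):
        # Hypothetical heavier endpoint worth watching under load.
        self.client.get("/reports/")
```

Ramping up the number of simulated users against a staging environment shows whether the app meets its requests-per-second target and how it degrades past that point.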

Usability

Usability refers to whether end users can or will use the system. It’s important to identify who your users are and assess how they will use the system.

Questions asked and answered while assessing usability are: How difficult is it for users to understand the system? What level of effort is required to use the software? Does the system follow usability guidelines (e.g., comply with usability heuristics and UX best practices, or adhere to a style guide)? Does the system comply with web accessibility standards (e.g., Web Content Accessibility Guidelines or Section 508)?

Conducting usability testing with end users helps uncover usability problems within the system.
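
Usability testing itself requires human participants, but some of the accessibility checks mentioned above can be automated. Here is a minimal sketch using Selenium and the axe-selenium-python package; the URL and report file name are placeholders:

```python
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Firefox()
driver.get("https://example.com")  # placeholder URL

# Inject the axe-core accessibility engine and run its WCAG checks.
axe = Axe(driver)
axe.inject()
results = axe.run()

# Persist the full report and print a quick summary of violations.
axe.write_results(results, "a11y_report.json")
print(f"{len(results['violations'])} accessibility violations found")

driver.quit()
```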

Efficiency, maintainability, and portability

Software efficiency is the measurement of software performance in relation to the amount of resources used. Efficiency testing evaluates compliance with standards and specifications, resource utilization, and the timing of tasks.
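
As a small illustration, the timing and resource-utilization side of efficiency testing might be sketched in Python with only the standard library; the `generate_report` task and the one-second budget are hypothetical:

```python
import time
import tracemalloc


def generate_report(rows):
    """Hypothetical task whose time and memory budget we want to verify."""
    return [{"id": i, "total": i * 1.07} for i in range(rows)]


# Measure wall-clock time and peak memory for a representative workload.
tracemalloc.start()
start = time.perf_counter()
generate_report(100_000)
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"elapsed: {elapsed:.3f}s, peak memory: {peak / 1_000_000:.1f} MB")
assert elapsed < 1.0, "exceeds the (assumed) one-second budget"
```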

Maintainability refers to the ease with which the software can be modified to correct defects, meet new requirements, and make future maintenance easier. An example of poor maintainability might be using a technology that is no longer actively supported or does not easily integrate with other technologies.

Portability refers to the ability to transfer the software from one tech stack or hardware environment to another. The requirements for these three aspects should be discussed by project stakeholders early in development and measured throughout.

Important notes about quality

The above quality characteristics (functionality, reliability, usability, efficiency, maintainability, and portability) must be individually prioritized for each project, as it is impossible for a system to fulfill each characteristic equally well. Focusing on one aspect may mean making decisions that negatively affect another (for example, choosing to use technologies that make a product highly maintainable may make it much more difficult to port). Frequently, a specific product requires a very narrow focus on one aspect; a tool that has a very small number of users only needs to be usable for them, not the whole gamut of humanity.

Target quality for a product should be discussed among all stakeholders and agreed upon in writing as early as possible. This quality agreement should be stored somewhere easily accessible by all team members and referenced frequently during execution of quality assurance tasks.

There’s an unspoken tenet of software development that says no product can be defect-free. In order for software to be successfully developed and deployed into the wild, it’s important that all parties acknowledge this.

Striving for perfection and a 100% defect-free app will waste time and resources, and ultimately be futile. Similarly, it’s important to recognize that the absence of identified defects does not indicate a product is defect-free; more likely, the absence of defects indicates the product has not been thoroughly tested.

The goal of quality assurance is not to ensure there are no defects in the software, but to ensure that the agreed upon quality level is met and maintained. You should expect that some known defects will be low priority and not fixed before deployment. Additionally, you should expect that some defects will be very high priority and required to fix prior to deployment. Priority of defects should be determined by a combination of the quality agreement, severity of the issue, and stage in the SDLC. We’ll go into more details regarding prioritization of defects in a later post.
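
As a rough illustration of combining those inputs, a hypothetical triage helper might look like the following; the categories, weights, and thresholds are invented for the example, not a prescribed formula:

```python
# Hypothetical defect triage: combines severity, SDLC stage, and whether
# the affected area is covered by the quality agreement.
SEVERITY_SCORE = {"cosmetic": 1, "minor": 2, "major": 4, "critical": 8}
STAGE_MULTIPLIER = {"discovery": 1, "development": 2, "pre-release": 3}


def defect_priority(severity, stage, in_quality_agreement):
    score = SEVERITY_SCORE[severity] * STAGE_MULTIPLIER[stage]
    if in_quality_agreement:
        score *= 2  # defects in agreed-upon quality areas jump the queue
    if score >= 16:
        return "fix before deployment"
    if score >= 6:
        return "fix this iteration"
    return "backlog"


# A critical defect found late, in an area the quality agreement covers:
print(defect_priority("critical", "pre-release", True))  # fix before deployment
```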

References

(1) Andreas Spillner, Tilo Linz, and Hans Schaefer. Software Testing Foundations, 4th edition.
