Software development comes in many shapes and sizes. While there is no “one size fits all” software development methodology, you will find that testing is at the heart of every good software development process. To use a sports analogy (fitting, since I’m currently testing the SportsClash application), testing is like the offensive line in football. Linemen don’t get much recognition when they clear the way for the star running back to score, but without them, reaching the end zone would be much more difficult. The same goes for releasing software (aka “finding the end zone”): without testing leading the way and doing the dirty work, a successful implementation and user experience are much harder to achieve.
Software Development Methodologies
There are a lot of different software development methodologies and practices that have been used over the years. While there have been many different flavors along the way, most fall within a variation of either Waterfall or Agile.
Waterfall, otherwise known as the “traditional” development method, takes a linear approach to software development, with each step of the process beginning only after the previous step has been completed in full. This methodology takes a holistic approach, with the goal of delivering a complete, fully functioning application to its end users.
The Agile approach to software development takes a more iterative path, building and deploying smaller pieces or chunks of functionality to end users. This gives users frequent, early looks at the software, allowing them to make modifications or corrections to its preliminary design.
Both methodologies have their pros and cons, and each still holds value depending on the situation and the customer’s involvement. That said, Agile software development has become more mainstream, with numerous frameworks including Scrum, Kanban, XP, and many more.
Software Development Process
While different methodologies make adjustments to the time intervals, techniques, and roles used in software development, the same basic process steps occur in all of them from gathering requirements all the way through deployment. Here is a list of the generally accepted steps of the development process, along with the way that testing is intertwined through the entire lifecycle.
In order to build an application that solves a business need, you must first understand the problem you are trying to solve. This should come out in the requirements (or discovery) step of a project. The requirements step is used to ask all of the questions about the problem being solved (the why, who, how, and what) so that a solution can be identified and constructed. As requirements are identified, this is the time for test planning to begin.
Now that the problem has been identified, the next step is to determine how to build a system that will meet the needs of the users to solve that problem. This is where the vision of the end product is conceptualized, and system design specifications are created to achieve that vision. This is also the time for prioritization: determining which pieces of functionality are most vital to the success of the project and where dependencies exist. The design phase is a critical time for tester involvement: it lets testers form a testing strategy and build a deep understanding of the desired functionality, so they can verify that it meets the business needs and solves the problem identified.
With a vision of what the end product may look like and some direction on the priorities, coding takes place to build the desired functionality. Development in any methodology inherently becomes an iterative process as developers uncover design inefficiencies or need to rework code based on testing feedback. Testing begins in the development step with the developers performing any variety of tests including unit tests of their code and general functional testing of their changes.
When the developers have completed coding a piece of functionality and a testable application is delivered, then testing occurs. While there are multiple types of testing that can be performed (more on that below), the main goal of the testing phase is to ensure that the functionality works as designed, without errors, and solves the business problem identified.
After usable functionality is developed and testing has been completed successfully, the product is delivered to the customer to use. As issues are found or additional functionality is developed and tested, subsequent deployments of the product follow.
While this is a simplified view of the steps in the development process, it makes up the core of most development methodologies. You can also see that testing is at the heart of the process: a good testing effort reaches through every step of the development lifecycle.
How Much Testing Is Needed?
Now that we have some background on the big picture of the development process and methodologies, let’s dig a little deeper into testing. At this point, I hope we have established that testing is a key component of the development process regardless of the development methodology that you use. But what types of testing should you perform? And how much testing should you do? The answers to these questions come down to the size and scale of the application you are testing and the time and money available. In project management, this trade-off between scope, time, and cost is commonly referred to as the Project Management Triangle.
The same concept can be applied to deciding what types of testing should be performed, depending on the application being built. A large-scale, complex application will require a greater investment of time, money, and resources/tools for testing, while a smaller, simpler application will require much less effort. While this seems obvious, it is not always easy to determine where the breaking point is between useful, productive testing and excessive testing.
I personally subscribe to the 80/20 rule, otherwise known as the Pareto principle, when it comes to testing. The general form of the rule states that 80% of the effects come from 20% of the causes. In testing terms, you could say that 80% of the coverage comes from 20% of the effort. Testing starts to get expensive (in time and money) when reaching for that final 20%, as it will take 80% of the effort and often goes after verifying something that may never even occur. That is not to say we shouldn’t strive for 100%, but we need to make sure that we are smart about how we utilize our testing resources.
Types of Testing
Here are some types of testing that I feel are the bare minimum needed regardless of the size of your application. I have been successfully utilizing these types of testing as part of our SportsClash application.
Smoke Tests – Basic high-level tests that walk through the core pieces of the application. Smoke tests are typically executed after receiving an updated build from development. They assume a “happy path” through the system and are meant to be executed quickly with the goal of providing a fast and simple verification that the application is functioning.
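A smoke test can be as small as a handful of assertions over the application’s core entry points. Here is a minimal sketch in pytest style; the `App` class and its `login`/`home_page` methods are hypothetical stand-ins for whatever your application’s critical paths actually are.

```python
# Smoke-test sketch: quick "happy path" checks run right after a new build.
# App is a toy stand-in for the real application under test.

class App:
    """Hypothetical application under test."""

    def login(self, user, password):
        # Stand-in for real authentication.
        return user == "demo" and password == "demo"

    def home_page(self):
        # Stand-in for loading the landing page.
        return {"status": 200, "title": "Home"}


def test_smoke_login_and_home():
    app = App()
    assert app.login("demo", "demo")       # core path: can a user sign in?
    page = app.home_page()
    assert page["status"] == 200           # core path: does the home page load?
```

The point is speed and breadth, not depth: if any of these fail, the build goes back to development before deeper testing begins.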
Functional Tests – Functional testing is used to verify that a specific piece of functionality/code is working as expected according to the requirements and design. These tests are typically performed following a new build from development that includes ready-to-test pieces of the application. This can include new features, defect fixes, or other types of changes being introduced into the system. As functional tests are executed, issues should be logged for anything that breaks the system or does not function as specified.
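A functional test ties one piece of functionality directly to its requirement. In this sketch, both the requirement (“standings are ordered by wins, highest first”) and the `standings` function are hypothetical examples, not anything from a real application.

```python
# Functional-test sketch: verify one piece of functionality against
# its stated requirement. standings() is a hypothetical function.

def standings(teams):
    """Return teams ordered by wins, highest first."""
    return sorted(teams, key=lambda t: t["wins"], reverse=True)


def test_standings_ordered_by_wins():
    # Requirement (assumed for illustration): the team with the most
    # wins appears first in the standings.
    teams = [
        {"name": "A", "wins": 3},
        {"name": "B", "wins": 7},
        {"name": "C", "wins": 5},
    ]
    result = standings(teams)
    assert [t["name"] for t in result] == ["B", "C", "A"]
```

If this assertion fails after a new build, that is exactly the kind of issue the testing phase should log back to development.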
Positive and Negative Tests – While there is always the hope that users use the system as expected and walk down the “happy path,” that is not always the case. Or, more accurately stated, that is usually not the case. Implementing and testing for positive (expected) inputs and uses of the system comes naturally and is almost always covered. However, it is the negative (unexpected) inputs and uses of the system that typically cause the most problems and need to be tested. This is the tester’s “demolition day,” where they try to break the application by entering random inputs in strange, out-of-sequence order. Both types are necessary when testing any application.
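The positive/negative pairing can be made concrete with a small example. The `parse_score` helper below is hypothetical; the pattern to notice is that the positive test feeds it the happy path, while the negative test deliberately feeds it garbage and insists the function rejects it rather than misbehaving.

```python
# Positive vs. negative testing sketch. parse_score() is a
# hypothetical helper used only to illustrate the pattern.

def parse_score(text):
    """Parse a 'home-away' score string like '21-14' into a tuple of ints."""
    home, sep, away = text.partition("-")
    if not sep or not home.strip().isdigit() or not away.strip().isdigit():
        raise ValueError(f"invalid score: {text!r}")
    return int(home), int(away)


def test_parse_score_positive():
    # The happy path: well-formed input.
    assert parse_score("21-14") == (21, 14)


def test_parse_score_negative():
    # Demolition day: malformed inputs must be rejected, not mangled.
    for bad in ("", "21", "a-b", "-7", "21-"):
        try:
            parse_score(bad)
        except ValueError:
            continue
        raise AssertionError(f"{bad!r} should have been rejected")
```

Notice the negative test is longer than the positive one; that ratio is typical, since there are far more ways to break an input than to satisfy it.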
System/Regression Tests – These tests are looking at the big picture as they should cover the entire system from end-to-end. They encompass all of the various individual functional tests that have been incorporated into the full suite of test cases. System tests follow multiple paths through the application, both positive and negative, to help ensure that the expected behavior remains consistent after new builds and before releasing the application to users.
Usability Tests – Usability testing is the one area that I feel gets overlooked, but it is very important for giving the final product a polished look and feel. Testers typically gain as deep a knowledge and understanding of the entire system as anyone involved in the development process. Good testers always keep the end users in mind when performing their testing and should raise areas for improvement to make the application a good user experience.
Performing these types of tests will take you a long way toward reaching 80% coverage while only exhausting 20% of the effort.
Other Types of Testing
While every application should have the testing above performed at a minimum, especially for small to medium-sized applications without too much complexity, testing can be taken a lot further to try to get that last 20%.
Automated vs. Manual Testing – All of the testing discussed so far is from a manual perspective, but if you have the time and tools to automate it, then by all means do. Building automated tests can be expensive and isn’t necessary for every application, but once past the upfront costs of building and maintaining them, they offer great savings down the road by executing much more quickly and accurately than manual testing. You can automate a variety of tests, such as smoke tests, positive and negative tests, and, of course, system/regression tests.
Unit Tests – Unit tests are low-level tests that are built by developers to exercise each piece of code in the system. They test individual modules, classes, methods, and functions in the code and are automated, so they can be re-run very quickly. When unit tests are created as part of an application they provide a great safety net for developers to ensure their coding changes are not adversely affecting existing functionality or paths through the code.
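As a sketch of what a developer-written unit test looks like, here is Python’s built-in unittest module exercising a single hypothetical function in isolation. The `slugify` function and its requirements are invented for illustration.

```python
# Unit-test sketch using Python's standard unittest module.
# slugify() is a hypothetical unit of code under test.

import unittest


def slugify(title):
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())


class SlugifyTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Week One Matchups"), "week-one-matchups")

    def test_extra_whitespace(self):
        # Leading, trailing, and repeated spaces should collapse cleanly.
        self.assertEqual(slugify("  Final   Scores "), "final-scores")


if __name__ == "__main__":
    unittest.main(argv=["slugify-tests"], exit=False)
```

Because tests like these run in milliseconds, developers can re-run the whole suite on every change, which is exactly the safety net described above.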
User Acceptance Tests – Any time you have an opportunity to leverage an actual user or future user of your application to perform testing on it before it is released to the masses, you should take advantage. As much as business analysts, developers or testers think they may know how a user will use an application, they will still be surprised by the ways an actual user will think and work within an application. Get users to be part of the testing process early and often so that adjustments can be made to the requirements, if needed, and testers can incorporate more “real world” scenarios into their test suite.
Performance/Load Tests – While it is often one of the last considerations when building a new system, how that system performs with many users on it at one time should be incorporated into larger-scale applications. There are a variety of tools and methods for performance testing, from simply coordinating a time for the entire team to pound away on the system, to automated tools that can simulate a variety of performance factors and load stress on a system.
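In spirit, a load test is just many clients hitting the system at once while someone watches the latency. This toy sketch simulates that with threads against a hypothetical `handle_request` function; a real load test would use a dedicated tool (JMeter, Locust, k6, and the like) against a deployed environment.

```python
# Tiny load-test sketch: hammer a function from many threads at once
# and record per-request latency. handle_request() is a hypothetical
# stand-in for a real endpoint.

import threading
import time


def handle_request(n):
    time.sleep(0.001)  # simulate a little work per request
    return n * 2


def load_test(workers=20, requests_per_worker=50):
    latencies, lock = [], threading.Lock()

    def worker():
        for i in range(requests_per_worker):
            start = time.perf_counter()
            handle_request(i)
            with lock:
                latencies.append(time.perf_counter() - start)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(latencies), max(latencies)


count, worst = load_test()
print(f"{count} requests, worst latency {worst * 1000:.1f} ms")
```

Even a crude harness like this surfaces the key questions: does throughput hold up under concurrency, and how bad does the worst-case latency get?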
There are no limits to the amount and types of testing you can perform. It all has its place and it is all valuable. However, we need to be smart about making the best use of our testing resources. Here are a few takeaways to consider before you begin building your next application.
Every application can benefit from having at least some testing performed by a person other than the one who is developing it.
Analyze each project and application individually to make decisions on how much testing to perform and what types of testing to use.
Even a little bit of testing goes a long way. Remember the 80/20 rule.
Usability testing is the unsung hero of the testing process as it helps make the system more user-friendly.
Testing doesn’t get much glitz or glamour when a new application is released, but it is the lifeblood of the Software Development Lifecycle.