Why Are We Testing?
Why did we get this cross-functional testing team together? Was it to see Ernie from Dev and Sally from Testing resume their blood feud over who ate the most of the release-party cake in the office fridge? Sure, that's entertaining, but there must be a business reason why we organized a team of key people from various departments and pulled them away from their duties to test. Were there botched deployments, or has it been smooth sailing and we just decided to stop living on the edge? Is there one particularly defect-heavy area that's costing the company money? You can see where these questions are headed. Let's stop and think about why we are testing and, in turn, what really needs to be tested, rather than automatically and blindly striving for one hundred percent test coverage. It's nice to be idealistic and boldly proclaim that "everything will be tested", but we live in the real world, where resources are limited and timelines are tight, so we need to get the most bang for our buck from our testing.
What’s the Problem?
Going overboard with tests clouds the actual state of the system under test. It can leave the testing team spending too much time creating and maintaining the tests themselves, along with reports and other artifacts, while neglecting what matters most: how the system under test behaves and performs.
The complexities of test creation and maintenance increase when dealing with the enterprise packaged software that is common in the WMS (Warehouse Management System) world, where it can feel like you have "systems under test" rather than a single "system under test". It's rare that all of the various WMS users will agree on a common "happy path" user journey; each has a unique perspective and pathway through the system. Packaged software also often comes with a different interface for each component or package, which seriously complicates maintenance when tests are written at the UI level, and each component can sit on a different tech stack (web, RF terminal, etc.), which makes test writing trickier still. All of these complexities intensify the importance of your team writing the right tests rather than all the tests.
When testing a WMS, no one can hone a testing team's focus better than the Warehouse Manager. A classically trained Test Automation Engineer or Developer in Test will be eager to dig in and create a huge suite of tests that validates the system under test from different angles and layers, whereas a Warehouse Manager (or whoever is directly in charge of the warehouse) wants a realistic suite of the right tests that ensures the warehouse doesn't explode from defects or from high load and stress on the system. Both are important members of the testing team, but the former is looking to check off a traceability matrix and/or test-coverage sheet, while the latter, the person ultimately responsible, demands that the warehouse run smoothly.
What is the Test Pyramid?
The test pyramid is what we use to help focus on what and how much should be tested to increase testing coverage of a system under test while reducing maintenance costs and time to delivery.
Image from Automation Panda https://automationpanda.com/2018/08/01/the-testing-pyramid/
Choosing the Right Layer
What can we learn from the traditional software developer in test, who works on a scrum team in a product-development department, to alleviate the woes of today's frustrating processes at work? There must be a way to validate these IT-delivered solutions without a multitude of humans bashing away at keyboards in hurried UAT cycles. Do we just automate all of our manual tests? That sounds like a maintenance nightmare.
The E2E layer runs user workflows through the user interface and relies on multiple components working together to achieve a satisfactory result. These tests are also the slowest to execute in most, if not all, cases. Because they depend on a browser or another platform to present the content, which may or may not be under your control, their complexity explodes and execution time can vary from run to run. They can also fail because those content-providing platforms, the browser included, hit bugs, slowdowns, incompatibilities, or other technical issues.
These tests should be few in number. Focus on testing only critical user workflows to reduce your run time and get the most value from your E2E tests. These tests are inherently flaky, and the more workflows you test in the E2E layer, the more maintenance they will need later on, which distracts you from how the system under test is actually performing.
The middle layer is our Integration tests. This covers databases, file creation/read/write/deletion, internal and third-party APIs, and so forth. Tests in this layer focus on validating the interactions within the system under test that must work properly for its functions to behave as intended. They are straightforward and typically require fewer dependencies than UI tests. This layer covers the individual integration tasks that take place when a portion of a user workflow is executed, as well as the integration points that are crucial to keeping the system accessible to the user. In this layer you can find issues, resolve bad assumptions, and identify poorly performing components that would be harder to find through a UI test or might not be easily noticeable otherwise.

A simple test could consist of sending a request to an API and verifying that the returned data is what you expect, such as a request for a user's profile information. A UI test would have to navigate to the platform, perform a basic user login, wait for the navigation to complete, hope everything loads properly so the test isn't blocked, and then navigate to the user's profile page to check that the information is correct. Comparing the two, it should be obvious which test can validate the user details fastest, which means you'll know sooner rather than later whether your system is functioning as expected or you have issues to resolve. Remember that these integration tests cover interaction points within the system in isolation, giving you a quicker and more focused read on the status of the system under test.
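To make the profile example concrete, here is a minimal sketch of an integration-style test in Python. The `ProfileClient` class and its user data are hypothetical stand-ins: in a real suite the client would issue HTTP requests against the running service, but it is stubbed in-memory here so the sketch is self-contained.

```python
# Illustrative integration-style test for a hypothetical user-profile API.
# ProfileClient is a stand-in for a real API client; in practice it would
# wrap HTTP calls to the live service.

class ProfileClient:
    """Fake client backed by an in-memory store (illustrative data only)."""
    _store = {"u123": {"id": "u123", "name": "Ada Lovelace",
                       "email": "ada@example.com"}}

    def get_profile(self, user_id: str) -> dict:
        profile = self._store.get(user_id)
        if profile is None:
            raise KeyError(f"no such user: {user_id}")
        return profile

def test_profile_endpoint_returns_expected_fields():
    client = ProfileClient()
    profile = client.get_profile("u123")
    # Validate the contract: the fields the UI depends on are present and correct.
    assert profile["name"] == "Ada Lovelace"
    assert profile["email"].endswith("@example.com")

test_profile_endpoint_returns_expected_fields()
```

Note how little setup this needs compared to the UI path: no login, no page loads, just a request and an assertion on the returned data.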
The unit test layer focuses on the smallest testable pieces of the system under test. This is where you get pinpoint detail on how the system is functioning when changes occur: if a unit test fails, you know exactly where it failed and should be able to quickly identify why, resulting in fast identification, triage, and resolution of bugs before users are impacted. Unit tests should have zero dependencies on other components, which makes them the purest and fastest type of test. A unit test would be something like checking that the user-name display field renders its data correctly. You aren't using real user profile data, but examples of what that data could be, mimicking what the profile API would feasibly provide, and you can cover a multitude of conditions to exercise the field's proper usage and functionality. Keep in mind that these sit at the bottom of the pyramid and constitute the largest quantity of tests. Even so, they will still run faster in most cases than the UI tests, which means they can be run more frequently while providing exact output on where the system is not operating as expected.
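The user-name display example might look like the sketch below. The formatter function, its fallback value, and its truncation rule are all hypothetical, invented for illustration; the point is that a pure function can be exercised against many example inputs with no other components involved.

```python
# Hypothetical display-name formatter plus unit-style checks.
# The function, default value, and truncation rule are illustrative only.

def format_display_name(first: str, last: str, max_len: int = 20) -> str:
    """Build the string shown in the user-name display field."""
    name = f"{first} {last}".strip()
    if not name:
        return "Guest"                    # empty input falls back to a default
    if len(name) > max_len:
        return name[: max_len - 1] + "…"  # truncate long names for the UI
    return name

# Each case uses example data mimicking what the profile API could provide.
assert format_display_name("Ada", "Lovelace") == "Ada Lovelace"
assert format_display_name("", "") == "Guest"
assert format_display_name("Bartholomew", "Featherstonehaugh") == "Bartholomew Feather…"
```

Each assertion runs in microseconds, with no browser, network, or database involved, which is why you can afford so many of them at this layer.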
Let’s look at a real-world example of strategically choosing the best layer in the WMS world. When allocating an order in Blue Yonder WMS we should see picks drop into our work queue, and if that doesn’t happen it means there is either a defect or perhaps something simple like not having enough inventory. You could write the appropriate tests for this by driving the Blue Yonder interface and reading the screen, but these are relatively costly, GUI-heavy tests with potentially many permutations. Alternatively, we could test the same thing more cheaply and quickly with Blue Yonder’s WMS API (MOCA) commands and validate the result. We should still test our WMS user interface, of course, but that coverage may be better suited elsewhere in our test suite, so we decide to write the order allocation test at the Integration layer.
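A sketch of what that integration-layer check might look like follows. To be clear about the assumptions: `MocaClient` is a fake, the command strings are illustrative stand-ins loosely modeled on MOCA's verb-noun style rather than real MOCA syntax, and a real test would execute commands against a live Blue Yonder environment.

```python
# Sketch of validating order allocation at the integration layer instead of
# the GUI. MocaClient and the command strings are hypothetical stand-ins for
# Blue Yonder's MOCA API; actual command syntax will differ.

class MocaClient:
    """Fake client: pretends allocation creates one pick per order line."""
    def __init__(self):
        self._orders = {"ORD-1001": ["LINE-1", "LINE-2"]}
        self._work_queue = []

    def execute(self, command: str):
        # Illustrative commands only, not real MOCA syntax.
        if command.startswith("allocate order"):
            ordnum = command.split("where ordnum = ")[1].strip("'")
            self._work_queue += [{"ordnum": ordnum, "line": line}
                                 for line in self._orders.get(ordnum, [])]
            return {"status": 0}
        if command.startswith("list pick work"):
            ordnum = command.split("where ordnum = ")[1].strip("'")
            return [p for p in self._work_queue if p["ordnum"] == ordnum]
        raise ValueError(f"unknown command: {command}")

moca = MocaClient()
result = moca.execute("allocate order where ordnum = 'ORD-1001'")
assert result["status"] == 0
picks = moca.execute("list pick work where ordnum = 'ORD-1001'")
# No picks in the queue would mean a defect, or simply insufficient inventory.
assert len(picks) == 2
```

The same check done through the GUI would mean logging in, navigating screens, and scraping text, with every permutation multiplying that cost; at this layer each permutation is just another pair of commands and an assertion.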
To Wrap It Up
When you have each of the test pyramid layers executing against your system under test, you can use their output to understand how everything is performing. The tests will require maintenance as systems change, requirements are updated, and new releases ship, so be vigilant about not going overboard on any one layer, and keep asking how far you can break a test down into its smallest components before you start automating. Don't stress too much if you've already gone overboard on the UI side: take the non-critical UI tests, find out whether they are already covered by other tests, partially or wholly, and fill in any gaps you identify. The goal is to find the balance between the cost and the value of your test automation: just enough tests to get value out of them while keeping your maintenance, money, and time costs as low as possible.
This post was written by:
Technical Pre-Sales Consultant
James has been working in software pre-sales and implementation since 2000, and has more recently settled into focusing on technical pre-sales. He takes care of our hands-on demonstrations, and eagerly awaits your request to see our Cycle test automation software in action. Drop him a line at: james.prior[at]tryonsolutions[dot]com.
Cycle Product SDET
Seth enjoys learning, working through code problems, and automating. He has worked in markets from healthcare to fintech and is now focused on supply chain. Currently Seth's focus is on supporting the Cycle platform, from bug fixes to manual validation and automated testing. His purpose is to collaborate with the team to build and maintain quality for our users.