Tips for Writing Good Automated Tests
In a previous post, “What Makes a Good Test Case?”, we talked about the components that make up a test case. In this post, we will talk about what should be included in an actual automated test. First, however, to answer this question we need to understand the purpose of an automated test. The intent of an automated test is to signal when something is behaving as expected or to indicate that an unexpected event has occurred. It should be reliable, and its results should convey, at a quick glance, the state of the system under test (SUT).
So what makes a good automated test? A good automated test is a quick, reliable set of instructions that sets up a state within the SUT. Then the test performs an action that produces a change to that system’s state. The final portion of the test asserts that the change to the system was expected and meets the purpose of the test.
Set the State
You set a group of pallets within a staging zone. This establishes the starting state of the test.
Perform Action
Then you follow a list of instructions that dictates where those pallets should be picked up and where they should be delivered. This is the action that changes the state of the pallets.
Assert Expected Change
The end of the test asserts that the pallets are in the correct location per the action step’s instructions.
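The three steps above can be sketched as a single test. This is a minimal illustration, not an actual Cycle test: `WarehouseSystem`, `stage_pallets`, and `move_pallets` are hypothetical stand-ins for whatever interface your SUT exposes. The shape of the test is what matters.

```python
# A minimal set-state / perform-action / assert sketch.
# WarehouseSystem and its methods are hypothetical stand-ins for your SUT.

class WarehouseSystem:
    def __init__(self):
        self.locations = {}  # maps pallet ID -> zone name

    def stage_pallets(self, pallet_ids, zone):
        for pallet in pallet_ids:
            self.locations[pallet] = zone

    def move_pallets(self, pallet_ids, destination):
        for pallet in pallet_ids:
            self.locations[pallet] = destination


def test_pallets_move_to_destination():
    # Set the state: a group of pallets in a staging zone.
    sut = WarehouseSystem()
    sut.stage_pallets(["P1", "P2", "P3"], zone="STAGING")

    # Perform action: instruct the system to move the pallets.
    sut.move_pallets(["P1", "P2", "P3"], destination="DOCK-7")

    # Assert expected change: every pallet is at the destination.
    for pallet in ["P1", "P2", "P3"]:
        assert sut.locations[pallet] == "DOCK-7"


test_pallets_move_to_destination()
```

Each section of the test maps directly to one of the steps above, so a reader can see at a glance what state was set, what action was taken, and what outcome was expected.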
In this example, you are testing that the Perform Action aspect of the test successfully met expectations. Your state was set and your expectations are clear. Did the process in the middle behave how you expected it to? The test allows us to verify that, given a known set of conditions and a known beginning state of the SUT, performing an action produces an expected outcome. The example provided is very basic. However, most of your tests should take this form, as it is a quick and efficient way of communicating a starting point, what change needs to be performed, and the validation that the action was successful.
The benefit is that the test is easy to understand, with as little complication as possible around the action portion. When it fails, we should be able to identify the point of failure and begin to remediate it. A good test should be easy to understand and consistently provide the same results under expected conditions. Reducing complexity in the test provides that consistency while also, ideally, speeding up test execution. The faster a test runs, the sooner you see your results.
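One low-cost way to keep the point of failure obvious is to attach a descriptive message to each assertion. The helper below is a hypothetical sketch, not part of any particular framework; the idea is that a failure report names exactly what was expected and what was found.

```python
# Hypothetical assertion helper: on failure, the message points directly
# at the pallet and locations involved, shortening troubleshooting time.
def assert_pallet_at(locations, pallet, expected):
    actual = locations.get(pallet)
    assert actual == expected, (
        f"Pallet {pallet} expected at {expected}, but found at {actual}"
    )


# Passes silently when the state matches expectations.
assert_pallet_at({"P1": "DOCK-7"}, "P1", "DOCK-7")
```

A failing call raises an `AssertionError` whose message already contains everything needed to start remediation, so no one has to rerun the test just to learn what went wrong.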
Balance Between Execution Time and Readability
However, we cannot simply streamline a test or a test suite to maximize execution time. There must be a balance between execution time and readability. A test can execute within milliseconds, seconds, or minutes. When that test fails, can it still be understood by the person tasked with deciphering the reason for the failure? If troubleshooting a failing test takes too long, you lose your execution-time benefits and have simply traded them for longer troubleshooting time, which delays remediation. It could further complicate matters if the test has to be executed multiple times just to understand its purpose. This not only adds time to troubleshooting but also has a higher chance of “muddying the water” within the SUT, making the underlying issue harder to identify and understand.
Automated Test Reliability
In some scenarios, it may be difficult to find that balance and construct a test in this manner. In these complex scenarios, we want to ensure we create tests that are readable and reliable. Test reliability allows us to quickly trust what the test is telling us about the state of our SUT. That trust shortens the time it takes to identify the issue, or issues, and develop a proper plan to address them.
There is no true magic bullet for the perfect automated test, as all systems and their dependencies vary. The focus is on understanding the test’s purpose and finding an agreeable balance between how readable the test is after it has been written and how quickly it tells you whether the area it validates is in a good state or needs attention.
Are you interested in learning more about implementing test automation in your warehouse management system? Check out our success stories, more of our blogs, or learn more about the Cycle platform.
This post was written by:
Manager, Quality Engineering and Automation