Testing software that has already been tested during development and is now packaged for sale can seem like an unnecessary step. However, taking that packaged software straight off the shelf and integrating it into our product's ecosystem is rarely straightforward, and the software usually requires configuration before it functions as expected. In some cases the new software is a dependency of other off-the-shelf software or an extension of your current systems. The risk of application bugs, misconfiguration and other issues is high, and we won't know how the system will behave until we've plugged all of these interconnected applications together to see how they operate as a whole. Testing commercial off-the-shelf (COTS) software reduces that risk and ensures the best possible integration. Find out how we recommend testing packaged software in our latest blog.
INITIAL PURCHASE, INTEGRATION AND VALIDATION OF COMMERCIAL SOFTWARE
Upon purchase of commercial software, we begin integrating it into our product's ecosystem and then validate its workflows and processes to ensure successful operation and correct output. Once the software is set up, we need to manually validate that it is working as expected and that the other components it touches are still working properly as well.
COMMERCIAL SOFTWARE IS UPDATED ON A TIMELINE
Commercial software is generally updated on a regular basis. Updates can arrive as frequently as daily, as infrequently as yearly, or anywhere in between. If our system contains multiple commercial services or components, we should expect that their updates will not be aligned. This means we should validate each update as it lands to avoid the complex scenario of figuring out which of many updates is causing our issues. Each update should be followed by a validation confirming that existing operation has not been negatively impacted and that changes to the software are accounted for within the integration. Manual validation takes time to complete properly, and if updates occur across several pieces of software within a short period we will develop a backlog of queued validation work. Depending on the number of available personnel and their workload, running these validations can take an unexpectedly long time. Most importantly, the validation backlog delays bug fixes, security fixes and new features from becoming available within our product's ecosystem.
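One lightweight way to keep that backlog visible is to track, per component, the version currently installed against the last version we actually validated. The sketch below shows the idea in Python; all component names and version numbers are hypothetical examples, not real products.

```python
from dataclasses import dataclass

@dataclass
class Component:
    # Hypothetical COTS component in our ecosystem.
    name: str
    installed_version: str
    last_validated_version: str

    def needs_validation(self) -> bool:
        # An update is pending validation whenever the installed
        # version has moved past the last version we signed off on.
        return self.installed_version != self.last_validated_version

components = [
    Component("reporting-engine", "4.2.1", "4.2.1"),
    Component("payment-gateway", "2.9.0", "2.8.3"),
    Component("document-converter", "1.1.5", "1.0.9"),
]

backlog = [c.name for c in components if c.needs_validation()]
print(backlog)  # ['payment-gateway', 'document-converter']
```

Even a simple report like this makes it clear which components are running unvalidated updates, so the team can prioritize validation work instead of discovering the gap after a failure.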
BUGS ARE INHERENT IN RELEASED COMMERCIAL SOFTWARE
All software has bugs, which can occur for any number of reasons: development constraints, missing requirements during development, simple oversight or build-tool artifacts. Once a commercial software product is sold, that is generally not its final form. Updates may come frequently or infrequently, but they will come. This is expected, and in general we wouldn't want to purchase software that is no longer receiving updates; if we did, we would have to accept the software as-is and work around its shortcomings as well as any defects it contains. For software that is supported with updates, a given change may have no impact on our integration and may not require reconfiguration. However, the risk is there with every update. We won't know whether an update has a positive, negative or neutral impact on our product's ecosystem until we validate that software's processes. This is further complicated when multiple commercial software solutions work together. How can we identify the point of failure with so many possible areas of origination? What if a bug only manifests from the combination of multiple integration points and configurations?
WHY MANUAL TESTING CANNOT BE THE ONLY SOLUTION
We need to identify these issues as soon as viable to reduce the negative impact they could have on our business and our users. After an update, manually checking the software's expected processes is generally the first step. If a bug is found, more time is needed to identify the cause and open communication with the software's vendor. When we develop software ourselves we are our own vendor and blockers are limited, but with commercial software we are limited in how far we can debug a problem and can generally only make best guesses at the root cause. The vendor then needs to help us resolve the issue. We hope for the best turnaround time, but this dialogue can go back and forth, and if not followed up on quickly it can easily turn into a delayed resolution. During this time we could have multiple bugs present in varying states of resolution, requiring us either to manage the ecosystem with the bug present or to revert to the previous version of the software. Managing the system with the bug present will be difficult depending on how critical the bug is, and we will need to track the impacted area whenever updates are applied to software integrated with it. If we revert to the previous version, then once finished we need to manually validate that the previous operating behavior is back in place before resuming normal operations.
AUTOMATING THE INTEGRATION OF COMMERCIAL SOFTWARE
When we start to think about investing time in automating tests, we shouldn't fall into the trap of trying to automate everything. As we know, code is an investment that comes with a maintenance cost. Test automation is no different, and automating everything increases that cost. Automating nothing, however, increases the time needed to identify risk, since we have to manually check the processes every time. Automation, when done right, can increase the speed, efficiency and repeatability of our testing process. This leaves more time for diving deeper into areas that are harder or less critical to automate, and it expands the areas covered, which further mitigates risk. The more areas we cover with testing, the more likely we are to uncover unexpected behavior. We'll have a better understanding of how the system operates as a whole and feel confident in our ability to meet expectations and provide deliverables within a reasonable timeframe.
To start automating the right areas, we first need to identify the critical flows within the system's integrations and the likely points of failure. These points should be automated first, as they provide the quickest feedback on whether the system is operating as expected. They also provide the first signs of failure, allowing faster identification and remediation of conflicts. Once the points of failure are covered with automated tests, we can turn to automating base functionality within our commercial software. This sets up our next set of tests, which take all or most of those small functional tests and combine them into a full end-to-end test, allowing us to validate the complete workflow of our product's ecosystem. A point to consider when identifying tests to automate is return on investment: cover areas where the cost of a failure would exceed the cost of the time to automate. By maintaining a healthy, focused mix of manual and automated testing we can find the right balance of test coverage and risk mitigation for our production system.
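The pattern of small functional tests composed into one end-to-end test can be sketched in Python. The order workflow below is a hypothetical stand-in for calls into an integrated COTS component; the step functions and the flat pricing rule are illustrative assumptions, not a real vendor API.

```python
# Hypothetical stand-ins for steps in an integrated COTS workflow.

def create_order(items):
    # Step 1: create an order from a list of item codes.
    return {"items": items, "status": "created"}

def price_order(order):
    # Step 2: price the order (assumes a flat unit price of 10).
    order["total"] = len(order["items"]) * 10
    return order

def submit_order(order):
    # Step 3: submit the priced order for fulfillment.
    order["status"] = "submitted"
    return order

# Small functional tests validate each step in isolation...
def test_create_order():
    assert create_order(["a"])["status"] == "created"

def test_price_order():
    assert price_order(create_order(["a", "b"]))["total"] == 20

# ...and the end-to-end test chains them to cover the critical flow.
def test_order_flow_end_to_end():
    order = submit_order(price_order(create_order(["a", "b", "c"])))
    assert order["status"] == "submitted"
    assert order["total"] == 30
```

Because the end-to-end test reuses the same steps the functional tests exercise, a failure in a single step shows up in its focused test first, pointing directly at the point of failure instead of only at the broken workflow.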
ULTIMATE RETURN ON INVESTMENT BY AUTOMATING COMMERCIAL SOFTWARE
To maximize the value of purchased commercial software, we need to keep our return on investment on track. That means identifying areas where we can invest in processes that increase efficiency and speed of delivery. Test automation against commercial off-the-shelf software increases efficiency and visibility into the production ecosystem, allowing for better risk management and preparing us for change at a faster pace.
This post was written by:
Cycle Product SDET
Seth enjoys learning, working through code problems and automating. He has worked in markets from healthcare to fintech and is now focused on supply chain. Currently Seth's focus is on supporting the Cycle platform, from bug fixes to manual validation and automated testing. His purpose is to collaborate with the team to build and maintain quality for our users.