We’ve paid homage to Jenkins in a previous blog post, singing its praises for being open source and relatively easy to use; now let’s look at the requirements and a high-level overview of how to set it up. First, we should level-set on key definitions before diving in…
Continuous integration is a development practice where teams regularly merge code into a shared repository; after each merge, the code can be automatically built into a release candidate and then tested. Continuous testing is the process of executing automated tests, often after a code merge or on a schedule, to obtain feedback as soon as possible on any business risks associated with release candidates. And finally, continuous delivery is a development discipline where software is built in a way that allows push-button deployment of release candidates into the real world at any given time. Let’s smash these together into an unwieldy but industry-recognized acronym: CI/CT/CD.
The scale and power of your hardware is directly related to what you are testing. At the very least, there should be a dedicated controller server (historically called the master) where Jenkins resides and one or more dedicated agent machines where the tests are executed. You will also need a code repository such as GitHub or BitBucket, and of course if the application under test talks to servers during the course of its operation, then those will be in the loop as well. Containerized environments may be worth considering for your CI/CT/CD setup, though we won’t be discussing them in this basic overview.
This blog post is in no way suggesting that Jenkins, or any similar solution, should be adopted by every single testing team for one hundred percent of their testing. In the event it does make sense for your team’s testing needs, you will need buy-in from the personnel involved, and of course they will also need the technical know-how to implement and maintain Jenkins and any additional infrastructure.
There should be one or more compelling reasons for your team(s) to use Jenkins beyond “every tech company is doing it”. Valid reasons include: the ability to receive feedback on defects quickly, more efficient overall test execution, and a faster time to deployment as a result of more streamlined automated testing. CI/CT/CD sounds like a cool acronym, but is continuous testing with Jenkins fit for your testing purposes now and/or in the future?
Your team will need a shared repository where the source code for the application under test is housed. GitHub and BitBucket are popular cloud-hosted options. One of the few must-have configuration settings in Jenkins is a link to this repository in the “Branch Sources” section.
When should an automated test or suite of tests be kicked off? After determining which tests to automate in your CI/CT/CD setup, this is the next most pressing question for your testing team. The most common trigger is a code merge, though there are other options, such as runs on a timed schedule or tests kicked off by an event within an issue-tracking tool like JIRA.
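Merge-based and scheduled triggers can both be declared directly in a Jenkinsfile. The sketch below shows the two built-in declarative trigger directives, `pollSCM` and `cron`; the polling interval and schedule values are illustrative choices, not recommendations:

```groovy
pipeline {
    agent any
    triggers {
        // Check the repository for new commits every five minutes;
        // a push webhook from GitHub/BitBucket is preferable when available.
        pollSCM('H/5 * * * *')
        // Alternatively, run on a fixed schedule (e.g. nightly around 2 AM):
        // cron('H 2 * * *')
    }
    stages {
        stage('Test') {
            steps {
                echo 'Automated tests would run here'
            }
        }
    }
}
```

The `H` token lets Jenkins spread the load by hashing the job name into a minute value, rather than having every job fire at the same instant.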
Installation and Jenkinsfile Setup
Jenkins supports a wide variety of operating systems and platforms. One style of installation is to download the WAR file (available at Jenkins.io), run it with “java -jar jenkins.war”, and then browse to “http://localhost:8080” to start the setup wizard for the Jenkins web app. On Red Hat Linux, Jenkins can be installed by adding the Jenkins package repository and then typing “yum install jenkins” at the command line. There are roughly thirty-seven bagillion plugins available for Jenkins, the most essential of which is the Blue Ocean plugin, which makes for an easy-to-use interface and a slick demo – especially if you are showing off Jenkins to non-technical personnel. If you need more information on installing Jenkins in your environment, refer to the installation documentation on the Jenkins website.
In your code repository, there should be a file called “Jenkinsfile” in the root of the project directory. This file is where the Jenkins “pipeline” is defined: how to automate the building, testing, and/or deployment of the source code, and which tests run sequentially versus in parallel. Automate as much as possible, including preparation of the test environment, verification of the environment and connections, configuration management, data setup and housekeeping, and of course the test suite itself and the resulting test metrics. As an example, licensees of our continuous testing solution, Cycle, can automate everything by executing the command line version of Cycle in the Jenkinsfile for each task. Each execution of “cycle-cli.exe” specified in the Jenkinsfile will have different command line switches and run different feature files depending on the task being performed. A test automation solution worth its salt will let you do more than just run regression tests. Setting up the Jenkinsfile requires more than simply skimming this handy overview, so I encourage you to refer to “Using a Jenkinsfile” in the Jenkins User Documentation.
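To make the shape of such a pipeline concrete, here is a minimal declarative Jenkinsfile sketch. The stage layout, feature file names, and the bare `cycle-cli.exe` invocations are hypothetical placeholders standing in for your real environment-setup and test commands, not actual Cycle CLI syntax:

```groovy
pipeline {
    agent any
    stages {
        stage('Prepare Environment') {
            steps {
                // Environment verification and data setup would go here,
                // e.g. running a setup feature file on a Windows agent.
                bat 'cycle-cli.exe setup_environment.feature'
            }
        }
        stage('Tests') {
            // Independent suites can run in parallel to shorten feedback time.
            parallel {
                stage('Smoke') {
                    steps { bat 'cycle-cli.exe smoke_tests.feature' }
                }
                stage('Regression') {
                    steps { bat 'cycle-cli.exe regression_tests.feature' }
                }
            }
        }
    }
}
```

Splitting the suites into parallel stages is what lets a merge-triggered run return a verdict in minutes rather than hours, which is the whole point of CI/CT/CD feedback.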
Jenkins, and especially the Blue Ocean interface, makes managing test runs easy. All tests are neatly organized under the “Pipelines” heading, and within each test you can see the test runs along with related information such as whether the run was successful, how it was triggered, and a variety of analytics.
Your team should determine in advance which test metrics and artifacts are most important, and what medium should be used to receive these results right after a test run. In the Jenkinsfile, you can set up where test reports and artifacts are published and how team members are notified. In Blue Ocean, reports can be found under the “Artifacts” heading when looking at an individual test run.
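Publishing and notification are typically handled in a `post` section of the Jenkinsfile. The sketch below uses the standard `junit`, `archiveArtifacts`, and `mail` steps; the `reports/` path and the email address are illustrative assumptions, and the `mail` step presumes the Mailer plugin is configured with an SMTP server:

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                echo 'Run tests that write results into the reports/ folder'
            }
        }
    }
    post {
        always {
            // Publish JUnit-style results and keep raw files with the run;
            // these surface under the "Artifacts" heading in Blue Ocean.
            junit 'reports/**/*.xml'
            archiveArtifacts artifacts: 'reports/**', fingerprint: true
        }
        failure {
            // Notify the team only when something breaks.
            mail to: 'team@example.com',
                 subject: "Build ${currentBuild.fullDisplayName} failed",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

Running the publishing steps under `always` ensures reports are captured even when the test stage fails, which is exactly when your team most needs them.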
What does this have to do with Cycle?
While Cycle is a system-agnostic continuous testing solution, many of our users in the supply chain space use it to test Blue Yonder enterprise warehouse management software.
This post was written by:
Technical Pre-Sales Consultant
James has been working in software pre-sales and implementation since 2000, and has more recently settled into focusing on technical pre-sales.