
Getting Started: Setting Up a Unified DevOps Toolchain with ProGet, BuildMaster, and Otter

Finding the right tool for each job usually means using multiple tools. For software releases, linking those tools into a reliable, clearly defined toolchain is essential to a modern DevOps implementation.


NOTE:
This tutorial involves multiple products. We recommend running through the getting started tutorials for BuildMaster, ProGet, and Otter first, to build a basic understanding of concepts that are mentioned here but not explained in detail.

In this example we start with Jenkins to build an application, then move that application through a simple release pipeline with BuildMaster. A copy of the artifact produced by Jenkins will also be archived with ProGet, and infrastructure synchronization between BuildMaster and Otter will be enabled for monitoring server configuration.

While this is a very simple example, it can easily be expanded to include other links in a toolchain: for example, linking an issue tracker so that issues are closed and commented on automatically, or adding a server monitoring tool to every production server.

Storing an Artifact

We'll be using Jenkins to start the process. In many cases, setting up a continuous integration (CI) server like Jenkins is the first step in automating releases. Unfortunately, for a lot of teams, this is as far as the automation goes.

The process in Jenkins is set up so that when a build is initiated, it pulls source from GitHub, builds the code, and then archives everything as an artifact. This artifact would then be manually transferred to another standalone deployment tool, or sometimes just manually deployed to production servers. Because manual processes introduce opportunities for human error, automating the release is advantageous.

We add a step to the Jenkins build operations so that Jenkins automatically uploads the files to ProGet as a Universal Package. While this step isn't strictly necessary, there are several benefits to using a binary repository like ProGet:

  • Creates a locally stored copy in case of Jenkins server failure or downtime
  • Ensures only production-ready builds are moved into the release pipeline
  • Allows retention policies to be set
  • Enables user restrictions for compliance needs

The setup for storing Jenkins artifacts in ProGet is straightforward and involves only a few steps; these steps are explained in greater detail in a short tutorial:

  1. Create an API key in ProGet
  2. Install the ProGet plug-in on your Jenkins server
  3. Set the API key, ProGet URL, and credentials
  4. Create a new feed in ProGet for Jenkins to save artifacts to
  5. Add the ProGet package upload build step in Jenkins with the ProGet feed details

Now, when we create a successful build in Jenkins, the files are saved in ProGet as a Universal Package.

Releasing an Artifact to Production

While CI tools like Jenkins compile code and can run basic automated tests, they aren't the best solution for human testing or more detailed automated testing. The best value from CI comes when it's coupled with other tools.

An Application Release Automation (ARA) solution like BuildMaster is used for more than just deploying code to production; it coordinates all of the advanced components of a release through various stages in order to meet regulatory, compliance, and business needs.

For this example, the release pipeline will be very simple. It will have four stages that a release package moves through, and it will include a quality-control gate and an email notification.

BuildMaster is normally configured for a much more complex release pipeline with many different environments and stages. Some of the most common stages are:

  • A pre-production staging stage, ready to go live at any time
  • Multi-server testing environments for human testing
  • Production backup

We start by creating a new application called Accounts.

Since this is a web application, we also create a web configuration file as a BuildMaster asset that will be deployed along with our artifact.

Next, a plan called Import Package is created to import the artifact from ProGet.

If you decide not to use a binary repository to store artifacts, you can pull the artifact directly from Jenkins.

Clicking on the Import Package plan will open the BuildMaster visual editor and allow us to add steps to import the artifact from ProGet.

  • Add a general block and set the server context (servers can also be set at the pipeline stage if desired)
  • In the general block, add the Get Package operation. We set the operation fields so that it pulls the latest package stored for the specific ProGet group and feed, and we also set the ProGet URL and credentials. You can also create a resource credential, which allows users to access ProGet packages without sharing credentials. We're using the latest package created; you could also choose a specific package version from ProGet or set it as a variable with a release template.
  • Last, we create an artifact in BuildMaster with a Create Artifact operation, which zips all of the files and saves them to the artifact library for later use (see the sketch after this list).
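
The visual editor generates OtterScript behind the scenes. A minimal sketch of what an import plan like this might look like in text mode follows; the operation and parameter names are approximations, and the server, feed, group, and credential names (AccountSV, JenkinsBuilds, WebApps, ProGet) are illustrative assumptions, so check them against what your own visual editor produces.

    # Import Package plan (approximate OtterScript; names are illustrative)
    for server AccountSV
    {
        # Pull the newest Accounts package from the ProGet feed; a specific
        # version could be requested instead of the latest, or supplied
        # through a release template variable
        ProGet::Get-Package
        (
            Credentials: ProGet,
            Feed: JenkinsBuilds,
            Group: WebApps,
            Name: Accounts,
            Directory: $WorkingDirectory
        );

        # Zip the imported files and save them to BuildMaster's artifact library
        Create-Artifact Accounts;
    }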

In this example, the imported files are only the artifact we just created, and they could have been deployed directly from ProGet. However, in a normal release setup, a release usually includes much more than just one ProGet package. It's common for several ProGet packages to be pulled individually and then packaged together for release.

After creating the Import plan we’ll need to adjust the release pipeline to use that plan.

BuildMaster pre-creates a simple pipeline with Integration, Testing, and Production stages. We’ll modify this pipeline first by adding a new stage.

We name the stage Import, and set the position to zero, so that it happens first in the pipeline. Last, we select the automatically deploy to the next stage option and save the stage.

Selecting a target for the Import stage allows us to set the plan to the Import Package plan we just created.

Note that the other three stages in this pipeline have a plan set called Deploy Accounts; BuildMaster automatically created this plan for the application and assigned it to the different pipeline stages. We'll edit this plan to deploy a release package.

Plans > Deploy Accounts gets us to the visual editor for the preset plan. The plan doesn't do much, so we can clear it out and start over.

Again, we start with a general block and a short description, except this time we won't set the server context.

We first add an Ensure App Pool operation and then a Stop App Pool operation, to make sure that an application pool exists in IIS and then stop it so we can deploy to it. Next, we deploy our artifact, and then our configuration file. Last, we restart the application pool.
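
As a rough illustration (not the exact plan), the deployment steps described above might look something like this in OtterScript text mode; the operation names are approximations, and the application pool name, artifact name, and target path are illustrative assumptions:

    # Deploy Accounts plan (approximate OtterScript; names and paths are illustrative)
    {
        # Make sure the application pool exists in IIS...
        IIS::Ensure-AppPool
        (
            Name: AccountsAppPool
        );

        # ...then stop it before copying new files
        IIS::Stop-AppPool AccountsAppPool;

        # Deploy the artifact created by the Import Package plan, followed by
        # the web configuration file stored as a BuildMaster asset
        Deploy-Artifact Accounts
        (
            To: C:\WebApps\Accounts
        );

        # Restart the application pool once the new files are in place
        IIS::Start-AppPool AccountsAppPool;
    }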

If this were a manual operation, all of these steps would need to be done each time the application needed to be updated, which is tedious and creates opportunities for errors. For example, not stopping the application pool before deploying new files will crash the application, which then has to be restarted, creating extra downtime during redeployment.

Now, we’ll go back to the release pipeline to set up a quality gate, an email notifier, and set server context.

On the testing stage gate, we add an approval and select user approval. This sets the stage so that before any release artifact can be promoted to it, it must be approved by a specific user; in this case, an Admin user.

Next, we add an event listener that will send an email to specific people when the application has been deployed to the production environment. Email notifications are most commonly used at staging stages, to let the Ops team know that there is a package ready for production, and at testing stages, to let specific testers know that there is a new version of the software ready to test.

At the production stage, adding a post-deployment event allows us to select email notification and enter the email address(es) of the individual(s) we want to notify when a new version is successfully deployed to production.

Last, we need to edit the integration, testing, and production stages to set the server context. For integration and testing, we continue to use the AccountSV server, but for production we'll use the option 'All servers in target environment'. This targets any server in BuildMaster with its environment set to "Production".

Now we have plans to import our artifact and move a release package through a pipeline that deploys to multiple live servers, so it's time to create a release package. Clicking Releases > Create Release does this. You can set a release number; BuildMaster defaults to 0.0.0.

After clicking Create Package, BuildMaster pulls the package from ProGet, creates an artifact from it, and then automatically promotes it to the integration stage.

We can see that the next stage is orange, indicating that an approval is needed before the release package can be promoted to the testing stage. We click the approval box and can then promote to testing.

Once the package is moved to production we’ll receive an email notification.

Scaling Beyond Build and Deploy

Of course, the infrastructure involved in an application release needs to be monitored and maintained. Coupling BuildMaster with Otter helps manage these needs.

A key feature of Otter, unlike many other Infrastructure as Code tools, is that it can monitor servers for drift, ensuring that servers stay configured the way they are supposed to be. This is perhaps most important in production environments, since configuration drift there can lead to unexpected behavior, downtime, or security vulnerabilities.

In BuildMaster, we can see the servers we were working with for the Accounts application: a BuildMaster server, two remote production servers, and one server that is, so far, unused.

To get this same infrastructure into Otter, we export the configuration from BuildMaster and then import it into Otter. This creates synchronized infrastructure for an application between the tool that deploys it and the tool that monitors the servers it was deployed to.

We can use a simple ensure operation to monitor for drift. First, we create a server role (Accounts) and then add the two remote production servers to it.

We then create a configuration plan for that role so that it applies to all servers with that role. In this case, we use Ensure App Pool as an example.
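
A minimal sketch of what that role's configuration plan could look like in OtterScript follows; the operation and pool names are approximations of the visual editor output, not exact syntax:

    # Accounts role configuration plan (approximate OtterScript)
    # Otter compares this desired state against every server in the role
    # and reports drift whenever the actual configuration differs
    IIS::Ensure-AppPool
    (
        Name: AccountsAppPool
    );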

After adding the configuration, Otter checks all servers in that role and finds that they have the defined configuration (this is expected, as all servers in that role were deployed to with the same Ensure App Pool operation). Otter will now monitor these servers continually to make sure that AccountsAppPool matches its defined configuration.

However, this can cause issues since BuildMaster also uses the Ensure App Pool operation.

If a change in the BuildMaster plan weren't also made in the Otter plan, then any time a new deployment occurred, the servers would be in drift. Likewise, if changes were made to the Otter configuration but not in BuildMaster, the servers would be in drift as soon as a new deployment occurred.

This can easily be managed by setting up Automatic Infrastructure Sync between BuildMaster and Otter. This allows Otter to monitor and update the infrastructure that BuildMaster uses. More importantly, it allows us to delete the Ensure AccountsAppPool operation from the BuildMaster deployment plan, because the application pool will be monitored, and updated when needed, by Otter.

Setting up Automatic Infrastructure Sync also allows us to add more production servers by assigning them the proper role and environment, so that they are production-ready for the next deployment. When a new server is added to the role, the entire role is listed as drifted, since one server is not configured as expected.

This is easily fixed by creating a remediation job. When the job runs, only the drifted server is acted upon, bringing it into the proper configuration, at which point it is ready for the next application release.

Of course, in this example we used only the production environment for simplicity; normally, each environment would be represented in Otter. It's common to have individual roles based on environment and/or the type of workload being deployed to the server (an API, a web app, or a failover instance).