Latest News

How to Build a Continuous Integration and Continuous Deployment Pipeline for Your Enterprise Middleware Platform

Chanaka Fernando, Associate Director at WSO2, discusses how CI and CD are key to ongoing enterprise middleware platform development

Continuous integration (CI) and continuous deployment (CD) are much talked-about ideas in enterprise software development today. With the rise of microservice architecture (MSA), CI/CD has become a mainstream process within enterprises. If you are familiar with microservice architecture, you will no doubt have heard about greenfield and brownfield integrations, where you start your microservices journey either from scratch or from an existing enterprise architecture (which is the case 80% of the time).

According to a recent survey, more and more organisations are moving ahead with microservice architecture even though they accept that it is hard to maintain and monitor. The survey highlights that the advantages of MSA outweigh the disadvantages.

Likewise, CI/CD is tightly coupled with MSA and with adopting a DevOps culture. Given the dominance of MSA within enterprises, CI/CD has become an essential part of every software development lifecycle. With this shift towards MSA, DevOps, and CI/CD, other parts of the brownfield integration cannot stay out of these waves. These include:
• Enterprise Middleware (ESB/APIM, Message Broker, Business Process, IAM products)
• Application Server (Tomcat, WebSphere)
• ERP/CRM software (mainly COTS systems)
• Home grown software

It might not be practical to implement CI/CD processes for every software component mentioned above, so here I’ve outlined how you can leverage the advantages of a CI/CD process for enterprise middleware components.

Leveraging CI/CD processes within enterprise middleware components

Let’s start with one of the most common enterprise middleware products, an Enterprise Service Bus (ESB). ESBs provide a central point that interconnects heterogeneous systems within your enterprise and adds value to your enterprise data through enrichment, transformation, and many other functions. One of the main selling points of ESBs is that they are easy to configure through high-level Domain Specific Languages (DSLs) like Synapse, Camel, etc.

If we are to integrate ESBs with a CI/CD process, we need to consider two main components within the product:
• ESB configurations which implement the integration logic
• Server configurations used to set up the runtime in a physical or virtualised environment

Of the above two components, the ESB configurations go through continuous development and change more frequently, so automating their development and deployment is far more critical. Going through a develop, test, deploy lifecycle manually for every minor change takes a lot of time and, without automation, leads to many critical issues.

Another important aspect of automating the development process is the assumption that the underlying server configurations are unaffected by these changes and remain the same. Making this assumption is a best practice, because having multiple variables makes it very hard to validate the implementation and complete the testing. The process automates the development, testing, and deployment of integration components as follows:

1. Developers use an IDE or an editor to develop the integration components. Once they are done with the development, they will commit the code to GitHub.
2. Once this commit is reviewed and merged to the master branch, it will automatically trigger the next step.
3. A continuous integration tool (e.g. Jenkins, Travis CI) builds the master branch, creates a Docker image containing the ESB runtime and the built components, and deploys it to a staging environment. At the same time, the build artefacts are published to Nexus so that they can be reused during product upgrades.
4. Once the containers have started, the CI tool triggers a shell script that runs the Postman collections using Newman, which is installed on the test client.
5. The tests run against the deployed components.
6. Once the tests have passed in the staging environment, Docker images are created for production and deployed there.
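The six steps above can be sketched as a single pipeline script. Everything here is illustrative: the image name, registry, Maven build, collection file, and port are assumptions rather than part of any specific product, and the run() wrapper only echoes each command so the flow can be read (and dry-run) end to end.

```shell
#!/usr/bin/env bash
# Sketch of steps 3-6: build, bake into a Docker image, test in
# staging with Newman, then promote the same image to production.
set -euo pipefail

# Dry-run wrapper: echoes each command. Replace the body with "$@"
# to execute the commands for real.
run() { echo "+ $*"; }

IMAGE="registry.example.com/esb:${BUILD_NUMBER:-dev}"

# 3. Build the merged master branch, bake the built components into
#    the ESB runtime image, and publish the artefacts to Nexus.
run mvn -B clean package
run docker build -t "$IMAGE" .
run mvn -B deploy

# Deploy the image to the staging environment.
run docker run -d --name esb-staging -p 8280:8280 "$IMAGE"

# 4-5. Drive the Postman collection with Newman against staging.
run newman run tests/esb-regression.postman_collection.json \
    --env-var baseUrl=http://localhost:8280

# 6. On green tests, retag the very same image for production.
run docker tag "$IMAGE" registry.example.com/esb:stable
run docker push registry.example.com/esb:stable
```

Promoting the already-tested image (rather than rebuilding for production) keeps the artefact that reaches production identical to the one that passed the tests.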

Automating the update of the server runtime component

The above process can be followed for the development of middleware components, but the runtime itself receives patches, updates, and upgrades fairly frequently, given customer demands and the number of features these products carry. Therefore, you should consider automating the update of the server runtime component as well.

The way updates, patches, and upgrades are delivered varies slightly from vendor to vendor, but there are three main methods:
• Patches, which must be installed and the running server restarted
• New binaries, which must replace the running server
• In-flight updates, which update (and restart) the running server itself
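A deployment script can branch on whichever of these three methods your vendor uses. This is a sketch only; the function and the echoed actions are placeholders, not real vendor tooling.

```shell
#!/usr/bin/env bash
# Sketch: choose the automation steps based on the vendor's delivery
# method. The actions are echoed placeholders; the real steps are
# vendor-specific.
apply_update() {
  case "$1" in
    patch)    echo "install patch files, then restart the running server" ;;
    binary)   echo "stop server, swap in new binaries, start server" ;;
    inflight) echo "invoke the vendor's in-place updater (server restarts itself)" ;;
    *)        echo "unknown update method: $1" >&2; return 1 ;;
  esac
}

apply_update patch
```

Whichever branch applies, the output of this step feeds the same downstream flow: bake the updated runtime into a fresh image and push it through staging tests before production.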

Depending on the method by which you receive updates, you need to align your CI/CD process for server updates accordingly. This process will run far less frequently than the development process described above.

CI/CD process flow for server updates

Outlined below is the process flow:

1. An important first step in automating the deployment is extracting the configuration files into templates which can be populated through an automated process (e.g. shell scripts, Puppet, Ansible). These templates can be committed to a source repository such as GitHub.
2. When a configuration change, update, or upgrade is required, it triggers a Jenkins job which takes the configurations from GitHub, and the product binaries (if required), product updates, and ESB components from a Nexus repository maintained within your organisation. From these files, a Docker image is created.
3. This Docker image is deployed to the staging environment and the containers are started according to the required topology or deployment pattern.
4. Once the containers have started, the test scripts (Postman collections) are deployed to the test client and the testing process starts automatically via Newman.
5. Once the tests have run and the results are clean, the process moves to the next step.
6. Docker images are created for the production environment, the instances are deployed, and the Docker containers are started based on the production topology.
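Step 1 of the flow above, turning server configuration files into templates, can be as simple as a sed-based rendering step in the CI job. The file name (axis2.xml, a WSO2-style name used purely for illustration) and the ESB_* placeholders are assumptions, not real product settings.

```shell
#!/usr/bin/env bash
# Sketch of step 1: a templated server config (the kind committed to
# GitHub) is rendered for the target environment before the Docker
# image is built. File name and placeholders are illustrative.
cat > axis2.xml.tmpl <<'EOF'
<parameter name="hostname">@ESB_HOSTNAME@</parameter>
<parameter name="port">@ESB_PORT@</parameter>
EOF

# These values would normally come from the CI job's environment
# settings for staging or production.
ESB_HOSTNAME="esb-staging.internal"
ESB_PORT="8280"

# Substitute the placeholders to produce the environment-specific file.
sed -e "s|@ESB_HOSTNAME@|$ESB_HOSTNAME|g" \
    -e "s|@ESB_PORT@|$ESB_PORT|g" \
    axis2.xml.tmpl > axis2.xml
```

Tools such as Puppet, Ansible, or envsubst can replace the sed step; the point is that every environment-specific value lives in the template inputs, not baked into the image by hand.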

With the above process flows, you can implement a CI/CD process for your middleware layer. Even though you could merge these two flows into a single process with a condition to branch into the two paths, keeping them separate makes them easier to maintain. Finally, if you are going to implement this type of CI/CD process for your middleware ESB layer, make sure you choose an ESB runtime with the following characteristics:
• Small memory footprint
• Quick start-up time
• Immutable runtime
• Stateless

For more information, please visit wso2.com