A late introduction to continuous integration and deployment

It's been one of the buzzwords around development offices in recent years, and everyone wants a piece of the efficiency DevOps promises. But what are the benefits, and how do you get started? It's not as difficult as it may seem, though it might not always deliver the promised savings as quickly as you'd like.

Simply put, continuous integration is the regular pulling together of development streams, so branches are never miles ahead of or behind each other, with the idea that there should be fewer conflicts when performing merge requests, or when rebasing ahead of one. Continuous deployment is as it sounds: getting code out there as soon as possible after an integration. It sounds dangerous, but it shouldn't be if you're doing things properly.
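As a concrete sketch, that regular pulling together can be as simple as routinely rebasing a feature branch onto the mainline. The remote and branch names here are illustrative, not a prescription:

```shell
# Keep a feature branch close to the mainline so the eventual merge
# request has few, small conflicts. "origin/main" is an assumption --
# substitute whatever your mainline branch is actually called.
sync_with_mainline() {
    git fetch origin            # pick up the latest mainline commits
    git rebase origin/main      # replay local work on top of them
}
```

Run something like this at least daily and conflicts surface one small batch at a time, rather than all at once when the merge request goes in.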

I should caveat this by saying continuous deployment isn't always about going to production as soon as an integration is complete (though that does happen in some places); it's generally about getting code to a test or staging environment to be verified before it goes to production. The places where releases go straight to production have high code coverage from their unit tests, and a full suite of integration tests to run following a deployment. Not everywhere has this luxury, which is why there's an interim environment for deployments. Continuous deployment to production is also most common in places with a microservice architecture, where changes are usually small and non-breaking, and where any breaking change runs alongside the older version to give consuming services time to update.

Continuous integration is covered in depth in a number of places, but I've found that a good starting point is a simple pipeline that runs the unit tests (you have unit tests, right?) as part of the merge request, or just after it as a worst case, so failing tests are caught early, before they become production issues. Again, this relies on there being good code coverage for the tests to be effective. Failures can then notify developers via e-mail, Slack, or another preferred communication method, and they can address the issue while it's fresh in their heads, rather than days or weeks down the line. The time saved by correcting issues with fresh working knowledge, compared with what it would take after a weekend, holiday, or another sprint once the work is forgotten, is quickly realised.
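As a sketch, the gate can be a tiny script the CI system runs on each merge request. The test command and the notification step are assumptions, to be swapped for your own runner and channel:

```shell
#!/bin/sh
# Hypothetical merge-request gate: run the unit tests and exit non-zero
# on failure so the CI system can block the merge and fire off a
# notification (e-mail, Slack webhook, whatever the team prefers).
run_gate() {
    # "$@" is the command that runs the unit tests -- any runner that
    # exits non-zero on failure will do (make test, phpunit, npm test).
    if "$@"; then
        echo "unit tests passed: merge may proceed"
    else
        echo "unit tests failed: blocking merge"
        # The notification hook would go here, so the developer sees
        # the failure while the change is still fresh.
        return 1
    fi
}
```

The CI job then simply calls `run_gate make test` (or equivalent) and treats a non-zero exit as a failed, blocked merge.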

On the back of the unit tests, the integration tests can run, if they exist, and highlight any issues with the interface, again alerting the team to any problems. If these can run in parallel across browsers, devices, platforms and so on, then an overnight build can become a several-times-a-day build, allowing a faster feedback loop on development and faster homing in on the desired solution for the client.
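A minimal sketch of that fan-out in shell, where `run_suite` is a stand-in for whatever actually drives your integration tests, and the browser list is purely illustrative:

```shell
#!/bin/sh
# Run one integration suite per target in parallel, and only carry on
# once every job has finished.
run_suite() {
    echo "integration suite ran on $1"
    # the real invocation would go here, e.g. npm test -- --browser "$1"
}

run_all() {
    for target in "$@"; do
        run_suite "$target" &   # one background job per browser/device
    done
    wait                        # block until every parallel job is done
}
```

Calling `run_all chrome firefox safari` then turns a serial, overnight run into roughly the runtime of the longest single suite.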

Continuous delivery is where I've found a lot of the time savings in my recent development and testing cycles. I've become a developer in a non-technical environment, where deployments to production and other environments are done by FTP. The company is an e-commerce firm that has historically used external agencies for development, but has been hamstrung by turnaround times for small pieces of work due to the agencies' other work pressures. Changes are deployed by uploading only the files which have been updated, which means navigating several directories and selecting files by hand. This takes around 2 minutes as a minimum, but I've had some take 10 minutes depending on the changes in that release. As there's a test and a production environment, that can be 20 minutes for a release.

Using continuous deployment to the test environment, I've been able to drop this to an average of 1 minute 20 seconds. All the script does is SSH into the server, navigate to the file location, and pull the updates from Git. An 86.6% time saving at the extreme end of the scale, and a 33% saving at the lower end, is certainly worth it. It took me around 4 hours to set up the script and iron out the nuances of disjointed documentation to get it up and running. If I average a saving of 1 minute per release, I'll need 240 releases to recoup that setup time from one environment. With production added, there are at least 2 releases per change, so 120 changes or fixes. Not a lot at all, and there's no forgetting about one file which causes tests to fail, as everything comes from source control.
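For illustration, the whole deployment step reduces to a single remote command. The host, path, and branch below are placeholders, not the real environment:

```shell
#!/bin/sh
# Build the command executed on the server: move to the release
# directory and pull the latest changes from Git.
deploy_cmd() {
    # $1: release directory on the server, $2: branch to deploy
    echo "cd '$1' && git pull origin '$2'"
}

# The deployment itself is then one line, e.g.:
#   ssh deploy@test.example.com "$(deploy_cmd /var/www/site main)"
```

Compared with hand-picking changed files over FTP, there's nothing to forget: whatever is in the branch is what gets deployed.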

How I've implemented this is a topic for another post, but if the savings are real for a non-tech firm, they are potentially massive for development agencies of any size. And once you have one pipeline up and running, setting up the next becomes faster, and so does the return on investment.