Containerisation: not just for shipping!

Blake Wilkinson
22/07/2021
I’ve been in I.T. for more years than I care to admit.  Consequently, I’ve seen software delivery and system management practices come and go.  However, none has been as seismic as containerisation, which has changed how I deliver and run software. 

There are horror stories of trying to release a change to one program, only to discover that another program (which has been happily chugging away in the background) has fallen over because they share a dependency.  Or maybe a defect is released with a piece of software (it happens!) which cannot be rolled back because rolling back would take too long, so the only conceivable way of fixing it is to carry on moving forwards, leading to a protracted period of downtime and unstable releases.  Or a change that worked perfectly in development but does not work in production! 

But containerisation means so much more than just running production workloads.  Testers frequently jump between different releases as they compare functionality quirks to pin down when a particular change started impacting the smooth running of software.  It used to be difficult to test software in a sterile environment - one where you could eliminate differences in base configuration as the driving factor behind your software’s unexpected behaviour. 

Think of containerisation as an astronaut on a space walk in their space suit.  Their space suit has everything they need to maintain their life - oxygen, temperature regulation and flexibility so they can perform the task at hand without too much restriction.  Their space suit is sealed off from the outside world.  Containers are very much like this.  They too are encapsulated and come with their own environment. 

Containerisation also comes with a great degree of flexibility.  You can choose to be very light touch, using docker run to launch individual containers on your host.  You can group containers into services with docker compose files, taking advantage of pre-configured internal networking and checking your configuration into source control so it is repeatable and shareable.  Or you could go the whole way and use a container orchestrator to manage fleets of containers on your public cloud (most public cloud providers offer container orchestration, usually Kubernetes).  Exactly how far down the rabbit hole you want to go is at your discretion. 
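To make that middle option concrete, here is a minimal docker compose sketch of an application grouped with its database as services.  The image names, credentials and ports are placeholders for illustration, not Panintelligence’s actual images:

```yaml
# docker-compose.yml - hypothetical two-service stack (names are illustrative).
services:
  dashboard:
    image: example/dashboard:2021.7    # pin a specific release tag
    ports:
      - "8080:8080"                    # expose the app on the host
    depends_on:
      - db
    environment:
      DB_HOST: db                      # resolved over compose's internal network
  db:
    image: mariadb:10.5
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql          # persist data when the container is replaced
volumes:
  dbdata:
```

Check this file into source control and the whole stack becomes repeatable with a single docker compose up -d.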

Continuing with DevOps best practices, configuration is pulled forwards to the beginning of deployment and stored as code, meaning it is repeatable - and eminently testable.  Because containers are encapsulated, deployment and upgrades are greatly simplified, and the upgrade process is testable too.  Containers are immutable: if one stops working, destroy it and launch a new one in its place.  And because each container is tied to a specific version, when it does come to upgrading you can launch a new container alongside the old and migrate traffic at your own pace - big bang, or gradual and tested. 

A recent example came from investigating an issue with Panintelligence’s test team, who have recently increased their adoption of Docker.  They took a test image from a recent build and found it was not launching correctly.  The logs showed evidence that the application was not connecting properly to the repository database - a MariaDB instance configured to run in another container, external to the application (another win for containerisation!).  Was the issue in the compose file?  One line in the compose file was changed to point from the broken version to a previously released version and voilà - the dashboard worked flawlessly, highlighting that the problem was in the new release. 
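That switch really is a one-line change.  In a compose file like the sketch below (the tags here are placeholders, not real Panintelligence releases), swapping the image tag is all it takes:

```yaml
# Fragment of a docker-compose.yml - illustrative image names and tags.
services:
  dashboard:
    # image: example/dashboard:2021.7.1   # new build that failed to launch
    image: example/dashboard:2021.6.2     # previously released, known-good version
```

Re-running docker compose up -d then recreates only the changed container, which is why the round trip takes minutes rather than a full uninstall and reinstall.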

In the old world, uninstalling and reinstalling an old version might take 20 minutes, and there may be other lingering configuration which could sour the results of the test.  With containerisation, performing the switch - a simple edit to the compose file and a single command - took just 2 minutes, and confirmed there was nothing left on the host which could have changed the results.  Moreover, because the configuration is stored in the compose file, it can be shared easily with anyone who wants to reproduce the issue, meaning time to fix is reduced. 

If you’d like to try containerisation, you can ask to be whitelisted on Panintelligence’s Docker Hub account, where each new release is published.  Upgrading can be as simple as pulling the new image and running it!  There are always best practices associated with containerisation, but the beauty of it all is that it’s free and widely adopted, so there’s plenty of help out there - including our in-house containerisation evangelists! 
