For an updated version of this article, please read our 2020 post.
For at least the last decade, the best-practice process for getting web applications from the developer’s environment to the production environment has suffered from a fundamental resource limitation: a limited number of development environments constrains the ability to test and deploy. This applies to anything that lives on a server and talks to the Internet, including what we call websites, CMSs, and web applications. Continuous Integration and Deployment changes that, but not all CI&D tools are created equal…
The Four-Tier deployment model
Every web developer should be familiar with the Four-Tier deployment model of Development, Testing, Staging, and Production. In most places, this is the “standard” for building, testing, and serving web applications, and it looks like the following (a typical promotion flow is sketched after the list):
Development: This is where developers make changes to code, and is usually a local, single-tenant environment (e.g. a developer’s laptop).
Testing: This is an integration environment where developers merge changes to test that they work together. It may also be a Quality Assurance or UAT environment.
Staging: This is where tested changes are run against Production-equivalent infrastructure and data to ensure they will work properly when released.
Production: This is the live environment serving real users with real data.
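As a rough illustration, promotion through the tiers often maps onto a branching convention like the following sketch (branch names are hypothetical, in a git-flow style):

```sh
# A sketch of code moving through the four tiers; branch names are
# hypothetical, following a git-flow-style convention.
git checkout -b feature/search develop  # Development: work on a change locally
git checkout develop
git merge feature/search                # Testing: integrate changes together
git checkout release/2.0
git merge develop                       # Staging: validate against prod-like data
git checkout master
git merge release/2.0                   # Production: release
```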
This model has been around for a while and is often held up as a kind of best practice for deployment architectures. It has a number of problems, however...
The Four-Tier model arose from a particular historical confluence of increasing complexity in web application design, testing, and packaging, and physical constraints on computing infrastructure. As software increased in complexity, developers started using more complex packaging methods for deploying that software. This enabled us to start breaking down the deployment model into a series of steps that more closely matched the kinds of testing that were required for complex applications. These steps became our actual environments. We started moving code through these tiers, with each tier promising progressively stronger guarantees about the consistency of the data and environment, and the quality of the code.
At the same time, however, the ability to manage the deployment of that same software was constrained by the cost and difficulty of acquiring and managing computing resources (i.e. servers) to serve environments. If you wanted a new environment to test code, you had to buy it, build it, maintain it, and find a way to deploy to it. As a result, most development teams maintained the absolute minimum number of environments or servers necessary to meet their own workflow requirements. In a lot of cases, this was actually less than four, and sometimes as few as two (or one, if you did your development directly on your production server).
Obviously, cutting back on environments makes it very hard to know if you are testing and deploying code safely and reliably, but even with four environments there are going to be challenges:
Code merging must be done at the Development tier, which leads to conflicts because developers lack visibility into each other’s changes.
Changes can’t be easily tested in isolation, which makes tracking down and verifying those changes harder (this is a problem whenever you have more changes in flight than integration environments, so it usually starts at just two changes!)
Unless those doing QA are technically adept, changes must undergo testing in shared environments. This can cause those environments to become blocked very easily and create issues with rework.
Broken test environments or broken changes take out testing for all changes, not just the one requiring rework.
Failure to keep environments up-to-date with Production leads to out-of-date testing data, incorrect operating system dependencies and other environmental factors, which can cause Production deployments to fail, requiring expensive and embarrassing rollbacks.
Version control offers us the ability to isolate changes; however, limited testing environments nullify that benefit and force us to merge changes together early, which causes conflicts and bottlenecks.
Ultimately, the principal problem is that when you have a limited number of servers to deploy to, the chance of any one of those becoming blocked with broken code goes up significantly.
Continuous Integration isn’t the (whole) solution
We’ve known for some time about the limitations of this approach. To plug the gaps, various forms of automation have emerged to handle repetitive or easily programmed tasks such as running tests, moving code between servers, and provisioning the underlying infrastructure. At the same time, advances in technology have given us much more flexibility in what that infrastructure is. Essentially, with the development of container-based virtualization, we can now bypass the environment limit that hamstrung the old Four-Tier system.
The combination of automation and virtualization has given us Continuous Integration (CI), which is a great idea in principle, but isn’t a complete solution on its own. The principal purpose of CI is to mitigate risk in the build pipeline, and it can do this very effectively; however, any build pipeline is only as good as its weakest link, and in many CI systems there are up to four of these…
1. The ability to match any environment with production architecture
Many CI tools use Docker or a similar virtualization tool to build an integration or testing environment for a specific change. This allows you to test your change in isolation, which is very important, but it also gives you the opportunity to test it against an identical operating environment.
Unfortunately, only a few CI systems can match containers to production exactly, which can lead to minor (but sometimes critical) differences between the environment you test in and the one you deploy to. It is also often very difficult to test multiple, coupled applications simultaneously, especially if they run in separate containers. Most CI tools only allow you to deploy a single container for testing, which means you can only test one application, and your supporting services must either run inside that same container or be limited to whatever the CI provider offers. Bad luck if these don’t match your production system.
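For example (and this is only a sketch, using GitLab CI syntax with illustrative image and service versions), pinning a CI job to production-like versions looks something like the following, and even then the match is approximate rather than exact:

```yaml
# .gitlab-ci.yml (a sketch; image, service versions, and commands are
# illustrative assumptions). Pinning the job to the same runtime and
# service versions used in production narrows the gap, but still does
# not guarantee an exact match with the production environment.
test:
  image: php:7.1-apache        # assumed production runtime
  services:
    - mysql:5.7                # assumed production database version
  variables:
    MYSQL_DATABASE: app_test
    MYSQL_ROOT_PASSWORD: secret
  script:
    - php -v                   # sanity-check the runtime version
    - vendor/bin/phpunit       # run the suite (assumes vendor/ is committed)
```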
2. The ability to test code with live data at any time
CI tools are designed to build a system for development; however, they must also be able to test with a copy of live data. They must have access to this copy when deployed, and they must be able to be updated with that data at any time. Your live instance is probably changing, and you need to know that what you built last week is going to work with this week’s data, so you need the ability to sync on demand, and you need non-technical users to be able to do this.
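At its simplest, an on-demand refresh is a one-liner like the sketch below (hostnames, users, and database names are illustrative, and a real pipeline would also need to sanitise sensitive data); the hard part is making it safe, fast, and accessible to non-technical users:

```sh
# A sketch: overwrite the test database with a fresh snapshot of
# production. Hostnames, users, and database names are illustrative;
# passwords come from environment variables.
mysqldump --single-transaction \
  -h prod-db.example.com -u readonly -p"$PROD_DB_PASSWORD" app_production \
  | mysql -h test-db.example.com -u ci -p"$TEST_DB_PASSWORD" app_test
```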
3. Environments may still be limited by the Four-Tier model
Ironically, even once you add CI to the mix, if you’ve only got a limited number of environments to deploy to, you are still likely to have the same problems you had with the old Four-Tier model. And because you are deploying and testing faster, you are likely to run into those limitations more frequently. Once again, you are blocked by your infrastructure.
This is particularly true where your Staging environment is also your Integration environment: in many CI architectures, Staging is the last opportunity to test production data, infrastructure, and incoming changes together, so it becomes the default testing environment for those changes. As soon as that happens, the chance of it becoming blocked rises considerably.
4. Managing a CI pipeline requires people and knowledge
There is no CI pipeline in the world which does not require some knowledge of how it operates, yet most CI pipelines push the complexities of configuration back onto the developer. In fact, an entire discipline (“DevOps”) has evolved to manage the complexity of solutions required to deal with these problems.
While some CI providers have managed to partly simplify this process, it is still a non-trivial challenge in most cases. Developers need to be up-skilled and invested in, and since there are so many different ways of doing it, only a portion of those skills are transferable. Many businesses employ people full-time just to build and manage their build pipelines.
Wasn’t automation supposed to save us from this?
So what is “best practice”?
An ideal Continuous Deployment (CD) solution should overcome many of the limitations of self-managed or third-party CI pipelines by giving the developer a direct, consistent build-and-deploy pipeline that fully replicates the production environment, down to the byte level.
At Platform.sh, we've implemented several technical solutions which help developers mitigate the issues discussed above and push best practice in new directions...
Every branch in your version control system (Git) can have a corresponding environment, and any of them can be merged into production at any time (see the workflow sketch after this list). Say goodbye to blocked environments or workflows dependent on infrastructure being available.
Development and testing environments are cloned from your production environment in under two minutes. With the exception of configuration changes made on that branch, new environments ALWAYS match their production environment. Not only are you able to test code with perfect infrastructure consistency, you can now test your infrastructure changes as well.
Environments can have their database, files, and other services synchronised on demand. Has your live data changed? You can test against it in under two minutes. Non-technical user? There's a button for that.
All of this is fully automated. The only knowledge developers need is some information about their target configuration (for example, which version of PHP they are using, or which folders serve static assets; see the configuration sketch after this list). This is basic stuff.
Once a project is configured, it may never need to be changed again, and on-boarding for new developers is a no-brainer. Developers get to focus on what they are good at, and spend time building features that add value, rather than keeping environments up to date or debugging build failures.
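To make this concrete, here is a rough sketch of the day-to-day workflow using Git and the Platform.sh CLI (branch and remote names are illustrative, and exact commands may vary by CLI version):

```sh
# A sketch of the branch-per-environment workflow; branch and remote
# names are illustrative.
platform environment:branch new-feature  # clone production into a fresh environment
git commit -am "Make the change"         # work as usual
git push platform new-feature            # deploy the change to its own environment
platform sync data                       # pull fresh data down from production
platform environment:merge               # merge into production when ready
```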
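And the target configuration mentioned above lives in a small file in the repository; a minimal sketch (with illustrative values) might look like this:

```yaml
# .platform.app.yaml (a minimal sketch; all values are illustrative)
name: app
type: 'php:7.1'               # which version of PHP am I using?
disk: 1024                    # persistent storage, in MB
web:
  locations:
    '/':
      root: 'web'             # document root
      passthru: '/index.php'  # front controller for dynamic requests
    '/static':
      root: 'web/static'      # which folders are serving static assets?
      expires: 1h
      allow: true
```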
This architecture blows away the constraints of the Four-Tier model completely. Instead of moving changes through a limited number of tiered environments, developers can isolate individual changes in an unlimited number of disposable environments and apply whatever management and testing workflow is necessary to get each change production-ready. Changes can be released on demand, in the order they are ready, not the order dictated by a release schedule based on having to double-check all your changes on a staging server because you can’t guarantee they work. With proper CD in place, your velocity will increase and your error rate will go down (see DevOps gives you wings for more on this).
In the coming months we’re going to be looking at how you can maximize these benefits through two series of blog posts. The first will look at how Platform.sh’s best-practice CD model can supercharge different development workflows (such as Scrum), and the second is going to help you integrate Platform.sh with some of your other tooling.
---
Chris helps development teams, new customers and Platform.sh clients across the Asia-Pacific get the most from their use of the product. If you'd like to learn more about Platform.sh, find out how we can supercharge your development workflow, or just have a chat, drop him a line here...