Source Operations: automate maintenance, from single sites to fleets
As part of our continual effort to make managing sites your way even easier and more powerful, we’re pleased to announce our newest user-facing building block: Source Operations. This latest tool is part of our long-term plan to redefine site maintenance—like updating packages and upgrading versions. Simplifying site maintenance is important for single sites, of course. But it’s critical if you’re running thousands of sites, as some of our clients do.
Source Operations are scripts that you can run on your own code in an isolated, build-like environment. The main use case is for the source code to update itself. That’s best demonstrated with an example.
Consider the following block added to your application configuration (`.platform.app.yaml`):
```yaml
source:
    operations:
        download:
            command: |
                curl -O https://example.com/myfile.dat
                git commit -am "Update myfile.dat"
```
This example defines a source operation named `download`. It has a `command` that's an arbitrary shell script, which in this case is just a single download followed by a commit.
On its own, the source operation doesn’t do anything unless triggered. The easiest way to trigger it is with the Platform.sh CLI:
```bash
platform source-operation:run download
```
That command will run the `download` source operation on the current branch (use the `-e` switch to specify a different branch/environment, if desired), causing the system to check out a copy of the current branch into a build-like environment (meaning no database or other services), then run the `command` for that operation.
The environment the source operation runs in is a complete Git checkout of that branch, but with no remote. As a result, you can commit values locally, but can’t push them to an arbitrary branch. (That’s a guard rail to avoid accidentally trashing your repository, which would be, er, bad.) However, after the command has run, if there are any new local Git commits, we push those new commits back to your repository.
If the branch is activated, the presence of new commits will automatically trigger a new build and deploy, just as if you'd committed those changes yourself. And now that environment is live with the updated file.
While downloading a file is nice and all, it’s not the most useful task. What would be more useful? How about updating dependencies? Consider this example:
```yaml
source:
    operations:
        update:
            command: |
                npm update
                git commit -am "Update npm dependencies"
```
In this case, there's an operation called `update` that will run `npm update` to fetch any npm packages the project depends on, then commit the result (which would be an updated `package-lock.json`). Those changes will then be pushed and deployed automatically, causing that environment to be rebuilt with the new, updated dependencies.
That gives you a one-command way to update all of a project's dependencies in any environment and have it redeployed immediately.
Of course, that works for any package manager: Composer, NPM, Yarn, Bundler, Pip, Go Modules, Maven, your own custom script . . . whatever. If your application has multiple package managers (Composer and NPM, for instance, which is quite common), you can put them in the same operation or make two separate operations you trigger separately: your choice.
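For instance, a single operation covering both package managers might look like the following sketch, built on the same pattern as the earlier examples (the operation name and commit message are illustrative):

```yaml
source:
    operations:
        update:
            command: |
                set -e
                composer update
                npm update
                git commit -am "Update Composer and npm dependencies"
```

The `set -e` line makes the script stop at the first failing command, so a failed `composer update` won't be followed by a misleading commit.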
So far we’ve just talked about updating dependencies. But what about updating the whole project?
A common use case for many organizations is to have a single code base that’s deployed many dozens or hundreds of times to different instances. The code itself needs to be centrally managed, but each instance is “owned” by some specific branch or department. Frequently, that’s a Drupal distribution like OpenY, Opigno, or Open Social (or even distributions that don’t begin with O), but it could be anything. How, then, can that central organization manage dozens or hundreds of code bases?
Recall that the operations environment doesn’t include a Git remote to the repository, but that doesn’t mean you can’t add your own Git remote in the source operation command itself.
First, add a project-level variable named `env:UPSTREAM_REMOTE` with the Git URI of the central repository. That will make the repository's URI available as a Unix environment variable named `UPSTREAM_REMOTE` in all environments, including the source operations environment.
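If you prefer to set that from the command line, something like the following should work; the repository URI here is a hypothetical placeholder, and the exact flags may vary by CLI version (see `platform variable:create --help`):

```shell
# Hypothetical upstream URI; substitute your own central template repository.
platform variable:create --level project \
    --name 'env:UPSTREAM_REMOTE' \
    --value 'git@github.com:example-org/central-template.git'
```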
Now, add a Source Operation to that central repository like so, which will then also be included in every other site built from that template:
```yaml
source:
    operations:
        upstream-update:
            command: |
                set -e
                git remote add upstream $UPSTREAM_REMOTE
                git fetch --all
                git merge upstream/master
```
Every time the `upstream-update` source operation is triggered on a branch, that branch gets checked out, a Git remote for the central template repository gets added, and the latest changes from its master branch get merged in. Because there are now new commits in the local repository, those changes will then get pushed, built, and deployed. If there's a merge conflict, the script will fail without committing anything. (A more robust script would handle merge conflicts more gracefully, but this is good enough for now.)
The upshot is that, with a single command, any satellite project can be brought up to date with an upstream master repository. Presumably you'd want to do that in a branch rather than directly on the `master` environment, but that's up to you.
But what if you have a huge number of projects? You don't want to run a shell command for each one by hand, much less resolve each merge manually.
Remember that the Platform.sh CLI is simply a front end to our REST API. Anything you can do from the CLI or web console can be done from the API. That means you can build whatever automation tools you want to issue API calls against one or 1,000 sites, including triggering source operations, merges, or whatever else. That could be a command line tool, a web application, a desktop application, or all of the above.
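As a minimal sketch of that kind of automation using only the CLI (assuming each project defines an `update` source operation, as in the earlier example; flag names may vary by CLI version, so check `platform help`):

```shell
#!/usr/bin/env bash
set -e
# Trigger the "update" source operation on the master environment
# of every project this CLI user has access to.
for project in $(platform projects --pipe); do
    platform source-operation:run update \
        --project "$project" --environment master --yes
done
```

The same loop could just as easily call the REST API directly, or be wrapped in a web or desktop tool, as described above.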
We have some examples of that in our next post: three different approaches to managing your website fleet.