
Solving multiple WordPress site management for organizations

wordpress, automation, cicd
10 Feb, 2025
Paul Gilzow
Developer Relations Engineer

We used ChatGPT to polish the grammar and syntax of this transcript.

Welcome! My name is Paul Gilzow, and I am a Developer Relations Engineer here at Platform.sh and Upsun. Before that, I was a customer: I spent several years at the University of Missouri, where a colleague and I were responsible for managing the University's fleet of websites and developing the strategy for managing that fleet. I was specifically in charge of the WordPress sites. Today, in this webinar, I will share some tips, tricks, and strategies you can use to manage many WordPress sites simultaneously without driving yourself crazy.

Before we get started, I have a few quick warnings. First, what I discuss today is not the only way to accomplish these tasks. There will likely be moments when you think, "Wait, can't you do it another way?" Some of the methods I show may not be the best for your particular situation, or even the best in general. The optimal approach depends on your restrictions, requirements, and the constraints under which you operate. Consider what I share as ideas, drawn from my years of experience, to inspire you to build the strategy that works best for you. Finally, this is not the official Platform.sh and Upsun method for these processes; these ideas are purely for inspiration and for developing your own strategies.

I will be working under several assumptions. First, I assume that you use Git repositories to manage your projects and websites. I also assume that you are using some type of cloud host—ideally, you are using Platform.sh, which I will feature throughout this presentation. If not, at least you have access to configure and control your infrastructure through a configuration file. In the code samples, I assume that the branch named main is your production branch and the branch named update is where most updates will be performed. If you are using a cloud host like ours, you have likely encountered the challenge with WordPress that the application container, once built, is read-only. Thus, we cannot easily access the WordPress GUI for updates, and we assume that is the case moving forward.

The very first thing we need to do is create an inventory. We must know what we are dealing with. Just as in security, we cannot secure what we do not know we own. We cannot build a solid global strategy to manage a fleet of sites until we know what we are dealing with. We must have an inventory, begin researching, and analyze the differences between projects. Some differences are expected; not every site is identical, and they may require different plugins and themes. However, we need to identify differences in configuration and setup. One of the strengths of WordPress is its configurability, but that can become challenging when managing a large fleet. We must analyze and understand why configurations differ—whether because of unique requirements or simply differences in opinions or preferences. In addition, we need to identify commonalities. Areas that are consistent across projects are candidates for standardization, which is our goal in managing a large-scale environment.

The next concept to understand is the idea of upstream and downstream repositories and upstream collections. The upstream repository serves as the base or global area from which all new projects originate; it is where our standardization lives. You might think of it as a boilerplate, and in many ways, it is—the starting point for a new project. However, unlike a typical boilerplate where the relationship ends after project initiation, we want to maintain the relationship between each new project and the upstream repository so that global standards can be incorporated and pushed down to all projects.

You might compare this to a fork, but there is a key difference. Most version control systems—GitLab, GitHub, Bitbucket—either allow only one fork of a project within a namespace or assume that a fork is meant to contribute back to the original repository. In our model, downstream projects should never push changes back to the canonical upstream source. Changes should always flow from the upstream repository down to the new projects. To visualize this, imagine an upstream repository containing all global standardization and downstream projects that pull or receive global changes.
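In practice, the downstream half of that relationship can be as simple as a second Git remote. A minimal sketch, assuming a hypothetical upstream URL:

```bash
# In a clone of a downstream project, register the upstream repository
git remote add upstream git@github.com:example-org/wp-upstream.git
git fetch upstream

# Changes flow one way only: merge upstream into the project, never push back
git merge upstream/main
```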

This concept can be extended into layers. For example, in higher education, you might have different types of sites such as faculty sites, grant sites, and marketing brochure sites that share similarities. In that case, you could have one upstream repository at the top containing base standards, followed by category-specific upstreams that serve as intermediaries for more downstream projects. This layered structure allows for standards at multiple levels.

The next phase of our strategy is to consider migrating to using Composer to manage your WordPress sites. In a recent webinar with Greg Qualls, we discussed different strategies for managing WordPress sites—whether using the traditional approach of downloading WordPress core files and then adding plugins, or using Composer or Bedrock (which also uses Composer). I am a strong advocate for using Composer because it simplifies the adoption of later strategies.

Composer allows you to explicitly define the dependencies that a site requires. It changes our perception of how a WordPress site is built—from thinking of it as WordPress plus added plugins and themes to understanding it as a collection of dependencies, with WordPress being just one dependency among many. Using Composer and global standardization ensures that every project starts with all the necessary dependencies at specified versions.
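As a rough sketch, a project-level composer.json in this model treats core, plugins, and themes uniformly. The package names below are real (WPackagist serves WordPress.org plugins and themes as Composer packages), but the version ranges are illustrative:

```json
{
    "repositories": [
        { "type": "composer", "url": "https://wpackagist.org" }
    ],
    "require": {
        "johnpbloch/wordpress": "^6.4",
        "wpackagist-plugin/wordpress-seo": "^22.0",
        "wpackagist-theme/twentytwentyfour": "^1.0"
    }
}
```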

A key advantage of Composer over the traditional approach is that if you encounter issues—such as a downstream site that has difficulty updating because a theme depends on a plugin that has been updated—you can pin or lock that particular dependency to an older version. This allows that unique project to remain stable while others continue to move forward. Another advantage is that Composer stores its configuration in a JSON file, which makes it much more auditable. Unlike having to rely on a centralized console, you can easily read and extract the dependency requirements and versions. Additionally, by using Composer to define dependencies instead of storing the full WordPress core and plugin files in your repository, the repository remains much slimmer.
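Pinning is then a one-line change in the affected project's own composer file (the plugin name below is hypothetical):

```bash
# Lock a single problematic plugin to an exact version in this project only
composer require wpackagist-plugin/legacy-gallery:2.4.1

# Everything else keeps floating within the ranges defined upstream
composer update --with-all-dependencies
```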

I should note that all the sample code I demonstrate is available on my GitHub profile in the repository named WP Upstream demo. In this demo, the repository primarily contains configuration and a structural framework. For example, there is a config file (which we obviously need), a composer.json file to define dependencies, an infrastructure configuration file, a .gitignore for the public directory, a WP config that points back to the real index file, and even placeholder content and plugins. There are also additional infrastructure files, GitHub workflows that I want everyone to have, and testing configurations—all contributing to the overall framework.

Another major advantage of Composer is that you do not need a full WordPress instance to update dependencies. All you require is a PHP environment and Composer to check for updates, download them, and record the updated hashes. When you then use that configuration file in your WordPress instance, it incorporates all the new updates, which makes it much easier to integrate with a CI/CD process. This is especially useful when you are using a cloud host, where the application container is read-only. This approach automates what was previously a manual process.

There are some disadvantages, however. You cannot update dependencies through the WordPress admin console because the cloud infrastructure is read-only and because the dependencies are now managed by Composer. If you are not used to managing WordPress sites with Composer, there is a learning curve, so it may take some time to transition from traditional WordPress sites. However, if you utilize the upstream strategy and push these standards down, you only need to perform the initial work once, and it will propagate to all projects.

Another disadvantage is that Composer does not integrate well with must-use plugins. In WordPress, you can have a directory inside wp-content called mu-plugins; any plugin placed there is always loaded and cannot be disabled. These plugins require their bootstrap file to be in the root of that directory, and since Composer installs dependencies as directories, additional work is required to accommodate that. Bedrock, however, has already incorporated this functionality into its framework. Additionally, in my experience over six to seven years, the number of plugins that do not work with this approach is fewer than ten, and those are typically plugins that do not follow WordPress coding standards and guidelines. There is also a dependency on third parties: if you use Composer directly, you depend on John P. Bloch, who has graciously set up a Composer-capable package of WordPress core; if you use Bedrock, you depend on the Roots repository; and in both cases, for plugins and themes, you rely on WPackagist. Although these are third-party dependencies, their sources live in repositories, so you can still access them, even if it requires additional work. Finally, there is no official WordPress support for managing your site with Composer, even though there are many articles and guides on the subject.
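To give a feel for the extra work involved, here is a minimal sketch of an mu-plugins loader shim. Bedrock ships a more robust autoloader; this naive version assumes each plugin's bootstrap file shares its directory's name:

```php
<?php
/**
 * mu-plugins/loader.php (sketch)
 *
 * WordPress only executes PHP files sitting directly in mu-plugins/, so this
 * shim requires the bootstrap file of each plugin that Composer installed
 * as a subdirectory.
 */
foreach ( glob( __DIR__ . '/*/*.php' ) as $file ) {
	// Naive convention: a plugin's bootstrap file matches its directory name.
	if ( basename( $file, '.php' ) === basename( dirname( $file ) ) ) {
		require_once $file;
	}
}
```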

The last advantage in this section is the concept of a meta package, which will recur throughout this presentation. A Composer meta package is a type of package that contains no code; it is simply a collection of requirements. For example, in my project-level composer file, the first requirement is a package I call company-name/all-site-requirements. You can think of this as the global Composer requirements file: it contains the collection of dependencies that every project must have. Because downstream projects never change this particular file, conflicts are minimized. Instead, they make changes in their own project-level composer file, while the global file defines the dependencies and versions that every project starts with.
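Sketched out, the global meta package is just a composer.json whose type is metapackage; the package name and versions below are hypothetical:

```json
{
    "name": "company-name/all-site-requirements",
    "type": "metapackage",
    "require": {
        "johnpbloch/wordpress": "^6.4",
        "wpackagist-plugin/wordpress-seo": "^22.0",
        "wpackagist-plugin/akismet": "^5.0"
    }
}
```

Each downstream project then requires company-name/all-site-requirements at a version range in its own composer.json and never edits the meta package itself.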

There are some challenges with meta packages, however. Certain Composer properties—such as the config property or the scripts property—cannot be defined at the meta package level and must still be defined in the project-level file. The idea is to define as many global settings as possible at the higher level, allowing downstream projects to define additional settings as needed without causing conflicts.

We can apply the same modular approach to configurations for other parts of our infrastructure. Platform.sh uses YAML to define infrastructure configuration, and our YAML parser supports the inclusion of other files—whether they are additional YAML files, Bash scripts, or text files. This approach, similar to the Composer meta package, allows us to break out pieces from a central configuration into parts that downstream projects might need to change while keeping the global configuration stable.

For example, our infrastructure configuration file includes top-level keys such as the name and type. I reference another file for the Composer build flavor, which we leave at the global level. However, the app type setting—which defines both the PHP container we use and its version—could vary from project to project. In this case, we want every project to start with PHP 8.1, but downstream projects might need to use PHP 8.2, 8.3, or other versions. This method gives downstream projects the flexibility to make changes without causing conflicts with the upstream configuration. It also allows for situations where a particular project must remain locked to an older version while others update.
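With Platform.sh's !include tag, that split might be sketched like this; the include paths are hypothetical, and the exact syntax is covered in the Platform.sh documentation:

```yaml
# .platform.app.yaml (sketch)
name: app

# The container type lives in its own one-line file so a downstream project
# can change "php:8.1" to "php:8.2" without touching shared configuration.
type: !include
    type: string
    path: .platform/includes/app.type

build:
    flavor: !include
        type: string
        path: .platform/includes/build.flavor   # stays at the global level
```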

This philosophy can also be applied to other configuration sections, such as the WP config file. At Platform.sh, we expose most of the information WordPress needs as environment variables, so that no matter which environment you are running in, you can always access them. The WP config file is written to retrieve these environment variables and build the project accordingly. However, there is a high likelihood that a downstream project will need to add unique configuration—perhaps for a specific plugin or theme. In that case, we can add a separate file at the global level (for example, a “WP config project” file) that is empty by default but can be supplemented by downstream projects as needed.
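The tail of such a wp-config.php might look like the following sketch; the environment variable names and the project file name follow the conventions described above and are otherwise assumptions:

```php
<?php
// wp-config.php (excerpt, sketch): settings come from environment variables,
// so the same file works unchanged in every environment.
define( 'DB_NAME', getenv( 'DB_NAME' ) ?: 'wordpress' );
define( 'DB_USER', getenv( 'DB_USER' ) ?: 'wordpress' );
define( 'DB_PASSWORD', getenv( 'DB_PASSWORD' ) ?: '' );
define( 'DB_HOST', getenv( 'DB_HOST' ) ?: '127.0.0.1' );

// Empty at the global level; downstream projects add plugin- or
// theme-specific configuration here without touching this file.
$project_config = __DIR__ . '/wp-config-project.php';
if ( file_exists( $project_config ) ) {
	require $project_config;
}
```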

This modular approach also applies to local development files. For instance, if you are using DDEV or Lando, you can adopt a cascading naming convention. Lando, for example, supports a .lando.upstream.yml file for global configuration and a .lando.local.yml file for overrides. In this way, all projects use the same local development settings, allowing any developer to quickly set up a local instance with the complete configuration. The same philosophy applies to additional script files, whether Bash, Python, or others, that run during the build or deployment stages. It also applies to testing configurations, where you want a global set of tests for every site while allowing each downstream project to add its own specific tests.

At this point, we have an upstream repository, downstream projects built from it, and Composer integrated into the process. However, we still have not achieved full efficiency on a day-to-day basis. Although we can push and pull changes globally, we need to automate these updates.

If you are on Platform.sh on an Enterprise or Elite plan, or on Upsun, where all plans have access, you can use a feature called source operations. This unique capability brings up a writable instance of your application container (instead of the usual read-only state). As you make changes, you can commit them, and those changes are pushed back to the branch. This means you can bring up an instance of your PHP application running WordPress, run composer update inside it, and have the updated lock file committed back to the repository.
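A source operation for this might be sketched as follows; the operation name is arbitrary, and the Platform.sh documentation covers the exact syntax:

```yaml
# .platform.app.yaml (sketch): a source operation that refreshes the lock file
source:
    operations:
        update-dependencies:
            command: |
                set -e
                composer update --no-interaction
                git add composer.lock
                git commit -m "Automated dependency update" || echo "Already up to date"
```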

Even if you are not using Platform.sh, you can implement a similar process with other CI/CD systems. For example, on GitHub you can use an action scheduled via cron to bring up a PHP environment, check out your branch, run Composer update, and commit the changes. When I was at the University of Missouri—before Source Operation was available—we used a local GitLab instance with a limited number of runners. I built an application that would download each repository, check out the branch, update the dependencies, and push the changes back.

In general, the goal is to have each project’s repository check out a branch for updates, ensure it is synced with the production branch, pull any changes from the upstream repository, run Composer update, and if any updates occur, commit the lock file and push those changes back. Then a pull request or merge request is initiated to integrate the updates into production.

On GitHub, for instance, you can use Dependabot to scan your project’s dependencies. By default, Dependabot will check for security issues and create pull requests to update them. You can also configure it to run on a schedule to update dependencies. However, Dependabot does not pull changes from the upstream repository, so that step must be incorporated into your process.
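A minimal Dependabot configuration for scheduled Composer updates looks like this:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "composer"
    directory: "/"
    schedule:
      interval: "weekly"
```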

I have an example workflow in my “WP Upstream demo” repository called “updates” that demonstrates this process. In the workflow, the project’s repository is checked out, all branches are fetched, and the update branch is checked out (or created if it does not exist). Then, the branch is synced with production. Next, a remote is added for the upstream repository, changes are pulled and merged into the update branch, Git configurations are set, and a PHP environment is prepared to run Composer update. One important detail is ensuring that the PHP environment used for running Composer update matches the production environment. This can be achieved by reading configuration values from YAML or JSON files or by defining the requirements in the composer file. Once Composer update runs, if the lock file is updated, it is committed and pushed back to the update branch. This automated update process can be scheduled or run manually.
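Condensed heavily, that workflow has roughly this shape; the upstream URL, bot identity, schedule, and PHP version are placeholders to adapt:

```yaml
# .github/workflows/updates.yaml (sketch)
name: automated-updates
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly; pick your own cadence
  workflow_dispatch: {}    # allow manual runs

jobs:
  update:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0    # full history so branches can be merged

      - name: Sync the update branch with production and upstream
        run: |
          git config user.name "update-bot"
          git config user.email "update-bot@example.com"
          git checkout update 2>/dev/null || git checkout -b update
          git merge origin/main
          git remote add upstream https://github.com/example-org/wp-upstream.git
          git fetch upstream
          git merge upstream/main

      - uses: shivammathur/setup-php@v2
        with:
          php-version: "8.1"   # must match the production container

      - name: Update dependencies and push the lock file
        run: |
          composer update --no-interaction
          if ! git diff --quiet composer.lock; then
            git add composer.lock
            git commit -m "chore: automated dependency update"
            git push origin update
          fi
```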

Once the updates are complete, a pull request or merge request is automatically initiated. Many repository management systems allow you to automate the initiation of a pull or merge request when changes are pushed to a specific branch. In my workflow, an action watches the update branch, and when a push is detected, it automatically starts a merge request.
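On GitHub, that watcher can be a small workflow that opens the pull request with the gh CLI; GitLab and Bitbucket offer equivalent mechanisms:

```yaml
# .github/workflows/open-update-pr.yaml (sketch)
name: open-update-pr
on:
  push:
    branches: [update]

jobs:
  open-pr:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - name: Open a PR from update into main if one is not already open
        run: gh pr create --base main --head update --fill || echo "PR already exists"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```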

It is important that even with CI/CD processes, you maintain the same philosophy: define what is needed at the global level for every project while allowing downstream projects to add their own configurations where necessary. For GitHub, you might use reusable workflows and actions; on GitLab, you might use components.

Once the update process and the pull/merge requests are in motion, it is critical to test these changes before they go into production. I strongly encourage you to implement lint testing, unit testing, visual regression testing, integration testing, and end-to-end testing. Visual regression testing involves comparing screenshots of your production website to those from your pull request environment; if any significant differences occur, a warning is triggered or the test fails. End-to-end testing, by contrast, simulates a user navigating through your site or application and performing various actions, verifying that the code changes introduced in a pull request do not cause regressions in functionality.

It is important to test both the happy paths (where everything works as expected) and the unhappy paths (where errors occur and the appropriate error messages are shown). Globally, every site should have a baseline set of tests, and each individual site should have the flexibility to add its own extra tests as needed. Many testing frameworks use YAML or JSON for test configuration, which allows you to merge the global standards with project-specific configurations to create a comprehensive test suite.

Another critical component is the pull request environment. When managing a large collection of sites, it is essential that the testing environment closely mimics production. One of the biggest values that Platform.sh and Upsun offer is the integration with GitHub, GitLab, and Bitbucket. When you create a pull request or merge request, the system is notified and takes a complete clone of production—which includes the application, database, services, files, and configuration. An exact, instant clone of the production environment is created, and the pull request code is merged on top. An ephemeral URL for that environment is sent back to the repository management system so that your pipelines can run tests in an environment that is identical to production. This gives you high confidence that if you see all green check marks, the code you merge into production will not introduce regressions.

For example, I have a downstream project that, after pulling changes from upstream and running Composer update, automatically received a pull request environment. In that environment, visual regression tests and end-to-end tests ran. As a developer, seeing green checks across the board gives me high confidence that merging the pull request will not introduce any issues. Furthermore, the application container is built once, and if there is no change in the code between the pull request and production, the container is reused without a rebuild. This ensures that the container you tested is exactly the same one that goes into production.

Many repository management systems—GitHub, Bitbucket, GitLab—offer the ability to auto-merge when all tests pass. As you iterate on this system, you can reach a point where your tests provide sufficient confidence to auto-merge. All your sites, whether a hundred or a thousand, will then update continuously in a managed fashion. If any test fails, the auto-merge process is halted, an alert is sent, and manual verification can determine the cause of the failure.
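On GitHub, for example, once branch protection requires your test suite's checks, the update workflow can flag its own pull request for auto-merge. A sketch using the gh CLI, assuming auto-merge is enabled in the repository settings and PR_NUMBER is supplied by your workflow:

```bash
# GitHub merges the PR only after all required checks have passed
gh pr merge --auto --merge "$PR_NUMBER"
```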

Thus, we have developed a global strategy: changes are pushed or pulled into all downstream projects; dependencies and configurations are modularized; and a continuous update process keeps every site current on the schedule you choose.

The next step, much like any other programming project, is refactoring and iteration. It is critical to track whenever a project needs to deviate from the standard. Whenever you encounter a situation where the merge from upstream fails, or someone indicates that a particular section needs to deviate slightly, you must track this occurrence. If it happens once or twice, you can configure Git attributes with a merge strategy that allows for that exception without causing ongoing conflicts. However, if you see the same deviation repeatedly, it is a strong indication that you need to update the upstream configuration to incorporate those changes permanently.
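Git's built-in mechanism for that one-off exception is a custom merge driver declared in .gitattributes; the file name below is hypothetical:

```bash
# One-time setup per clone (or in your CI bootstrap): define an "ours"
# merge driver that always keeps the local version of a file.
git config merge.ours.driver true

# Mark the file that legitimately deviates downstream:
echo "wp-config-project.php merge=ours" >> .gitattributes
```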

This concludes the discussion of the major strategies. I now have about 10 to 15 minutes left for questions.

"What would you say are the most important tests for WordPress to include in your CI/CD pipeline?"
In my experience, downstream projects differ enough that testing requirements can vary. However, from a global standpoint, every project should at least load correctly. At the University of Missouri, we used a style guide that required specific HTML elements to be present and visible on every homepage. We also had guidelines for the footer and its construction, so I could run visual regression tests on the homepage to ensure no unintended changes occurred. Additionally, every project had a search feature, so I would run tests to ensure that the search page looked consistent from pull request to production, along with end-to-end tests that simulated a user performing a search and receiving the expected results. Beyond these, you should identify your critical user paths. For example, in a higher education context, if the site includes a call to action for applications, it is imperative to test that a visitor can follow that call to action, reach the correct location, and complete any associated form without issues.

"How would you approach setting up a staging and production site, assuming it is a content-only website?"
The challenge with staging sites in content-focused environments is that content stakeholders often wear multiple hats. As a result, the staging site may need to remain static for an extended period while waiting for content approvals, even though updates must continue to occur. In our experience, we would have the production site continuously updated using the strategies I described, and then, on a routine basis (usually manually), we would sync production with staging. This works provided that no visual changes, such as theme or plugin updates, have occurred in production while waiting for the content approvals. Platform.sh is very agnostic regarding workflow; you are not required to deploy to staging before production. This flexibility allows you to continuously update production even if staging is on hold waiting for approvals.

"Is there a best practice approach for dealing with premium plugins and themes that are not available via Composer?"
There are a couple of ways to handle this situation. One method is to include a placeholder directory for plugins in your repository. For custom or premium plugins or themes, you could commit the code directly into that repository and, during the build stage, copy the code from that directory to the appropriate plugin or theme directory. Another approach is to maintain a separate repository for these premium plugins. You would periodically pull updates from the source of the premium plugin, commit the updated code into your repository, tag it, and then reference it via a separate Composer file so that you can still manage it using Composer. Alternatively, you can set up your own Satis instance—a private Composer package repository—that publishes the premium or custom packages for Composer to use.
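In composer.json terms, the last two options map to additional entries in the repositories array; the URLs below are hypothetical:

```json
{
    "repositories": [
        { "type": "vcs", "url": "git@github.com:company-name/premium-plugin.git" },
        { "type": "composer", "url": "https://satis.example.com" }
    ]
}
```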

If there are no further questions, I hope this presentation has given you some valuable ideas and inspiration. If you have any follow-up questions, please feel free to contact me. I look forward to speaking with you online.
