In the short term, to migrate, standardize, and manage 25 percent of 1,600 unmanaged, on-premises WordPress, Drupal, and static HTML websites to the cloud, applying industry best practices
Adopt Platform.sh to implement the cloud rollout while adhering to the university’s security and compliance mandates
- 300% increase in the number of requests Drupal and WordPress sites can handle
- 30% reduction in annual hosting costs
- Increased efficiency: the ability to manage websites at scale, with fewer developers
- Faster, smoother workflows through built-in Git integration
- The ability to deploy new features and updates faster and more frequently
Groundbreaking research university embraces PaaS to manage security and compliance, maximize efficiencies across its website fleet
The first school of journalism. The first public US university west of the Mississippi River. The architect of homecoming, the long-standing, annual fall tradition of welcoming back the higher education community and its alumni to campus. The University of Missouri (Mizzou)—founded in 1839—takes pride not only in these firsts, but in its students’ and faculty’s contributions, achievements, and innovation.
With tackling complex challenges in its DNA, the university recognized it needed to “put its arms around the technical debt, or organically grown web environments, we had acquired over the course of two decades,” explains Mizzou Director of Digital Service, Joint Office of Strategic Communications and Marketing, Kevin Bailey. In aggregate, an array of different content management systems, hosting platforms, a lack of standards, inconsistent processes, interdepartmental dependencies, and “people just doing things their own way” prompted Bailey and the university’s Central IT Group to formulate a strategy to move away from local infrastructure to a cloud-based approach. An approach that would enable them to enforce standards, security, and compliance as well as brand and user-interface alignment.
And so, the search for a cloud platform provider began.
How to balance business agility and control
For Mizzou’s marketing and Central IT teams, adopting and implementing standards, security, and compliance policies across their web properties meant that everyone would (over time) use the same kinds of tools (including content management systems) and processes. Of utmost importance? The mandate to protect student and university data. These criteria became the foundation of the university’s requirements and hosting-provider search.
“A lot of the systems our teams (the marketing division’s web development team, Central IT, procurement, and legal) evaluated allowed programmers and content managers to easily invite people to work on their sites, which is efficient in an agency environment, but would violate our security protocols,” Bailey explains.
To add a weighty layer of complexity to the university’s hosting-provider search, the marketing web dev team was then handed a monumental task: to move 25 percent of its 1,600 on-premises websites to the cloud. In concert, the university dictated that these 400 – 500 unmanaged WordPress, Drupal, and static HTML sites be migrated, standardized, and managed according to industry best practices. The conundrum: how could their team efficiently and cost-effectively move the sites to a managed platform, where updates and changes could be deployed to all sites quickly, securely, and consistently while minimizing downtime?
The university’s team scoured the marketplace and assessed vendors based on their defined requirements, finally selecting Platform.sh.
Looking back through a web dev team lens
Let’s take a step back and look at Mizzou’s environment from a web dev team perspective. Historically, the university’s websites had been completely decentralized. Any school, any division, any department with funding could hire their own developer. Or set up their own server. Or purchase third-party hosting, then set up a website. “The servers themselves were decentralized,” explains Mizzou Programmer Analyst Principal (WordPress) Paul Gilzow. “We had servers everywhere. I even had one under my desk that served up websites. We had no visibility into how many sites the university owned, but we projected it was around 2,000. Anyone with a purchasing card could go out and buy hosting, and we had no record of those sites.”
The proliferation of websites created another challenge: support. The web dev team wasn’t notified if a site had an issue, nor did they have permission to access it. And if a site had been compromised, there was no way to shut it down, potentially tarnishing the university’s brand and putting its security posture at risk. These scenarios prompted the Mizzou Central IT office to offer centralized web hosting, surfacing yet another challenge.
Within Central IT, various teams managed different pieces of the technology equation—from accounts and databases to physical servers, domains, networking, and development frameworks. The time between submitting a site request and putting it to use could take several days or weeks, depending upon the complexity of the request, and it was difficult to support.
Gilzow explains, “We had massive amounts of sites, and 13 different content management systems (which was like re-creating the wheel). From an institutional standpoint, we couldn’t support that much diversity. When we were asked to step in and offer support, it could take days or weeks to parse through how an individual site worked.”
In some cases, the team’s developers needed to update components of the stack more quickly to keep up with business requirements; change lower-level components more easily, without disrupting all sites on the infrastructure; and isolate a compromised site before it affected other sites. The university wanted the web dev team to quickly spin up new sites and new features. “The university assumed our team and Central IT all worked in perfect synchronicity, moving in the same direction. But those of us on the ground knew we had all these different sites going in different directions, and we were trying our best to wrangle them,” Gilzow recalls.
After carefully considering the challenges of continuing to support a diverse web environment, the university chose to invest in systems that tightly integrate operations and development. Systems that could facilitate agile development, deliver high levels of performance, and allow a more standardized stack from the codebase down to the hosting layer.
The search for—and discovery of—development flexibility
Gilzow and the team knew that they needed to establish standards for far fewer content management systems (they ultimately chose Drupal and WordPress), DevOps practices, authentication, and backups. They wanted to update different components in the stack, set up new sites, and roll out new features quickly. And they wanted not only developer flexibility, but efficiency to manage more sites with fewer developers, working across departments to support one another.
More performance and more uptime were also key criteria for the marketing web dev team. They wanted the ability to push updates with confidence and to know things weren’t going to break. Developers felt that without automated tools to keep instances in sync, it was just too much work to test. The result? “A lot of cowboy coding directly on production websites, which would break things,” Gilzow shared. As for performance, the university’s central web hosting could sustain about 266 requests per second and was “pretty stable.” But with big bursts, performance dropped significantly.
With their technical requirements set, the team sent out their RFP, then weighed their options.
Gilzow chimes in, “What we were running into with other providers was that many didn't have a build process and expected us to provide it, then just ship them the artifact. Which is fine, except that means we have to then find another provider to do the build process; that’s more expense and more time. In higher ed, an RFP can take 6 – 18 months to complete for each service we want to procure. So, when we look at a hosting provider that says, ‘Oh, well, we do this, but you have to go get your own build process,’ then we have to initiate another RFP. That makes everything really, really difficult. We also looked at systems that were extremely opinionated. If you agreed with their opinion on it, then things worked wonderfully. But we deal with so much variability, we just couldn't figure out how to mold all of it into that opinionated state. We ended up spending 80 to 90 percent of our time in those systems fighting against that opinion.”
Gilzow and Boyer shared that, when tasked with migrating websites—ranging from smaller sites with only dozens of hits per month to those with millions—they wanted to find a single solution to manage all of it. “Platform.sh does a good job of having many of the nuts and bolts for most sites built in,” says Boyer.
One of the team’s goals: to work together to manage their Drupal and WordPress content management systems, using approximately 90% of the same process to leverage efficiencies. “We’re looking for commonalities to make sure that certain tools can be reused across each stack, gaining the flexibility to provide support for each other’s groups,” Boyer explains. “With Platform.sh adoption, this effort has already begun. A lot of the tickets we get are below the CMS level. So adding memory or checking a certificate, for example, should be the same across each CMS. Platform.sh lets us do the troubleshooting for each CMS across groups, so we can provide coverage for vacations and the like.”
Getting the developer community onboard
With a decentralized campus came a diversity of developer skill sets. And educating the broader development team was more involved than the marketing team had anticipated. “We had a lot of confused looks, especially when we started talking about containerized hosting,” explains Gilzow. “It's a complete shift in how you think about your site. People said, ‘What are you talking about? Where do I have to feed this code?’ They were accustomed to being able to touch the production server and make changes directly to it. Even when I shared the diagram from Platform.sh documentation, which is really, really thorough and good, by the way, the response was ‘OK, you no longer have just servers, right??’”
Most developers didn’t have a local development structure in place. To standardize local development on Lando, the team built a way to test and generate SSH keys, so developers could use them to deploy to Platform.sh, pull in the database and content, and connect to GitLab.
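For illustration, a standardized local setup like the one described can start from a short Lando configuration committed to each repository. A minimal sketch of a `.lando.yml` (the app name, recipe, webroot, and PHP version here are illustrative assumptions, not Mizzou’s actual configuration):

```yaml
# .lando.yml — hypothetical sketch of a standardized local dev setup
# (name, recipe, webroot, and PHP version are illustrative)
name: example-wordpress-site
recipe: wordpress
config:
  webroot: web
  php: '7.2'
```

With a recipe like this in every repository, each developer starts from the same local stack, and the team’s SSH-key tooling can layer the Platform.sh and GitLab connections on top.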
Learn more about local development with Lando and Platform.sh
Oh, and not all developers on campus had used Git. To get developers across campus up to speed, the team created a workflow based on merge requests. It helped get the broader development team over the hurdle, “feeling good about tools that open up a new world of flexibility,” says Gilzow.
Mizzou team calls out their favorite Platform.sh features
Development environments that speed and smooth workflows
Development environments are inexpensive and easy to build up and tear down. With our Central IT-managed infrastructure, if we wanted a test environment, we had to write scripts to sync everything and make sure code was synced to the dev environment before it went live. People with smaller sites rarely used testing environments. Platform.sh enables us to put people in a workflow that’s pretty much built into our Git integration and sees the whole process through: syncing the code, database, and files between development environments. It just makes our lives easier.
Establish standards, automate, manage at scale
Platform.sh gives us the ability to create standards and move all these extremely variable sites into a standard workflow, a standard build process, a standard setup, so that we can manage things at scale. Platform.sh has a really good CLI tool—and, behind that, an API—allowing us to automate a lot of these things.
Here’s an example: per security policies, all applications are required to be integrated with an approved, centrally managed authentication service (e.g., Active Directory) for authentication. If the application is hosted off-premises, the only approved method is via Shibboleth. However, there are numerous ways and configurations that integration can take, and previously, everyone did it differently. Now, we have a single, standardized method of installing and configuring the Shibboleth Service Provider for each site. And we have a single module/plugin that’s used for each CMS. This approach enables our users to have a consistent authentication experience no matter which site they use—even if they switch between WordPress and Drupal. For developers, because there’s a consistent setup, it’s much more efficient to troubleshoot any errors that occur. And from an institutional standpoint, when there are security issues related to Shibboleth, because each site is configured the same, we’re able to deploy those fixes in a much more expedient manner. Once we know it works on one site, we can confidently deploy to all the others.
One thing we found really refreshing is the Platform.sh team’s honesty about the strengths and limitations of the service. When we’ve experienced issues, Platform.sh staff have told us exactly what’s going on and owned up without any runaround. In those instances, Platform.sh specialists in those areas have actually jumped in to help make our stuff work.
Managing Drupal and WordPress sites at scale
For the Mizzou team, getting a single site up and running on Platform.sh was easy. The seeming challenge? To determine how to do the same at scale for the “hundreds and hundreds of Drupal and WordPress sites that weren’t identical, were set up differently, and that we didn’t have staff authority over,” says Gilzow. “The advantage with Platform.sh was that there’s no strong opinion of the current state or how things should work; instead, we had the flexibility to develop an approach that worked for us.”
The team began by rolling the sites they had jurisdiction over onto Platform.sh. Quickly, they determined they needed to connect the sites to an upstream—a boilerplate and scaffolding of directories, files, and other elements that would enable developers to work within Platform.sh. That way, the core team could easily incorporate any changes into the upstream, then push them downstream to all sites.
All the sites the marketing team oversees are set up to use SimpleSAMLphp, so they can implement authentication through Shibboleth. When SimpleSAMLphp announced a fairly critical security patch, Platform.sh enabled the team to update 70 sites—patched, pushed out, and back to live—in a day. “When we had a major security issue prior to adopting Platform.sh, it may have taken us weeks to contact site owners, get credentials, make updates (or force them to make the updates!), or even contact their manager to make the updates,” explains Boyer. “Today, we can say to site owners, ‘because your site is standardized, you can do the update yourself. But if you haven’t updated it yet, we’ll just push the change for you.’ That gives us extreme flexibility. From weeks to a day, and then using that information moving forward to develop tools to speed that process even further, is a really big win for us.”
Find out about managing a website fleet on Platform.sh
Reported, real-world results
Performance: faster sites, faster provisioning
With Platform.sh, we're way faster. We rely on our IT counterparts for account creation. But once the accounts have been created, we can have a new site up, completely synced, ready to go—between GitLab and Platform.sh and all the pieces—in literally two minutes. We can also keep the stack components up to date much faster.
I had a developer the other day who said, ‘Hey, I can't use [PHP] 7.2; I need 7.1. What do I have to do?’ And I said, ‘Just go change your YAML file, redeploy to master, and you're done.’ Boom. So much faster. We're able to create features now and deploy them with confidence into production because we've had those multiple instances to test them and make sure they're going to work.
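The YAML change described here is a one-line edit to the application’s `.platform.app.yaml`. A minimal sketch, with the app name, disk size, and docroot as illustrative assumptions rather than Mizzou’s actual configuration:

```yaml
# .platform.app.yaml — abridged, illustrative sketch
name: app
type: 'php:7.1'   # was 'php:7.2'; swapping the runtime is this one line
disk: 2048
web:
  locations:
    '/':
      root: 'web'
      passthru: '/index.php'
```

Committing that change and pushing to the target branch triggers a rebuild and redeploy, which is why a runtime swap takes minutes rather than a change-request cycle.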
With Platform.sh, I can deploy features and updates much faster and more confidently. I feel like I’m delivering more frequently, too.
Our performance and uptime are much better. The 300% increase in WordPress performance is mirrored in our Drupal projects (based on Google website audit tools). We also have consistent backups across every site, regardless of technology. In the past, we had no idea whether or not a backup existed. And because our departments were siloed, the backups were in multiple different locations. Now, we have one single place. We know that if something happens, we can easily grab that snapshot, redeploy it, and have a site back up in literally minutes. Platform.sh is literally driving our digital web presence into the future.
It's very beneficial to us to be able to go out and talk to our customers and speak with authority to them about, 'Once we move here, when I get your site to Platform.sh, this is the kind of support you'll be able to get, this is the kind of uptime you can expect. You'll see faster response times on your websites than you were getting locally.'
From the end-user standpoint, we’ve seen a lot more speedy access as we look at Google Analytics; people are spending less time just clicking around. They're actually getting what they need quickly just because Platform.sh makes things so much faster.
Flexibility: future-proof support for multiple languages
We have flexibility to change components in the stack on a site-by-site basis, and we have the flexibility for the future. If we need to do a PHP or Node.js or Ruby application, now we can. We don't have to go through months and months of work between other departments, buy a server, and get these things set up.
Efficiency: more sites per developer, with faster dev onboarding
We're way more efficient. We're now able to manage many more sites with fewer developers than we were previously. Because, with the help of Platform.sh, we've standardized on all these things related to local development, I can give a developer who's never touched a site access to the repository, they clone it down, start up Lando, run through the script, and have a working, exact clone of the Platform.sh site on their local machine. They can start working on it and figuring it out. They’re able to support other sites, too, even if they haven't seen them.
Cost savings: eliminate duplicate efforts, lower expenses
The main thing it boils down to is when we purchase an account from Platform, we're able to have the hosting environment and the database environment together on a single bill. With local servers through our IT department, we had a LAMP stack: Linux, Apache, MySQL, PHP. The MySQL support charges for our internal service were quite expensive because it's kind of a one-off from what they normally support, so it costs them more. A lot of our reduction in cost revolves around the fact that we're bundling database hosting with Platform.sh, and we can get the database services at a significantly lower cost than we can internally.
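On Platform.sh, the bundled database is declared alongside the application rather than procured separately. A sketch of the two relevant fragments (the service name, version, and disk size are illustrative assumptions):

```yaml
# .platform/services.yaml — declare the managed MySQL service
mysqldb:
  type: mysql:10.2
  disk: 2048

# In .platform.app.yaml, the app then consumes it as a relationship:
# relationships:
#   database: 'mysqldb:mysql'
```

Because the service lives in the same project as the application, it shows up on the same bill and is backed up and restored with the rest of the environment.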
Explore MySQL services on Platform.sh
30% Decrease in hosting costs
Most of our sites are small tier and are seeing a 30% decrease in their annual hosting costs with Platform.sh.
It's just been helpful to show our user community that they're getting a lot of value from a cloud-based hosting vendor like Platform.sh. We can honestly say to them, ‘These are the benefits that you're getting.’ And most of our websites are costing us less than they did just on our local hardware.
We project that once we get fully implemented on Platform.sh, with just two content management systems, our overall effort to create and maintain websites will be reduced. That will shorten our time to market—from when someone needs a website or rebuild or redesign to the time it actually becomes live. We needed a hosting system that would support those strategies, and Platform.sh does just that.
Meeting the stringent security requirements and aspirations of higher ed
Today, Gilzow and Boyer manage hundreds of websites very efficiently in contrast to the university’s past, when it was a slice of everyone's job to manage website back ends. "It’s just going to be fantastic once we get all of the sites moved over; we’re about four or five months away from completing Phase I (shutting off all the IT servers), and then we'll spend a year probably working on servers that departments still have running under their desks or in their closets,” says Bailey.
“I can see us maybe consolidating down to one content management system eventually, and gaining even more efficiencies going that way. It's just going to be more and more automation, so that our web developers and our content managers—who are doing all the content on the websites—can focus more and more on the UI elements we need on a website and how they manage the content, rather than worrying about all the back-end stuff.”
By the end of 2020, Bailey projects his team’s time will move from managing and migrating sites to improving the efficiency of current sites and rebuilding them to map to the university’s new brand. “Our team will focus their time on how to automate those design system elements into the themes and templates that we use across all our websites,” Bailey explains.