Source Operations Sorcery for multiheaded Gatsby apps

Source Operations, Decoupled
10 March, 2021
Chad Carlson
Manager, Developer Relations

Source Operations is a new Platform.sh feature that lets you specify commands to commit changes to your project's repository. The Source Operations Sorcery series offers step-by-step guides to the magical tricks you can perform with Source Operations. In this article, we’ll spell out how to use Source Operations to expand on the decoupled or “headless” CMS pattern.

Decoupling your data sources from presentation allows you to develop additional applications that consume the same data. Sanity.io’s Simen Skogsrud likens an application to a human body with the CMS as its head. “The point of headless systems,” he says, “isn’t actually that you don’t want heads. It’s that you can pick and choose [your head].”

Once you lop off the presentation layer of a traditional monolithic CMS to make it headless, you effectively allow any number of heads to be attached to the now-exposed content API.

Using Source Operations to trigger fleet updates

Platform.sh offers multi-app support for the decoupled pattern. Since the frontend and backend applications exist in the same codebase within the same project, retrieving the updated content is as simple as redeploying that project. But each application you add demands more resources, not to mention more cognitive load for anyone trying to debug that project. The more applications in the cluster, the more your developers need to know in order to track down a problem.
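
In a multi-app setup like this, both apps live in one repository, and a single routes.yaml maps a domain to each. A minimal sketch, assuming the two apps are named gatsby and drupal (the app names and the backend subdomain are assumptions, not part of this tutorial):

# .platform/routes.yaml (sketch)
"https://{default}/":
  type: upstream
  upstream: "gatsby:http"

"https://backend.{default}/":
  type: upstream
  upstream: "drupal:http"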

Separation of concerns with the decoupled pattern relies on good abstraction between your apps. If you’re not careful, your decoupled multi-app can run into the same monolithic CMS problems you were trying to avoid in the first place by decoupling.

Instead, you could elect to place each head on its own project, making the backend (headless) CMS an isolated content store serving an API. If you’re really looking to decouple your applications, it’s about as decoupled as it gets. Resources are isolated to individual projects, and content sits elsewhere away from each frontend app. Our typical decoupled multi-app project becomes instead a fleet of presentation apps (heads) all consuming a common data source.

However, taking this process up to the fleet level presents a new problem: how do all of these presentation apps get updates from the datastore? Assuming that each of them is under active development, we could very well leave new content retrieval to those regular updates. Each time a deployment happens (commits, merges, etc.), the newest content is retrieved anyway.

But let’s assume that’s not enough—that we also want to have a mechanism to trigger updates. As soon as new content becomes available in the data store CMS application, the event triggers each site in the fleet to retrieve and then present that new content.

Seems like something Source Operations can help us out with.

Source Operations Sorcery: summoning the multiheaded Gatsby fleet

Setting up the Drupal 9 content store

We’ll start by assuming that our content is served from a central Drupal project and that our fleet is composed of a few Gatsby presentation apps (again, each in its own project). You can quickly deploy a Drupal 9 site from our template.

Once that’s deployed, we can enable the JSON API and Serialization modules and then add a few pieces of content. Keep in mind that the examples below rely on Drupal path aliases, typically provided by the Pathauto module. Either manually assign a "URL alias" to each article for now, or add the module with Composer before writing your content:

$ platform get <PROJECT_ID>
$ cd <PROJECT_DIR>
$ composer require drupal/pathauto
$ git add . && git commit -m "Add Pathauto."
$ git push platform master

Setting up our Gatsby fleet

Gatsby is very flexible when it comes to consuming content. So long as you instruct Gatsby how to ingest, request, and display data, you can consume content from as many sources as you'd like. We've already covered one way to consume that data: Gatsby's source plugin ecosystem. During builds, Gatsby requests content from the sources specific to that plugin (Drupal in this example, but also WordPress, Strapi, and many more) and adds it to Gatsby’s GraphQL data layer, which can then be used to construct new pages.
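
Once a source plugin is configured, that content becomes queryable from Gatsby's GraphQL layer. A rough sketch of such a query, assuming gatsby-source-drupal's generated type name for article nodes:

query {
  allNodeArticle {
    nodes {
      title
      created
    }
  }
}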

This is the more common case, but there are other ways to consume content in Gatsby. You can forgo a plugin entirely and instead rely on a flat committed file as the source of your content data. In this case, Gatsby doesn't necessarily request content from a data source during its build. Rather, it uses that local content file, just as it would if you defined an "About" page with a local about.js file.

We can imagine many other scenarios where we would want to regularly update a committed file in a repository: dependency lock files, a search index, a list of authors or YouTube videos you want to generate individual pages for. Both of the following scenarios for our hypothetical Gatsby fleet can be found in our example repository.

Case 1: the committed flat-file data source

In this case, we start off with a Gatsby starter. We create a project, push, and, once it's deployed, add Drupal's Master environment URL as an environment variable on the project.

$ platform variable:create -l project --prefix env: --name CONTENT_URL --value "<DRUPAL_URL>/jsonapi/node/article" --json N --sensitive N --visible-build y --visible-runtime y

We can see what the content looks like at that endpoint by downloading it locally to the same file we’ll end up using as our primary content data source going forward:

$ curl "<DRUPAL_URL>/jsonapi/node/article" -o content/My-JSON-Content.json
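
The saved file follows Drupal's JSON:API format. Abridged, and with placeholder values, it looks roughly like this:

{
  "data": [
    {
      "type": "node--article",
      "id": "<UUID>",
      "attributes": {
        "title": "An example article",
        "created": "2021-03-10T12:00:00+00:00",
        "body": {
          "processed": "<p>Rendered HTML for the article body.</p>"
        }
      }
    }
  ]
}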

Now in that file we have every article on our Drupal site ready for Gatsby to consume. We just need to tell Gatsby where the file is and how to display its data. One change we can make is to simply list all of the content titles on our homepage, along with the date they were created. You can add the following to your src/pages/index.js:

...
import JSONData from "../../content/My-JSON-Content.json"

const BlogIndex = ({ data, location }) => {
  ...

  return (
    <Layout location={location} title={siteTitle}>
      <SEO title="All posts" />
      <Bio />
      <ol style={{ listStyle: `none` }}>
        {/* Each entry in JSONData.data is one Drupal article from the JSON:API response. */}
        {JSONData.data.map((node, index) => {
          return (
            <li key={`content_item_${index}`}>
              <article
                className="post-list-item"
                itemScope
                itemType="http://schema.org/Article"
              >
                <header>
                  <h2>
                    <span itemProp="headline">{node.attributes.title}</span>
                  </h2>
                  <small>{node.attributes.created}</small>
                </header>
                <section dangerouslySetInnerHTML={{ __html: node.attributes.body.processed }} />
              </article>
            </li>
          )
        })}
      </ol>
    </Layout>
  )
}

export default BlogIndex

The last thing to do here is add the source operation itself, which performs the same curl request we ran before and commits the result to the project. To do this, we add the following to our .platform.app.yaml file:

source:
  operations:
    update:
      command: |
        echo "Fetching JSON data from $CONTENT_URL"
        curl $CONTENT_URL -o content/My-JSON-Content.json
        git commit -am "Source Operation: Updated content from backend."

Once we push those updates, we'll be able to call platform source-operation:run update at any time to update our Drupal data file.
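
For example, to target a specific project and environment with the CLI's standard flags (the project ID is a placeholder):

$ platform source-operation:run update --project <PROJECT_ID> --environment master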

Case 2: Using the gatsby-source-drupal plugin

We've previously shown how to deploy Gatsby and Drupal together in a single project. We’ll start off with that same template, cleaning out all of the in-project Drupal configuration.

$ git clone git@github.com:platformsh-templates/gatsby-drupal.git
$ cd gatsby-drupal && rm -rf drupal

Empty the existing services.yaml file, and remove all of the drupal app's routes from routes.yaml. In the gatsby directory, we can replace the .platform.app.yaml file with the following simplified configuration:

name: "app"

type: "nodejs:14"

hooks:
  build: npm run build

web:
  commands:
    start: npm run serve -- -p $PORT

disk: 512

mounts:
  "/.cache":
    source: local
    source_path: cache
  "/.config":
    source: local
    source_path: config

The current gatsby-config.js file expects a drupal relationship for content, so let's replace that with the same CONTENT_URL environment variable covered in the previous example.

var backend_route = process.env.CONTENT_URL;
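
In context, gatsby-config.js then hands that value to the source plugin. An abridged sketch, assuming the template's gatsby-source-drupal configuration:

// gatsby-config.js (abridged sketch)
var backend_route = process.env.CONTENT_URL;

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-drupal`,
      options: {
        // The plugin appends the JSON:API paths itself.
        baseUrl: backend_route,
      },
    },
  ],
};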

Create a project for the site, as well as the CONTENT_URL environment variable. The plugin already expects the jsonapi/node/article endpoint, so there's no need to include it here:

$ platform variable:create -l project --prefix env: --name CONTENT_URL --value "<DRUPAL_URL>" --json N --sensitive N --visible-build y --visible-runtime y

Finally, add the source operation:

source:
  operations:
    update:
      command: |
        echo Last Content Update:  $(date) > counter.txt
        echo "Create dummy commit to force rebuild for updated content."
        git commit -am "Source Operation: Updated content from backend."

Instead of committing content to a data file as in the previous example, all we're doing is printing the current timestamp to an arbitrary counter.txt file. Its contents don't matter; the commit itself triggers the full rebuild required to grab any updated Drupal data from the backend.

Same as before, once we push those updates we'll be able to call platform source-operation:run update at any time to grab any updated content.

Next steps

Now that we have the same update source operation on all of the sites in our fleet, keeping them up to date is a matter of preference. You could add a cron job to each project that runs the operation, effectively giving all of the presentation apps in your fleet a shared publication schedule. You could even move the responsibility to the Drupal app by writing an activity script that listens for the environment.push event on Drupal's Master environment. Given a list of project IDs, or an API token that can look them up, the script could then call the operation on every site in your fleet.
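
As a sketch of the cron approach in .platform.app.yaml, assuming the Platform.sh CLI is available in the container and authenticated via an API token stored in a project variable (the schedule and variable name are assumptions):

crons:
  update_content:
    # Every six hours; adjust to taste.
    spec: '0 */6 * * *'
    cmd: |
      # Assumes the CLI is installed and a PLATFORM_CLI_TOKEN variable is set.
      platform source-operation:run update --project $PLATFORM_PROJECT --environment master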

The choice is yours. Now get out there and start experimenting, and keep an eye out for our next Source Operations Sorcery article, where we’ll peer into our crystal ball for the secret to setting up editorial workflows with the headless CMS Strapi.
